Role Playbook
SaaS
200-500 employees
Head of Data · Data & Analytics Leader
Every exec wants a dashboard. Every exec defines the metric differently. That's the Head of Data OKR trap at 200-500-employee SaaS.
Three dashboards, three numbers. CFO sees $13.8M. CRO sees $14.2M. Marketing sees $15.1M. Same week, same revenue, three different definitions.
Data incidents discovered by execs, not by you. Slack at 9am: "Why does the dashboard show 0 deals?" — and you're learning about it from the CRO.
ML models stuck in "almost deployed". The forecast model has been at 95% for 4 months. Pre-production. Pre-decision-impact. Pre-shipping.
Every team has its own data analyst — and its own truth. Decentralized data work means decentralized definitions. Reconciliation is now a quarterly tax.
Eng changes a schema → Three exec dashboards break
Marketing redefines a metric → Trust in your BI layer drops
Finance pulls a one-off report → The single-source-of-truth claim erodes
The job isn't building dashboards. It's making data-source ownership auditable before exec trust breaks.
THE SCORECARD
Three Head of Data OKRs that defend the seat at 200-500-employee SaaS.
You don't run the BI team day-to-day. You don't run the data engineering. You don't run the ML platform. You own the three bets that turn the seat from cost-center help desk into strategic peer — decision velocity, strategic insights, and institutional trust. Three objectives below.
| Objective | Key Result | Benchmark / Threshold | Target |
| --- | --- | --- | --- |
| O1 · Stakeholders go from question to data-supported answer in hours, not days. Outcome state: the seat is defined by how fast the org can act on data — not by the size of the data team. | p75 question-to-answer cycle ≤ 4 hours for self-serve, ≤ 24 hours for analyst-supported. 4h because most exec questions are dashboard-answerable; 24h because deeper questions need analyst time but should land same-day. | 2-5 days typical | ≤ 4h / ≤ 24h |
| O1 | Self-serve answer rate ≥ 70% — questions answered without filing a request. 70% because below 50% the team is a help desk; above 80% means dashboards are well-built and definitions are clear. | 25-40% typical | ≥ 70% |
| O1 | Ad-hoc-request tax ≤ 30% of team time, measured weekly. 30% because every hour above that is one fewer hour on platform, governance, or strategic insight work. | 55-70% typical | ≤ 30% |
| O2 · Strategic insights from the data team change at least 3 exec decisions per quarter. Outcome state: the team has to drive decisions, not just answer questions. Insights that don't change a decision are reports. | ≥ 3 unsolicited strategic insights surfaced per quarter — not requested, generated by the data team. 3 because below that the team is reactive only; insights surface from looking at the data without being asked. | 0-1/qtr typical | ≥ 3/qtr |
| O2 | ≥ 60% of strategic insights influence a documented exec decision within 60 days. 60% because not every insight is right or actionable; tracking influence rate forces rigor on what gets surfaced. | ~15% typical | ≥ 60% |
| O2 | Quarterly retrospective with CFO, CRO, CMO: what data did and didn't change about the decisions made. Without the retro the team optimizes for outputs, not outcomes; it's what surfaces whether insight quality is improving. | Rarely held | Quarterly |
| O3 · Stakeholders trust the numbers because institutional infrastructure holds. Outcome state: trust is the floor, not the ceiling — it sits at O3 because without it, O1 and O2 don't matter. | Top 12 exec dashboards reconcile to ±2% on weekly revenue numbers, checked monthly. ±2% accounts for legitimate timing differences; above ±5% means definitions are drifting, not just timing. | ±5-15% drift typical | ≤ ±2% |
| O3 | Top 10 metric definitions (revenue, MQL, SQL, Opp, Won, Churn, NRR, GRR, ACV, CAC) signed by CFO + CRO + CMO at FY start, refreshed mid-year. Signed because verbal agreement breaks within 2 weeks; mid-year refresh because product-market fit shifts definitions over time. | Verbal/drift typical | Signed doc |
| O3 | ≥ 95% of data incidents detected by monitoring before a stakeholder reports them. 95% because below 90% the exec team experiences your team as reactive; at 95%, proactive catches are the norm. | 40-60% typical | ≥ 95% |
¹ Data-incident MTTR benchmarks from Monte Carlo's 2024 State of Data Quality survey; deployment-cycle benchmarks from public-facing data-leadership writing (dbt Labs, Locally Optimistic community surveys). Company-specific benchmarks are limited.
How to start in week 1 of the quarter
Don't migrate Snowflake. Don't hire 3 analytics engineers. Do these five things:
→ Pull the last 4 weeks of ad-hoc requests. For each: time logged, time to answer, was it self-serve-able, did it drive a decision. The gaps are your O1 + O2 baselines (sketched in code after this list).
→ List the top 30 recurring exec questions. How many are answered by existing dashboards? Where the dashboards aren't working, that's the self-serve gap.
→ Count the strategic insights your team surfaced unsolicited last quarter. If it's 0-1, the insights pipeline doesn't exist yet — that's the O2 starting point.
→ Get CFO + CRO + CMO in a 60-minute meeting to sign top-10 metric definitions. Document, share, never debate again. That's the O3 trust foundation.
→ Audit AI-querying tool readiness — Hex AI, Looker AI, ThoughtSpot Sage. The path to ≤4h self-serve runs through these tools layered on a clean semantic model.
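Steps 1-3 reduce to a small computation. A minimal sketch, assuming a hypothetical requests.csv export of the intake log (all column names are invented for illustration):

```python
# Week-1 baseline audit over the last 4 weeks of ad-hoc requests.
# Assumes a hypothetical export "requests.csv" with one row per request:
#   asked_at, answered_at  (timestamps)
#   self_servable          (bool: could a dashboard have answered it?)
#   drove_decision         (bool: did the answer change a decision?)
import pandas as pd

reqs = pd.read_csv("requests.csv", parse_dates=["asked_at", "answered_at"])

# O1 baseline: p75 question-to-answer cycle time, in hours.
cycle_hours = (reqs["answered_at"] - reqs["asked_at"]).dt.total_seconds() / 3600
p75 = cycle_hours.quantile(0.75)

# O1 baseline: share of requests an existing dashboard could have answered.
self_serve_rate = reqs["self_servable"].mean()

# O2 baseline: share of answers that actually influenced a decision.
decision_rate = reqs["drove_decision"].mean()

print(f"p75 cycle time:   {p75:.1f}h   (targets: <=4h self-serve, <=24h analyst)")
print(f"self-serve-able:  {self_serve_rate:.0%}  (target: >=70% answered self-serve)")
print(f"drove a decision: {decision_rate:.0%}")
```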
Why O3 sits last but is the foundation
O1 makes you fast. O2 makes you strategic. O3 is what makes either possible. If institutional trust isn't there — definitions drift, dashboards don't reconcile, incidents found by execs — no decision velocity will be trusted and no insight will be heard. Trust is the floor, not the ceiling.
STRATEGIC BETS
The three bets inside every Head of Data OKR stack — and the dozen your team runs without you.
Your BI lead runs dashboard delivery. Your senior data engineer runs the data infrastructure. Your ML platform owner runs deployment infrastructure. You don't. Your job is the three bets that turn data trust from a quarterly negotiation into an enforceable cadence.
Strategy 1 — Replace the ad-hoc-request bottleneck with self-serve-by-default analytics (→ O1)

| # | Initiative | Owner |
| --- | --- | --- |
| 1.1 | Self-serve dashboard library: top 30 recurring exec questions answered by 10-12 well-built dashboards with clear definitions; questions outside that scope go to the analyst queue | BI lead |
| 1.2 | AI-assisted natural-language querying (Hex AI, Looker AI, ThoughtSpot Sage) layered on the semantic model — non-technical stakeholders ask in English, the tool returns a query against governed data | CTO + Eng |
| 1.3 | Question-routing protocol: every ad-hoc request goes through a 5-minute triage — self-serve possible? template-able? genuinely new? — measured weekly (sketched below) | Internal |
| 1.4 | Question-to-answer cycle-time tracking: every request logged with timestamps; p75 cycle reviewed weekly; outliers dissected for patterns | Internal |
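A minimal sketch of the 1.3 triage as a routing function. The three routes mirror the protocol; the Request fields are assumptions about what an intake form captures, not a prescribed schema:

```python
# Routing sketch for the 5-minute triage in 1.3. Each ad-hoc request is
# sorted into the cheapest viable route; only genuinely new questions
# consume analyst capacity. Fields are hypothetical triage judgments.
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    SELF_SERVE = "point to an existing dashboard"
    TEMPLATE = "parameterize an existing query or dashboard"
    ANALYST_QUEUE = "genuinely new: schedule analyst time"

@dataclass
class Request:
    question: str
    dashboard_answerable: bool   # triage judgment, recorded per request
    fits_known_template: bool

def triage(req: Request) -> Route:
    if req.dashboard_answerable:
        return Route.SELF_SERVE
    if req.fits_known_template:
        return Route.TEMPLATE
    return Route.ANALYST_QUEUE

# Weekly measurement: the share of requests landing in each route is the
# self-serve-rate input for O1.
```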
Strategy 2 — Replace reactive analytics with an unsolicited insights pipeline (→ O2)

| # | Initiative | Owner |
| --- | --- | --- |
| 2.1 | Insights cadence: 1 senior analyst protected for 1 day/week to surface patterns nobody asked about — segment shifts, cohort drifts, conversion anomalies, NRR composition changes | Internal |
| 2.2 | Insight-quality discipline: every surfaced insight gets a one-page brief — what we saw, why it matters, what decision it suggests — and is tracked through to its influence outcome | Internal |
| 2.3 | Quarterly retro with CFO + CRO + CMO: what data did and didn't change about the decisions made; insight-influence rate reviewed and adjusted | CFO + CRO + CMO |
| 2.4 | Decision-impact tracking: every insight that influenced a decision is tagged with the decision and its outcome, closing the loop on whether the insight was right | All function heads |
Strategy 3 — Replace verbal definition agreements with institutional definition discipline (→ O3)

| # | Initiative | Owner |
| --- | --- | --- |
| 3.1 | Annual signed metric-definition document: top 10 metrics signed by CFO + CRO + CMO at FY start; mid-year refresh; change requests reviewed quarterly | CFO + CRO + CMO |
| 3.2 | Monthly top-12 dashboard reconciliation cadence: ±2% drift tolerance on weekly revenue numbers; drift attributed to a function owner (sketched below) | All function analysts |
| 3.3 | Monitoring coverage on critical pipelines: every exec dashboard's upstream pipeline gets freshness, volume, and schema-drift alerts; ≥95% of incidents detected proactively | Data engineers |
| 3.4 | Upstream change-management coordination: any schema or definition change in source systems goes through a 2-business-day data review | Eng + RevOps + IT |
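A minimal sketch of the 3.2 reconciliation check. Dashboard names, values, and the reference figure are hypothetical; the ±2% tolerance is the scorecard KR:

```python
# Monthly reconciliation check for 3.2: compare each exec dashboard's
# weekly revenue figure against the warehouse reference and flag drift
# outside +/-2%. Names, values, and owners are hypothetical.
TOLERANCE = 0.02  # the scorecard KR: +/-2% allows timing differences only

def check_reconciliation(dashboards: dict[str, float], reference: float) -> list[str]:
    """Return a finding per dashboard outside tolerance, for owner attribution."""
    findings = []
    for name, value in dashboards.items():
        drift = (value - reference) / reference
        if abs(drift) > TOLERANCE:
            findings.append(f"{name}: {drift:+.1%} vs reference, attribute to owner")
    return findings

# Example month: the CRO dashboard is out of tolerance, the others reconcile.
for finding in check_reconciliation(
    {"cfo_revenue": 13.95e6, "cro_pipeline": 14.40e6, "cmo_funnel": 13.88e6},
    reference=13.90e6,
):
    print(finding)
```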
ENFORCEMENT LAYER
Enforcement for Head of Data OKRs — the cadence layer above your data tools.
Snowflake stores data. dbt transforms it. Looker visualizes it. Monte Carlo monitors quality. Each runs in one lane. None enforces whether dashboards actually reconcile, whether incidents resolve within the MTTR target, or whether the insights pipeline produced anything this quarter. That's the cadence layer above your stack.
How this works in practice
→ Your team enters KR values weekly — reconciliation drift, MTTR, ad-hoc tax, insights-surfaced count
→ Each becomes a tracked KR with an owner
→ ShiftFocus runs the cadence and fires triggers when KRs bend
We don't pull from Snowflake or dbt. We make the data KRs your team already maintains catch drift in week 1, not at audit time.
Two triggers define the daily pain: Trigger 6 (Dependency SLA Breach), which fires when an upstream system change breaks a data pipeline, and Trigger 4 (KPI Drift), which fires when an operational KR such as MTTR, reconciliation drift, or detection rate crosses its threshold.
The two that fire hardest at the Head of Data layer
Trigger 6 · Dependency SLA Breach — when an upstream system change breaks a data pipeline
⚡ Fires when: A tracked upstream dependency — Salesforce schema change, marketing-tool field rename, source-system version upgrade — happens without going through the 2-business-day data-review SLA.
▎ Why this matters
Most data incidents trace to upstream system changes nobody coordinated with the data team. When the change-management process includes data dependencies, incidents drop ~60%. When it doesn't, the data team patches reactively forever.
▎ Why ShiftFocus catches it
Monte Carlo detects incidents after they happen. ShiftFocus tracks the upstream change-management dependency BEFORE the incident — when an Eng or RevOps change is scheduled without data review, the trigger fires preemptively.
▎ Example scenario
RevOps schedules Salesforce field rename for Tuesday. Data-review dependency not cleared. Trigger 6 fires Monday: "field rename without data review — 2-day SLA breached." Tuesday's exec meeting opens with "let's clear this before deploy" — not Wednesday's "why is the dashboard broken?"
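What the Trigger 6 condition reduces to, as a sketch. The function and the change-calendar fields are hypothetical; the 2-business-day SLA is the one defined above:

```python
# Trigger 6 condition, sketched: a scheduled upstream change whose data
# review is not cleared within the 2-business-day SLA fires the trigger
# before the deploy, not after the breakage. Inputs are hypothetical.
from datetime import date
import numpy as np  # busday_count does the business-day arithmetic

SLA_BUSINESS_DAYS = 2

def trigger_6_fires(deploy_date: date, review_cleared: bool, today: date) -> bool:
    business_days_left = np.busday_count(today, deploy_date)
    return not review_cleared and business_days_left < SLA_BUSINESS_DAYS

# Monday, field rename scheduled for Tuesday, review not cleared: fires.
print(trigger_6_fires(date(2025, 7, 8), review_cleared=False, today=date(2025, 7, 7)))
```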
Trigger 4 · KPI Drift — when MTTR or deployment-cycle KRs cross threshold
⚡ Fires when: MTTR on critical incidents crosses the 4-hour threshold for 2 incidents in a row, OR the deployment cycle exceeds 21d p75 for 2 quarters in a row, OR the data-freshness KR drops below 99%.
▎ Why this matters
Single threshold breaches happen. Two-in-a-row means the underlying process is degrading — and exec trust is eroding faster than the metrics show.
▎ Why ShiftFocus catches it
Monitoring tools track operational metrics. ShiftFocus tracks them as KRs with directional thresholds — not just "is it on" but "is the trend bending wrong?"
▎ Example scenario
Q3 week 5: 2 critical incidents took 6h and 8h MTTR. Trigger 4 fires before incident #3. Root cause: alerting noise causing the on-call to miss real signals. Process fix happens before exec trust collapses.
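The two-in-a-row logic behind Trigger 4, sketched under the same assumptions as the scenario above (a single breach is noise; two consecutive breaches fire):

```python
# Trigger 4 condition, sketched: one threshold breach is noise; two
# consecutive breaches mean the process is degrading, so fire.
def kpi_drift_fires(values: list[float], threshold: float, run: int = 2) -> bool:
    recent = values[-run:]
    return len(recent) == run and all(v > threshold for v in recent)

# The scenario above: the last two critical incidents took 6h and 8h
# against the 4h MTTR threshold, so the trigger fires before incident #3.
mttr_hours = [2.5, 3.0, 6.0, 8.0]
print(kpi_drift_fires(mttr_hours, threshold=4.0))  # True
```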
The other 4 that also fire on your KRs
Trigger 1 · Missed Cadence
⚡ When: The monthly dashboard reconciliation is skipped, OR the weekly incident review is skipped, OR the quarterly model-adoption review is missed.
▎ Example scenario
Reconciliation cadence skipped 2 months. Trigger fires to data-platform lead.
Trigger 2 · Velocity Drop
⚡ When: Model deployment velocity drops — more models sit in the pipeline than reach production for 2 quarters running.
▎ Example scenario
Q2: 5 built, 2 deployed. Q3: 6 built, 1 deployed. Trigger fires — review with platform team.
Trigger 3 · Momentum Decay
⚡ When: Dashboard-reconciliation drift is trending up, OR ad-hoc-request load is trending up.
▎ Example scenario
Drift goes from ±2% to ±3% to ±4% over 3 months. Trigger fires before threshold breach.
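Trigger 3's trend test, sketched: it fires on a sustained worsening direction rather than on a hard threshold breach. The drift series matches the scenario above; the helper is illustrative:

```python
# Trigger 3 condition, sketched: fire on a sustained worsening trend,
# not on a hard threshold breach. Three strictly rising readings of
# reconciliation drift (or ad-hoc load) are enough.
def momentum_decay_fires(readings: list[float], run: int = 3) -> bool:
    recent = readings[-run:]
    return len(recent) == run and all(a < b for a, b in zip(recent, recent[1:]))

# The scenario above: drift widens 2% -> 3% -> 4% over three months.
print(momentum_decay_fires([2.0, 3.0, 4.0]))  # True
```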
Trigger 5 · Owner Absence
⚡ When: An exec dashboard has no named owner, or an ML model has no named stakeholder-owner.
▎ Example scenario
Audit: 3 of 12 exec dashboards have "data team" as owner-by-default. Trigger fires.
Why this works alongside your existing data stack
Snowflake stores data. dbt transforms. Looker visualizes. Monte Carlo monitors quality. Each does its job. ShiftFocus is the cadence layer above them — every upstream change becomes a tracked SLA, threshold breaches fire before exec trust erodes, and data KRs run on one weekly review.
ESCALATION DESIGN
The Head of Data escalation chain — 5 levels, from a same-morning auto-nudge to a quarter-end intervention.
Below: an upstream-change-management dependency breach (RevOps planning a Salesforce schema change without data review) threaded through the ladder.
L1
Auto-Nudge — to RevOps + scheduled change owner
Monday morning: schema change scheduled for Tuesday without the data-review dependency cleared. Trigger 6 fires. RevOps + Eng owner get a Slack ping: 2-business-day SLA breached.
Immediate
L2
Peer Flag — Head of Data + Head of RevOps see it
Monday afternoon: still unresolved. Tracked dependency surfaces in Head of Data and Head of RevOps dashboards.
+4h
L3
Direct ask — Head of RevOps to change owner
Monday EOD: still stuck. Head of RevOps directly asks the change owner for a re-schedule or expedited data review.
+24h
L4
Pattern Brief — recurring breaches surface
Q3 audit: 6 upstream changes deployed without data review this quarter. Pattern goes to CTO + Head of RevOps + Head of Data — process problem, not data-team problem.
Week 7
L5
Intervention — operating-cadence review
Quarter close. Dashboard reconciliation drift up to ±6% from changes the data team didn't get to review. Full Eng + Data + RevOps exec team in the room. Decision: enforce 2-day data-review SLA at the change-management gate, or accept the structural drift.
Quarter-end
What this kills
The failure mode where you spend Q3 patching dashboards after every upstream change, present a clean reconciliation Monday that's broken Tuesday by a Salesforce field rename, and absorb the trust damage at QBR. Trigger 6 fires before the change deploys — at the change owner, not your team.
EXECUTION INTELLIGENCE
How the 5 ShiftFocus metrics read on your Head of Data KRs.
ShiftFocus runs five health metrics on every KR — same five whether the KR is "Dashboard reconciliation ≤ ±2%" or "MTTR ≤ 4h" or "Deployment cycle ≤ 21d p75." Here's what each tells you on a Head of Data KR.
What this looks like at week 6 of Q3
$40M ARR SaaS, 320 employees, 9-person data team. The Head of Data has three OKRs running mid-quarter.
What the data-trust gap actually costs
The primary case is operating quality. Dollar leakage varies with ARR, but the same three costs stack up every year:
→ Exec time absorbed in reconciliation — every meeting has a "which number is right?" tax
→ Decisions revisited because they ran on wrong data — strategy work redone two quarters later
→ Data-team capacity burned on ad-hoc requests — platform and governance work never gets shipped
Each costs more than the governance investment that prevents it.
The case to make to your CFO and CEO
Convert "data is unreliable" into "of 4 reconciliation breaches this quarter, 3 trace to upstream Salesforce schema changes deployed without data review, 1 to a CMO/CRO definition disagreement; here's the change-management fix." That's what shifts the seat from cost-center to strategic peer.