Hemut Co. · Engineering
Engineering Benchmarks
DORA Four Keys + AI-era metrics · March 2026 · Google DORA 2023, LinearB 2024, Swarmia 2024
§1 Overall Score · 10 metrics tracked
2 Elite · 4 Low · 4 Unknown
Strengths
  • ✓ Core 4 velocity: 13.8 PRs/eng/wk (Elite)
  • ✓ Core 4 efficiency: $0/PR (Elite)
  • ✓ AndrewFE: $7/PR — best in class
  • ✓ CI pass rate: 88% (High)
  • ✓ AI adoption is industry-leading
Critical Gaps
  • ⚠ MTTR = ∞ (0 Sentry resolved/90d)
  • ⚠ Ticket linkage 29% vs 80% target
  • ⚠ Harshiv CI at 50%
  • ⚠ Gabriele $/PR: $182 (very high)
Not Yet Tracked
  • ○ Deployment frequency
  • ○ Lead time commit→prod
  • ○ Change failure rate
  • ○ Test coverage %
§2 Top Performers vs Benchmarks · Founding engineers only
Engineer | PRs/wk | $/PR | CI % | Ticket Link | Opus %
Elite threshold: ≥10 PRs/wk  ·  <$15/PR  ·  ≥95% CI  ·  ≥90% ticket linkage
§3 DORA Four Key Metrics · Source: Google State of DevOps 2023
2 of 4 DORA metrics are Unknown — not because performance is bad, but because deployment events aren’t logged. Adding deploy tracking is a 2-line CI change that unlocks Deployment Frequency and Lead Time immediately.
Metric | Description | Hemut Now | Tier | Elite | Required Action
Deployment Frequency | How often code reaches production | Not tracked | Unknown | Multiple/day | Add a deploy event to CI: log a timestamp on every Vercel prod deploy. A 2-line change that unlocks this metric and Lead Time.
Lead Time for Changes | Commit → production | Not tracked | Unknown | < 1 hour | Available automatically once deploy tracking is added: Lead Time = deploy_ts − merge_ts.
Mean Time to Restore | How fast prod incidents are resolved | ∞ (0 resolved/90d) | Low | < 1 hour | CRITICAL: 0 Sentry issues resolved in 90 days. Assign fix owners to the 6 prod issues this sprint; each resolved issue moves MTTR from ∞ to a real number.
Change Failure Rate | % of deploys causing incidents | Not tracked | Unknown | 0–5% | Computable from Sentry + the deploy log: incidents opened within 24h of a deploy ÷ total deploys. Requires deploy tracking.
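The Lead Time and Change Failure Rate formulas can be sketched in shell against a toy deploy log. The file names, the epoch-second CSV format, and the sample rows below are assumptions for illustration, not Hemut's actual pipeline:

```shell
# Sketch only: deploys.log holds "<deploy_epoch>,<merge_epoch>" per deploy;
# incidents.log holds "<incident_epoch>" per Sentry incident.
printf '1000,400\n2000,1000\n' > deploys.log     # two sample deploys
printf '1500\n'                > incidents.log   # one sample incident

# Lead Time for Changes = deploy_ts - merge_ts, averaged over deploys
awk -F, '{ sum += $1 - $2; n++ } END { printf "avg lead time: %ds\n", sum / n }' deploys.log
# → avg lead time: 800s

# Change Failure Rate = deploys followed by an incident within 24h / total deploys
awk -F, '
  NR == FNR { inc[++m] = $1; next }               # first file: incident timestamps
  { total++
    for (i = 1; i <= m; i++)
      if (inc[i] >= $1 && inc[i] < $1 + 86400) { failed++; break }
  }
  END { printf "change failure rate: %.1f%%\n", 100 * failed / total }
' incidents.log deploys.log
# → change failure rate: 50.0%
```

Both numbers fall out of the same two-column deploy log, which is why deploy tracking unlocks three metrics at once.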
§4 AI-Era Engineering Metrics · LinearB 2024 · Swarmia 2024 · Hemut internal
Metric | Hemut Now | Tier | Elite | Source | Action
CI Pass Rate | 88% | High | ≥ 95% | LinearB 2024 | 88% is already High. Harshiv at 50% is the critical outlier — enforce test-before-push. Target: 92% by Jun 2026.
Ticket Linkage Rate | 29% | Low | ≥ 90% | Swarmia 2024 | Ashish 0% (170 PRs), AndrewFE 0% (146 PRs), and Gabriele 0% (24 PRs) are the main gaps. Enforce via PR template.
AI Spend $/PR — team avg | $0 | Elite | < $15 | Hemut internal | The team average is dragged up by Gabriele ($182/PR) and Pranav ($49/PR); the core team average is $0.
AI Spend $/PR — core 4 | $0 | Elite | < $15 | Hemut internal | The Sumedh/Ashish/AndrewFE/Avetis average is already Elite; AndrewFE at $7/PR is best in class. Protect this by keeping Sonnet as the default.
PRs/Engineer/Week — team | 3.7 | Medium | ≥ 10 | LinearB 2024 | The 3.7 team average is Medium — dragged down by interns and non-eng contributors. The core-4 average of 13.8/wk is Elite; exclude interns from the team velocity metric.
Lines of Code per $ | 118 | High | ≥ 200 | Hemut internal (CC seat) | 118 lines/$ is High. Improve by switching high-Opus users to Sonnet (5× cheaper per token, same output quality).
§5 Quick Wins · 1 sprint each · highest ROI first
MTTR — Resolve 6 production Sentry issues
Each resolved issue moves MTTR from ∞ to a real number. Assign fix owners in Monday's sprint meeting. The 6 production issues are in sentry.html. One sprint to close all 6.
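Once issues start closing, MTTR becomes a simple mean over resolution times. A minimal sketch, assuming a hypothetical `resolved.csv` export with one `<created_epoch>,<resolved_epoch>` row per resolved production issue:

```shell
# Sketch only: resolved.csv is a hypothetical export, not a Sentry API call.
printf '0,86400\n0,172800\n' > resolved.csv     # two sample issues: 1 day and 2 days to restore

# MTTR = mean(resolved_ts - created_ts); with zero rows it stays undefined (the current "∞")
awk -F, '{ sum += $2 - $1; n++ }
     END { if (n) printf "MTTR: %.1f hours\n", sum / n / 3600
           else   print  "MTTR: undefined (0 resolved)" }' resolved.csv
# → MTTR: 36.0 hours
```

The `else` branch is the state the dashboard is in today: zero resolved issues, so the mean is undefined rather than bad.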
Ticket Linkage — PR template enforcement
Ashish (0%, 170 PRs) + AndrewFE (0%, 146 PRs) + Gabriele (0%, 24 PRs) = 340 unlinked PRs. Adding a single PR template with 'Linear: SWAG-xxx' field closes 38% of the team gap immediately.
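A template field only helps if CI enforces it. A minimal sketch of such a gate; the `PR_BODY` variable, the sample body, and the exact key pattern are assumptions about how the check would be wired up:

```shell
# Sketch only: fail the build when the PR description has no Linear ticket key.
# In a real pipeline PR_BODY would come from the CI environment; here it's a sample.
PR_BODY='Fixes dispatch race. Linear: SWAG-123'

if printf '%s' "$PR_BODY" | grep -Eq 'Linear: SWAG-[0-9]+'; then
  echo "linkage ok"
else
  echo "missing Linear ticket: add a 'Linear: SWAG-xxx' line to the PR description" >&2
  exit 1
fi
```

Pairing the template with a gate like this is what turns "closes the gap" from a request into a guarantee.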
Deploy tracking — 2 lines in CI
Add a curl to the Vercel deploy hook that logs timestamp to a file or webhook. Unlocks Deployment Frequency + Lead Time — the two Unknown DORA metrics.
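A sketch of that 2-line step, assuming a GitHub Actions-style environment (`GITHUB_SHA`) and a placeholder webhook URL; neither is confirmed by the current setup:

```shell
# Sketch only: append "<epoch>,<commit_sha>" to a deploy log after every prod deploy.
printf '%s,%s\n' "$(date -u +%s)" "${GITHUB_SHA:-unknown}" >> deploys.log
# Optional webhook variant (DEPLOY_WEBHOOK_URL is a placeholder, not a real endpoint):
# curl -fsS -X POST "$DEPLOY_WEBHOOK_URL" --data "sha=${GITHUB_SHA:-unknown}"
```

Each appended row is one deploy event, which is all Deployment Frequency and Lead Time need.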
Opus → Sonnet for 3 engineers
Gabriele, AndrewFE, Darius all >85% Opus. Switching routine tasks to claude-sonnet-4-6 saves ~$800/month at current run rate with no quality loss.
§6 90-Day Improvement Targets · Fill in actuals each quarter
Metric | Mar 2026 (now) | Jun 2026 target | Jun 2026 actual | Sep 2026 target
MTTR | ∞ (0 resolved) | < 48 hours | TBD |
Ticket linkage | 29% | 60% | TBD |
CI pass rate | 88% | 92% | TBD |
$/PR (team avg) | $0 | $22 | TBD |
Deploy frequency | Not tracked | Tracked + ≥5/wk | TBD |
Change failure rate | Not tracked | Tracked + <15% | TBD |
PRs/wk (core 4) | 13.8 | 14+ | TBD |
How to update: Run ./run.sh each quarter. The Hemut Now column recalculates from live data automatically. Fill in Actual columns manually after each quarter closes.
✓ Data current · GitHub: Mar 30, 2026 8:20pm (today) · AI Spend CSVs: Mar 30, 2026 8:20pm (today)