Rethinking Delivery Performance: Lessons from DORA 2023 and Developer Experience
DORA 2023 isn’t just about speed—it’s about reliability, feedback loops, and developer experience. This post breaks down the five core DORA metrics, adds supporting DevEx indicators, and shows how to spot hidden friction in your CI/CD pipeline.
In 2023, the DevOps Research and Assessment (DORA) team released its latest State of DevOps report, highlighting the evolving landscape of software delivery. The findings were clear: high-performing teams aren’t just faster—they’re safer, more resilient, and more developer-friendly.
As engineering leaders, it's time we stop treating performance as just a speed game. Modern software delivery is a balance of velocity, reliability, and developer experience.
This post reframes key engineering metrics through the lens of DORA 2023—and introduces a few complementary metrics that expose the hidden friction in your pipeline.
The Core Five: DORA 2023 Metrics
These metrics form the foundation of performance analysis in elite engineering organizations.
1. Lead Time for Changes
Definition: Time from code committed to code successfully running in production.
This reflects the speed of innovation and feedback loops. DORA 2023 reaffirms it as one of the strongest indicators of delivery performance.
- What to track: Time from first commit (or PR creation, as a practical proxy) to successful production deployment
- Why it matters: Long lead times discourage iteration, delay feedback, and frustrate teams
2023 Benchmark:
- Elite: <1 hour
- Low: >6 months
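As a concrete sketch, lead time can be computed directly from commit and deploy timestamps exported from your CI/CD system. The record fields here (`committed_at`, `deployed_at`) are illustrative names, not any specific tool's API:

```python
from datetime import datetime
from statistics import median

def lead_times_hours(changes):
    """Per-change lead time (commit -> production) in hours."""
    times = []
    for change in changes:
        committed = datetime.fromisoformat(change["committed_at"])
        deployed = datetime.fromisoformat(change["deployed_at"])
        times.append((deployed - committed).total_seconds() / 3600)
    return times

# Illustrative records exported from a CI/CD system.
changes = [
    {"committed_at": "2023-06-01T09:00:00", "deployed_at": "2023-06-01T09:45:00"},
    {"committed_at": "2023-06-01T10:00:00", "deployed_at": "2023-06-01T13:00:00"},
]
print(f"median lead time: {median(lead_times_hours(changes)):.2f}h")
```

Track the median (or p90) rather than the mean: a handful of long-lived branches will skew an average badly.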
2. Deployment Frequency
Definition: How often your team deploys code to production.
This measures how continuously you can deliver value. Frequent deploys typically correlate with small batch sizes, less risk, and healthier teams.
- What to track: Production deployments per team per day or week (staging deploys are a useful leading indicator, but DORA counts production)
- Why it matters: High deployment frequency enables fast fixes, feature flags, and continuous validation
2023 Benchmark:
- Elite: Multiple times per day
- Low: Fewer than once every 6 months
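Turning a deploy log into a frequency number is simple arithmetic. A minimal sketch, assuming you can export deploy dates (the sample data is invented):

```python
from collections import Counter
from datetime import date

def avg_deploys_per_day(deploy_dates, window_days):
    """Average production deployments per day over a reporting window."""
    return len(deploy_dates) / window_days

def busiest_day(deploy_dates):
    """Day with the most deployments; useful for spotting batching."""
    day, count = Counter(deploy_dates).most_common(1)[0]
    return day, count

deploys = [date(2023, 6, 1), date(2023, 6, 1), date(2023, 6, 2)]
print(avg_deploys_per_day(deploys, window_days=7))
```

Comparing the average against the busiest day reveals batching: a team that deploys ten times on Friday and never otherwise has the same average as one that ships daily, but a very different risk profile.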
3. Change Failure Rate
Definition: The percentage of deployments causing degraded service, bugs, or rollbacks.
This represents the cost of change. DORA teams emphasize failure rate as a critical counterbalance to deployment speed.
- What to track: Failed deployments / total deployments
- Why it matters: High rates indicate poor test quality, integration issues, or insufficient validation
2023 Benchmark:
- Elite: 0–15%
- Low: 46–60%
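The ratio itself is trivial to compute; the hard part is agreeing on what counts as "failed." A sketch, where the `failed` flag is an assumed field meaning rolled back or linked to an incident:

```python
def change_failure_rate(deployments):
    """Fraction of deployments that caused a rollback or incident."""
    if not deployments:
        return 0.0
    failed = sum(1 for d in deployments if d["failed"])
    return failed / len(deployments)

# Illustrative deploy log: mark a deploy "failed" if it was rolled
# back or tied to an incident, however your team defines that.
deployments = [{"failed": False}, {"failed": True},
               {"failed": False}, {"failed": False}]
print(f"{change_failure_rate(deployments):.0%}")
```

Whatever definition you pick, apply it consistently; a rate that changes because the definition changed tells you nothing about the pipeline.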
4. Mean Time to Recovery (MTTR)
Definition: Time from incident detection to full restoration of service.
Failures happen. What matters is how quickly you can detect, diagnose, and recover.
- What to track: Incident start to incident resolution
- Why it matters: Fast MTTR builds trust in your system and your team
2023 Benchmark:
- Elite: <1 hour
- Low: >6 months
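MTTR falls out directly from incident timestamps. A minimal sketch, with invented incident records (`detected_at`, `resolved_at` are assumed field names):

```python
from datetime import datetime

def mttr_minutes(incidents):
    """Mean time from incident detection to resolution, in minutes."""
    durations = [
        (datetime.fromisoformat(i["resolved_at"])
         - datetime.fromisoformat(i["detected_at"])).total_seconds() / 60
        for i in incidents
    ]
    return sum(durations) / len(durations)

incidents = [
    {"detected_at": "2023-06-01T10:00:00", "resolved_at": "2023-06-01T10:30:00"},
    {"detected_at": "2023-06-02T14:00:00", "resolved_at": "2023-06-02T15:30:00"},
]
print(mttr_minutes(incidents))  # 60.0
```

As with lead time, a median often tells a truer story than the mean when one incident drags on for days.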
5. Reliability
Definition: The degree to which your system meets operational expectations (uptime, latency, etc.).
Added in recent years, reliability reflects customer-facing system health, not just internal delivery metrics.
- What to track: SLAs, SLOs, incident frequency, alert fatigue
- Why it matters: A fast pipeline is meaningless if your system is down or unstable
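Reliability is usually tracked against an SLO, and one common framing is the error budget. A sketch for a simple availability SLO over a request count (the numbers are illustrative):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget still unspent for an availability SLO.

    Goes negative once the budget is blown.
    """
    allowed_failures = (1 - slo_target) * total_requests
    return 1 - failed_requests / allowed_failures

# A 99.9% SLO over 1M requests allows ~1,000 failures.
print(round(error_budget_remaining(0.999, 1_000_000, 250), 4))  # 0.75
```

A shrinking error budget is a natural gate for deployment frequency: when the budget is nearly spent, slow down and invest in stability.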
Summary of DORA 2023 Core Metrics
| Metric | Definition | What to Track | Elite Benchmark |
|---|---|---|---|
| Lead Time for Changes | Time from commit to production | PR open to prod deploy | < 1 hour |
| Deployment Frequency | How often code is deployed to prod | Prod deploys per day/week | Multiple times per day |
| Change Failure Rate | % of deployments causing incidents | Failed/rolled-back deployments | 0–15% |
| Mean Time to Recovery | Time to restore service after failure | Time from incident to resolution | < 1 hour |
| Reliability | System’s ability to meet uptime and performance expectations | SLOs, uptime, incident rate | SLO-compliant & stable |
Beyond DORA: Metrics That Capture Developer Experience
While DORA gives us delivery health, it doesn’t tell the whole story. Developer Experience (DevEx) has emerged as a critical success factor—and these supporting metrics help illuminate it.
6. Build Time
Definition: Time to compile, test, and package a change.
While not in DORA, build time directly affects lead time and developer flow. Long builds waste time, reduce confidence in the CI system, and lead to risky batch commits.
- What to track: CI job duration (cold and hot builds)
- Why it matters: Developers won’t iterate quickly if builds punish them for trying
Tip: Watch for variance—flaky builds are worse than slow ones.
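One way to see both slowness and variance at a glance is a short summary over recent build durations. The percentile helper below uses the nearest-rank method, and the sample durations are invented:

```python
import math
from statistics import mean, pstdev

def percentile(values, p):
    """Nearest-rank percentile, p in (0, 100]."""
    ranked = sorted(values)
    k = max(0, math.ceil(p / 100 * len(ranked)) - 1)
    return ranked[k]

durations_sec = [100, 110, 120, 130, 600]  # one outlier build
print(mean(durations_sec), percentile(durations_sec, 95),
      round(pstdev(durations_sec)))
```

The mean alone looks tolerable here; the p95 and a standard deviation close to the mean are the tell that something is unstable. Track spread, not just the average.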
7. Queue Time
Definition: Time a CI/CD job waits before it starts executing.
This is the invisible bottleneck in scaling engineering teams. When everyone’s building at once, even the best pipelines slow down.
- What to track: Time from job submission to execution start
- Why it matters: Queue time reflects infrastructure limits and resource contention
Insight: DORA emphasizes speed, but speed can’t happen when runners are exhausted.
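Queue time is just the gap between two timestamps most CI systems already record. A sketch with assumed field names (`submitted_at`, `started_at`) and invented data:

```python
from datetime import datetime

def queue_waits_sec(jobs):
    """Seconds each CI job waited between submission and execution start."""
    return [
        (datetime.fromisoformat(j["started_at"])
         - datetime.fromisoformat(j["submitted_at"])).total_seconds()
        for j in jobs
    ]

jobs = [
    {"submitted_at": "2023-06-01T09:00:00", "started_at": "2023-06-01T09:00:05"},
    {"submitted_at": "2023-06-01T12:00:00", "started_at": "2023-06-01T12:07:00"},
]
print(max(queue_waits_sec(jobs)))  # worst wait, in seconds
```

Plot queue time by hour of day: contention is usually concentrated around peak merge windows, which is exactly when developers feel it most.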
8. Frequency of Broken Main
Definition: How often the main branch is in a non-deployable state.
This is a culture and process smell. A consistently broken main branch means poor test coverage, weak gating, or rushed reviews.
- What to track: % of time main is red / incident count from merges
- Why it matters: Main should be sacred. If you can’t trust it, everything slows down.
Actionable Tip: Set up merge protection, fast revert paths, and daily "green checks".
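If your CI system can export main-branch status transitions, the percentage of time main is red reduces to interval arithmetic. A sketch with invented timestamps, assuming main is green at the start of the window:

```python
from datetime import datetime

def red_fraction(events, window_start, window_end):
    """Fraction of [window_start, window_end] during which main was red.

    events: chronologically sorted (timestamp, status) pairs, where
    status is "red" or "green"; main is assumed green at window_start.
    """
    red_seconds = 0.0
    status, prev = "green", window_start
    for ts, new_status in events:
        if status == "red":
            red_seconds += (ts - prev).total_seconds()
        status, prev = new_status, ts
    if status == "red":
        red_seconds += (window_end - prev).total_seconds()
    return red_seconds / (window_end - window_start).total_seconds()

events = [
    (datetime(2023, 6, 1, 2), "red"),
    (datetime(2023, 6, 1, 3), "green"),
]
print(red_fraction(events, datetime(2023, 6, 1, 0), datetime(2023, 6, 1, 10)))
```

Even a rough number here is valuable: a main branch that is red 10% of the working day blocks every merge in that window.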
9. Engineering Overhead
Definition: Time engineers spend on infra, CI tooling, and environment management instead of feature development.
This metric ties directly to platform maturity and DevEx quality—themes DORA 2023 heavily emphasizes.
- What to track: % of sprint hours spent on pipeline bugs, test flakiness, tooling fixes
- Why it matters: Engineers drowning in overhead are engineers not shipping value
How to measure: Use retros, dev satisfaction surveys, or tag internal tickets. Patterns emerge quickly.
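If you tag internal tickets as suggested, the overhead share is a one-line aggregation. The tag names and ticket shape below are assumptions for illustration:

```python
def overhead_fraction(tickets, overhead_tags=frozenset({"ci", "infra", "tooling"})):
    """Share of logged hours spent on overhead rather than features."""
    total = sum(t["hours"] for t in tickets)
    if total == 0:
        return 0.0
    overhead = sum(t["hours"] for t in tickets if t["tag"] in overhead_tags)
    return overhead / total

tickets = [
    {"tag": "feature", "hours": 30},
    {"tag": "ci", "hours": 6},
    {"tag": "infra", "hours": 4},
]
print(f"{overhead_fraction(tickets):.0%}")
```

The absolute number matters less than the trend: a rising overhead share is an early warning that platform investment is overdue.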
Closing Thoughts: DORA + DevEx Is the Future
DORA’s metrics provide a science-backed foundation for measuring delivery performance. But they’re not the end of the story.
In 2023, DORA explicitly emphasized:
- Developer Experience (DevEx)
- Platform engineering
- Psychological safety
- Sustainable delivery culture
To build elite teams, you must measure what matters and remove what hurts. That includes invisible friction: long waits, flaky builds, overloaded tools, and broken feedback loops.
Your goal isn’t just faster code—it’s a system that enables developers to move confidently, safely, and joyfully.
That’s real velocity.
References:
- 2023 State of DevOps Report – DORA
- Accelerate: The Science of Lean Software and DevOps by Forsgren, Humble, and Kim