
Estimating Software Work: Why We're Always Wrong and How to Be Less Wrong


Every software team has the same experience: the estimate was two weeks, the delivery was six weeks. This is so universal that it has a name — Hofstadter's Law: "It always takes longer than you expect, even when you take into account Hofstadter's Law."

By 1997 I had been on enough late projects to start asking why, and what could be done about it.

Why Estimates Are Systematically Too Low

We estimate the work we can see. The first hour of thinking about a task reveals the obvious subtasks. The second hour reveals the non-obvious ones. Most estimates are made after the first hour.

We confuse optimistic outcomes with likely outcomes. If everything goes well — no blockers, no interruptions, requirements stay stable, dependencies deliver on time — the task takes two weeks. We estimate as if this is the likely outcome. It is not; it is the best case.

We anchor on past velocity without adjusting for current complexity. This feature took two weeks last quarter; this one seems similar, so two weeks. But "similar" is assessed by surface characteristics, not by underlying complexity.

Integration takes longer than we predict. A component that works in isolation always requires some additional work to integrate with the rest of the system. This cost is systematically underestimated because it is invisible until you start the integration.

Rework is not in the estimate. The first implementation is rarely the final one. Code that passes review still gets changed. Requirements that seemed clear get clarified. The estimate covers the first implementation; the calendar also contains the rework.

Approaches That Actually Help

Decompose until each task is one to three days. Estimates for tasks longer than three days are unreliable because the task is not fully understood. If you cannot describe a two-week task as a sequence of one-to-three-day subtasks, you do not understand it well enough to estimate it.

Estimate in ranges, not point values. "Two to four weeks" is more honest than "two weeks" and more useful — it communicates uncertainty. Point estimates anchor on the optimistic end; ranges force the estimator to think about what could go wrong.
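As a rough illustration, per-task ranges can be rolled up into a project range. The task list and numbers below are hypothetical:

```python
# Roll up per-task (optimistic, pessimistic) day estimates into a
# project-level range. Task names and figures are made up.
tasks = {
    "api endpoint": (2, 4),   # (optimistic days, pessimistic days)
    "data migration": (1, 3),
    "ui work": (3, 6),
}

low = sum(lo for lo, hi in tasks.values())
high = sum(hi for lo, hi in tasks.values())
print(f"Project estimate: {low}-{high} days")  # Project estimate: 6-13 days
```

Summing lows and highs gives the widest honest bounds; the actual total usually lands between the extremes, but the range still communicates the uncertainty a point estimate hides.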

Use historical data. Track actuals. Compare them to estimates. Over time, if your two-week estimates consistently take four weeks, apply a factor. Do not assume the next estimate will be different without understanding why the previous ones were wrong.
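A minimal sketch of deriving such a factor, assuming you have tracked (estimated, actual) pairs; the history below is invented for illustration:

```python
# Compute a calibration factor from historical (estimated, actual) pairs.
# In practice, pull these from your issue tracker; this data is made up.
history = [(10, 19), (5, 11), (14, 30)]  # (estimated days, actual days)

factor = sum(actual for _, actual in history) / sum(est for est, _ in history)
print(f"Calibration factor: {factor:.2f}")  # Calibration factor: 2.07

raw_estimate = 8
print(f"Calibrated estimate: {raw_estimate * factor:.1f} days")
```

The factor is a blunt instrument; it corrects the systematic bias but not its cause, which is why the article insists on also understanding why previous estimates were wrong.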

Separate the estimate from the commitment. The estimate is what you think it will take. The commitment is what you agree to deliver. These are not the same. Estimates can be uncertain; commitments must be reliable. When asked for a date, give a range and explain the risks rather than a single confident date you do not believe.

Add a buffer for integration and testing. However long you think the implementation will take, add 30% for integration and 20% for testing. Not as a fudge factor — as an honest recognition that these phases always exist and are always underestimated.
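Sketched as arithmetic, using the percentages above (the function name is mine, chosen for illustration):

```python
# Apply the 30% integration and 20% testing buffers from the article
# to a raw implementation estimate.
def buffered_estimate(implementation_days: float) -> float:
    integration = implementation_days * 0.30
    testing = implementation_days * 0.20
    return implementation_days + integration + testing

print(buffered_estimate(10))  # 15.0 -- a "two week" implementation is three
```

Note that the buffers compound with decomposition: applying them to the rolled-up total, not to each subtask, keeps the overhead visible as a single explicit line item.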

The Planning Fallacy

Daniel Kahneman named the "planning fallacy" — the tendency to make plans based on the best-case scenario while ignoring the distribution of past outcomes.

The remedy he recommends is "reference class forecasting": instead of asking "how long will this specific project take?", ask "how long did similar projects take?" Find comparable past work, look at the actual durations, and use the distribution to calibrate your estimate.

In practice: before committing to a six-month estimate for a major feature, look at the last three major features. How long did they take? Were they delivered faster or slower than expected? Use the empirical data, not the feeling.
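One way to sketch a reference-class forecast is with Python's standard `statistics` module on the actual durations of comparable past work; the durations below are hypothetical:

```python
# Reference-class forecasting: calibrate a new estimate against the
# distribution of actual durations for similar past features.
import statistics

past_feature_months = [4, 7, 9, 5, 12, 6]  # hypothetical actuals

p50 = statistics.median(past_feature_months)
quartiles = statistics.quantiles(past_feature_months, n=4)
print(f"Median: {p50} months, P75: {quartiles[2]} months")
# Median: 6.5 months, P75: 9.75 months
```

Quoting the median as the estimate and the 75th percentile as the commitment is one simple way to keep the two separated, as the article recommends.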

What Good Estimation Culture Looks Like

No single-point estimates for anything over a week. Ranges are required.

Estimates are revised as information improves. An estimate made in January with 20% of the requirements known is different from an estimate made in March with 80% known. Re-estimating is not a failure; it is how accurate estimates work.

Late estimates are investigated, not blamed. When a project runs long, the question is "what did our estimating process miss?" not "who gave the bad estimate?" You cannot improve estimation without understanding why it went wrong.

Estimates are separated from deadlines. A deadline imposed before estimation is not a derived date — it is a constraint. Treat it honestly: "we have a fixed deadline of X; given our estimates, here is what we can and cannot deliver by that date."