“They Don't Know What They Want!” and a Few Ruthless Questions About Estimation in Corporate IT

Estimating the effort required for digital transformation projects is not an easy task, especially with incomplete information in hand. If one doesn't know in sufficient detail what the business solution to be built has to do, how can one estimate correctly? In the face of such a seemingly unchallengeable truth, my only recommendation is to look at the problem from another angle by asking these simple but ruthless questions:

Q1: Why are there so many unknowns about the requirements when estimation time comes?

Instead of declaring that requirements are too vague to produce reliable estimates, couldn't we simply get better requirements? My observation is that technical teams that need clear requirements aren't pushing hard enough on the requesting parties. This could be rooted in a lack of direct involvement in the core business affairs, an us-and-them culture, an order-taker attitude, or all of the above. Whatever the reason, there is a tendency to accept this lack of clarity as an ineluctable fact of life rather than asking genuine questions and doing something about it.

Q2: Why do IT people need detailed requirements for estimation?

There are industries that get pretty good estimates from very rough requirements. In the construction world, with half a dozen questions and a square footage number, experts can give a range that's pretty good, at least compared to IT projects. I can hear from a distance that IT projects are far more complex, that “it's not comparable”, and so on. These points are true, but they do not justify the laxity with which corporate IT teams tackle the estimation process. The construction industry has worked hard to get to that point and relentlessly seeks to improve its estimation performance.

Couldn't IT teams develop techniques to assess what has to be done with rough requirements, then refine those requirements, re-assess the estimates, and learn from the discrepancies between rough and detailed to improve their estimation technique? Read that last sentence carefully: I did not write 'improve their estimates' but rather 'improve their estimation technique'. Digital teams are good at the former but mediocre at the latter. IT staff know how to re-assess once more detailed requirements are known, but they are clueless about refining the technique itself.
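To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch of a calibration log a team could keep. The project names, numbers, and helper functions are hypothetical assumptions of mine, not a description of any existing practice or tool.

```python
# Illustrative sketch only: a hypothetical calibration log a team could keep
# to improve its estimation *technique*, not just individual estimates.
from dataclasses import dataclass
from statistics import median


@dataclass
class ProjectRecord:
    name: str
    rough_estimate_days: float      # estimate made from rough requirements
    detailed_estimate_days: float   # re-estimate once requirements were refined
    actual_days: float              # effort actually spent


# Hypothetical history of past endeavors (all numbers are made up).
history = [
    ProjectRecord("customer-portal", 120, 180, 210),
    ProjectRecord("billing-rewrite", 200, 260, 300),
    ProjectRecord("hr-self-service",  80, 100, 130),
]


def calibration_factor(records: list[ProjectRecord]) -> float:
    """Median ratio of actual effort to the rough estimate.

    Tracking this ratio over time is the 'technique improvement':
    if it trends toward 1.0, rough estimates are getting better.
    """
    return median(r.actual_days / r.rough_estimate_days for r in records)


def calibrated_range(rough_estimate_days: float,
                     records: list[ProjectRecord],
                     spread: float = 0.25) -> tuple[float, float]:
    """Turn a new rough estimate into a range using past discrepancies."""
    factor = calibration_factor(records)
    center = rough_estimate_days * factor
    return (center * (1 - spread), center * (1 + spread))


if __name__ == "__main__":
    low, high = calibrated_range(150, history)
    print(f"Calibration factor: {calibration_factor(history):.2f}")
    print(f"Calibrated range for a 150-day rough estimate: {low:.0f}-{high:.0f} days")
```

The point of the sketch is not the arithmetic, which is trivial, but the habit: keeping the rough estimate, the refined estimate, and the actual side by side so that the discrepancy itself becomes the thing being improved.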

Q3: Is IT the only engineering field where customers don't know in detail what they want at some point?

Of course not! All engineering fields where professionals have to build something that works face the challenge of customers not knowing what they want, especially at the early stages. Rough requirements can be as vague as “A new regional hospital”, “A personal submarine”, “A multi-sports stadium”, “A log home”, or “A wedding gown”. Professionals in these other fields genuinely work at improving their estimation skills and techniques even with sketchy requirements. But not so in corporate IT.

Q4: Who’s accountable for providing the requirements? 

The standard answer is that it should come from the user or the paying customer, and that's fair. The problem is that IT folks have pushed that statement too far, distorting it to the point where requirements are expected to fall from the sky detailed enough for precise estimation, or else be rejected! This has led to the overused complaint that “Users don't know what they want!” And that's not fair, especially when it is used to declare that estimating is a useless practice. Which leads to the next question.

Q5: Who’s accountable for getting clear requirements?

That's the most interesting question. Read it carefully: it differs from the previous one. It's about getting the requirements, and being accountable for getting clear requirements. Digital systems are not wedding gowns or log homes; non-IT people often have a hard time understanding how and what to ask for. Whose responsibility is it to help them? If the requirements aren't clear enough, who's accountable for doing something about it? The answer to all these questions should be those who have the knowledge, and that's generally the IT folks. What I observe in the field is that IT staff too often nurture an us-versus-them culture where “they don't know what they want”. Let's turn that statement around for a moment: “We don't know what to do”. Isn't that an interesting way to see things? It's no longer that they don't know what they want, but rather that the IT teams don't know what to build to provide the outcome the organization needs.

Q6: Who’s accountable for knowing what to do? 

We all know who they are. Seeing the problem from that end, under a different light, may substantially reduce the cases where “they don't know what they want” is a valid point.

Agile™ and Iterative Development to the Rescue! Or Are They?

The requirements-clarity issue has led smart IT people to use iterative prototyping to solve it for good. The idea is ingenious and simple: build smaller pieces of the solution within a short period of time, show that portion to the users, and let them determine whether that's what they thought they wanted. That's great, and it's one reason why Agile™ methods have gained such widespread acceptance. However, iterative prototyping doesn't solve everything, and it certainly sidesteps a few important issues:

Q7: Are users getting better at understanding their requirements with Agile™?

Are sponsors and users getting any better at knowing what they need before they involve a technical team? Of course not. Things haven't improved on that front with Agile™ methods, or any other iterative prototyping technique for that matter.

Q8: Could prototyping be used as a means for improving how people define requirements? 

It certainly could, but that opportunity is not being seized. Worse, prototyping encourages laxity in the understanding of the requirements. After all, if we're going to get something every three weeks that we can show our sponsor, why should we spend time comprehending the requirements and detailing them? That's a tempting path of least effort for any busy fellow. The problem is that thinking a bit more, asking more questions, writing down requirements, and having others read and comment on them takes an order of magnitude less effort than mobilizing a whole team to deliver a working prototype in three weeks. Yet the former option is neglected in favor of having fun building something at the patron's expense.

The False Innovation Argument

Iterative prototyping is used across the board for all kinds of technology-related change endeavors, including those with little to no innovation at all. Do not be fooled into thinking that everything the IT teams are doing is cutting-edge innovation.

In fact, I posit that for the vast majority of the work done, the real innovation has occurred in the very early stages, often at a purely business level, totally detached from technology. What I see in most endeavors is IT teams building mainstream solutions that have been built dozens or hundreds of times, within your organization or in others. Why, then, is iterative prototyping required? In those cases, using iterative development methods is less about clarifying requirements than about managing the uncertainty around teams not knowing how to build the solution or not understanding the systems they work on.

In many cases, using Agile™ is a means for managing the uncertainty around IT folks not knowing how to do it.

Did I ask this other cruel question: who’s accountable for knowing the details of the systems and technologies in place? You know the answer, so it’s not in the list of questions. It’s more like a reminder.

And finally, the most important question related to estimation:

Q9: Is iterative prototyping helping anyone get better at estimating?

Of course not. The whole topic is tossed aside as irrelevant, when not squarely labelled as evil, by those who believe that precious time should be spent developing the next iteration of the product rather than guessing at the future.

The Rachitic (or Dead) Estimation Practice

The consequence is that no serious estimation practice has developed within corporate IT. Using the above impediments about 'not knowing what they want' to explain why estimates are so often off the mark is one thing. Using these hurdles as an excuse not to get better at estimating is another. IT projects are very good at counting how much something actually cost and comparing it to what was budgeted. But no one in IT has any interest in comparing actual costs with what was estimated, with the genuine intent of producing better estimates the next time.

This flabbiness in what should be a continuous and relentless quest to improve at estimating is rooted in a very simple reality: corporate IT is the one and only provider serving your needs, supplying your organization with everything under the IT sun. On the infrastructure side of IT, competition has long been around, aggressively offering your organization services that are alternatives to the in-house function. The other portion of corporate IT, the one driving change endeavors and managing your application systems, operates in a dream business model: one locked-in customer that pays for all expenses, wages and bonuses, and pays by the hour. When wrong estimates make you lose neither your shirt nor any future business opportunity, the effort of producing better ones can safely be directed elsewhere, where the risks are more immediate.

Don’t Ask for Improvement, Induce It

These behaviors cannot be changed or improved without incentives for betterment. Unfortunately, the current, typical engagement model of corporate IT in your organization is a major blocker. Don't ask your IT teams to fix it: they're stuck in the model. The ones who can change the game are not working on the IT shop floor.

Want some sustainable improvement? Start your journey by understanding the issues and their true root causes.