rm@rmbastien.com

Project-Oriented IT

“They Don’t Know What They Want!” and a Few Ruthless Questions About Estimation in Corporate IT

Estimating how much effort is required for digital transformation projects is not an easy task, especially with incomplete information in your hands. If one doesn’t know in sufficient detail what the business solution to be built has to do, how can they estimate correctly? In the face of such an unchallengeable truth, my only recommendation is to look at the problem from another angle and ask these simple but ruthless questions:

Q1: Why are there so many unknowns about the requirements when estimation time comes?

Instead of declaring that requirements are too vague for reliable estimation, couldn’t we simply get better requirements? My observation is that technical teams that need clear requirements aren’t pushing hard enough on the requesting parties. This could be rooted in a lack of direct involvement in core business affairs, an us-and-them culture, an order-taker attitude, or all of the above. Whatever the reason, there is a tendency to take it as an ineluctable fact of life rather than asking genuine questions and doing something about it.

Q2: Why do IT people need detailed requirements for estimation?

There are industries where they get pretty good estimates from very rough requirements. In the construction world, with half a dozen questions and a square footage, experts can give a range that’s pretty good compared to IT projects. I can hear from a distance that IT projects are far more complex, that “it’s not comparable”, etc. These are valid arguments, but they do not justify the laxity with which your corporate IT teams tackle the estimation process. The construction industry has worked hard to get to that point and relentlessly seeks to improve its estimation performance.

Couldn’t IT teams develop techniques to assess what has to be done with rough requirements, then refine those requirements, re-assess the estimates, and learn from the discrepancies between rough and detailed to improve their techniques? Read the last sentence carefully: I did not write ‘improve their estimates’ but rather ‘improve their techniques’. IT staff know how to re-assess when more detailed requirements are known, but they are clueless about refining their estimation techniques.
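One such technique can be sketched in a few lines. The data and field names below are invented for illustration, not an established method: a team that logged every project’s rough estimate, refined estimate, and final actual could derive a correction factor for its own early guesses, and watch that factor tighten as its techniques improve.

```python
from statistics import median

# Hypothetical history a team might have collected, in person-days:
# (rough_estimate, detailed_estimate, actual_cost) per project.
history = [
    (100, 160, 190),
    (50, 70, 85),
    (200, 260, 240),
    (80, 150, 170),
]

def calibration_factor(history):
    """Median ratio of actual cost to rough estimate across past projects."""
    return median(actual / rough for rough, _, actual in history)

def calibrated_rough_estimate(rough, history):
    """Correct a new rough estimate using the team's own track record."""
    return rough * calibration_factor(history)

# A new project roughly estimated at 120 person-days, corrected by history.
print(calibrated_rough_estimate(120, history))
```

The point of the sketch is not the arithmetic but the feedback loop: without the `history` log, there is nothing to refine.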

Q3: Is IT the only engineering field where customers don’t know in detail what they want at some point?

Of course not! All engineering fields where professionals have to build something that works face the challenge of customers not knowing what they want, especially at the early stages. Rough requirements can be as vague as “A new regional hospital”, “A personal submarine”, “A multi-sports stadium”, “A log home”, or “A wedding gown”. Professionals in these other fields genuinely work at improving their estimation skills and techniques even with sketchy requirements. But not so in corporate IT.

Q4: Who’s accountable for providing the requirements? 

The standard answer is that they should come from the user or the paying customer, and that’s fair. The problem is that IT folks have pushed that statement too far and distorted it to the point where requirements should fall from the skies, already detailed enough for precise estimation. This has led to the over-used statement that “Users don’t know what they want!” And that’s not fair, especially when it is used to declare that estimating is a useless practice. Which leads to the next question.

Q5: Who’s accountable for getting clear requirements?

That’s the most interesting question. It is different from the previous one, so read carefully. It’s about getting the requirements and being accountable for getting clear requirements. Digital systems are not wedding gowns or log homes. Non-IT people often have a hard time understanding how and what to ask for. Whose responsibility is it to help them? If the requirements aren’t clear enough, who’s accountable for doing something about it? The answer to all these questions should be: those who have the knowledge, and that’s generally the IT folks. What I observe is that IT staff too often nurture an us-versus-them culture built on “they don’t know what they want”. Let’s turn that statement around for a moment: “We don’t know what to do”. Isn’t that an interesting way to see things? It’s no longer that they don’t know what they want, but rather that the IT teams don’t know what to build to provide the outcome that the organization needs.

Q6: Who’s accountable for knowing what to do? 

We all know who they are. Seeing the problem from that end and in another light may substantially reduce the cases where “they don’t know what they want” is a valid point.

Agile™ and Iterative Development to the Rescue! Or is it?

The requirements-clarity issue has led smart IT people to use iterative prototyping to solve it for good. The idea is ingenious and simple: build smaller pieces of the solution within a short period of time, show that portion to the users, and let them determine if that’s what they thought they wanted. That’s great, and it’s one reason why Agile™ methods have gained such widespread acceptance. However, iterative prototyping doesn’t solve everything, and it certainly sidesteps a few important issues:

Q7: Are users getting better at understanding their requirements with Agile™?

Are sponsors and users getting any better at knowing what they need before they get any technical team involved? Of course not. Things haven’t improved on that front with Agile™ methods or any iterative prototyping technique for that matter.

Q8: Could prototyping be used as a means for improving how people define requirements?

It certainly could, but that is not being taken care of. Worse, prototyping encourages laxity in the understanding of requirements. After all, if we’re going to get something every three weeks that we can show our sponsor, why should we spend time comprehending the requirements and detailing them? That’s a tempting path of least effort for any busy fellow. The problem is that thinking a bit more, asking more questions, writing down requirements, and having others read and comment on them takes an order of magnitude less effort than mobilizing a whole team to deliver a working prototype in three weeks. The former option is neglected in favor of having fun building something on the patron’s cuff.

The False Innovation Argument

Iterative prototyping is used across the board for all kinds of technology-related change endeavors, including those with little to no innovation at all. Do not get fooled into thinking that everything IT teams are doing is cutting-edge innovation.

In fact, I posit that for the vast majority of the work done, the real innovation has occurred in the very early stages, often at a purely business level, totally detached from technology. What I see in most endeavors is IT teams building mainstream solutions that have been built dozens or hundreds of times, within your organization or in others. Why then is iterative prototyping required? In those cases, using iterative development methods is less about clarifying requirements than about managing the uncertainty around teams not knowing how to build the solution or not understanding the systems they work on.

In many cases, using Agile™ is a means for managing the uncertainty around IT folks not knowing how to do it.

Did I ask this other cruel question: who’s accountable for knowing the details of the systems and technologies in place? You know the answer, so it’s not in the list of questions. It’s more like a reminder.

And finally, the most important question related to estimation:

Q9: Is iterative prototyping helping anyone get better at estimating?

Of course not. The whole topic is tossed aside as irrelevant, when not squarely labelled as evil by those who believe that precious time should go to developing the next iteration of the product rather than guessing the future.

The Rachitic (or Dead) Estimation Practice

The consequence is that no serious estimation practice has developed within corporate IT. Using the above impediments about ‘not knowing what they want’ to explain why estimations are so often off the mark is one thing. Using these hurdles as an excuse not to get better at estimating is another. IT projects are very good at counting how much something actually cost and comparing it to how much was budgeted. But no one in IT has any interest in comparing actual costs with what was estimated, with the genuine intent of producing better estimates the next time.

This flabbiness in executing what should be a continuous and relentless quest for improvement in estimation takes root in a very simple reality: corporate IT is the one and only provider serving your needs, delivering to your organization everything under the IT sun. On the infrastructure side of IT, competition has long been aggressively offering your organization alternatives to your in-house function. But the other portion of corporate IT – the one driving change endeavors and managing your application systems – operates in a dream business model: one locked-in customer that pays for all expenses, wages and bonuses, and pays by the hour. When wrong estimates neither make you lose your shirt nor any future business opportunity, the effort for issuing better ones can safely be put elsewhere, where the risks are imminent.

Don’t Ask for Improvement, Change the Game

These behaviors cannot be changed or improved without providing incentives for betterment. Unfortunately, the current, typical engagement model of corporate IT in your organization is a major blocker. Don’t ask your IT teams to fix it: they’re stuck in the model. The ones that can change the game are not working on the IT shop floor.

Want some sustainable improvement? Start your journey by understanding the issues, and their true root causes.

Silo Generator #3


The two previous silo types could be labeled structural silos. They are almost permanent and vary only after major reorgs or when applications are introduced or retired. The third one, the project silo, is the most damaging type of silo.
Although projects and their rigorous management are an absolute must for any organization to govern change endeavors, their very nature and the absence of strong counterbalancing mechanisms make projects temporal silos.
Because all projects are temporary endeavors with a start and an end date, anything that happened before is not managed within the project, and anything that happens after is not taken into account either.

Learn more about how silos of all sorts hamper business agility: https://rmbastien.com/book-summary-the-new-age-of-corporate-it-volume-1/
#CorporateITGameChange #ITMeasures #ITQuality #Sustainability #BusinessAdaptability

The Inconsequential Repercussions of Poor Estimation in Project-Oriented IT

Estimating – the art of making educated guesses about how much time and money are required to perform something – is a difficult task, particularly in corporate IT. I have provided them, collected them, validated them, compiled them, suffered from them and abided by them, and let me assure you that this whole estimation business is far from trivial. Being a difficult task is one thing, but it should not be a reason to push the subject aside.

So let’s look at a classic scenario that I have seen in all corporate IT projects that I’ve been involved with:

  • The first estimations are made with very little knowledge about the requirements during the IT investment budgeting cycle, starting six months to more than a year before the project is effectively launched.
  • The budgeting cycle directly involves the IT managers who will be responsible for building the solution. It is their opinion that carries the most weight in the balance.
  • In the best-case scenario, technical experts, designers and architects will be involved in a quick tour of the requirements and a high-level design of the solution. In other, less ideal cases, the managers will make the estimates.
  • Estimates are made with very little time allotted for the exercise, with managers and experts busy delivering current-year projects and dozens of other projects to evaluate within just a few weeks.
  • No quantitative method is used because the IT team has never developed such methods. There is little usable historical data, apart from the actuals of past projects. The identification of analogous projects is left to the memory of people rather than a rigorous classification of past expenses.
  • After several rounds of investment prioritization, the remaining investment projects will be challenged on estimates.
  • Based on the same limited knowledge of the requirements, and with still very little quantitative data to back up their arguments, IT managers – sometimes with the help of their experts – will come up with more stringent assumptions in order to reduce the estimates and fit the expected budget.
  • At this point, the fear of having a given project cut from the investment list will have a definite effect on the level of optimism of the involved parties, both on the business sponsoring side and the IT team.
  • If the project makes it through the cuts, then in the next fiscal year a project team will be assembled. Only then will the true requirements be fleshed out with the help of business experts, leading to a more complete IT architecture.
  • This detailed knowledge will lead to re-estimation of the cost and schedule. Most of the time, the new estimates will be higher than the ones from the budgeting cycle estimates. If the budget cannot be trimmed, then features will be cut.
  • In some organizations, a gating process may be put in place to reassess the net business value of the IT investment in view of the more accurate costs and schedule. The project may not pass the gate, at which point it is cancelled.
  • However, in many organizations, IT investment gating is avoided – or is nonexistent – and the business sponsor, project manager and IT managers will work on the expected scope and schedule in order to deliver something of value within the current year.
  • If the business value cannot be achieved within the available budget/schedule, a change request may be issued, frequently justified by the falsehood of one or more of the original estimation assumptions.
  • Since there is no formal quantitative estimation model in place, there is no process to assess if the change requests are caused by flaws in the estimation practice, nor is there a way to address how it could be improved for future projects.
  • Upon completion, the project may deliver fewer functions or less business value than expected, but since the original requirements were pretty vague, it is difficult to assess the delta.

This classic sequence of events is but one of many variations that occur in IT organizations. Estimation-wise, its most important characteristic is that the estimation duty suffers from little rigor, no repeatability, an absence of relevant data collection, and archaic tools.
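The missing data collection is not complicated. As a sketch only – the project categories and ratios below are invented – a team that classified its past projects and kept each one’s actual-to-estimate ratio could turn any new point estimate into an honest range instead of a single wishful number:

```python
# Hypothetical record of past projects, classified by type, with the
# ratio of actual cost to initial estimate observed for each.
past_ratios = {
    "crm_integration": [1.4, 1.1, 1.8, 1.3],
    "reporting": [1.0, 1.2, 0.9],
}

def estimate_range(initial_estimate, project_type):
    """Turn a point estimate into a range based on analogous past projects."""
    ratios = sorted(past_ratios[project_type])
    low, high = ratios[0], ratios[-1]
    return (initial_estimate * low, initial_estimate * high)

low, high = estimate_range(500_000, "crm_integration")
print(f"Likely cost: {low:,.0f} to {high:,.0f}")
```

This is reference-class thinking at its crudest, yet it already replaces “the memory of people” with a classification of past expenses.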

In short, the corporate IT estimation discipline is so immature that it can’t be called a practice.  Things are mostly left to good intentions and experience.   

Even the Agile™ tidal wave isn’t bringing much improvement in that area. An iterative development method is a blessing for preventing large projects from becoming white elephants. It is also a benediction for eliciting requirements when complexity, unknowns, or ignorance significantly raise risk levels. But the Agile deployments I have seen are misleading many actors into thinking that the need to know in advance how much something is going to cost has suddenly become obsolete. There is always someone investing some amount to get some result. I have yet to see, read, or hear about any improvement in the rigor and effectiveness of the estimation process and its results provided by any development method, Agile or other. The agile way of tackling IT-related change has taken the ignominious waterfall method and sliced it to shorten delivery times and allow work to be reoriented. But work still has to be estimated before action, and calling it Planning Poker or T-shirt Sizing doesn’t make it more rigorous than any other technique I’ve witnessed in the past 30 years.

Agile™ methods have brought tangible improvements in corporate IT’s delivery effectiveness.  But from an estimation point of view, apart from cool names, the techniques are still based on good intentions and experience.
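Nothing prevents those techniques from being grounded in data rather than good intentions. A minimal sketch, with invented numbers: if a team simply recorded the actual effort behind every “S”, “M” and “L” it ever assigned, the labels would become measurements instead of hunches.

```python
from statistics import mean

# Hypothetical log of completed work items: t-shirt size -> actual person-days.
actuals_by_size = {
    "S": [2, 3, 2, 4],
    "M": [5, 8, 6],
    "L": [13, 10, 15],
}

def size_forecast(sizes):
    """Forecast total effort for a sized backlog from historical actuals."""
    avg = {size: mean(days) for size, days in actuals_by_size.items()}
    return sum(avg[s] for s in sizes)

# Forecast for a backlog of one small, two medium and one large item.
print(size_forecast(["S", "M", "M", "L"]))
```

The sizing ritual stays the same; only the link back to actuals is added, which is precisely the link corporate IT keeps neglecting.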

Corporate IT is nowhere close to being mature in the estimation practice. If someone in your IT function ever tries to talk you into the difficulties of building a reliable estimation process due to the newness of IT, spare your tears and start with this interesting quote:

False scheduling to match the patron’s desired date is much more common in our discipline than elsewhere in engineering. It is difficult to make a vigorous, plausible, and job-risking defense of an estimate that is derived by no quantitative method, supported by little data, and certified chiefly by hunches of the managers. […] Until estimating is on a sounder basis, individual managers will need to stiffen their backbones, and defend their estimates with the assurance that their poor hunches are better than wish-derived estimates.

This may look like an excerpt from a blog or a recent report from one of the IT observatories, and may appear quite apropos and contemporary. But here’s the embarrassment: this quote is from a landmark book, The Mythical Man-Month[1], published in 1975!

Does this mean that the estimation practice in corporate IT has been at a standstill for 40 years?  I’m afraid so. 

This standstill has occurred despite research on the subject, textbooks, and the development of estimation software. It has happened in the face of corporate IT’s pitiful track record for being on time and on budget, and while some organizations spend hundreds of millions of dollars on IT projects over multiple investment cycles. To make it short: accuracy of estimates is secondary, and that explains the generalized laxity on this topic across organizations and over decades.

How can such a serious weakness with such considerable monetary consequences not be the driver of a relentless quest for improvement? The answer is simple: there are no incentives to get any better.

There are very few consequences in corporate IT for bad estimates. Worse, there are tangible benefits to not improving. As I explain in my first book, there is no such thing as a Machiavellian IT plan to entrench in your organization a system to milk your hard-earned funds. There is simply an engagement model that doesn’t foster improvement in several key areas, estimation being one of them. By changing the game, IT will need to improve, will adapt, and will develop what it needs to get much better at estimating.


[1] F.P. Brooks Jr., The Mythical Man-month: Essays on Software Engineering, Addison-Wesley, 1975.

Perennial IT Memory Loss

There is a strange thing happening in corporate IT functions; a recurring phenomenon that makes the IT organization lose its memory. I’m not talking about total amnesia, but rather a selective one afflicting corporate IT’s ability to deal with the current state of the technical assets it manages. This condition becomes especially acute at the very beginning of a project focussed on implementing technical changes to drive business evolution. Here’s how it happens:

It all starts with project-orientation. As we discussed in another article, the management of major changes in your internal IT organization is probably project oriented. Projects are a proven conduit for delivering change. Thanks to current education and industry certification standards of practice, managed projects are undoubtedly the way to go to ensure that your IT investment dollars and the resulting outputs are tightly governed. Unfortunately, things start to slip when project management practices become so entrenched that they overshadow all other types of sound management, until the whole IT machine surrenders to project-orientation.

The Constraints of Project Scope

As you may know, by definition, and as taught to hundreds of thousands of project managers (PMs) worldwide, a project is a temporary endeavor. It has a start date and an end date. Consequently, what happens before kickoff and after closure is not part of the project.

The scope of the project therefore excludes any activity leading to the current state of your IT portfolio. The strengths or limitations of the foundational technical components that serve as the base matter from which business changes are initiated are considered project planning inputs. The estimation of the work effort to change current assets, or the identification and quantification of risks associated with the state of the IT portfolio, will always be considered nothing more than project planning and project risk management.

Further excluded from project management are considerations that apply after the project finish date. These factors encompass effects on future projects and consequences for the flexibility of platforms in the face of subsequent changes. Quality assessments are common project-related activities, likely applied as part of a quality management plan. But a project being a project, any quality criterion with impact exclusively beyond the project boundaries will carry less weight than those within a project’s scope – and by a significant margin. Procedures directly influencing project performance – that is, being on time and on budget (OTOB) – will be treated with diligence. All other desired qualities, especially those that have little to do with what is delivered within the current project, become second-class citizens.

Any task to control a quality criterion that does not help achieve project objectives (OTOB) becomes a project charge like any other, and an easy target for cost avoidance.

This ranking is more than obvious when a project is pressured by stakeholder timelines or when shortages of all sorts become manifest. Keep in mind that the PM is neck-deep in managing a project, not managing the whole technology asset lifecycle. Also remember that the PM has money only for processes happening within the boundaries of the project. After the project crosses the finish line, the PM will move on to another project, or may look for a new job or contract elsewhere.

When all changes are managed by a PM within a project, with little counterweight from any other type of management, corporate IT surrenders to project orientation. When no effective cross-cutting process exists independently of project management prerogatives, your IT becomes project oriented. I confidently suspect that your corporate IT suffers from this condition, unless you have already made the shift to the new age of corporate IT.

Project Quality vs. Asset Quality

Project orientation has a very perverse effect on how technology is delivered: all radars are focussed on projects, with their start and end dates, and the whole machine becomes bounded by near-term objectives. Short-term project goals in turn directly shape quality objectives and the means put in place to ascertain compliance. Again, since quality control is project funded and managed, the controls that directly impact project performance will always be favored, especially when resources are scarce.

In project-oriented IT, quality criteria such as the ability of a built solution to sustain change, or the complexity of the resulting assets don’t stand a chance.

The result is patent: a web of complex, disjointed, heterogeneous, and convoluted IT components which become a burden to future projects.

It’s here that the amnesia kicks in.

All IT Creations Are Acts of God

When the next project dependent on the previously created or updated components commences, everyone acts as if the state of these assets was just a fact of life.

Whatever the state of the assets in place, at the beginning of a new project it’s as if some alien phenomenon had put them in place; as if they were the result of an uncontrollable godly force external to IT.

Everyone in IT has suddenly forgotten that the complexity, heterogeneity, inferior quality, inflexibility, and any other flaws come from their own decisions, made during the preceding projects.

This affliction, like the spring bloom of perennial plants, repeats itself continuously. At the vernal phase of IT projects, when optimism and hopes are high, everybody looks ahead; no one wants to take a critical look behind. This epidemic has nothing to do with skills or good faith; it can instead be traced to how accountabilities are assigned and how performance is measured.

When all changes are subject to project-oriented IT management, the assets become accessory matter. Your corporate IT team delivers projects, not assets.

What Drives Quality

Making parallels between corporate IT work products and those of other fields is adventurous. Nevertheless, I need to find a way to explain what quality control means for corporate IT without getting technical.

Imagine for a moment that your corporate IT team was not delivering technology solutions to your business, but rather automobiles.  Also assume, for the sake of the parallel, that your usual corporate IT quality controls would be applied to these cars.

The car would be put on a tarmac track and a test driver would start the car, accelerate, turn right, turn left, and brake. She would also open all doors and windows, check the fuel gauge, engage the lights, turn on the radio, tilt the seats – everything. In short, all features would be tested for their practical effectiveness. The car would then be handed off to its owner. That’s it.

Are you tempted to say that this is enough? That if all features and functions are operational, then the quality is where it should be? Of course not.

Fortunately for carmakers and owners, some important points are missing from the quality control plan I’ve outlined above, especially determining how well the car is built and assessing its ability to handle sustained use over a period of time – long after its sale to a customer. In the automotive industry, these procedures address a world of additional concerns, such as: will the car be plagued with rust holes in 12 months? Will the brakes require changing every 1,000 miles? Will the corner garage mechanic need to drill a hole in the engine pan to make an oil change?

Carmakers understood long ago that features are not enough if the product does not exhibit many other qualities, like longevity, safety, maintainability, or reliability. But corporate IT is a strange beast whose behaviors often defy common sense.

So strange that the IT equivalent of drilling a hole in the oil pan is not that farfetched.

Project-Oriented IT and Quality Control

The scope of quality control on technology solutions can be qualified as business-requirements centric. Far be it from me to downplay the extent of the tests required to ensure that all requirements are fulfilled, but that’s far from enough. The resulting output can only suffer from inferior levels of excellence when certain areas aren’t duly inspected. It’s true for cars, and it applies universally to any situation where there’s a mix of human beings and tight schedules.

How simple will it be to expand the solution? How much effort will it take to retire it? Will future generations of IT staff have crystal-clear technical documentation at their fingertips? Can this solution easily integrate with other systems or technologies? These questions cannot be answered by controlling the correctness of features and functions.

To understand the dynamics responsible for deficient quality control of corporate IT output, one must first recognize that any change to existing assets, and any new asset creation, is made within the context of a project. This makes sense, since nobody wants multi-million-dollar endeavors governed by anything less than good project management practices.

The issue doesn’t lie with the use of project management wisdom. The problem is that corporate IT decision-making processes are heavily skewed toward the use of project management logic, even in cases where different rationales should be applied. I call this ubiquitous pattern Project-Oriented IT.

Remember that a project is, by definition, a temporary endeavor[1]; it must have a start date and an end date, or else it’s not a project. This also means that anything happening before the project start or after its finish will not be considered part of the project.

So, within our carmaker analogy, the project end date will be when the automobile is delivered to the customer with all promised features functional.  An IT project will be deemed complete when the solution and all of its components are successfully tested to make sure that every feature works properly.

These tests do not acknowledge issues that may (or may not) arise months or years later. A few moons after the IT solution is delivered, the project will have long been closed. Long-term quality does not fit easily into a project. In project-oriented IT, the equivalents of car maintenance costs, body rust, or the premature wearing of parts are rarely a concern.

QA Skills and Independence

“Aren’t corporate IT quality control processes intended to check all these things?” you might be tempted to ask. The sad but true answer is: not really. For all aspects of quality to be checked systematically and consistently, there needs to be a certain degree of separation between those who build quality and those who control its presence. In most cases, independent quality controls cover only business features and are carried out by the only unconnected parties in the equation: non-IT folks working for the business sponsor.

These individuals conduct checks according to their skill sets, which don’t include the technical knowledge required to look under the hood. Those who have the skills to inspect the engine and the cabling are probably busy welding another car (working on another project). Even when the internals of the solution are checked, the reviewers are rarely independent enough, because they work under the auspices of project-oriented IT, where many quality concerns are of lesser importance.

In an upcoming article, I show that conflicting roles lead stakeholders to quickly push back against any quality criterion that doesn’t directly help a project within its immediate lifecycle. You will also discover that these same accountability issues kill the independence required to perform quality controls covering all aspects of the value of what is delivered.

Your takeaway from this article is simple: when it comes to controlling the quality of what you get from your IT investment, you hardly get anything better than a test drive.

To change this, the distribution of measured accountabilities must change in such a way that all aspects of quality are evaluated, not just those that directly impact a project’s delivery. In a soon-to-be-published book, I dive into all aspects of IT that impede the creation of quality assets, all of them rooted in the distribution of roles, the accountabilities given to those roles, and the associated measures of performance.

[1] As defined by the Project Management Institute and applied by its hundreds of thousands of certified professionals.