
Measures

“They Don’t Know What They Want!” and a Few Ruthless Questions About Estimation in Corporate IT

Estimating how much effort is required for digital transformation projects is not an easy task, especially with incomplete information in your hands. If one doesn’t know in sufficient detail what the business solution to be built has to do, how can they estimate correctly? In the face of such an unchallengeable truth, my only recommendation is to look at the problem from another angle and ask these simple but ruthless questions:

Q1: Why are there so many unknowns about the requirements when estimation time comes?

Instead of declaring that requirements are too vague for performing reliable estimation, couldn’t we simply get better requirements? My observation is that technical teams that need clear requirements aren’t pushing hard enough on the requesting parties. This could be rooted in a lack of direct involvement in the core business affairs, an us-and-them culture, an order-taker attitude, or all of the above. Whatever the reason, there is a tendency to take vague requirements as an ineluctable fact of life rather than asking genuine questions and doing something about it.

Q2: Why do IT people need detailed requirements for estimation?

There are industries where they get pretty good estimates with very rough requirements. In the construction world, with half a dozen questions and the square footage, experts can give a range that’s pretty good compared to IT projects. I can hear from a distance that IT projects are far more complex, that “it’s not comparable”, and so on. These are valid arguments, but they do not justify the laxity with which your corporate IT teams tackle the estimation process. The construction industry has worked hard to get to that point and relentlessly seeks to improve its estimation performance.

Couldn’t IT teams develop techniques to assess what has to be done with rough requirements, then refine those requirements, re-assess the estimates, and then learn from the discrepancies between rough and detailed to improve their techniques? Read the last sentence carefully: I did not write ‘improve their estimates’ but rather ‘improve their techniques’. IT staff know how to re-assess when more detailed requirements are known, but they are clueless about refining their estimation techniques.
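To make that concrete, here is a minimal sketch of what ‘improving the technique’ could look like, assuming a team keeps both its rough budget-cycle estimates and its later detailed estimates on record. The data and function names are hypothetical, purely for illustration.

```python
# Hypothetical sketch: derive a calibration factor from past projects where both a
# rough (budget-cycle) estimate and a later detailed estimate were recorded.
from statistics import median

past_projects = [
    # (rough estimate, detailed estimate) in person-days -- made-up figures
    (200, 310),
    (120, 150),
    (400, 640),
    (80, 95),
]

def calibration_factor(history):
    """Median ratio of detailed estimates over rough ones in past projects."""
    return median(detailed / rough for rough, detailed in history)

def calibrated_rough_estimate(rough, history):
    """Adjust a new rough estimate by what history says rough estimates tend to miss."""
    return rough * calibration_factor(history)

print(calibrated_rough_estimate(250, past_projects))  # 350.0 with the sample data (factor = 1.4)
```

The point of the sketch is not the arithmetic; it is that the correction comes from recorded history rather than from memory or optimism.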

Q3: Is IT the only engineering field where customers don’t know in detail what they want at some point?

Of course not! All engineering fields where professionals have to build something that works face the challenge of customers not knowing what they want, especially at the early stages. Rough requirements can be as vague as “A new regional hospital”, “A personal submarine”, “A multi-sports stadium”, “A log home”, or “A wedding gown”. Professionals in these other fields genuinely work at improving their estimation skills and techniques even with sketchy requirements. But not so in corporate IT.

Q4: Who’s accountable for providing the requirements? 

The standard answer is that it should come from the user or the paying customer, and that’s fair. The problem is that IT folks have pushed such a statement too far and distorted it to the point where requirements are expected to fall from the sky, already detailed enough for precise estimation. This has led to an over-used and over-written statement that “Users don’t know what they want!” And that’s not fair, especially when it is used to declare that estimating is a useless practice. Which leads to the next question.

Q5: Who’s accountable for getting clear requirements?

That’s the most interesting question. The query is different from the previous one, so read carefully. It’s about getting the requirements and being accountable for getting clear requirements. Digital systems are not wedding gowns or log homes. Non-IT people often have a hard time understanding how and what to ask for. Whose responsibility is it to help them? If the requirements aren’t clear enough, who’s accountable for doing something about it? The answer to all these questions should be those who have the knowledge, and that’s generally the IT folks. What I observe is that IT staff are too often nurturing an us-versus-them culture where “they don’t know what they want”. Let’s turn that statement around for a moment: “We don’t know what to do”. Isn’t that an interesting way to see things? It’s no longer that they don’t know what they want, but rather that the IT teams don’t know what to build to provide the outcome that the organization needs.

Q6: Who’s accountable for knowing what to do? 

We all know who they are. Seeing the problem from that end and in another light may substantially reduce the cases where “they don’t know what they want” is a valid point.

Agile™ and Iterative Development to the Rescue! Or is it?

The clarity-of-requirements issue has led smart IT people to use iterative prototyping to solve it for good. The idea is ingenious and simple: let’s build smaller pieces of the solution within a short period of time, show that portion to the users and let them determine if that’s what they thought they wanted. That’s great, and that’s one reason why the Agile™ methods have had such widespread acceptance. However, iterative prototyping doesn’t solve everything, and it certainly avoids a few important issues:

Q7: Are users getting better at understanding their requirements with Agile™?

Are sponsors and users getting any better at knowing what they need before they get any technical team involved? Of course not. Things haven’t improved on that front with Agile™ methods or any iterative prototyping technique for that matter.

Q8: Could prototyping be used as a means for improving how people define requirements?

It certainly could, but that is not being taken care of. Worse, it encourages laxity in the understanding of the requirements. After all, if we’re going to get something every three weeks that we can show our sponsor, why should we spend time comprehending the requirements and detailing them? That’s a tempting path of least effort for any busy fellow. The problem is that thinking a bit more, asking more questions, writing down requirements, and having others read them and provide comments takes an order of magnitude less effort than mobilizing a whole team to deliver a working prototype in three weeks. The former option is neglected in favor of having fun building something on the patron’s cuff.

The False Innovation Argument

Iterative prototyping is used across the board for all kinds of technology-related change endeavors, including those that have little to no innovation at all. Do not get fooled into thinking that everything the IT teams are doing is cutting-edge innovation.

In fact, I posit that for the vast majority of the work done, the real innovation has occurred in the very early stages, often at a purely business level, totally detached from technology. What I see for most endeavors is IT teams building mainstream solutions that have been done dozens or hundreds of times within your organization or in others. Why then is iterative prototyping required? In those cases, using iterative development methods is less about clarifying requirements than about managing the uncertainty around teams not knowing how to build the solution or not understanding the systems they work on.

In many cases, using Agile™ is a means for managing the uncertainty around IT folks not knowing how to do it.

Did I ask this other cruel question: who’s accountable for knowing the details of the systems and technologies in place? You know the answer, so it’s not in the list of questions. It’s more like a reminder.

And finally, the most important question related to estimation:

Q9: Is iterative prototyping helping anyone get better at estimating?

Of course not. The whole topic is tossed aside as irrelevant, when not squarely labelled as evil, by those who believe that precious time should be spent developing a new iteration of the product rather than guessing the future.

The Rachitic (or Dead) Estimation Practice

The consequence is that there is no serious estimation practice developed within corporate IT. Using the above impediments about ‘not knowing what they want’ to explain why estimations are so often off-mark is one thing. Using these hurdles as an excuse not to get better at estimating is another. IT projects are very good at counting how much something actually cost and comparing it to how much was budgeted. But no-one in IT has any interest in comparing actual costs with what was estimated, with the genuine intent of producing better estimates the next time.
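To illustrate what that missing comparison could look like, here is a minimal sketch with made-up figures: compute, project by project, how far the estimate landed from the actual cost, and keep the average bias as an input to the next estimation round. Names and numbers are hypothetical.

```python
# Hypothetical sketch: compare what was estimated with what it actually cost,
# so the next round of estimates starts from evidence rather than memory.
projects = [
    # (name, estimated cost, actual cost) -- made-up figures
    ("CRM upgrade", 1_200_000, 1_750_000),
    ("Claims portal", 800_000, 920_000),
    ("Data warehouse refresh", 2_000_000, 3_100_000),
]

def relative_error(estimated, actual):
    """Signed error relative to the actual cost (negative means underestimated)."""
    return (estimated - actual) / actual

for name, estimated, actual in projects:
    print(f"{name}: {relative_error(estimated, actual):+.0%}")

mean_bias = sum(relative_error(e, a) for _, e, a in projects) / len(projects)
print(f"Average bias: {mean_bias:+.0%}")  # a systematic low bias points to the technique, not to bad luck
```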

This flabbiness in executing what should be a continuous and relentless quest for improvement in the exercise of estimating takes root in a very simple reality: corporate IT is the one and only provider serving your needs, supplying your organization with everything under the IT sun. On the infrastructure side of IT, competition has been around for a while, aggressively offering your organization services similar to those of your in-house function. But the other portion of corporate IT (the one driving change endeavors and managing your application systems) operates in a dream business model: one locked-in customer that pays for all expenses, wages and bonuses, and pays by the hour. When wrong estimates make you lose neither your shirt nor any future business opportunity, the effort for issuing better ones can safely be put elsewhere, where the risks are imminent.

Don’t Ask for Improvement, Change the Game

These behaviors cannot be changed or improved without providing incentives for betterment. Unfortunately, the current, typical engagement model of corporate IT in your organization is a major blocker. Don’t ask your IT teams to fix it: they’re stuck in the model. The ones that can change the game are not working on the IT shop floor.

Want some sustainable improvement? Start your journey by understanding the issues, and their true root causes.


Anything Missing When Measuring Corporate IT Performance?

Let me provide some reassurance about corporate IT: all the accountabilities that are linked to quantitatively gauged measures of performance are subject to rigorous management and are never neglected.

The two broad categories of clearly defined and clearly measured performance objectives are KTLO and OTOB, acronyms for Keep The Lights On and On-Time On-Budget, respectively.

The first category relates to IT operations. Corporate IT’s first and foremost responsibility is to make sure that what has been purchased, leased, built, installed, and proven to work the first time actually continues to do so, continuously and for as long as your business runs. IT operations are less glamorous from an innovation point of view. IT Ops – as it is often called – doesn’t invent new customer experiences. Neither does it re-architect your organization through radical business design.

But Ops is by far the most critical information technology function because its failure directly impacts the survival of your business in the very short term. If your organization cannot deliver services to your customers and partners, it literally ceases to exist. As such, IT operations are taken very seriously; everything IT does or manages is monitored and measured quantitatively, down to decimal fractions. Expectations on the quality, stability, and performance of operations are quantitatively defined up-front. Failure happens, but if the frequency or length of missteps rises above the agreed-upon performance levels, some people will get seriously nervous about their jobs.

“With the quantitatively measured performance objectives of IT Operations, if failure happens too often, people get nervous about their careers.”

The second category, OTOB, relates to the execution of business change endeavors. Over the past few decades, there have been many scholarly and trade discussions about the measurement of project performance, and how adequate – or not – the traditional triple evaluation scheme of cost-schedule-scope actually is. The model may have its limitations for those who are intimately involved in the execution of the endeavors that result in business change, but for those who command the change, assume the risks and reap the benefits – that is, you, the paying customer – this performance measurement triad makes a lot of sense. The cost is how much money you need to spend to get what you want or need. The schedule is the time required to get it. And the scope is the extent of what you get for your money.

Scope can be subject to much discussion, since the knowledge of what you want and what you really need in the end may differ quite a bit between the pre-project and end-of-project phases. To further complicate matters, there are as yet no universal units of measure for the scope of IT change projects. This imprecision contrasts with the universally understood measuring of cost and schedule.

That’s why many business people fall back on the sole use of on-time and on-budget as a comprehensive tool for assessing the performance levels of IT in delivering change, assuming that what is delivered (the scope) should be roughly what it ought to be for some business value stream to transmute to its new state.

“Scope of what is delivered by digital change projects is hard to measure and compare.  That’s why most business people will fall back on what they can grasp: on time and on budget.”

For change delivery, the necessity is not as acute as it is for IT operations. Failure to be on-time or on-budget doesn’t have the same impact on personal and team performance evaluations, but performance is fathomed nonetheless and delivery dates are managed.

So What’s Missing?

The major issue is that there are very few other quantitatively measured signs of excellence. The rest of IT is either subject to non-standard and qualitative evaluations or simply not measured at all. Non-quantified evaluations are debatable and easy to challenge on contextual differences. Non-standardized gauges are hard to compare.

In the end, IT measures itself for only a portion of what it does, focusing on improving what literally counts: where there are unchallengeable numbers with universally understandable units of measure. The rest is left to good intentions, or to how it is believed to positively impact OTOB or KTLO.

Notice that both KTLO and OTOB are measures of either immediate (KTLO) or short-term (OTOB) performance. ‘Keeping the lights on’ means continuous operations, or short transactional tasks. Change projects are by definition temporary endeavors with a beginning and an end. What happens after the project is finished is completely irrelevant. Even the major transformation programs are split into manageable chunks that often fit into a calendar year.

The IT management repercussion of this short-termism is that the lasting impact of IT’s work on your organization is veiled by short-range prerogatives.

The IT aspects that take the hit from short-term measures are quality and assets. More precisely, it is the quality of the work done that takes the hit, which in turn degrades the quality of the assets you get as a result.

Your organization’s capacity to adapt or respond quickly to changes in its environment is highly dependent on the quality of those assets. Asset readiness for change will suffer from lower-quality work done in previous projects.

Get the bigger picture in this book about things executives need to know about IT – it will help you understand how most IT teams are evaluated today. These typical metrics have a direct impact on what gets improved, but also on what isn’t being taken care of. Enjoy!

NYC Marathon

Is Your Corporate IT Good Enough? You Get What You Measure For.

I jog.

But honestly, I’ve never liked jogging; I only do it because I need to exercise and there are times when other physical activities aren’t feasible or require too much engagement to get started. Cycling when it’s pouring rain or cross-country skiing with bad snow conditions are not good ideas. Jogging, on the other hand, usually requires no more preparation than a quick change of clothes and lacing up running shoes. Since I don’t really like jogging, I just run and that’s it. Don’t ask me how many minutes it takes me to run a kilometer. Don’t ask me how many Ks I ran this morning or this week. Don’t ask me if I’m improving either. I don’t know because that’s not important to me. I don’t bring my smartphone running. My watch doesn’t calculate my heart rate. I just run for forty-some minutes until it makes me feel good and then I’m done.

If ever someone succeeded in convincing me to enroll in a marathon running club where members have to successfully finish at least one official 42.2-kilometer race per year—or else they lose their membership— things would be different. I would wear a smartwatch, track all my runs, plot my progress on a chart and be very serious about measuring.

“When it’s really important, you measure it. The same thing can be said about paid work.”

If the attainment of a certain goal is critical enough to be linked to a bonus, a promotion or keeping your employment, chances are it is appraised with numbers —or will be soon enough. Conversely, failure to attain an expected performance level that is gauged quantitatively is more likely to be treated as a performance problem. We all get the hidden message when a boss gives us numerically metered objectives: these goals are undoubtedly important.

This is universal enough to apply to your digital teams as well. Applying this common wisdom to corporate IT, three questions should be asked and answered:

  1. What are the outcomes expected from the work performed by corporate IT? Can you associate a set of objectives to these outcomes?
  2. Is the attainment of these objectives assessed? And if so, is it gauged quantitatively?
  3. And finally, do these measures of performance relate to the actual work performed and lead to empowered improvements?

The first question is the most crucial one because it directly impacts the answers provided to the next two. There’s nothing wrong with defining strategic objectives such as “driving business value through digital excellence” or any other objective that can be shared between IT folks and the rest of the organization. Bridging the great divide between technical teams and business stakeholders is certainly an objective that many —including yours truly— crave.

“At a very high level, giving IT teams business objectives to spur a culture of cooperation is fine. But to drive performance improvement, that’s far from enough.”

Given the huge distance between business objectives and the actual services provided by your IT function, such a link becomes arbitrary and out of reach from an IT viewpoint. Although it may be wise to link CIO performance to the organization’s success as a whole, the chosen business criteria would have to be translated into other measures of performance that IT teams can relate to and know they can improve upon. Market share or a customer satisfaction index do not provide IT staff any clue for betterment.

Then, whatever the chosen set of objectives, is it measured quantitatively? It’s not jogging, but it nevertheless needs to be metered. And as discussed above, if metrics are too far from what technical staff are actually working on, they won’t drive much improvement, and you may well get debatable numbers.

In this previous article —or better, by looking at the bigger picture in this book about things executives need to know about IT— you will understand how most IT teams are evaluated today. These typical metrics have a direct impact on what gets improved, but also on what isn’t being taken care of. Enjoy!

Complex But Not-So-Adaptive

How the Usual Engagement Model Doesn’t Foster Quick Self-Adjustment in Corporate IT

Your organization is a complex, open system[1]. Open, because it needs to interact with its environment to exist. Complex because it is made of a great number of interacting components, is hard to understand, is difficult to change and often yields unpredictable results. 

The General Systems Theory and its adaptive cousin Cybernetics have been around since the mid-20th century and still provide a useful, high-level understanding of what systems are and how they work. At the most distilled level possible, adaptive systems —such as your organization— can be viewed with as little as a box and a few arrows, as shown in figure 1.  

Figure 1

The grey box is your organization, a complex system. What’s inside the grey box is of little importance at this point; it simply represents the daily interactions happening within your business. Your organization is a system, with its inputs and its outputs. Inputs can be anything that gets in: the obvious resources (e.g. financial, natural, human, etc.) required to produce the output, but also all the other environmental inputs that your organization needs to take into account (e.g. legal, social, cultural, and more). Outputs are what your organization aims at providing to justify its existence. These outputs are targeted at customers or whoever benefits from what you are producing.

This highly simplified view of your organization allows me to draw your attention to something very important: your organization is an adaptive system. It adjusts itself, as most adaptive systems do, or it would have died long ago. Adapting means changing the internals of the system —the grey box— so that it continues to receive its inputs and to produce its outputs. In order to survive through adaptation, your organization benefits from a feedback loop, which provides useful data about how well the system is doing.

For most organizations, this response takes the form of ‘customer feedback’.  Feedback is sought through mechanisms such as surveys, focus groups and other tools to get better information on how happy the customer is about what your organization is doing.  But the most important feedback information type of all —the most effective one at triggering actual adaptation of your organization— is sales. 

Sales can be called market segment penetration, gross revenue, property taxes, or traveler miles, but let’s use that word to represent revenues coming back from your organization’s outputs. Positive feedback of this nature will signal your system to continue to operate in the same fashion. Negative results in sales will trigger rapid change. An important detail about sales: not only does it provide very effective feedback, it also has a short- to mid-term impact on one crucial input to your system —namely, financial resources. If sales plummet, so do revenues, and revenues from sales represent, for most organizations, the sole source of financial inputs. Sales feedback is directly linked to the survival of the system, to the viability of your organization. This is one of the reasons why this type of feedback is so effective at effecting change.

The feedback loop allows your business to adapt.  One of the most effective types of response is revenue (or any other variation), since it has a direct impact on the existence and survival of your organization.

What about corporate IT in this simple but effective scheme? One could rightfully point out that the IT function of the organization is simply one more component within the complex system. That’s true, but it would not serve the demonstration well. Furthermore, IT’s business is not your business —unless of course your business is IT, in which case this whole demonstration does not apply to you. Corporate IT’s business is to provide your organization with the goods and services that compose the technological solutions supporting the value streams and capabilities that allow your organization to still be in business. Corporate IT has always been and still is a support function to the greater whole, regardless of the levels of collaboration and teamwork between IT and non-IT staff in your organization. This statement might look harsh or even outdated in light of the current trend of meshing business and IT and declaring out loud that it all adds up to ‘one team’. That trend has great value at the shop-floor level to create processes and instill a culture that promotes efficiency. But whether you’re in the road construction industry and make very little use of information technologies, or you’re in the banking industry and your operations are totally meshed with IT, I nevertheless insist: it does not change IT’s position as a support function to the greater whole. Corporate IT teams are composed of individuals with clearly different education, training, work culture and career paths than the rest of your organization. They are not in the business of off-shore drilling, commodity investment, communications, banking, or social caretaking.

Now let’s add to this simple scheme another complex system representing corporate IT. This function of yours can be viewed as a provider to your business within a larger value chain. IT’s inputs are likewise resources: technical apparatus, skilled individuals, and projects that provide the requirements and funds. Its outputs are the technological solutions that mesh into your bigger business’s value streams. This interaction is depicted in figure 2.

Figure 2

So far so good, right? The latter figure looks like a copy-paste of figure 1. It is indeed very similar, at least from a cybernetics point of view. But there is a catch, and it hides itself in the feedback loop.  The most effective feedback mechanism, sales, is not part of the equation. 

In the case of your own business, customer feedback through focus groups or surveys is collected, analyzed and leads to action because it will help improve sales, or at least help you understand why sales are not what they should be. If you don’t make money, your business imperatively has to change, or it will soon die. But in the case of the IT function, something crucial is absent: there is no survival component, no presence of the ultimate incentive for improvement.

Figure 3

There isn’t any survival component that could keep your IT staff on the alert. But there is more: they witness, from year to year, a steady flow of IT investments coming from your business. Projects come and go, priorities fluctuate, business strategies evolve, but the level of IT investment is in general correlated to the financial health of your business as a whole.  It is certainly not tied in any way to the level of satisfaction you have towards the corporate IT subsystem, or to the feedback given.

Corporate IT’s business success is totally dependent on your own business success.  In three decades of working in corporate IT, I’ve seen budgets vary, waves of layoffs, staff optimizations, outsourcing, offshoring and nearshoring.  But never have I seen IT-budget variations based on pure IT performance.

That is why the IT complex system can delay adaptation for quite a lengthy period of time. As long as the mother system —your business— can adapt itself and survive in its respective industry, corporate IT has little to no survival twist to the feedback it receives.
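As a toy illustration of this asymmetry (the dynamics and numbers below are entirely made up; this is only the cybernetic argument restated in code), consider two systems: one whose funding depends on its own performance, and one whose funding simply tracks the health of its parent.

```python
# Toy model, illustrative only: adaptation pressure with and without a survival-linked feedback loop.
def simulate(years, performance, funding_follows_own_performance):
    history = []
    for _ in range(years):
        if funding_follows_own_performance:
            funding = performance        # poor results immediately shrink the inputs...
            if funding < 0.8:
                performance += 0.1       # ...which forces the system to adapt
        else:
            funding = 1.0                # steady budget regardless of own performance,
                                         # so nothing forces adaptation
        history.append((round(funding, 2), round(performance, 2)))
    return history

print("Business (sales feedback):   ", simulate(5, 0.6, True))
print("Corporate IT (no such loop): ", simulate(5, 0.6, False))
```

The first system is pushed to improve because its inputs shrink when its outputs disappoint; the second keeps receiving its inputs either way.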

How effective would your organization be at listening to your customers, collecting feedback, or analyzing their behavior if it wasn’t linked, directly or indirectly, to sales increase?  How good would you be at rapidly adapting your business if you had witnessed, for the past decades, an uninterrupted flow of yearly funding, always commensurate with your client’s financial health, regardless of its satisfaction towards your products or services? Probably a pale imitation of what your organization can do today to improve customer experience or optimize revenue streams. 

And pale it is, indeed.   In this article, I describe the truly important measures of performance in corporate IT.  You will discover that job-keeping accountabilities are always paired with quantitative measures of performance.  There are two broad categories of accountabilities for corporate IT teams. One represents the run-of-the-mill responsibilities for which there is little space left for interpretation. These include the availability of systems, application response time, pace of deployment of new versions, call-center wait time, new employee set-up delay, etc.  This category is definitely the one for which corporate IT is best equipped, tooled and prepared. It is no coincidence that the run-of-the-mill chores of IT are also the ones that are best supported by cross-industry standards, are regularly purchased from third parties, can be outsourced more easily, are the most easily auditable and are supported by the most comprehensive set of benchmark services.  When performance issues in your work can directly lead to dismissal or when the provided service can be purchased from external sources, you’ve got the survival component mentioned above.

The second category of corporate IT accountabilities is related to its capacity to deliver business change through new or revamped IT-based solutions. This includes the realm of investment projects, deployment of new platforms, digitalization, and all the names given to the endeavors that require mobilizing business and IT to deliver something new that will make your business thrive. This category is supported by little-to-no cross-industry standards, is highly customized to your company, doesn’t get outsourced easily, is costly and difficult to audit, and is supported by benchmarks that are too high-level or don’t fit with the peculiarities of your organization. That’s why most business people fall back on the only quantitative feedback they can understand: on-time and on-budget metrics. But there is a severe limitation to the effectiveness of these enshrined measures; namely, that the budgetary and temporal targets are set by the same team that is being measured on their attainment. IT people do lose their jobs for not attaining such targets, but it is fair to say that these cases are rare. The impact of failure in this second category is nowhere near the acuteness of the repercussions of a faux pas in the first category. There is no survival component in the latter.

The rest of the feedback for change comes in the form of qualitative appreciations that are admittedly useful but do not make the cut, since their impact on triggering adaptation within the IT function rarely represents a threat for people or teams. Moreover, because of the huge knowledge gap that exists between tech-savvy team members and the rest of your organization, most of these qualitative feedback items require substantial effort to be translated into actionable improvements at the technical level. No-one is against improvement, but when the time comes to put in the effort, the exertion is in direct competition with project priorities and the short-term objectives of on-time and on-budget delivery.

The feedback loop that helps corporate IT to adapt and improve the delivery of change projects is weakened by the absence of a component that links it to true survival. Compared to most businesses, corporate IT has little skin in the game and not much to lose by perpetuating the status quo or making only small changes to its modes of operation.

The engagement model between corporate IT and the rest of your business is at the deepest root of many issues that impede its ability to provide more value to your organization. It is also one of the fundamental reasons why you may have the impression that the IT function is in an everlasting state of immaturity. To get a better understanding of how your corporate IT works (or isn’t working), I invite you to take a quick read of this book: What You Should Know About Corporate IT But Were Never Told.

You will realize that changing these patterns requires radical change in the way corporate IT engages with the rest of the business, and more specifically, how accountabilities are distributed and measured.  Nothing less than a major revolution, triggered by business people, will allow IT to become a true adaptive system that can change itself to provide what you deserve.

————

References:

[1] The working definition at MIT for complex systems is: “A system with numerous components and interconnections, interactions or interdependencies that are difficult to describe, understand, predict, manage, design, and/or change.”
– Magee, C.L., de Weck, O.L., Complex System Classification, Fourteenth Annual International Symposium of the International Council On Systems Engineering (INCOSE), June, 2004

 

The Inconsequential Repercussions of Poor Estimation in Project-Oriented IT

Estimating – the art of making educated guesses about how much time and money are required to get something done – is a difficult task, particularly in corporate IT. I have provided estimates, collected them, validated them, compiled them, suffered from them and abided by them, and let me assure you that this whole estimation business is far from trivial. Being a difficult task is one thing, but it should not be a reason to push the subject aside.

So let’s look at a classic scenario that I have seen in all corporate IT projects that I’ve been involved with:

  • The first estimations are made with very little knowledge about the requirements during the IT investment budgeting cycle, starting six months to more than a year before the project is effectively launched.
  • The budgeting cycle directly involves the IT managers who will be responsible for building the solution. It is their opinion that carries the most weight in the balance.
  • In the best-case scenario, technical experts, designers and architects will be involved in a quick tour of the requirements and a high-level design of the solution. In other, less ideal cases, the managers will make the estimates.
  • Estimates are made with very little time allotted for the exercise, with managers and experts busy delivering current-year projects and dozens of other projects to evaluate within just a few weeks.
  • No quantitative method is used because the IT team has never developed such methods. There is little usable historical data, apart from the actuals of past projects. The identification of analogous projects is left to the memory of people rather than a rigorous classification of past expenses.
  • After several rounds of investment prioritization, the remaining investment projects will be challenged on estimates.
  • Based on the same limited knowledge of the requirements, and with still very little quantitative data to back up their arguments, IT managers, sometimes with the help of their experts, will come up with more stringent assumptions in order to reduce the estimates and fit the expected budget.
  • At this point, the fear of having a given project cut from the investment list will have a definite effect on the level of optimism of the involved parties, both on the business sponsoring side and the IT team.
  • If the project makes it through the cuts, then in the next fiscal year a project team will be assembled. Only then will the true requirements be fleshed-out with the help of business experts, leading to a more complete IT architecture.
  • This detailed knowledge will lead to re-estimation of the cost and schedule. Most of the time, the new estimates will be higher than the ones from the budgeting cycle. If the budget cannot be trimmed, then features will be cut.
  • In some organizations, a gating process may be put in place to reassess the net business value of the IT investment in view of the more accurate costs and schedule. The project may not pass the gate, at which point it is cancelled.
  • However, in many organizations, IT investment gating is avoided – or is nonexistent – and the business sponsor, project manager and IT managers will work on the expected scope and schedule in order to deliver something of value within the current year.
  • If the business value cannot be achieved within the available budget/schedule, a change request may be issued, frequently justified by the falsehood of one or more of the original estimation assumptions.
  • Since there is no formal quantitative estimation model in place, there is no process to assess if the change requests are caused by flaws in the estimation practice, nor is there a way to address how it could be improved for future projects.
  • Upon completion, the project may deliver fewer functions or less business value than expected, but since the original requirements were pretty vague, it is difficult to assess the delta.

This typical and classical sequence of events is one of many variations that occur in IT organizations. Estimation-wise, the most important characteristic of this scenario is that the estimation duty suffers from little rigor, no repeatability, an absence of relevant data collection, and archaic tools.

In short, the corporate IT estimation discipline is so immature that it can’t be called a practice.  Things are mostly left to good intentions and experience.   

Even the Agile™ tidal wave isn’t bringing much improvement in that area. An iterative development method is a blessing for preventing large projects from becoming white elephants. It is also a benediction for eliciting requirements when complexity, unknowns, or ignorance significantly raise the risk levels. But the Agile deployments I have seen are misleading many actors into thinking that the need for knowing in advance how much something is going to cost has suddenly become obsolete. There is always someone investing some amount to get some result. I have yet to see, read or hear about any improvement in the rigor and effectiveness of the estimation process and its results provided by any development method, Agile or other. The agile way of tackling IT-related change has taken the ignominious waterfall method and sliced it to shorten delivery times and allow work to be reoriented. But still, work has to be estimated before action, and calling it Planning Poker or T-shirt Sizing doesn’t make it more rigorous than any other technique I’ve witnessed in the past 30 years.

Agile™ methods have brought tangible improvements in corporate IT’s delivery effectiveness.  But from an estimation point of view, apart from cool names, the techniques are still based on good intentions and experience.

Corporate IT is nowhere close to being mature in the estimation practice. If someone in your IT function ever tries to talk you into the difficulties of building a reliable estimation process due to the newness of IT, spare your tears and start with this interesting quote:

False scheduling to match the patron’s desired date is much more common in our discipline than elsewhere in engineering. It is difficult to make a vigorous, plausible, and job-risking defense of an estimate that is derived by no quantitative method, supported by little data, and certified chiefly by hunches of the managers. […] Until estimating is on a sounder basis, individual managers will need to stiffen their backbones, and defend their estimates with the assurance that their poor hunches are better than wish-derived estimates.

This may look like an excerpt from a blog or a recent report from one of the IT observatories, and may appear quite apropos and contemporary. But here’s the embarrassment: this quote is from a landmark book, The Mythical Man-Month[1], published in 1975!

Does this mean that the estimation practice in corporate IT has been at a standstill for 40 years?  I’m afraid so. 

This standstill has occurred despite research on the subject, textbooks and the development of estimation software. It has happened in the face of the pitiful track record of corporate IT for being on-time and on-budget. All of this while some organizations spend hundreds of millions of dollars on IT projects over multiple investment cycles. To make it short: accuracy of estimates is secondary, and that explains the generalized laxity on this topic across organizations and over decades.

How can such a serious weakness with such considerable monetary consequences not be the driver of a relentless quest for improvement? The answer is simple: there are no incentives to get any better.

There are very few consequences in corporate IT for bad estimates. Worse, there are tangible benefits to not improving. As I explain in my first book, there is no Machiavellian IT plan to entrench in your organization a system to milk your hard-earned funds. There is simply an engagement model that doesn’t foster improvement in several key areas, estimation being one of them. By changing the game, IT will need to improve, will adapt, and will develop what it needs to get much better at estimating.


[1] F.P. Brooks Jr., The Mythical Man-month: Essays on Software Engineering, Addison-Wesley, 1975.

The Unmeasured and Inconsequential Aren’t Getting Any Better

In part 1 of this article, we saw that what really counts in corporate IT is not only measured, but also metered quantitatively, with standardized gauges that leave as little space as possible for misinterpretation. Through a parallel with the pizza delivery business, I attempted to show that anyone can be assigned conflicting accountabilities, such as delivery speed on one hand, and driving-regulation compliance or fuel-consumption mindfulness on the other. The only way to juggle these clashing duties is through the application of control measures, and the establishment of personal or team-based incentives linked to the resulting indices.

Incentive-Based Performance

Now, if one of the controlled expectations is quantified and directly linked to next year’s bonus, but the other anticipated behavior is not numerically evaluated, what will happen? The result will be the same as it would be in our pizza delivery example. If you don’t measure the time it takes for each driver in your team to deliver a pizza, then respecting driving rules (because the controls are already in place) and minimizing fuel burnt (assuming this is metered) become the top priorities. When the time comes for yearly performance reviews, delivery time will be left to the manager’s memory of the past 12 months and the driver’s ego. You already know that the manager’s memory will be focused on the most recent weeks, and that the drivers will naturally overstate their delivery speed. This just wouldn’t work; you would get safe, low-carbon-footprint, legally respectful driving, but slower delivery times that would jeopardize customer experience –and your competitive edge.

What’s Measured and What’s Important

In Part 1, I presented a table illustrating the usual assessments of performance for the IT function. These indicators are measurable and precise. They also represent the true gauges of personal performance. Failure to perform adequately in the KTLO (Keep The Lights On) category can rapidly lead to dismissal. Underperformance in the OTOB (On-Time On-Budget) category may take more time to notice, but will eventually translate into career changes. I’ve charted this reality in a simple but eloquent figure.

At the end of Part 1, I raised a simple question: “What about all the other good things you should expect from your corporate IT function?” You should now grasp that any such remaining features will fall into the lower-left quadrant of this figure. They are not measured quantitatively, or not even gauged, and they have little impact on IT staff keeping their jobs. If you believe that IT’s performance should cover much more than KTLO or OTOB accountabilities, then I strongly suggest that you scale back your expectations concerning behaviors unassociated with the upper-right categories.

I strongly suggest that you scale back your expectations concerning behaviors associated with anything else but KTLO or OTOB accountabilities.

The next burning question is obviously: “What falls under ‘The Rest’?”  As its name implies, this category encompasses all other desired duties: the mundane and less significant ones, as well as the crucial virtues that seriously impact the quality of corporate IT’s output.

Another Problem For IT to Solve?

In several upcoming articles you will discover that the perception of quality and the means of its control are significantly related to its position in the chart above. Quality controls specifically associated with quantitatively measured KTLO performance objectives will be defined and applied.  I can safely bet that your IT function is pretty good at those tasks. I can also confidently speculate that the quality controls which play an active role in delivering products on-time and within budget are taken seriously and applied systematically.

The remaining controls are mostly subjective, or plainly nonexistent, thanks to the few repercussions that inefficiencies in these areas have on people’s jobs.

Unfortunately, many missing measures can have a direct impact on your organization’s capability to react promptly in an ever-changing environment.  Important areas such as compliance to your own standards, ease of maintenance of platforms, reuse of existing assets, adaptability, or documentation have little impact on people’s jobs, and are, at best, qualitatively measured, if measured at all. These areas fall under “The Rest”, and are probably poorly managed.

But if you think that you simply need to demand that your IT organization be better at those things, you are misled.  The performance criteria in The Rest have been neglected for decades.

All attempts that I have seen or heard of were either weak, unevenly applied, or didn’t last very long.  As long as the current hierarchy of rewarded behaviors reigns, it won’t happen.

But expanding what really counts above and beyond KTLO and OTOB requires removing the conflicting accountabilities. As described in a previous article, your IT function is stuck in an engagement model where, for convenience and historical reasons, a single desk is given all accountabilities. As you will see in my upcoming book, your IT has little means for implementing a healthy segregation of duties, and has cashable incentives to remain mediocre in several key areas.

IT’s Quantitatively Measured Duties Are What Really Matter

Despite corporate IT’s renowned penchant for solving complex problems, there are some issues for which you should not count on them: the ones that involve conflicting accountabilities. Finding the clashing duties is not obvious, but this series of two articles will guide you to them. The first step is to understand what really counts.

Clashing Accountabilities in Pizza Delivery

In order to help you grasp the type of conflict at stake, let’s look at a simple example: pizza delivery.  Let’s say you’re the proud owner of a high-end pizza restaurant and your delivery team is accountable for delivering orders in the shortest time possible.  This makes sense, since your customers expect prompt service.

How do you make sure that speed is the top priority?  Easy: with measured controls.  Each driver is equipped with a wireless device that the customer signs upon receiving the ordered food. But this speed-related accountability could conflict with two other goals: minimization of fuel consumption and compliance with driving regulations. The faster the driver gets the awaited meal to its destination, the more fuel she burns while traveling the same distance.  In addition, the shortest and fastest route to her destination would require the driver to ignore one-way streets, left turn allowances, and speed limits. So how do you address these additional factors while keeping customer experience at its highest level with short waiting times?  The answer is once again with measured controls.

Controls are already in place for driving regulations: there are well known law enforcement bodies that will catch a driver ignoring these rules and give her a ticket or suspend her driver’s licence, leaving her jobless.  Respecting the driving code may slow down the delivery process, but the non-compliance risks are such that everyone in the community agrees that all vehicles should conform.  I’ll refer to this as an independent control mechanism.  Street patrols cannot make the driver’s performance objectives theirs.  Moreover, their own performance measures are at stake if they do not encourage or enforce strict observance of driving rules.  As human beings, they may have compassionate feelings about the driver, but they have a job to do that is very well delineated from the pizza delivery business.  Hence, from the point of view of attaining this objective, albeit conflicting with customer satisfaction, your pizza delivery process is adequately covered.

For the fuel usage objective, the situation is slightly different. You cannot count on an external body to take care of this. You’d probably put in place physical devices to continuously monitor fuel consumption on delivery cars. This device would show live consumption rates on the dashboard to help drivers learn good habits, and you’d get weekly reports cross-referenced to each driver’s on-duty periods.

With both delivery times and consumption ratios in hand, you and your drivers have what it takes to balance these conflicting objectives: (1) measured delivery times for each run, (2) legal safeguards for careful driving, and (3) gauged fuel consumption for each driver’s work shift.

The last thing you need is to find a way to motivate your drivers to harmoniously juggle these conflicting targets.  I’ll let you imagine how you’d do it.  There is a wide range of options, from warmly felt taps on the back to annual hard cash bonuses.
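For instance, here is one purely hypothetical way to roll the three gauges into a single driver score that a bonus could be tied to; the weights and targets are invented for illustration only.

```python
# Hypothetical sketch: one incentive score balancing the three measured controls.
def driver_score(avg_delivery_minutes, traffic_violations, litres_per_100km):
    """Higher is better; weights and targets are made-up illustration values."""
    speed = min(1.0, max(0.0, 1.0 - (avg_delivery_minutes - 20) / 20))   # target: 20 minutes
    compliance = 0.0 if traffic_violations else 1.0                      # any ticket wipes this out
    fuel = min(1.0, max(0.0, 1.0 - (litres_per_100km - 8) / 8))          # target: 8 L/100 km
    return round(0.5 * speed + 0.3 * compliance + 0.2 * fuel, 2)

print(driver_score(avg_delivery_minutes=24, traffic_violations=0, litres_per_100km=9))   # careful and quick
print(driver_score(avg_delivery_minutes=18, traffic_violations=1, litres_per_100km=12))  # fast but reckless
```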

There is one last point to draw your attention to: data on the attainment of these objectives are both independently gathered and quantitatively measured.

Now that we’re warmed-up, let’s drop the pizza delivery industry for a moment and get down to the corporate IT business.

Corporate IT Accountabilities Made Simple

Corporate IT is made up of a wide variety of roles.  If we took all of these jobs and analyzed how performance translates into measures, we’d fill hundreds of pages.  Furthermore, these duties are, for the most part, fairly technical and non-IT people have a hard time relating to what achievement or efficiency practically mean.

But this is not your field of expertise, and your expectations from IT are at another level. So we need to elevate ourselves to the highest level of expectation toward the IT function, the one where what IT does makes sense to a business person. Incidentally, this exercise allows us to prune through an intricate mesh of techno duties and get to the real, business-rooted measures of achievement.

What kind of technology-related achievement will make your IT executives shine? What type of counter-performance would be career-threatening for senior IT staff? Easy! Find the measurable indices for which data is systematically collected and that use standardized units.

The typical performance indicators and their accompanying measures are summarized in the table below.  Take a moment to have a good look:

Recognized Performance Indices

There are in fact only two sets of standardized, quantitatively measured duties: they are labelled Keep The Lights On (KTLO), which deals with operational stability and efficiency, and On-Time On-Budget (OTOB), which covers efficiency in managing major changes.

There are of course other counter-performance issues that could lead to dismissal, such as skill retention issues or leadership problems, but the table above deals only with the core accountabilities that apply exclusively to the IT function.

One striking point about the table is that the accountabilities are quantitatively measured; not by approximate measures, but rather by highly precise gauges, which in some cases go down to three decimal places! Also remarkable, all of these metrics use standardized units of measure applicable to all possible cases. They are easy to understand, both from the side that delivers (IT) and the side that pays (you). The universality and the quantification of the measures of performance both indicate the importance of any given accountability. One last important observation, albeit a less obvious one, is that these measures are easily auditable. You could decide to have them metered by independent parties to ensure that the counting party isn’t also the one being evaluated.

In your organization, there are certainly other gauges in place, but how do they measure up against the ones above in terms of business criticality? Are they qualitatively evaluated or hard-numbered? Are they related to IT accountabilities or general measures applied to all functions?

My guess is that the really important stuff is what is closely related to the table above: flawless execution in support of the operations, and managing change within planned budgets and time frames.

Are You Satisfied?

Now, what about all the other good things that you should expect from what your corporate IT function delivers? For example, what about adaptability to change, compliance to standards, or maintainability of delivered assets? How about speed?  What about quality?  Isn’t IT delivering tangible “stuff” that should be counted, trended and compared, like any other corporate function? Why aren’t these other elements represented in the table above?

They are absent, along with the many other expectations that you may have in mind, because the conflicting accountabilities of the usual corporate IT engagement model push them to third place, far behind these two categories, regardless of their innate virtues. That’s what we shall see in Part 2 of this article. And we’ll come back to pizza delivery too!

Corporate IT’s Non-Speed Formula

A crucial aspect of your organization’s agility lies in the speed at which your IT function can deliver change. Not the small run-of-the-mill types of change, but the mission-critical delivery of the new enabling technologies, digital platforms and IT solutions that your business needs to thrive. Speed gives you a competitive edge in your respective markets, and as such the momentum of your corporate IT team stands as a key strategic enabler. Let’s be honest, however: corporate IT is often branded with all sorts of depreciatory qualifiers related to the pace at which it can deliver.

But what is corporate IT speed?  And how is it measured?  The answers you’ll find below are probably not what you thought, and are certainly not what you’d want them to be.

In the case of cars, trains or marathon runners, the formula is the one we’ve learned at school: distance traveled divided by the time it takes to travel that distance.

That’s why we often use kilometers per hour to gauge the speed of travelling things. All of this is obvious. It is evident because we all have a sense of what distance means, since it’s part of the tangible world we live in. Same for time: even if some of us (you know them!) have an elastic conception of time, there are standardized measures and tools, such as the clock.

That’s fine for transportation, but speed can be so many other things.  The “speed” at which an automobile factory produces cars is measured by the number of cars built, divided by the time it takes to build them.  In the end, speed can be viewed as the measurement of some achievement divided by the time taken to reach it.
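Written out, the general formula the text describes is simply:

```latex
\text{speed} \;=\; \frac{\text{achievement}}{\text{time}}
\qquad\text{e.g.}\qquad
\text{factory speed} \;=\; \frac{\text{cars built}}{\text{elapsed time}}
```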

Now that we have a formula applicable to any situation, let’s try to answer the questions above (what is corporate IT speed and how is it measured).  The divisor is always time, so we can forget about it for now and focus exclusively on the dividend.

To assess IT speed, you need to know what an achievement is and be able to measure it.  But to be eligible, achievement measures must have certain characteristics:

  1. They have to be measurable quantitatively; and
  2. Their units of measure must be standardized.

That’s sensible since measures of speed should not be left to qualitative interpretations and should be applicable to all solutions yielded by IT.  Same for the standardization of the units of achievement, an absolute must if you want to compare speeds.  After all, what’s the point of measuring speed if you cannot draw comparative conclusions?

That’s where the whole corporate IT speed thing collapses.  In the case of the car factory, you count cars, but in the case of corporate IT, what are the units?  There are documented units of productivity for some types of IT work, but that’s not sufficient because:

  1. these units vary from one work product to the other;
  2. they also vary from one part of your IT to the other;
  3. they do not cover the whole process that yields what you pay for; and
  4. I suspect that the processes to systematically measure them aren’t implemented.

So what is the equivalent of the cars that you count on the shipping dock of the automotive factory?  The sad but true answer is that there is likely no such equivalent in your IT shop. Hence, everyone falls back on project delivery or the tangible outputs delivered through them.  Speed gauges become statements such as: “We delivered the new version of the CRM in 14 one-month sprints,” or “Release 3 of system XYZ took four months to deliver, compared to six months each for releases 1 and 2.”

But you cannot fairly compare the new version of the CRM with the preceding one.  What you delivered in releases 1, 2 and 3 may be quite different in their nature and size.  Neither can you compare anything between system XYZ, your CRM application and the majority of the hundreds of disparate business solutions you own.  Thus, this gauge of speed is not sufficient either, because the units are not standardized.

When units of achievement vary from one project or one team to the other, that’s not usable as a valid measure of speed. That’s anecdotal evidence, nothing more.

Regardless, someone still needs to show that something has been provided at a certain speed.  Since IT deliverables vary so much in size and nature, the only thing left to assess speed is money.  You have to make the leap of faith that on average, higher-priced projects (or phases, releases, or whatever units of delivery you choose) yield more throughput.  By doing so, cost actuals become a proxy to measure what has been delivered.

If you can stomach that assumption, the result is disconcerting: speed of delivery becomes the budget size of what has been delivered, divided by the time it took to deliver it.  Factored into the formula above, it yields the following:
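
corporate IT speed = money spent ÷ time taken to spend it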

In other words, corporate IT speed is measured by the speed at which money is burnt.

Which also means that if you ask your corporate IT function to get any faster, the only thing they can do is spend your money sooner, leaving you with the onus of believing that more was achieved per unit of time.  This is far from a valid measure of speed.

Corporate IT’s unenviable reputation with respect to pace is not unrelated to the formula above.   You have within your organization a function for which speed of delivery is a critical competitive element, but it is not measured adequately.

We all know that what is not measured will not improve, and measuring it in such a grotesque way as in the formula above is like not measuring it at all.

This is the reality of corporate IT today because no one has ever had enough motivation to develop better and more accurate ways to measure throughput. Do not get sweet-talked into believing that developing such measures is too difficult.  It's not that such measures can't exist, nor that your IT staff lacks the skills to make them happen.  Furthermore, it has nothing to do with technology and everything to do with how accountabilities are distributed and how team or personal performance measures are defined.

Next week’s article will provide more insights on what performance really means in corporate IT.  My book gives a broader view of the problem and a deeper understanding of the non-technological root causes behind the poor state of speed in corporate IT.

No One is Accountable for What Is Not Measured

In a previous article on the construction industry's distribution of roles, I demonstrated that centuries of cumulative trial and error have led to a clear delineation between the main stakeholders' responsibilities, all to the benefit of the paying customer and the public in general. In corporate IT, as we saw in the article that followed, things are quite different: the paying customer deals with a single desk that plays all roles.

The healthy segregation between those that define the solution and those that build it, those that set standards and those that use them, those that deliver the work and those that control its quality, is unquestionably absent.

It would be a mistake to believe this is due to the nature of the solutions being built, as segregation of roles was not always present in the construction industry either. Role definitions were once an issue, as we can see from this quotation from Philibert Delorme [1514-1570], architect and thought leader of the Renaissance:

“Patrons should employ architects instead of turning to some master mason or master carpenter as is the custom, or some painter, some notary or some other person who is supposed to be qualified but more often than not has no better judgment than the patron himself […]”[1]

In my career in IT, I have seen it all: projects without architects, improvised architects with skills issues, true architects without any architecting accountability, architects left to themselves with no organizational support, IT managers architecting, project managers architecting, customers architecting, programmers architecting. These cases are not exceptions, but rather the norm, in one form or another.

There are two main reasons for so much laxity in the execution of such an important function as IT architecture: conflicting roles and lack of measures.

First, there is the conflicting placement of the architect, often located in a quarter where he or she isn't able to truly defend the customer's interests, subordinate to line managers or project managers who have higher priorities than architecting solutions the right way.

Second, expectations for the quality of the architecture are neither set nor gauged because, again, there are more urgent and measured accountabilities hanging in the balance.

With few consequences for wrongdoing, it's no wonder the architect's role is so easily hijacked by whoever wants to have a say in that area.

IT architecture is a field where anyone can be elected, or self-elected, to the status of architect, as long as he or she can make things work. But as we saw in a previous article, a working solution doesn't prove much. Anyone can have an opinion on the right way to design, yet no one is held accountable for the quality of it.  Opinions without accountability on the subject are as relevant as any other conversation around the coffee machine.

Fortunately, by balancing the distribution of roles with healthy segregation, measures of performance can move toward a healthier equilibrium, so that coffee machine discussions don't become IT strategies that put million-dollar projects at risk.  The architect's role will stop being usurped, because doing so will then entail being accountable for it.  An in-depth analysis of these insights and more will be available in my upcoming book, to be published soon.

——-

[1] Catherine Wilson, “The New Professionalism in the Renaissance,” in The Architect: Chapters in the History of the Profession, University of California Press, 1977, p. 125.
