
Complexity

Designing Your Stairway to Heaven

Standing the Test of Time

I’ve been an unflagging fan of Led Zeppelin since my early teens, and a worshiper of their founder and lead guitarist, Jimmy Page. That’s probably why YouTube’s algorithm presented me with this 17-minute video from the BBC in which Mr. Page describes the intent and the result of Zep’s most iconic composition: Stairway to Heaven. Saying that this piece has enduring popularity is an understatement. Today, teenagers whose parents weren’t yet born when this opus was written are still fascinated by the creation.

Jimmy’s Architecture

There is certainly a series of reasons why Stairway to Heaven is so good, and not being a musician, I’m not knowledgeable enough to comment on all of them. However, at 4:38 into the video, Jimmy said something that struck me:

“All this stuff was planned.  It was not an accident, or everyone chipping in.  It really was a sort of design.”

Jimmy Page

If you listen to the whole video, there will be no possible doubt: Stairway to Heaven is the result of conscious design. The magnum opus was architected from the beginning, with a clear vision of the sequence of movements, the textures, the build-up of tempo, and the unfolding of the majestic finale.

Innovation Is Not Design — It Feeds It

Another clear learning from Master Page: this was not the result of some brainstorming session, an unplanned mashup, or a random amalgamation in hopes of finding a gem. Unknowingly, Jimmy brought more fuel to a conviction that has been building in my mind over the years: innovations and epiphanies emerge before the actual design of digital solutions begins. These pieces of enlightenment are then embedded into the greater creation. The innovations —if any— reside in specific areas of the final product, but they are not the final achievement.

Architecture and Design Make the Masterpiece

This leads to another observation, one supported by decades of scrutiny of and involvement in the world of information systems design: brainstorming sessions, focus groups, innovation dives —and all the good practices that encourage seeing things differently— will not yield a masterpiece. They will nourish the subsequent process of architecting a creation that uses the innovative gems, but the masterwork comes from intentional design.

Randomly searching for innovation may lead to interesting designs; but masterpieces that stand the test of time are architected.

If you’re tempted to think that great business systems emerge from innovation, beware that it’s far from enough.  Don’t put all your marbles on the lateral thinking side of things.  Save a few for conscious design.

“They Don’t Know What They Want!” and a Few Ruthless Questions About Estimation in Corporate IT

Estimating how much effort is required for digital transformation projects is not an easy task, especially with incomplete information in your hands. If one doesn’t know in sufficient detail what the business solution to be built has to do, how can one estimate correctly? In the face of such an unchallengeable truth, my only recommendation is to look at the problem from another angle and ask these simple but ruthless questions:

Q1: Why are there so many unknowns about the requirements when estimation time comes?

Instead of declaring that requirements are too vague to allow reliable estimation, couldn’t we simply get better requirements? My observation is that technical teams that need clear requirements aren’t pushing hard enough on the requesting parties. This could be rooted in a lack of direct involvement in the core business affairs, an us-and-them culture, an order-taker attitude, or all of the above. Whatever the reason, there is a tendency to take it as an ineluctable fact of life rather than asking genuine questions and doing something about it.

Q2: Why do IT people need detailed requirements for estimation?

There are industries where they get pretty good estimates with very rough requirements. In the construction world, with half a dozen questions and the square footage, experts can give a range that’s pretty good —compared to IT projects. I can hear from a distance that IT projects are far more complex, that “it’s not comparable”, and so on. These are valid arguments, but they do not justify the laxity with which your corporate IT teams tackle the estimation process. The construction industry worked hard to get to that point, and it relentlessly seeks to improve its estimation performance.

Couldn’t IT teams develop techniques to assess what has to be done from rough requirements, then refine those requirements, re-assess the estimates, and learn from the discrepancies between rough and detailed to improve their techniques? Read that last sentence carefully: I did not write ‘improve their estimates’ but rather ‘improve their techniques’. IT staff know how to re-assess when more detailed requirements are known, but they are clueless about refining their estimation techniques.
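
To make this concrete, here is a minimal sketch, in Python, of what such a feedback loop could look like. All the project figures are invented for illustration; the point is the discipline, not the tool: record the rough estimate, record the actual, and learn the correction factor and its spread.

```python
# A minimal sketch of an estimation feedback loop (illustrative only).
# Keep rough estimates and actuals side by side, derive a correction
# factor from history, and turn the next rough estimate into a range.

from statistics import mean, stdev

# (rough estimate, actual), both in person-days -- invented figures
history = [(100, 180), (40, 55), (250, 390), (60, 75), (120, 200)]

ratios = [actual / rough for rough, actual in history]
bias = mean(ratios)     # average underestimation factor
spread = stdev(ratios)  # how unstable the technique itself is

def calibrated_range(rough: float) -> tuple[int, int]:
    """Turn a rough estimate into a range using past performance."""
    return round(rough * (bias - spread)), round(rough * (bias + spread))

print(f"bias={bias:.2f}, spread={spread:.2f}")
print("next project, rough estimate of 80 days ->", calibrated_range(80))
```

Shrinking the spread over successive projects is what ‘improving the technique’ means; merely re-estimating once details arrive leaves both numbers untouched.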

Q3: Is IT the only engineering field where customers don’t know in detail what they want at some point?

Of course not!  All engineering fields where professionals have to build something that works face the challenge of customers not knowing what they want, especially at the early stages. Rough requirements can be as vague as “A new regional hospital”, “A personal submarine”, “A multi-sports stadium”, “A log home”, or “A wedding gown”. Professionals in these other fields genuinely work at improving their estimation skills and techniques even with sketchy requirements. But not so in corporate IT.

Q4: Who’s accountable for providing the requirements? 

The standard answer is that they should come from the user or the paying customer, and that’s fair. The problem is that IT folks have pushed that statement too far and distorted it to the point where requirements should fall from the skies, detailed enough for precise estimation. This has led to the over-used and over-written complaint that “Users don’t know what they want!”  And that’s not fair, especially when it is used to declare estimation a useless practice. Which leads to the next question.

Q5: Who’s accountable for getting clear requirements?

That’s the most interesting question. It is different from the previous one, so read carefully: it’s about getting the requirements, and being accountable for getting clear requirements. Digital systems are not wedding gowns or log homes. Non-IT people often have a hard time understanding how and what to ask for. Whose responsibility is it to help them? If the requirements aren’t clear enough, who’s accountable for doing something about it? The answer to all these questions should be: those who have the knowledge, and that’s generally the IT folks. What I observe is that IT staff too often nurture an us-versus-them culture in which “they don’t know what they want” is the refrain. Let’s turn that statement around for a moment: “We don’t know what to do.”  Isn’t that an interesting way to see things? It is no longer that they don’t know what they want, but rather that the IT teams don’t know what to build to provide the outcome the organization needs.

Q6: Who’s accountable for knowing what to do? 

We all know who they are. Seeing the problem from that end, under another light, may substantially reduce the cases where “they don’t know what they want” is a valid point.

Agile™ and Iterative Development to the Rescue! Or Is It?

The requirements clarity issue has led smart IT people to use iterative prototyping to solve it for good. The idea is ingenious and simple: let’s build smaller pieces of the solution within a short period of time, show that portion to the users, and let them determine if that’s what they thought they wanted. That’s great, and it is one reason why Agile™ methods have gained such widespread acceptance. However, iterative prototyping doesn’t solve everything, and it conveniently sidesteps a few important issues:

Q7: Are users getting better at understanding their requirements with Agile™?

Are sponsors and users getting any better at knowing what they need before they get any technical team involved? Of course not. Things haven’t improved on that front with Agile™ methods or any iterative prototyping technique for that matter.

Q8: Could prototyping be used as a means for improving how people define requirements?

It certainly could, but that is not being taken care of. Worse, prototyping encourages laxity in the understanding of requirements. After all, if we’re going to get something every three weeks that we can show our sponsor, why should we spend time comprehending the requirements and detailing them? That’s a tempting path of least effort for any busy fellow. The problem is that thinking a bit more, asking more questions, writing down requirements, and having others read them and provide comments takes an order of magnitude less effort than mobilizing a whole team to deliver a working prototype in three weeks. The former option is neglected in favor of having fun building something on the patron’s cuff.

The False Innovation Argument

Iterative prototyping is used across the board for all kinds of technology-related change endeavors, including those with little to no innovation at all. Do not be fooled into thinking that everything IT teams are doing is cutting-edge innovation.

In fact, I posit that for the vast majority of the work done, the real innovation has occurred in the very early stages, often at a purely business level, totally detached from technology. What I see in most endeavors is IT teams building mainstream solutions of a kind that has been built dozens or hundreds of times, within your organization or in others. Why then is iterative prototyping required? In those cases, iterative development methods serve less to clarify requirements than to manage the uncertainty around teams not knowing how to build the solution or not understanding the systems they work on.

In many cases, using Agile™ is a means for managing the uncertainty around IT folks not knowing how to do it.

Did I ask this other cruel question: who’s accountable for knowing the details of the systems and technologies in place? You know the answer, so it’s not in the list of questions. It’s more like a reminder.

And finally, the most important question related to estimation:

Q9: Is iterative prototyping helping anyone get better at estimating?

Of course not. The whole topic is tossed aside as irrelevant, when not squarely labelled as evil by those who believe that precious time should be spent developing a new iteration of the product rather than guessing at the future.

The Rachitic (or Dead) Estimation Practice

The consequence is that no serious estimation practice has developed within corporate IT. Using the above impediments about ‘not knowing what they want’ to explain why estimations are so often off the mark is one thing. Using these hurdles as an excuse not to get better at estimating is another. IT projects are very good at counting how much something actually cost and comparing it to how much was budgeted. But no one in IT has any interest in comparing actual costs with what was estimated, with the genuine intent of producing better estimates the next time.

This flabbiness in executing what should be a continuous and relentless quest for improvement in the exercise of estimating takes root in a very simple reality: corporate IT is the one and only supplier serving your needs, providing your organization with everything under the IT sun. On the infrastructure side of IT, competition has long been aggressively offering your organization alternatives to your in-house function. But the other portion of corporate IT –the one driving change endeavors and managing your application systems– operates in a dream business model: one locked-in customer that pays for all expenses, wages and bonuses, and pays by the hour. When wrong estimates make you lose neither your shirt nor any future business opportunity, the effort for issuing better ones can safely be put elsewhere, where the risks are imminent.

Don’t Ask for Improvement, Change the Game

These behaviors cannot be changed or improved without providing incentives for betterment. Unfortunately, the current, typical engagement model of corporate IT in your organization is a major blocker. Don’t ask your IT teams to fix it: they’re stuck in the model. The ones that can change the game are not working on the IT shop floor.

Want some sustainable improvement? Start your journey by understanding the issues, and their true root causes.

Note on the Notes

Notes on the Synthesis of Form

This book is, in my view, the equivalent of the Old Testament for designers and architects. It dates from 1964. Although a later Alexander book, The Timeless Way of Building (1979), has been raised to quasi-cult status for paving the way to very important principles in software design, I believe that this seminal work from the same author is more profound.
In its 1971 preface, Alexander wrote this:

“No one will become a better designer by blindly following this method, or indeed by following any method blindly. On the other hand, if you try to understand the idea that you can create abstract patterns by studying the implication of limited systems of forces, and can create new forms in free combination of these patterns – and realize that this will only work if the patterns which you define deal with systems of forces whose internal interaction is very dense, and whose interaction with the other forces is very weak – then, in the process of trying to create such diagrams or patterns for yourself, you will reach the central idea which this book is all about.”

That’s the high-cohesion-low-coupling principle in its earliest form. The fact that I can just read the preface and grasp what he meant in this dense sentence is a sign of both the influence he has had on subsequent generations and the importance of the principle.
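
Translated into today’s software vocabulary, here is a minimal, hypothetical Python sketch of the idea: the densely interacting parts live together inside one unit, and the interaction between units is reduced to a single, narrow point of contact.

```python
# Hypothetical sketch: dense interaction inside a unit (high cohesion),
# weak interaction between units (low coupling).

class TaxRules:
    """Everything about tax computation lives here, and only here."""

    def __init__(self, region: str):
        self._rates = {"QC": 0.14975, "ON": 0.13}  # internal detail
        self._region = region

    def apply(self, amount: float) -> float:
        # The internal forces (rates, region) interact densely here...
        return amount * (1 + self._rates[self._region])


class Invoice:
    """Knows nothing about tax internals; only the narrow interface."""

    def __init__(self, subtotal: float, tax: TaxRules):
        self._subtotal = subtotal
        self._tax = tax  # ...while the coupling is one method call.

    def total(self) -> float:
        return self._tax.apply(self._subtotal)


print(Invoice(100.0, TaxRules("QC")).total())  # ~114.975
```

Change every tax rate you want: as long as apply() keeps its contract, Invoice never notices. That is the “very weak” external interaction Alexander describes.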

You will also note the wise warning about blindly following methods.

The man was born the same year as my father: 1936.

Small, Autonomous and Fast-Forward to Lower Quality

I am a jack-of-all-trades. Admittedly —and proudly— I realize that a lifelong series of trial and error, crash courses, evening reading and incurable curiosity has resulted in this ability to do many things —especially things involving some manual work. I feel a certain self-satisfaction thinking about all the situations that could arise in which I would know what to do, and how to do it. I can help someone fix a kitchen sink on a Sunday afternoon. I can drive a snowmobile, sail a catamaran, or connect a portable gasoline generator to your house. My varied skill-set affords me a serene feeling of power over the random hazards of life. That, and it’s also lots of fun to do different things.

There is currently an interesting trend in many organizations to favour highly autonomous teams. The rationale is quite simple: autonomy often translates into increased operational leeway, which offers a better breeding ground for innovation. By not being weighed down by other teams, there’s hope that the group will perform better and yield more innovative ideas. There is also the expectation that the team, using an Agile™ method, will produce tangible implementations much faster if it can be left alone. The justification is founded in the belief that small teams perform better. Makes sense: the smaller the team, the easier the communication, and we all know that ineffective communication is a major source of inefficiency —in IT as well as in any other field. And if you want your team to be autonomous and composed of as few individuals as possible, then there is a very good chance that you need multi-skilled resources.

Jacks-Of-All-Trades and Interchangeable Roles

You need jacks-of-all-trades; otherwise, either the number of individuals will increase, or you will need to interact with other teams that have some of the required skills to get the job done. As a result, you will not be as autonomous as you’d like.

But there is more: the mere presence of multi-skilled individuals is not enough to keep your team small and efficient at yielding visible results at an acceptable pace. You must have an operating rule that all individuals are interchangeable in their roles. If Judy —a highly skilled business analyst— is not available in the next two days to work on sketching the revamped process flow, then Imad —a less skilled business analyst, but a highly motivated jack-of-all-trades nevertheless— needs to take that ball and run with it. You need multi-skilled resources and interchangeable roles. That’s all pretty basic and understandable, and your organization might already have these types of teams in action.

For a small and autonomous team to keep its small size and its independence from others, it needs to be made of jacks-of-all-trades, and roles must be interchangeable; otherwise, it will either grow in size or come to depend on outsiders.

Conflicts of Roles in Small Autonomous Teams

Before you declare victory, or rush into hiring a bunch of graduates with a major in all-trades resourcefulness and let them loose on a green field of innovation turf, read what follows so that you also put the proper boundaries in place. If you want to ensure maximum levels of quality and sustainability in what comes out of small, autonomous, multi-skilled teams, you need to ensure that no conflicting roles are put on the shoulders of the individuals who must juggle them.

Conflicts of roles occur when the same person must do work that would normally be assigned to different persons. The most obvious —and, in corporate IT, the most abused— combination of conflicting roles is creating something and quality-controlling that same thing. This can be said of any field, really —not just IT. Industrial production process designers have understood for centuries that the one who checks quality should never be the one being checked. Easily solved, you might think: just have Judy check Imad’s work in two days when she’s available, and the issue is solved! Maybe —but there’s a catch.

No Accountability and No Independence

Proper quality control requires at least one of these two conditions: (a) the person being checked and the controller are both accountable for the quality of the work, or (b) the person doing the quality control is able to perform the reviews independently. If Imad and Judy are both part of a team that is measured on the speed at which it delivers innovative solutions that work, then there is a good chance that quality is reduced to having a solution that works, period. Other quality criteria are undoubtedly agreed-upon virtues that no one is against, but they are not as important as speed. As described in another article, in IT more than in any other field, a working solution might be, under the hood, a chaotic technical collage barely holding together with haywire and duct tape —but it can still work.

These situations often occur when IT staff are put under pressure and forced to cut corners. Speed of delivery then comes into direct competition with quality when assigning the person-hours required to deliver excellence. If the small, autonomous, multi-skilled team’s ultimate success criterion is speed, then Judy’s check on Imad’s work is jeopardized whenever the quality of his work has no impact on speed. Because Judy and Imad are both part of a group that must deliver with speed, neither of them is really accountable for any quality criterion other than simply having the thing work. As long as it doesn’t impede delivery pace, any other quality criterion is just an agreeable and desirable virtue, but nothing more. Judy is not totally independent in her quality control role and, worse, there is no accountability regarding quality.

When a small and autonomous team’s main objective is to deliver fast, any quality item that has no immediate impact on speed of delivery becomes secondary, and no-one is accountable for it.

And it doesn’t stop there: considering that quality control takes time, the actual chore of checking for quality comes into direct conflict with speed, since valuable time from multi-skilled people will be needed to ensure quality compliance. After two days, when she becomes available, Judy could check Imad’s work, yes; but she could also start working on the next iteration, thus helping the team run faster. If no one is accountable for quality, Judy’s review will soon be forgotten. Quality is continuously jeopardized, and your autonomous teams become fertile soil for the systematic creation of lower-quality work.

There’s No Magic Solution: Make Them Accountable or Use Outsiders

So, what precautions must be taken to ensure maximum levels of quality in multi-skilled, autonomous teams? The answer is obvious: either (1) the whole team must be clearly held accountable for all aspects of the work —including quality— or (2) potentially conflicting role assignments must be given to individuals who are independent; that is, accountable for and measured on the work they do, not on the team’s performance.

If you go with the first option, beware of getting trapped into conflicting accountabilities again, and read this article to understand how quality can be challenged by how it is measured. To achieve independence (the second option), you will need team members to report to some other cross-functional team, or you will have to accept an infringement on your hopes of total autonomy by relying on outsiders. Although multi-skilled and autonomous teams are an enticing prospect for jacks-of-all-trades, the agility they bring should not be embraced at the expense of the quality of the assets you harvest from them.

Lower Quality at Scale

If you want to understand how and why unwanted behaviors such as those depicted above are not only affecting small autonomous teams, but are also transforming the whole of corporate IT into a mass complexity-generating machine that slows down business, read this mind-changing book.  It will help you understand why lower quality work products are bound to be created, not only in small, autonomous and innovation-centric teams, but almost everywhere in your IT function.

Innovation: Where IT Standards Should Stand

The use, reuse, or definition of standards when implementing any type of IT solution has very powerful virtues. I’m going to outline them here so you can see how these standards play into the (often misunderstood) notion of innovation in corporate IT. We’ll then see where IT innovation truly happens in this context, while underscoring the importance of using or improving IT standards to support overall innovation effectiveness.

The Innate Virtues of IT Standards

  • Sharing knowledge.  Without standardization, each team works in its own little arena, unaware of potentially better ways of doing things and not sharing its own wisdom.  It is much easier to make all IT stakeholders aware of practices, tools or processes when they are standardized. Systematic use and continuous improvement of IT standards act as a powerful incentive for knowledge sharing.
  • Setting quality targets. Standards minimize errors and poor quality through the systematic use of good practices.  They encompass many facets, from effectiveness to security, to adaptability, to maintainability, and much more.
  • Focusing on what counts.  A green field with no constraints and no prior decisions to comply with might entice your imagination, but it can also drive you crazy if everything has to be defined and decided.  IT standards allow you to focus on what needs to be changed, defaulting all other decisions to the use of the existing standards.  
  • Containing unnecessary complexity.  The proliferation of IT technologies, tools, processes and practices in your corporate landscape is a scourge that impedes business agility.  Absence of standards interferes with knowledge sharing and mobility of IT resources.  Multiplicity of similar technologies makes your IT environment more difficult to comprehend, forcing scarce expert resources to focus on making sense out of the existing complexity rather than building the envisioned business value.

The use and continuous improvement of IT standards is one of the most effective cross-enterprise safeguards for IT effectiveness, IT quality, and, in the end, your business agility.

Despite all these advantages, a trend emerging in many organizations puts these virtues at risk.

The Lab Trend

In the last few years, it has become mainstream strategy for large, established corporations to create parallel organizations, often called “labs”, that act as powerhouses to propel the rest of the organization into the new digitalized era of disruptive innovations.  This article is not about challenging this wisdom, which may be the only possible way —at least in the short-term— to relieve the organization from the burden of decades of organic development of IT assets and processes that slow down the innovation pace. 

Unfortunately, there are people in your organization who associate standards with the ‘old way’ of doing things.  After all, aren’t all standards created after innovation, to support the repeated mainstream usage of innovative tools, processes or technologies that came before them?

Making the leap that IT standards should not be considered in the innovation process, not included in the development of prototypes or proofs of concept, or —more simplistically— not be part of anything close to innovative groups, is a huge mistake.

The decision to use or not use a given IT standard depends on what you are innovating, and on the stage of the innovation process you are in. The IT work required to implement business innovations is rarely wall-to-wall innovative. Standards cannot —and should not— be taken out of the innovation process from start to finish. I’d go a step further: standards should always be used except when the innovation requires redefining them, and the latter case is exceptional. To help you grasp the difference between true business innovation and its actual implementation, here’s a simple analogy:

The Nuts and Bolts of Innovation

In the construction industry, there are well-known standards that determine when to use nails, when to use screws, and when to use bolts in building a structure. They stipulate the reasons to choose one over the other (e.g. because nailing is much faster to execute and cheaper in material costs). The standards also spell out how to execute: how many nails to drive, their size and the spacing between them, safety precautions, etc.

Now suppose that your new business model is about building houses that can easily be dismantled and moved elsewhere; say, to serve a niche market of temporary housing for the growing number of climate-related catastrophes. You decide to build whole houses without ever using nails or screws, bolting everything instead. You make this decision to simplify dismantling the house, moving it, and rebuilding it elsewhere. The technical novelty here lies in the systematic use of bolts where the rest of the industry normally uses nails. Bolts are slower to install and more expensive, but they allow you to easily disassemble the house.

But when a worker bolts two-by-six wood studs, the actual execution of bolting is not an innovation; it has been known for centuries, and the execution standard can be used as is. In other words, by the time a worker is on the site bolting, the innovation has already occurred: it happened when the choice was made not to use nails or screws. The market-disrupting strategy was determined before; it is now time to apply bolting best practices and good craftsmanship.

No Ubiquitous IT Innovation in Corporate IT

For IT-based business solutions, by the time the teams are in the phases of implementing the processes, systems and technologies, most of the business innovation has probably already occurred in the previous phases.

When IT staffs are actually building the technical components of your new modes of operation, the business innovation part has already occurred: it lies in the prior choices made during design.

The techies might be testing the innovation through some sort of prototype, but that doesn’t make their work innovative. When you look at it from a high enough viewpoint, isn’t implementing a new business process with information technologies what corporate IT has been doing for decades?

When building the IT components of innovative business solutions, where is the actual innovation? Is it in the new business processes or in the way they are technically implemented? Chances are that the real value is in the former, not the latter, because your initial intention was to aim for business value, not technical prowess.

It may very well be that, at the IT shop-floor level, what needs to be done is to apply good practices and standards that have been around for years, if not decades.

In our era of multi-skilled, cross-functional, autonomous, self-directed and agile teams —all busy growing new solutions that support constantly evolving business processes— there is a line that should not be crossed: thinking that innovation applies to everything, including the shop-floor-level definition of good craftsmanship.

Don’t Pioneer Without IT Standards

My observation is that when IT practitioners are part of teams dedicated to innovative business solutions, they often become overzealous, abandoning standardization and tossing tried-and-true practices out the window. I’ve seen IT people make a clean sweep of all established standards and proclaim every part of a solution innovative. I’ve seen technical staff blindly pull so-called innovative technologies into the equation with little understanding of their real contribution to business value. This has a direct impact on the quality of the resulting work. Here’s how:

  1. IT staff end up using bolts where nails would be fine, or using nails where they should have used bolts;
  2. new platforms are built with no standards used or defined.

In both cases, the impact on your future change projects is catastrophic: lack of shared knowledge, unknown quality levels, time and effort lost reinventing the world and, most importantly, the creation of more unnecessary IT complexity. The resulting assets will be hard to integrate, impossible to dismantle, incomprehensible to anyone but those who created them, and costly to maintain. In other words, your business agility will be seriously jeopardized.

The results from innovation without standards will fast-track you to the same burdensome position you tried to free yourself from with your old, outdated platforms.

The only way to avoid this unhealthy pattern is to make sure that the mandate is not just about innovating at any cost.  It must include the use and creation of standards, and limit the scope of change to what creates business value.

Set the Standard

First, your innovation team should not only devise new ways to do business: it must make it a priority to use and reuse standard practices and technologies, unless innovation genuinely requires otherwise. When a given standard is not applicable, the team’s job should include defining its replacement. The idiom “to set the standard” takes on its full significance: re-inventing business models that others will now run to catch up with or match, and defining the standards for your organization and future projects to use and leverage. Your future business agility depends heavily on the systematic application of good craftsmanship in your current innovations.

New Technologies Need to Bring Value, Not Novelty

Secondly, your new parallel ‘lab’ organization should bear the onus of justifying the use of any new or different technology. How will it contribute to the innovative, business-oriented end result that you seek? When technologists are presented with the enticing prospect of having no obligation to use any of the standards in place in your organization, they will jump at it. This often leads to the introduction of new technologies for their own sake, based on no other justification than hunches, hearsay, or how attractive they may look on a resume.

The use, reuse, and redefinition of IT standards should always be part of your innovation team’s mandate.  If not, your future business model will be made of foundational assets built as if there was no tomorrow.

Beware of falling into the trap of catching the contagious over-excitement about the scope of innovation. Most of the IT processes and components that result from business innovation can use mainstream practices and standard technologies. The legitimately innovative portion —the one that really makes a difference— is just a fraction of the whole undertaking and, very often, the truly novel part is simply not technological.

Provide Leeway But Set Quality Expectations

So, even if you rightfully decide to go down the path of creating parallel organizations, don’t allow them too much leeway when it comes to standards. Do not sign the cheque without a minimal set of formal expectations regarding sustainability, which must include standards compliance.

The key is in clear accountabilities and coherent measures of performance. If you want to learn more about how poorly-distributed roles can sabotage the work of your corporate IT function, read this short but mind-changing business strategy book.

IT Project Failures Are IT Failures

While conducting research for Volume 1 of my first book[1], I wanted to investigate the root causes of IT project failures. I was completely convinced –and still am– that these failures are significantly related to the quality of the work previously done by the teams laboring on these endeavors. In other words, the recurring struggle that IT teams face, often leading to their inability to deliver IT projects successfully and on time, is directly linked to the nature (and the qualities) of the IT assets already in place. I found a wealth of information relating to project failures, as well as a disappointing revelation.

The Puzzling Root Cause Inventory

This disconcerting realization was that the complexity of existing IT assets is rarely mentioned. Technological issues appear infrequently in the majority of the literature on project failure. Just for the sake of it, I performed an unscientific and unsystematic survey of professional blogs and magazines, and came up with a list of 190 cited causes of failure. The reasons range from insufficient sponsor involvement to faulty governance, communications, engagement, and so on. I found nothing really surprising, albeit depressing in some ways. Of these reasons, a mere 11 were related to the technology itself, while one, and only one, referred to underestimation of complexity.

This number inaccurately reflects reality. It doesn’t make sense that, for technology-based projects, there is such thin representation of technology-related issues. The proportions don’t match; they don’t fit the day-to-day reality in the corporate trenches. If your platforms are made of too many disjointed components or were built by siloed teams; if their design and implementation were poorly documented to cut costs, or if standards compliance was ill-controlled, then they are bound to contribute to failure. If your internal IT teams have a hard time understanding their own creations, or frequently uncover new technical components that were never taken into account, how can you be surprised when schedule slippages occur in their projects? The state of what is in place plays a major role —and it’s definitely not in a proportion of 1:190.

A Definite Project Management Skew

This gap in the documented understanding is due to a project management bias in the identification of root causes of IT project failure. This is quite understandable, since the project management community is at the forefront of determining project success and failure. Project managers are mainly assessed on on-time and on-budget project delivery[2]. They take underperformance seriously, and that is why the available knowledge on root causes is disproportionately skewed toward non-technical sources.

Project managers tackle failure as a genuine project management issue, and the solutions they find are consequently colored by their project management practice and knowledge.

I wouldn’t want to undervalue the importance of the skills, processes and good practices of project management. But we need to recognize the foundational importance of the assets that are already in place. They are not just another variable in a risk management plan. They are the base matter from which an IT project starts, along with business objectives and derived requirements. On any given workday, IT staff are not working “on a project”; they are heads-down, plowing through existing assets or creating new ones that need to fit with the rest.

The Base Matter Matters a Lot

If IT projects were delivering houses, the assets in place would be the geological nature of the lot, the street in front of the lot, the water and sewage mains under the pavement, the posts and cables delivering electricity, and the availability of raw materials. Such parameters are well known when estimating real estate projects. If you did not take into account that the street was unavailable at the start date of the construction project, that there was no supply of electricity, that the lot was in fact a swamp, or that there was no cement factory within a 400-mile radius of your construction site, you could be sure the project would run over schedule and over budget. The state of your existing assets creates “surprises” of the same magnitude as the construction examples above. When your assumptions about the things in place are confounded because quality standards weren’t followed or up-to-date documentation was unavailable, your estimates will suffer.

Any corporate IT project that doesn’t start from a clean slate[3] —and most aren’t— runs into issues related to the state of the assets already in place.

The unnecessary complexity induced by poorly documented or contorted solutions is not a figment of the imagination. It is the harsh reality that corporate IT teams face on a daily basis. It is the matter that undermines their capacity to estimate what has to be done, and that cripples their ability to execute at the speed you wish they could deliver.

IT Quality Is an IT Accountability

Although project success is, by all means, a project management objective, the state of an IT portfolio isn’t.

The quality of what has been delivered in the past, and how it helps or impedes project success is not a project management accountability. It’s a genuine corporate IT issue.

So tossing it all into project management accountabilities is an easy way out. If important business projects are bogged down by an organization’s inadequate IT portfolio, it is primarily an IT problem, and only secondarily a project risk or issue. Project managers with slipping schedules and blown-up budgets took failures seriously enough to identify 190 potential root causes and devise ways to tackle them. Nobody in corporate IT has ever done anything close to that concerning IT complexity or any other quality criterion applicable to IT assets.

This vacuum has nothing to do with skills, since IT people have all the expertise required to identify the root causes and work out ways to reduce unwanted complexity.

It’s all about having the incentives to fix the problem. The reasons to solve it are not just weak; they are outweighed by motivations to do nothing about it[4].

———–

[1] More details on the book available on my blog’s book page.

[2] Also detailed in the book, or in this recent article.

[3] See this other article on the clean slate myth.

[4] For more details on this, take a look at my latest book.

Perennial IT Memory Loss

There is a strange thing happening in corporate IT functions: a recurring phenomenon that makes the IT organization lose its memory. I’m not talking about total amnesia, but rather a selective one afflicting corporate IT’s ability to deal with the current state of the technical assets it manages. This condition becomes especially acute at the very beginning of a project focussed on implementing technical changes to drive business evolution. Here’s how it happens:

It all starts with project-orientation. As we discussed in another article, the management of major changes in your internal IT organization is probably project oriented. Projects are a proven conduit for delivering change. Thanks to current education and industry certification standards of practice, managed projects are undoubtedly the way to go to ensure that your IT investment dollars and the resulting outputs are tightly governed. Unfortunately, things start to slip when project management practices become so entrenched that they overshadow all other types of sound management, until the whole IT machine surrenders to project-orientation.

The Constraints of Project Scope

As you may know, by definition, and as taught to hundreds of thousands of project managers (PMs) worldwide, a project is a temporary endeavor. It has a start date and an end date. Circumstantially, what happens before kickoff and after closure is not part of the project.

The scope of the project therefore excludes any activity leading to the current state of your IT portfolio. The strengths or limitations of the foundational technical components that serve as the base matter from which business changes are initiated are considered project planning inputs. The estimation of the work effort to change current assets, or the identification and quantification of risks associated with the state of the IT portfolio, will always be considered nothing more than project planning and project risk management.

Further excluded from project management are considerations that apply after the project finish date. These factors encompass effects on future projects and consequences for the flexibility of platforms in the face of subsequent changes. Quality assessments are common project-related activities, likely applied as part of a quality management plan. But a project being a project, any quality criterion whose impact lies exclusively beyond the project boundaries will carry less weight than those within the project’s scope –and by a significant margin. Procedures directly influencing project performance –that is, being on-time and on-budget (OTOB)– will be treated with diligence. All other desired qualities, especially those that have little to do with what is delivered within the current project, become second-class citizens.

Any task to control a quality criterion that does not help achieve project objectives (OTOB) becomes a project charge like any other, and an easy target for cost avoidance.

This ranking becomes more than obvious when a project is pressured by stakeholder timelines, or when shortages of all sorts become manifest. Keep in mind that the PM is neck-deep in managing a project, not managing the whole technology asset lifecycle. Also remember that the PM has money only for processes happening within the boundaries of the project. After the project crosses the finish line, the PM will work on another project, or may look for a new job or contract elsewhere.

When all changes are managed by a PM within a project, with little counterweight from any other type of management, corporate IT surrenders to project-orientation. When no effective cross-cutting process exists independently of project management prerogatives, your IT becomes project-oriented. I confidently suspect that your corporate IT suffers from this condition, unless you have already made the shift to the new age of corporate IT.

Project Quality vs. Asset Quality

Project-orientation has a very perverse effect on how technology is delivered: all radars are focussed on projects, with their start and end dates, and the whole machine becomes bounded by near-term objectives. Short-term project goals in turn directly impact quality objectives and the means put in place to ascertain compliance. Again, since quality control is project-funded and project-managed, the controls that directly impact project performance will always be favored, especially when resources are scarce.

In project-oriented IT, quality criteria such as the ability of a built solution to sustain change, or the complexity of the resulting assets, don’t stand a chance.

The result is patent: a web of complex, disjointed, heterogeneous, and convoluted IT components which become a burden to future projects.

It’s here that the amnesia kicks in.

All IT Creations Are Acts of God

When the next project, dependent on the previously created or updated components, commences, everyone acts as if the state of these assets were just a fact of life.

Whatever the state of the assets in place, at the beginning of a new project it’s as if some alien phenomenon had put them there; as if they were the result of an uncontrollable godly force external to IT.

Everyone in IT has suddenly forgotten that the complexity, heterogeneity, inferior quality, inflexibility, and any other flaws come from their own decisions, made during the preceding projects.

This affliction, like the spring bloom of perennial plants, repeats itself continuously. In the vernal phase of IT projects, when optimism and hopes are high, everybody looks ahead; no one wants to take a critical look behind. This epidemic has nothing to do with skills or good faith; it can instead be traced to how accountabilities are assigned and how performance is measured.

When all changes are subject to project-oriented IT management, the assets become accessory matter. Your corporate IT team delivers projects, not assets.

The Latest Change in Vocabulary Doesn’t Turn Liabilities into Assets

In last week’s article we saw that you should be very prudent concerning IT tactical solutions. They are often presented by your IT teams as temporary situations; sidesteps that must be taken before the envisioned strategic situation can be reached. But more often than not, these patches are permanent. Since these dodgy solutions work, most business people aren’t keen to invest in further revisions to reach an optimal design. Hence, these enduring fixes lower the quality of your digital platforms and compromise agility and speed in future business projects.

The effect of the repeated production of sub-par assets –regardless of the name they’re given– is nothing less than the continuous creation of unnecessary complexity, leading to the progressive decline of your IT platforms.

Let’s Get Financially Disciplined

The cumulative detriment to IT assets has recently inspired some smart IT people to come up with a new idiom: Technical Debt. If an IT colleague has ever uttered a sentence to you including that pair of words, you should read the following.

The technical debt idea entails that an IT person documents cases of sub-optimally built solutions in some sort of ledger. Each individual occurrence, as well as the sum of everything in the register, is referred to as technical debt. With each new IT hiccup added to the books, an official process makes the paying business sponsor officially aware of the added technical debt. The message IT sends the client in such situations means something like this:

  1. “For technical reasons, the project cannot be delivered according to the original blueprint and/or customary good practices within the allotted time and budget.
  2. This may impede the agility of the platform, or create additional costs in future projects. Hence there is a technical debt recognized.
  3. We all acknowledge that this debt should be corrected.”

Technical Debts are Fine for Communicating

This is great from a communications point of view. There are, however, caveats regarding such a well-intended message:

  1. The project will deliver something anyway, and it will work[1].
  2. But you won’t have a clue about the problematic “technical reasons” used to justify inferior quality; you’re held hostage by a single IT desk, holder of all technical knowledge.
  3. The debt is declared, but the impact is not evaluated. There is no reliable forecast suggesting the amount of the added deficit to write off.
  4. There is probably no transparent process in place to check the ledger at the end of a project in order to track and contain the global deficit (see the sketch below).
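
Here is that sketch: a minimal, hypothetical Python ledger entry whose fields force answers to points 2, 3 and 4 above. The fields are my assumptions, not an established practice.

```python
# Hypothetical sketch of a technical-debt ledger (illustrative only).
# Each field forces an answer that the usual declaration leaves out:
# the plain-language reason, the estimated impact, the repayment plan.

from dataclasses import dataclass

@dataclass
class DebtEntry:
    project: str
    reason: str            # the "technical reason", in plain language
    estimated_cost: float  # forecast of the deficit to write off
    repayment_plan: str    # when and how the principal is reimbursed

ledger: list[DebtEntry] = [
    DebtEntry(
        project="CRM revamp",
        reason="Customer categories hard-coded; schema change deferred",
        estimated_cost=25_000.0,
        repayment_plan="Phase 2 data migration, Q3",
    ),
]

def global_deficit(entries: list[DebtEntry]) -> float:
    """The end-of-project check that caveat 4 says is usually missing."""
    return sum(e.estimated_cost for e in entries)

print(f"Declared debt to date: {global_deficit(ledger):,.0f}")
```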

Loans 2.0

This whole concept of indebtedness in IT doesn’t make sense from the start. It leads business people to falsely believe that the deficit is managed. So you have a debt? As a businessperson, the following questions probably come to mind:

  1.  Who is the lender?
  2. Who is the debtor?
  3. What is the interest made of?
  4. What is the interest rate?
  5. How and when is the principal being reimbursed?

The answers are brutal:

  1. You.
  2. You.
  3. Budgetary increases or lost speed pertaining to future business projects.
  4. Nobody knows.
  5. At an undefined date, when you ditch your platform and pay for another one.

Call ‘em Whatever You Want – You Pay for Everything

Short term management, conflicting accountabilities, or any other good or bad reasons to cut corners will foster the creation of lower quality assets by your IT team.

Your IT staff can call these situations fixes, patches, tactical solutions, or technical debts, but the result is always the same: the customer pays for everything, now or in the future, in hard cash or in reduced business agility.

As for the assets in question, you will always keep them for a longer time than you’d want to, whether they are true assets or debt-ridden liabilities[2].

Measuring Quality

The gloomy outcome I’ve been describing is not inevitable –there is hope– but only if you work to change how accountabilities are distributed. In this book you will have the opportunity to look more closely at the reasons why accountability for IT asset quality is missing, and at the damage that absence causes.

—————-

[1] For more details on why it will always work, refer to this other article.

[2] The IT Liability idiom is borrowed from the work of Peter Weill & Jeanne Ross from MIT Sloan’s Center for Information Systems Research, and refers to the fact that IT investments may create liabilities rather than assets if these so-called assets become a burden under changing business conditions.


The Tactical Steps Sideways That Keep You On the Sidelines

Things happen in IT projects. At times, some quality elements will be sacrificed to offset the vagaries of the project delivery scene. A solution that works will be delivered, of course. But as discussed in a previous article, a working solution brings no comfort regarding its quality, since almost anything can be made to work in the virtual dimensions of software and computers. And when issues arise and put pressure on IT teams, a suboptimal alternative will be presented as a fix, a patch, a temporary solution or, most wickedly named, the tactical solution.

In circles of experienced IT managers and practitioners, the ‘tactical solution’ sits somewhere between fairy tale and sham.

The word suggests to the non-IT stakeholder that the chosen tactic is a step sideways, and that once the applicable steps are taken, the product should attain the desired state, which is often labelled as the strategic or target solution.

Because the tactical solution works (since anything in IT can be made to work), it could be viewed as a small step in the right direction. After this dodgy solution is implemented, we simply need to perform a few extra steps to reach the strategic state, right?

Not really.

Tactical Solutions Waste Work

The solution does work, and common wisdom says “If it ain’t broke, don’t fix it.” Besides, how could it be broken if it works? Unfortunately, and I know that I am repeating myself, the fact that it works is no guarantee of anything.

Tactical solutions are never presented to you as a step in the wrong direction or a step back, but most of the time they are, and here’s the logic:

Once a tactical solution is delivered, the next step is not a move forward, but rather a revision of the sub-optimally designed part. The system will often have to be partly dismantled and then rebuilt, throwing away portions of the previous work. That’s not a step in the right direction.  That’s not tactical.  That’s wasted work.

Assets Built on Hope Aren’t Enough

Not many business people are keen to pay for throwing away something that works. As such, when money for the next phase becomes available, there is a good chance that the sponsor will want to invest in an effort that brings more business value rather than in redoing what’s already completed. Moreover, in many cases the bewildered customer will need to pay an additional fee for the removal of something they already paid to put in place. That’s a stillborn path to the strategic state.

Hence, to get there, the IT team has to hope for luck or fall back on secrecy: hope to correct the situation in the lucky event that the tactical solution breaks, or count on a forthcoming major project for the opportunity to openly (or discreetly) administer the needed rework.

Next time you hear a friendly IT person confidently talk about a tactical solution or any of its synonymous labels, don’t jump too fast to the conclusion that it will elegantly be transmuted into a strategically positioned investment backed by a greater plan to get there.

Most of the time, a so-called tactical solution is in reality a permanent solution that sacrifices agility and becomes an IT liability[1] for many years to come.

If you know –or have vaguely heard of– the technical debt concept and hope that it will prevent the sideways steps that keep your IT assets on the sidelines of the strategic investment field, stay tuned for next week’s article. You will realize that processes designed for the continuous development of software sold directly to customers don’t always apply propitiously to the delivery of business solutions that support what your organization makes a living from.

——————-

[1] The IT Liability idiom is borrowed from the work of Peter Weill & Jeanne Ross from MIT Sloan’s Center for Information Systems Research, and refers to the fact that IT investments may create liabilities rather than assets if these so-called assets become a burden under changing business conditions.

Joseph’s Machine and the Unnecessary Complexity of Business IT Solutions

The best non-technical analogy to explain the extent of the complexity of corporate IT assets, and by the same token why a working IT solution proves nothing about its quality (the subject of a previous article), appeared on my LinkedIn feed last week: https://www.youtube.com/watch?v=auIlGqEyTm8.

After watching this two-minute video, your first reaction is probably like mine: amusement and awe over Joseph’s ingenuity. But once I got over the toddler’s cuteness, it came to me that Joseph’s machine can teach a lot about IT solutions.

Am I insinuating that your IT solutions are like Joseph’s machine?  You bet!

Yes, IT business solutions’ engines often look like this under the nice, shiny hood of sleek user interfaces.  What you see is the final product, the cake you want to eat.  What you don’t see are the contorted paths taken to get it to you.

So why are we IT people making things so complicated?

There are many reasons. My first book will give you a broader view of the problem and a deeper understanding of the non-tech root causes. In the meantime, here are three key pointers:

First, Joseph is dealing with the laws of physics – in a brilliant way I should add. In the virtual world of software-based solutions, such laws don’t apply. Furthermore, I suspect that Joseph had to go to a dozen stores to buy all this apparatus and spend a lot of time finding the right gizmos to fit his process.

In software-based solutions, you just click, download it, resize it, or copy and paste it ad infinitum if you wish.  It is usually simple, often effortless.

It can also go in all directions and augment the overall complexity, but your IT staff will still find a way to make it work.

In other words, the drawback of computer-based solutions is that it is easy to “clog your kitchen” as in the video.

Second, after Joseph is done with video-making, he cleans the kitchen before the in-laws come for dinner. Your IT-based solutions support your business and they stay there as long as you’re operating. As easy as it is to fill the kitchen with software-based components, it is proportionately as difficult to empty the room – unless it was planned for.

The distribution of roles and the prevailing performance indicators do not promote designs that make your systems easy to remove. Most of the time, you’re stuck with them.

Finally, Joseph’s machine works, and it delivers the cake. The same can be said about your IT business solutions. The current hierarchy of performance measures for corporate IT is dominated by a short-term focus, with the sempiternal “Keep-the-Lights-On” (KTLO) and “On-Time-On-Budget” (OTOB) efficiency gauges.

If your sole expectation is to get the cake on your plate before the competition gets it, then you’ll receive your pastry all right, but do not hope for more. 

It doesn’t have to be this way.

With a more balanced distribution of accountabilities, and performance measures that extend beyond short-term expectations to the intrinsic quality of what is built, you can earn a significant competitive edge with your IT solutions. The added benefit? The next time you need pudding or ice cream instead of cake, you’ll reduce the probability of your IT team telling you that you need to buy a whole new kitchen. The kitchen-building industry is a prosperous one these days, but it consumes your investment money and the precious time you need to beat your rivals.

Do Not Assume Anything From IT Solutions That (Always) Work

This is where we start: an initiatory revelation that will help you understand many of the everlasting issues plaguing corporate IT.  This truth is one of the most important drivers of lower quality in the work products of the corporate IT function.

Business IT solutions are mainly made of software, and software is highly flexible and malleable – characteristics that are difficult to find elsewhere. Fundamentally, software is a series of electrical impulses representing numbers. At its core, all a computer does is perform simple operations on numbers – nothing else. The images on your screen, the voice you hear on your phone, and any other seemingly magical digital phenomenon can be reduced to zeroes and ones. These numbers are then fed into and processed by an immensely powerful number-crunching machine the size of your thumbnail.
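
If that sounds abstract, a few lines of code make it concrete. Here is a minimal, purely illustrative sketch in Python – none of it comes from any real system – showing that text, pixels, and sound are all just numbers to the machine:

    # Text: each character is stored as an integer code
    print([ord(c) for c in "Heaven"])   # [72, 101, 97, 118, 101, 110]

    # An image pixel: three integers for red, green, and blue intensities
    pixel = (255, 200, 40)

    # Sound: a waveform sampled thousands of times per second into integers
    samples = [0, 12, 25, 37, 48, 57, 64]
    print(pixel, samples)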

Limitless IT Possibilities

Software exists in a virtual world where the laws of physics, as well as most constraints found in other fields, don’t apply. Of course, applications must remain compatible with the physical characteristics of the human beings or machines that will use them. If an IT business solution interacts with production machinery – say, opening and closing garage doors – you can expect it to abide by the laws of physics, and probably by some standards and regulations.

But apart from these specific cases, it is fair to say that if the IT experts of most businesses are challenged with questions such as “… but is it doable? Can you make it work?” they cannot honestly answer “no”, because there is always a way to make an IT solution work.

Why Does Your IT Team Say “No”?

You may have painful memories of instances where you were told “no” by your IT teams. Let me assure you that, excluding extreme cases, the reasons for these negative answers were probably that the budget was exhausted, the time left was too brief, compliance with standards was problematic, or the teams in place were busy doing other things – but not that it wasn’t doable. There is always a way to make it happen when you’re dealing with the intangibles of software and the immense capabilities of computing hardware.

That’s the good news.

Beware of Alternate Solutions That Cut Corners

Often, making programs work simply requires doing things differently. Since software is so malleable, the options available are usually numerous. Unfortunately, doing things differently does not invariably mean finding a totally innovative, out-of-the-box paradigm.

Most of the time, being imaginative means finding ways to cut corners and still make it work.

The range of options is further extended by the relative inconsequence of errors. In the virtual world of corporate IT, there is little risk of human injury or casualties. Thus far in my career, I’ve never seen anyone dragged into a court of law for a botched design. External bodies will never audit a project down to its technical details. Episodes of skimping on quality never get publicized outside the corporation, nor even outside the project team.

Quality Issues That Translate Into More Complexity

Your IT team will find a way to make a solution work: I can guarantee it.

They will get it to work, whether with little effort or a heroic tug, whether through best practices or with haywire fixes. But heroism and best practices require more time and labor.

Hence, the end result will most probably require more maintenance, or run slower, or have stability issues, or present learning challenges to future employees, or need replacement sooner, or inflate costs in other projects – but it will work.

And if the expected quality levels are not achieved at the finish line, it will be called a fix, a patch, or – my favorite – a tactical solution, to convey recognition that it could have been designed and built in a better way. But these labels don’t express the truth that such solutions add unnecessary IT complexity, which in turn impedes the agility of the very team that created them.
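
To see how a tactical solution quietly accretes complexity, consider this hypothetical sketch in Python. Every name and number in it is invented for illustration; no real system is implied:

    from dataclasses import dataclass

    @dataclass
    class Order:
        weight_kg: float
        customer_id: int
        placed_in_december: bool
        country: str

    def shipping_cost(order: Order) -> float:
        cost = order.weight_kg * 2.5
        # "Tactical" patch #1: one big client negotiated a flat rate
        if order.customer_id == 40221:
            cost = 15.0
        # Patch #2, written carefully so as not to break patch #1
        if order.placed_in_december and order.customer_id != 40221:
            cost *= 1.2
        # Patch #3: a "temporary" surcharge awaiting a pricing engine that never came
        if order.country == "CA":
            cost += 5.0
        return cost  # it works - and that is all it proves

    print(shipping_cost(Order(4.0, 123, True, "CA")))  # 17.0

Each patch works, and each one makes the next change harder: every new rule must now be threaded around the three that preceded it. That is the unnecessary complexity hiding behind the friendly label.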

Does this mean that the great powers of information technology, with their almost limitless applications, can also be a hindrance? I’m afraid so. We’re dealing with the archetypal double-edged sword.

Not Proving Much

Your most important takeaway is the following:

The fact that a solution works proves nothing other than the fact that it works. Do not contemplate for a second the idea that it says anything about the quality of the end product.

Whatever the depth of your sorrow over this depressing statement, you might be tempted to think that, given all the virtual flexibility of IT, sub-optimally designed solutions can easily be corrected in subsequent projects. But that’s not how it works, so don’t hold your breath waiting for quality issues to be fixed. In an upcoming article, I will present another unpublicized truth about corporate IT that will lower your expectations about IT’s capacity to realign after sub-optimal solutions are delivered.

Before you do anything hasty, let me reassure you: there is light at the end, and there is a way to reach higher levels of quality that promote nimbleness. The good news is that it has nothing to do with technology and is within the reach of non-IT business executives. If you’re interested, take a minute to subscribe and you will get an automated reminder when new posts are published.
