
Assets

Note on the Notes

Notes on the Synthesis of Form

This book is, in my view, the equivalent of the Old Testament for designers and architects. It dates from 1964. Although a later Alexander book, The Timeless Way of Building (1979), has been raised to quasi-cult status because it paved the way for very important principles in software design, I believe that this seminal work from the same author is more profound.
In its 1971 preface, Alexander wrote this:

“No one will become a better designer by blindly following this method, or indeed by following any method blindly. On the other hand, if you try to understand the idea that you can create abstract patterns by studying the implication of limited systems of forces, and can create new forms in free combination of these patterns – and realize that this will only work if the patterns which you define deal with systems of forces whose internal interaction is very dense, and whose interaction with the other forces is very weak – then, in the process of trying to create such diagrams or patterns for yourself, you will reach the central idea which this book is all about.”

That's the high-cohesion-low-coupling principle in its earliest form. The fact that I can read just the preface and grasp what he meant in this dense sentence is a sign both of the influence he has had on future generations and of the importance of the principle.
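For readers who write software, here is a minimal, hypothetical Python sketch of that idea translated into today's vocabulary: the concerns that interact densely are kept inside one unit (high cohesion), and the rest of the system touches it through a single narrow interface (low coupling). The class and function names are my own illustration, not anything taken from Alexander's book.

    from dataclasses import dataclass

    # Illustrative only: the "forces" that interact densely (line items, tax,
    # rounding) are kept together inside one unit -- high cohesion.
    @dataclass
    class Invoice:
        line_amounts: list
        tax_rate: float

        def _subtotal(self):
            return sum(self.line_amounts)

        def _tax(self):
            return self._subtotal() * self.tax_rate

        def total(self):
            # The only thing the rest of the system ever needs to call.
            return round(self._subtotal() + self._tax(), 2)

    # The interaction across the boundary is deliberately weak -- low coupling:
    # the caller depends on one method, not on the internals.
    def print_receipt(invoice):
        print(f"Amount due: {invoice.total():.2f}")

    print_receipt(Invoice(line_amounts=[19.99, 5.00], tax_rate=0.15))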

You will also note the wise warning against following any method without thinking.

The man was born in the same year as my father: 1936.


Anything Missing When Measuring Corporate IT Performance?

Let me provide some reassurance about corporate IT: all the accountabilities that are linked to quantitatively gauged measures of performance are subject to rigorous management and are never neglected.

The two broad categories of clearly defined and clearly measured performance objectives are KTLO and OTOB: Keep The Lights On and On-Time On-Budget, respectively.

The first category relates to IT operations. Corporate IT's first and foremost responsibility is to make sure that what has been purchased, leased, built, installed, and proven to work once actually continues to work, continuously, for as long as your business runs. IT operations are less glamorous from an innovation point of view. IT Ops – as it is often called – doesn't invent new customer experiences. Neither does it re-architect your organization through radical business design.

But Ops is by far the most critical information technology function because its failure directly impacts the survival of your business in the very short term. If your organization cannot deliver services to your customers and partners, it literally ceases to exist. As such, IT operations should be taken very seriously; everything IT does or manages is monitored and measured quantitatively, down to fractions of a percentage point. Expectations for the quality, stability, and performance of operations are quantitatively defined up front. Failures happen, but if the frequency or duration of missteps rises above the agreed-upon performance levels, some people will get seriously nervous about their jobs.

“With the quantitatively measured performance objectives of IT operations, if failure happens too often, people get nervous about their careers.”
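To make “measured down to fractions of a percentage point” concrete, here is a small, hypothetical sketch of the arithmetic behind an operational availability target. The 99.9% target, the service names, and the downtime figures are invented for illustration; they are not taken from any real agreement.

    # Illustrative KTLO-style metric: measured uptime versus an agreed target.
    MINUTES_PER_MONTH = 30 * 24 * 60   # simplifying assumption: a 30-day month

    def availability(downtime_minutes):
        """Fraction of the month the service was actually up."""
        return 1.0 - (downtime_minutes / MINUTES_PER_MONTH)

    agreed_target = 0.999              # "three nines": roughly 43 minutes of downtime allowed
    measured_downtime = {              # hypothetical minutes of downtime this month
        "payments-api": 25.0,
        "reporting-portal": 95.0,
    }

    for service, downtime in measured_downtime.items():
        actual = availability(downtime)
        status = "meets target" if actual >= agreed_target else "MISSES target"
        print(f"{service}: {actual:.4%} availability ({status})")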

The second category, OTOB, relates to the execution of business change endeavors. Over the past few decades, there have been many scholarly and trade discussions about the measurement of project performance, and how adequate – or not – the traditional triple evaluation scheme of cost-schedule-scope actually is. The model may have its limitations for those who are intimately involved in executing the endeavors that result in business change, but for those who commission the change, assume the risks, and reap the benefits – that is, you, the paying customer – this performance measurement triad makes a lot of sense. The cost is how much money you need to spend to get what you want or need. The schedule is the time required to get it. And the scope is the extent of what you get for your money.

Scope can be subject to much discussion, since what you want and what you really need in the end may differ quite a bit between the start and the end of a project. To further complicate matters, there are as yet no universal units of measure for the scope of IT change projects. This imprecision contrasts with the universally understood measures of cost and schedule.

That's why many business people fall back on on-time-on-budget alone as a comprehensive tool for assessing how well IT delivers change, assuming that what is delivered (the scope) is roughly what it ought to be for some business value stream to transition to its new state.

“The scope of what is delivered by digital change projects is hard to measure and compare. That's why most business people fall back on what they can grasp: on time and on budget.”
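For readers who like to see the triad in numbers, here is a minimal, hypothetical sketch of the on-time-on-budget arithmetic. The budget figures and dates are invented, and scope is deliberately left out of the calculation because, as noted above, it has no universal unit of measure.

    from datetime import date

    # Hypothetical project figures, for illustration only.
    approved_budget = 1_200_000            # dollars
    actual_cost = 1_350_000

    committed_end = date(2024, 6, 30)      # invented dates
    actual_end = date(2024, 8, 15)

    cost_variance = actual_cost - approved_budget
    schedule_slip_days = (actual_end - committed_end).days

    on_budget = cost_variance <= 0
    on_time = schedule_slip_days <= 0

    print(f"Cost variance: {cost_variance:+,} ({'on budget' if on_budget else 'over budget'})")
    print(f"Schedule slip: {schedule_slip_days:+} days ({'on time' if on_time else 'late'})")
    # Scope would be the third dimension, but there is no agreed unit to put here.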

Compared with IT operations, managing change is not as acute a necessity. Failure to be on time or on budget doesn't have the same impact on personal and team performance evaluations as an operational failure does, but performance is gauged nonetheless and delivery dates are actively managed.

So What’s Missing?

The major issue is that there are very few other quantitatively measured signs of excellence. The rest of IT is either subject to non-standard and qualitative evaluations or simply not measured at all. Non-quantified evaluations are debatable and easy to challenge on contextual differences. Non-standardized gauges are hard to compare.

In the end, IT measures itself for only a portion of what it does, focusing on improving what literally counts: where there are unchallengeable numbers with universally understandable units of measure. The rest is left to good intentions, or to how it is believed to positively impact OTOB or KTLO.

Notice that both KTLO and OTOB are measures of immediate (KTLO) or short-term (OTOB) performance. ‘Keeping the lights on’ means continuous operations, or short transactional tasks. Change projects are by definition temporary endeavors with a beginning and an end. What happens after the project is finished is irrelevant to these measures. Even major transformation programs are split into manageable chunks that often fit within a calendar year.

The IT management repercussion of this short-termism is that the lasting impact of IT's work on your organization is veiled by short-range prerogatives.

The IT aspects that take the hit from short-term measures are quality and assets. More precisely, it is the quality of the work done that takes the hit, and that in turn degrades the quality of the assets you get as a result.

Your organization's capacity to adapt or respond quickly to changes in its environment is highly dependent on the quality of those assets, and asset readiness for change suffers from lower-quality work done in previous projects.

Get the bigger picture in this book about the things executives need to know about IT – it will help you understand how most IT teams are evaluated today. These typical metrics have a direct impact on what gets improved, but also on what isn't being taken care of. Enjoy!

Small, Autonomous and Fast-Forward to Lower Quality

I am a jack-of-all-trades. Admittedly – and proudly – I realize that a lifelong series of trial and error, crash courses, evening reading, and incurable curiosity has resulted in this ability to do many things – especially things involving some manual work. It gives me a quiet satisfaction to think of all the situations that could arise in which I would know what to do, and how to do it. I can help someone fix a kitchen sink on a Sunday afternoon. I can drive a snowmobile, sail a catamaran, or connect a portable gasoline generator to your house. My varied skill set affords me a serene feeling of power over the random hazards of life. That, and it's also lots of fun to do different things.

There is currently an interesting trend in many organizations to favour highly autonomous teams. The rationale is quite simple: autonomy often translates into increased operational leeway, which offers a better breeding ground for innovation. By not being weighed down by other teams, there is hope that the group will perform better and yield more innovative ideas. There is also the expectation that the team, using an Agile™ method, will produce tangible implementations much faster if it can be left alone. The justification is founded in the belief that small teams perform better. Makes sense: the smaller the team, the easier the communication, and we all know that ineffective communication is a major source of inefficiency – in IT as well as in any other field. And if you want your team to be autonomous and composed of as few individuals as possible, then there is a very good chance that you need multi-skilled resources.

Jacks-Of-All-Trades and Interchangeable Roles

You need jacks-of-all-trades; otherwise, either the number of individuals will increase or you will need to interact with other teams that have some of the required skills to get the job done. As a result, you will not be as autonomous as you'd like.

But there is more: the mere presence of multi-skilled individuals is not enough to keep your team small and efficient at yielding visible results at an acceptable pace. You must have an operating rule that all individuals are interchangeable in their roles. If Judy – a highly skilled business analyst – is not available in the next two days to work on sketching the revamped process flow, then Imad – a less skilled business analyst, but a highly motivated jack-of-all-trades nevertheless – needs to take that ball and run with it. You need multi-skilled resources and interchangeable roles. That's all pretty basic and understandable, and your organization might already have these types of teams in action.

For a small and autonomous team to keep its small size and its independence from others, it needs to be made up of jacks-of-all-trades and its roles must be interchangeable; otherwise it will either grow in size or come to depend on outsiders.

Conflicts of Roles in Small Autonomous Teams

Before you declare victory, or rush into hiring a bunch of graduates with a major in all-trades resourcefulness and let them loose on a green field of innovation turf, read what follows so that you also put the proper boundaries in place. If you want to ensure maximum levels of quality and sustainability in what comes out of small, autonomous, multi-skilled teams, you need to ensure that no conflicting roles are put on the shoulders of the individuals who have to juggle them.

Conflicts of roles occur when the same person needs to do work that should normally be assigned to different people. The most obvious – and, in corporate IT, the most abused – combination of conflicting roles is creating something and quality-controlling that same thing. This can be said of any field, really – not just IT. Industrial production process designers have understood for centuries that the person who performs the quality check should never be the one whose work is being checked. Easily solved, you might think: just have Judy check Imad's work in two days when she's available, and the issue is solved! Maybe – but there's a catch.

No Accountability and No Independence

Proper quality control requires at least one of these two conditions: (a) the person being checked and the controller must both be accountable for the level of quality of the work, or (b) the person doing the quality control must be able to perform the reviews independently. If Imad and Judy are both part of a team that is measured on the speed at which it delivers innovative solutions that work, then there is a good chance that quality is reduced to having a solution that works, period. Other quality criteria are undoubtedly agreed-upon virtues that no one is against, but they are not as important as speed. As described in another article, in IT more than in any other field, a working solution might be, under the hood, a chaotic technical collage barely holding together with haywire and duct tape – but it can still work.

These situations often occur when IT staff are put under pressure and forced to cut corners. Speed of delivery then comes into direct competition with quality when assigning the person-hours required to deliver excellence. If the small, autonomous, multi-skilled team's ultimate success criterion is speed, then Judy's check on Imad's work is jeopardized if the quality of his work has no impact on speed. In this case, because Judy and Imad are both part of a group that must deliver with speed, neither of them is really accountable for any quality criterion other than simply having the thing work. As long as it doesn't impede delivery pace, any other quality criterion is just an agreeable and desirable virtue, but nothing more. Judy is not totally independent in her quality control role and, worse, there is no accountability for quality.

When a small and autonomous team’s main objective is to deliver fast, any quality item that has no immediate impact on speed of delivery becomes secondary, and no-one is accountable for it.

And it doesn't stop there: since quality control takes time, the actual chore of checking for quality comes into direct conflict with speed, because valuable time from multi-skilled people is needed to ensure quality compliance. After two days, when she becomes available, Judy could check Imad's work, yes, but she could also start working on the next iteration, thus helping the team run faster. If no one is accountable for quality, Judy's review will soon be forgotten. Quality is continuously jeopardized, and your autonomous teams become fertile soil for the systematic creation of lower-quality work.

There’s No Magic Solution: Make Them Accountable or Use Outsiders

So, what precautions must be taken to ensure maximum levels of quality in multi-skilled, autonomous teams? The answer is obvious: either (1) the whole team must be clearly held accountable for all aspects of the work – including quality – or (2) potentially conflicting role assignments have to be given to individuals who are independent; that is, accountable and measured for the work they do themselves, not for the team's performance.

If you go with the first option, beware of getting trapped into conflicting accountabilities again, and read this article to understand how quality can be undermined by how it is measured. To achieve independence (the second option), you will need to have team members report to some other cross-functional team, or accept an infringement on your hopes of total autonomy by relying on outsiders. Although multi-skilled and autonomous teams are an enticing prospect for jacks-of-all-trades, the agility they bring should not be embraced at the expense of the quality of the assets you harvest from them.

Lower Quality at Scale

If you want to understand how and why unwanted behaviors such as those depicted above are not only affecting small autonomous teams, but are also transforming the whole of corporate IT into a mass complexity-generating machine that slows down business, read this mind-changing book.  It will help you understand why lower quality work products are bound to be created, not only in small, autonomous and innovation-centric teams, but almost everywhere in your IT function.

Innovation: Where IT Standards Should Stand

The use, reuse, or definition of standards when implementing any type of IT solution has very powerful virtues. I'm going to outline them here so you can see how these standards play into the (often misunderstood) notion of innovation in corporate IT. We'll then see where IT innovation truly happens in this context, while underscoring the importance of using or improving IT standards to support overall innovation effectiveness.

The Innate Virtues of IT Standards

  • Sharing knowledge.  Without standardization, each team works in its own little arena, unaware of potentially better ways of doing things and not sharing its own wisdom.  It is much easier to make all IT stakeholders aware of practices, tools or processes when they are standardized. Systematic use and continuous improvement of IT standards act as a powerful incentive for knowledge sharing.
  • Setting quality targets. Standards minimize errors and poor quality through the systematic use of good practices.  They encompass many facets, from effectiveness to security, to adaptability, to maintainability, and much more.
  • Focusing on what counts.  A green field with no constraints and no prior decisions to comply with might entice your imagination, but it can also drive you crazy if everything has to be defined and decided.  IT standards allow you to focus on what needs to be changed, defaulting all other decisions to the use of the existing standards.  
  • Containing unnecessary complexity.  The proliferation of IT technologies, tools, processes and practices in your corporate landscape is a scourge that impedes business agility.  Absence of standards interferes with knowledge sharing and mobility of IT resources.  Multiplicity of similar technologies makes your IT environment more difficult to comprehend, forcing scarce expert resources to focus on making sense out of the existing complexity rather than building the envisioned business value.

The use and continuous improvement of IT standards is one of the most effective cross-enterprise safeguards of IT effectiveness, IT quality and, in the end, your business agility.

Despite all these advantages, a trend is emerging in many organizations that puts these virtues at risk.

The Lab Trend

In the last few years, it has become a mainstream strategy for large, established corporations to create parallel organizations, often called “labs”, that act as powerhouses to propel the rest of the organization into the new digitalized era of disruptive innovation. This article is not about challenging that wisdom, which may be the only possible way – at least in the short term – to relieve the organization from the burden of decades of organic development of IT assets and processes that slow down the innovation pace.

Unfortunately, there are people in your organization who associate standards with the ‘old way’ of doing things.  After all, aren’t all standards created after innovation, to support the repeated mainstream usage of innovative tools, processes or technologies that came before them?

Making the leap that IT standards should not be considered in the innovation process, not included in the development of prototypes or proofs of concept, or – more simplistically – not be part of anything close to innovative groups, is a huge mistake.

The decision to use or not use a given IT standard depends on what you are innovating and what stage of the innovation process you are in. The IT work required to implement business innovations is rarely wall-to-wall innovative. Standards cannot – and should not – be taken out of the innovation process from start to finish. I'd go a step further: standards should always be used, except when the innovation requires redefining them. But the latter case is exceptional. To help you grasp the difference between true business innovation and its actual implementation, here's a simple analogy:

The Nuts and Bolts of Innovation

In the construction industry, there are well-known standards that determine when to use nails, when to use screws, and when to use bolts in building a structure. They stipulate the reasons to choose one over the other (e.g., because nailing is much faster to execute and cheaper in material costs). The standards also spell out how to execute: how many nails to drive, their size and the spacing between them, safety precautions, and so on.

Now suppose that your new business model is about building houses that can easily be dismantled and moved elsewhere – say, to serve a niche market of temporary housing for the growing number of climate-related catastrophes. You decide to build whole houses without ever using nails or screws, bolting everything instead. You make this decision to simplify dismantlement, so the house can easily be moved and rebuilt elsewhere. The technical novelty here lies in the systematic use of bolts where the rest of the industry normally uses nails. Bolts are slower to install and more expensive, but they allow you to easily disassemble the house.

But when a worker bolts two-by-six wood studs, the actual execution of bolting is not an innovation; it has been known for centuries and the execution standard can be used as is.  In other words, when a worker is on the site and bolting, the innovation has already occurred when the choice was made not to use nails or screws. The market disruptive strategy was determined before, and it is now time to apply bolting best practices and good craftsmanship.

No Ubiquitous IT Innovation in Corporate IT

For IT based business solutions, when the teams are in the phases of implementing the processes, systems and technologies, most of the business innovation has probably occurred in the previous phases.  

When IT staff are actually building the technical components of your new modes of operation, the business innovation has already occurred: it lies in the prior choices made during design.

The techies might be testing the innovation through some sort of a prototype, but it doesn’t make their work innovative. When you look at it from a high enough viewpoint, isn’t implementing a new business process with information technologies what corporate IT has been doing for decades?  

When building the IT components of innovative business solutions, where is the actual innovation? Is it in the new business processes or in the way they are technically implemented? Chances are that the real value is in the former, not the latter, because your initial intention was to aim for business value, not technical prowess.

It may very well be that, at the IT shop-floor level, what needs to be done is to apply good practices and standards that have been around for years, if not decades.

In our era of multi-skilled, cross-functional, autonomous, self-directed and agile teams – all busy growing new solutions that support constantly evolving business processes – there is a line that should not be crossed: thinking that innovation applies to everything, including the shop-floor-level definition of good craftsmanship.

Don’t Pioneer Without IT Standards

My observation is that when IT practitioners are part of teams dedicated to innovative business solutions, they often become overzealous, abandoning standardization and tossing tried-and-true practices out the window. I've seen IT people make a clean sweep of all established standards and proclaim every part of a solution as innovative. I've seen technical staff blindly pull so-called innovative technologies into the equation with little understanding of their real contribution to business value. This has a direct impact on the quality of the resulting work. Here's how:

  1. IT staff end up using bolts where nails would be fine, or nails where they should have used bolts;
  2. New platforms are built with no standards used or defined.

In both cases, the impact on your future change projects is catastrophic: lack of shared knowledge, unknown quality levels, time and effort lost reinventing the world and, most importantly, the creation of more unnecessary IT complexity. The resulting assets will be hard to integrate, impossible to dismantle, incomprehensible to anyone but those who created them, and costly to maintain. In other words, your business agility will be seriously jeopardized.

The results from innovation without standards will fast-track you to the same burdensome position you tried to free yourself from with your old, outdated platforms.

The only way to avoid this unhealthy pattern is to make sure that the mandate is not just about innovating at any cost.  It must include the use and creation of standards, and limit the scope of change to what creates business value.

Set the Standard

First, your innovation team should not only devise new ways to do business: it must make it a priority to use and reuse standard practices and technologies unless the innovation itself requires otherwise. When a given standard is not applicable, the team's job should include defining the one that replaces it. The idiom “to set the standard” takes on its full significance: reinventing business models that others will now run to catch up with or match, and defining the standards that your organization and future projects will use and leverage. Your future business agility depends heavily on the systematic application of good craftsmanship in your current innovations.

New Technologies Need to Bring Value, Not Novelty

Secondly, your new parallel ‘lab’ organization should bear the onus of justifying the use of any new or different technology. How will it contribute to the innovative, business-oriented end result that you seek? When technologists are presented with the enticing prospect of having no obligation to use any of the standards in place in your organization, they will jump at it. This often leads to the introduction of new technologies for their own sake, based on no justification other than hunches, hearsay, or how attractive they may look on a resume.

The use, reuse, and redefinition of IT standards should always be part of your innovation team's mandate. If not, your future business model will be built on foundational assets created as if there were no tomorrow.

Beware of catching the contagious over-excitement about the scope of innovation. Most of the IT processes and components that result from business innovation can use mainstream practices and standard technologies. The legitimately innovative portion – the one that really makes a difference – is just a fraction of the whole undertaking, and very often the truly novel part is simply not technological.

Provide Leeway But Set Quality Expectations

So, even if you rightfully decide to go down the path of creating parallel organizations, don't allow them too much leeway when it comes to standards. Do not sign the cheque without a minimal set of formal expectations regarding sustainability, which must include standards compliance.

The key is in clear accountabilities and coherent measures of performance. If you want to learn more about how poorly-distributed roles can sabotage the work of your corporate IT function, read this short but mind-changing business strategy book.

IT Project Failures Are IT Failures

While conducting research for Volume 1 of my first book[1], I wanted to investigate the root causes of IT project failures. I was completely convinced – and still am – that these failures are significantly related to the quality of the work previously done by the teams laboring on these endeavors. In other words, the recurring struggle that IT teams face, often leading to their inability to deliver IT projects on time, is directly linked to the nature (and the qualities) of the IT assets already in place. I found a wealth of information on project failures, as well as a disappointing revelation.

The Puzzling Root Cause Inventory

This disconcerting realization was that the complexity of existing IT assets is rarely mentioned. Technological issues appear infrequently in the literature on project failure. Just for the sake of it, I performed an unscientific and unsystematic survey of professional blogs and magazines, and came up with a list of 190 causes of failure. The reasons range from insufficient sponsor involvement to faulty governance, communications, engagement, and so on. I found nothing really surprising, albeit depressing in some ways. Of these reasons, a mere 11 were related to the technology itself, and one, and only one, referred to underestimation of complexity.

This number does not reflect reality. It doesn't make sense that, for technology-based projects, there is such thin representation of technology-related issues. The proportions don't match. It doesn't fit the day-to-day reality in the corporate trenches. If your platforms are made of too many disjointed components or were built by siloed teams; if their design and implementation were poorly documented to cut costs, or standards compliance was poorly controlled, then they are bound to contribute to failure. If your internal IT teams have a hard time understanding their own creations, or frequently uncover technical components that were never taken into account, how can you be surprised when schedule slippages occur in their projects? The state of what is in place plays a major role – and definitely not in a proportion of 1:190.

A Definite Project Management Skew

This gap in the documented understanding is due to a project management bias in the identification of the root causes of IT project failure. This is quite understandable, since the project management community is at the forefront of determining project success and failure. Project managers are mainly assessed on on-time and on-budget delivery[2]. They take underperformance seriously, and that is why the available knowledge on root causes is disproportionately skewed toward non-technical sources.

Project managers tackle failure as a genuine project management issue, and the solutions they find are consequently colored by their project management practice and knowledge.

I wouldn't want to undervalue the importance of the skills, processes, and good practices of project management. But we need to recognize the foundational importance of the assets that are already in place. They are not just another variable in a risk management plan. They are the base matter from which an IT project starts, along with business objectives and derived requirements. On any given workday, IT staff are not working “on a project”; they are heads down, plowing through existing assets or creating new ones that need to fit with the rest.

The Base Matter Matters a Lot

If IT projects were delivering houses, the assets in place would be the geological nature of the lot, the street in front of the lot, the water and sewage mains under the pavement, the posts and cables delivering electricity, and the availability of raw materials. Such parameters are well known when estimating real estate projects.  If you did not take into account that the street was unavailable at the start date of the construction project, that there was no supply of electricity, that the lot was in fact a swamp, or that there was no cement factory within a 400 mile radius of your construction site, you can be sure that the project would run over-schedule and over-budget.  The state of your existing set of assets creates “surprises” of the same magnitude as the construction analogies above.  When your assumptions about the things in place are confounded because quality standards weren’t followed or up-to-date documentation was unavailable, your estimates will suffer.

Any corporate IT project that doesn't start from a clean slate[3] – and most don't – runs into issues related to the state of the assets already in place.

The unnecessary complexity induced by poorly documented or contorted solutions is not a figment of the imagination. It is the harsh reality that corporate IT teams face on a daily basis. It is the matter that undermines their capacity to estimate what has to be done and cripples their ability to execute at the speed you wish they could deliver.

IT Quality Is an IT Accountability

Although project success is, by all means, a project management objective, the state of an IT portfolio isn’t.

The quality of what has been delivered in the past, and how it helps or impedes project success is not a project management accountability. It’s a genuine corporate IT issue.

So tossing it all into project management accountabilities is an easy way out. If important business projects are bogged down by an organization's inadequate IT portfolio, it is primarily an IT problem, and only secondarily a project risk or issue. Project managers with slipping schedules and blown budgets took failure seriously enough to identify 190 potential root causes and devise ways to tackle them. Nobody in corporate IT has ever done anything close to that for IT complexity or any other quality criterion applicable to IT assets.

This vacuum has nothing to do with skills, since IT people have all the expertise required to identify the root causes and work out ways to reduce unwanted complexity.

It's all about having the incentives to fix the problem. The reasons to solve it are not just weak; they are outweighed by motivations to do nothing about it[4].

———–

[1] More details on the book available on my blog’s book page.

[2] Also detailed in the book, or in this recent article.

[3] See this other article on the clean slate myth.

[4] For more details on this, take a look at my latest book.

The Latest Change in Vocabulary Doesn’t Turn Liabilities into Assets

In last week's article we saw that you should be very prudent concerning IT Tactical Solutions. They are often presented by your IT teams as temporary situations: sidesteps that must be taken before the envisioned strategic situation can be reached. But more often than not, these patches are permanent. Since these makeshift solutions work, most business people aren't keen to invest in further revisions to reach an optimal design. Hence, these enduring fixes lower the quality of your digital platforms and compromise agility and speed in future business projects.

The effect of the repeated production of sub-par assets – regardless of the name they're given – is nothing less than the continuous creation of unnecessary complexity, leading to the progressive decline of your IT platforms.

Let’s Get Financially Disciplined

The cumulative detriment to IT assets has recently inspired some smart IT people to come up with a new idiom: Technical Debt. If an IT colleague has ever uttered a sentence to you including that pair of words, you should read the following.

The Technical Debt idea entails that an IT person documents cases of sub-optimally built solutions in some sort of ledger. Each individual occurrence, as well as the sum of everything in the register, is referred to as technical debt. With each new IT hiccup added to the books, an official process makes the paying business sponsor officially aware of the added technical debt. The message IT sends to the client in such situations means something like this (a small sketch of what such a ledger entry could record follows the quoted message):

  1. “For technical reasons, the project cannot be delivered according to the original blueprint and/or customary good practices within the allotted time and budget.
  2. This may impede the agility of the platform, or create additional costs in future projects. Hence there is a technical debt recognized.
  3. We all acknowledge that this debt should be corrected.”
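To picture what such a ledger could look like, here is a minimal, hypothetical Python sketch of a technical debt register entry. The field names and figures are my own illustration of the idea, not a description of any actual tool, standard, or process.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    # Hypothetical sketch of one entry in a technical debt register.
    @dataclass
    class TechnicalDebtEntry:
        project: str
        description: str                     # which corner was cut, and why
        declared_on: date
        estimated_remediation_cost: float    # the "principal"
        estimated_yearly_drag: float         # the "interest": extra cost or lost speed per year
        target_repayment_date: Optional[date] = None   # usually the missing piece

    ledger = [
        TechnicalDebtEntry(
            project="Customer portal revamp",
            description="Hard-coded pricing rules instead of the planned rules engine",
            declared_on=date(2024, 3, 1),
            estimated_remediation_cost=80_000,
            estimated_yearly_drag=30_000,
            target_repayment_date=None,      # no repayment plan: the debt simply accrues
        ),
    ]

    total_principal = sum(e.estimated_remediation_cost for e in ledger)
    total_drag = sum(e.estimated_yearly_drag for e in ledger)
    print(f"Declared principal: {total_principal:,.0f}  Yearly drag: {total_drag:,.0f}")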

Technical Debts are Fine for Communicating

This is great from a communications point of view. There are, however, caveats regarding such a well-intended message:

  1. The project will deliver something anyway, and it will work[1].
  2. But you won’t have a clue about the problematic “technical reasons” used to justify inferior quality; you’re held hostage by a single IT desk, holder of all technical knowledge.
  3. The debt is declared, but the impact is not evaluated. There is no reliable forecast suggesting the amount of the added deficit to write off.
  4. There is probably no transparent process in place to check the ledger at the end of a project in order to track and contain the global deficit.

Loans 2.0

This whole concept of indebtedness in IT doesn't make sense from the start. It leads business people to falsely believe that the deficit is being managed. So you have a debt? As a businessperson, the following questions probably come to mind:

  1.  Who is the lender?
  2. Who is the debtor?
  3. What is the interest made of?
  4. What is the interest rate?
  5. How and when is the principal being reimbursed?

The answers are brutal:

  1. You.
  2. You.
  3. Budgetary increases or lost speed pertaining to future business projects.
  4. Nobody knows.
  5. At an undefined date, when you ditch your platform and pay for another one.

Call ‘em Whatever You Want – You Pay for Everything

Short term management, conflicting accountabilities, or any other good or bad reasons to cut corners will foster the creation of lower quality assets by your IT team.

Your IT staff can call these situations fixes, patches, tactical solutions, or technical debts, but the result is always the same: the customer pays for everything, now or in the future, in hard cash or in reduced business agility.

As for the assets in question, you will always keep them for a longer time than you’d want to, whether they are true assets or debt-ridden liabilities[2].

Measuring Quality

The gloomy outcome I've been describing is not inevitable – there is hope, but only if you work to change how accountabilities are distributed. In this book you will have the opportunity to look more closely at the reasons why accountability for IT asset quality is missing, and at the harm that absence causes.

—————-

[1] For more details on why it will always work, refer to this other article.

[2] The IT Liability idiom is borrowed from the work of Peter Weill & Jeanne Ross from MIT Sloan’s Center for Information Systems Research, and refers to the fact that IT investments may create liabilities rather than assets if these so-called assets become a burden under changing business conditions.

