

Value of Technology Part 3 – Corporate IT’s Value Is in the Crafting

Using business value as a gauge of performance is not only wrong, it is too advantageous for corporate IT teams.  What a predicament: to be measured on something that you have little control over and that is created by others!  As long as the other party performs well, you get a free ride to prosperity. 

Everyone who contributed to, let’s say, a 20% reduction in waiting times at the airport gates should celebrate the business value of this achievement. However, it is by no means a sign that the corporate IT staff involved in the project were any good at doing their part of the job.  

Successfully using technology to make a valuable business achievement doesn’t mean anything about the performance of the IT teams involved in the accomplishment.

As a supporting capacity to the real creation of value by your organization, the IT function needs to be measured on other things.  Within the current typical engagement model, the corporate IT function shouldn’t be held accountable for creating business value. We saw in Part 2 that corporate IT’s business is not banking, insurance, car manufacturing or offshore drilling. In Part 1 we covered the importance of not confusing the value of an investment in technology with the performance of the tech teams that create the technology.

I’m sure that you’d agree that corporate IT teams should be accountable for adequately supporting your business endeavors.  And what should you expect from them?  The answer is not that simple because corporate IT is —and always has been— a two-headed beast!

One brain is dedicated to operating the IT assets; the other is focused on changing them.  The ‘operate’ and ‘change’ halves are very different.  Furthermore, their respective performances cannot be evaluated collectively.  One side is dedicated to continuity, short-term actions, and transactional speed.   The other is dedicated to change and is judged on very different criteria. 

The First Half: Quantitatively Measured and Standards-Based

IT operations —as this half is usually named— are devoted to keeping computer systems running smoothly.   They work with existing assets: the solutions that were put in place by the second half, the one dedicated to change.  

The good news about operations is that, over the last few decades, this half has implemented quantitative measures of performance that leave little space for interpretation. System failures, downtimes, response times and others are undebatable measures of a job well done.  Furthermore, the major part of IT operations costs comes from purchased, standards-based technology resources.  This means that most of the costs to operate the IT assets are traceable to vendor invoices, so there is always auditable evidence of the costs, and most of them can be compared to the same commodities provided by other vendors.  This opens the door to frequent optimization efforts. They measure themselves, and they genuinely improve.
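These operational measures boil down to simple arithmetic, which is precisely what makes them undebatable. As an illustration (the helper names and figures below are hypothetical, not taken from any particular monitoring tool), availability and mean time to repair can be computed like this:

```python
def availability(total_minutes: float, downtime_minutes: float) -> float:
    """Percentage of the period during which the service was up."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def mttr(outage_minutes: list) -> float:
    """Mean Time To Repair: average length of an outage."""
    return sum(outage_minutes) / len(outage_minutes)

# A 30-day month with three outages of 12, 45 and 8 minutes:
outages = [12, 45, 8]
month = 30 * 24 * 60  # 43,200 minutes

print(round(availability(month, sum(outages)), 3))  # 99.85
print(round(mttr(outages), 1))                      # 21.7
```

Numbers like these can be written into a contract and audited after the fact, which is exactly what makes comparison with other providers possible.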

The Ill-measured Other Half

The other side of corporate IT lives and breathes on change.  The development, software factory, or solution engineering half —or any other of the many names it’s given— is dedicated to understanding new requirements and changing the systems to adequately support the evolution of your enterprise.  The expectation toward this second function should be that it does the right thing, at the right pace.  In other words, the evaluation of its performance should be based on the speed at which it provides its contribution, and the quality of the work done.  

Quality and Speed

Both velocity and excellence of work are important and interdependent. Corporate IT could produce quality outputs at an unacceptable pace or, inversely, speedily provide poorly engineered deliverables.  It is not without embarrassment that I admit to having witnessed, much too often, poor-quality outputs delivered at a slow pace.

Quality and speed allow your organization to more rapidly respond to market changes, or better, to provoke the disruptive changes that will give you a profitable business edge.    

Cost is deliberately left out of the equation, since most corporate IT costs for the development half of IT are directly linked to speed.  Being slower or delivering poor quality usually increases costs, sooner or later. Additionally, I suspect that from a purely business point of view, you need more rapid delivery of change, not lower costs —although that wouldn’t hurt.

Poor Measures

The bad news about the development head is that this half has no reliable measure for speed, and an incomplete grasp of quality.    Refer to this article to discover why speed in delivering change cannot improve. The reason is simple: it is not measured. It never has been, and unless the engagement model is changed, the second half will continue to be un-measured on speed —and remain mediocre on that front.

You may also want to learn why the same faulty engagement model leaves unattended a whole section of quality that later boomerangs back to your business endeavors and slows IT turnarounds.  

Summing Up Value for Corporate IT

Regardless of the state of your organization’s processes for assessing corporate IT’s performance, keeping business value in its right place, as an investment assessment tool, is a first step toward clarity about expected results.

Business value should never be used for appraising corporate IT work, and is no replacement for adequate measures of performance that foster accountability.

The next step is to tie IT’s work to quantitative measures of performance that are aligned with your enterprise’s expectations of that work. Beware, however, that new meters need to be put in place. The ones in place leave too many important areas unattended, which irremediably leads to underperformance.

Constructing Digital for Deconstruction

The Citation

“The information revolution is sweeping through our economy.  No company can escape its effects.  Dramatic reductions in the cost of obtaining, processing, and transmitting information are changing the way we do business.”

How do you relate to the statements above?  True? False?  Haunting you day and night?  Excited about endless opportunities?    Need help?  All of the above?

Here’s the interesting detail: it comes from a landmark article by Porter & Millar published in July 1985.  Yes, nineteen-eighty-five.  How old were you that year?  That’s 35 years ago! 

Don’t re-read the quote to try to find a flaw related to its age.  There isn’t. 

Change, the Only Certitude

There is wisdom to be gained from realizing that disruptive technologies like big data, the internet of things or artificial intelligence are just the latest in a long series of tech drivers that change the way business is done.   That was true in 1985, and it will be true in 2055.    Change is the only certainty.   

How is your organization prepared for that?  More specifically, how well rigged are your digital teams —and the digital assets they created over the years— to sustain constant change for the next 35 years?

If there is one thing that will not change, it’s the certainty that whatever corporate IT does to support a given business shift will need to be changed again and again.   Sooner rather than later, what they’ve created will need to be replaced or retired.   

Keep that in mind for a moment as we sidetrack to a personal experience.

Summer Festivals

In fall 2014, I was attending an annual symposium organized by the Montreal Chapter of the Project Management Institute.  One of the speakers gave a candid presentation on how projects are managed in his business: the logistics and physical installation of the infrastructure required for summer festivals.   His job was to transform villages, parks, beaches, or city streets into giant entertainment complexes, with performance stages, parking areas, restaurants, kids’ amusement gear, etc.   That’s pretty cool.  His business domain appeared, to the IT guy that I am, both remote and refreshing. But there is one portion of his talk that struck me.

It was about timing pressures.  Not the very common pressure of having very little time allotted for complex endeavors.   Nothing new there.  Not the fact that the start dates of these festivals are cast in stone, publicized at least a year in advance, with no possible way to delay delivery.  That made the IT guy in me think about the hundreds of delayed deliveries I have witnessed in corporate IT projects, and made me feel somewhat both privileged and ashamed.  But that’s not what struck me.

Guns and Hoses

In order for these happy summer events to occur, streets are blocked.   That’s a nightmare for police and fire departments.   In case of an unhappy event such as a fire, they have to race onsite without hitting pedestrians on their way there.  As such, these civil servants impose notably stringent requirements: not only must the stages be set up in very little time, but the crews must also get the hell out of there ASAP and have the streets clear and clean before the Monday morning rush hour. 

The speaker explained that many of the techniques, materials, or processes used for setting up the festivals were not chosen just to do the job: they were designed and executed to favor quick dismantling. Having firefighters and cops breathing down your neck is a good incentive.  They’re very serious about it —and they mean it.

That’s when it struck me. 

Building Stone Monuments

I realized that the assets built by your digital teams are never built with dismantling in mind.    The mindset is more along the lines of building pyramids or century-defying monuments.    Most systems I have dealt with were never designed to be removed.  Neither were they made so that their constituting parts could easily be replaced by new ones. 

The first explanation that comes to the mind of most IT experts is that it takes more time and effort to design for easy removal. That’s true.

But haven’t we agreed that change is the only certainty?  That any asset created to support your business is bound to be changed or replaced, sooner rather than later?  Then why can’t a whole industry that knows very well that change is inevitable create things that are easily removable and replaceable?

Incentives for Doing It

The rock-bottom reason is simple: there are no incentives to do any better.  Why would this brand new system be built to be easily dismantled?  Isn’t it the newest and best thing, with the hottest technologies ever, that is going to propel the business to new heights for years to come?  Are you asking your IT team to envision the removal of their new baby when it is not even born yet?  Without strong incentives, it just won’t happen.  That’s why it is rarely the case that special effort and care are put into all the little details that make the difference for rapid dismantlement.

Incentives for Not Doing It

You might think that acquiring third-party software avoids these situations. But vendors do not create solutions that are easily dismantled.  They lack inducement for putting in place easy-to-remove solutions. Furthermore, they have hard cash incentives for doing the opposite.  They are in business to make money, and they have no interest in dismantling their very source of income. 

For internal IT, aren’t the maintenance and removal of IT assets also a source of income?  In corporate IT, when the time comes to pull something out, it often has to be done by the same staff that built it. And you pay them by the hour.

Against the Grain

No, IT builders do not think about dismantlement, and asking for it would be going against the grain.

That is, unless there were nervous cops or firefighters breathing down the neck of corporate IT staff about rapid removal and replacement.    For that to happen, a radical change in the corporate IT engagement model has to occur.

Designing Your Stairway to Heaven

Standing the Test of Time

I’ve been an unflagging fan of Led Zeppelin since my early teens, and a worshiper of their founder and lead guitarist Jimmy Page.  That’s probably why YouTube’s algorithm presented me with this 17-minute video from the BBC in which Mr. Page describes the intent and the result of Zep’s most iconic composition: Stairway to Heaven.  Saying that this piece has enduring popularity is an understatement.  Today, teenagers whose parents weren’t yet born when this opus was written are still fascinated by the creation. 

Jimmy’s Architecture

There are certainly a series of reasons why Stairway to Heaven is so good, and, not being a musician, I’m not cognizant enough to comment on all of them.  However, at 4:38 into the video, Jimmy says something that struck me:

“All this stuff was planned.  It was not an accident, or everyone chipping in.  It really was a sort of design.”

Jimmy Page

If you listen to the whole video, there will be no possible doubt: Stairway to Heaven is the result of conscious design.  The magnum opus was architected, from the beginning, with a clear vision of the sequence of movements, the textures, the build-up of tempo and the unfolding of the majestic finale.

Innovation Is Not Design — It Feeds It

Another clear learning from Master Page: this was not the result of some brainstorming session, an unplanned mashup, or a random amalgamation in hopes of finding a gem.  Unknowingly, Jimmy brought more fuel to a conviction that has been building in my mind over the years:  innovations and epiphanies emerge before the actual design of digital solutions begins.  These pieces of enlightenment are then embedded into the greater creation. The innovations —if any— reside in specific areas of the final product, but they are not the final achievement. 

Architecture and Design Make the Masterpiece

This leads to another observation, which is supported by decades of scrutiny and involvement in the world of information systems design: brainstorming sessions, focus groups, innovation dives —and all the good practices that encourage seeing things differently— will not yield a masterpiece.  They will nourish the subsequent process of architecting a creation that uses the innovative gems, but the master work comes from intentional design.

Randomly searching for innovation may lead to interesting designs; but masterpieces that stand the test of time are architected.

If you’re tempted to think that great business systems emerge from innovation, beware that it’s far from enough.  Don’t put all your marbles on the lateral thinking side of things.  Save a few for conscious design.

Beauty in All Creations

In the world of buildings, the importance of architectural beauty is rarely questioned.  Well-designed buildings inspire us, comfort us, and ignite seldom-felt emotions.  The widely recognized merit of beauty is, in part, founded on the fact that human constructions are tangible creations. We live in them, work in them, and look at them.  We can relate the design to what we see or know and understand the value of beauty.

The Many Faces of Beauty

This very interesting video, sent to me by Wolfgang Göbl, is an emotionally compelling reminder that beauty can take many forms.  But just because beauty can take many forms does not mean that it can be anything. If beauty can be anything, it loses its significance. Something repulsive or ugly is not beautiful just because someone, somewhere may find beauty in it.

Beauty is Important

This video is also a reminder that beauty is far-reaching.  Having been designing for a few decades now, I feel compelled to make a bold statement about it:  beauty should be a sought-after attribute in everything that is worth the time and effort to be designed.  

And that’s not just me saying that after some sort of epiphany.  Business architecture expert Mike Rosen once reminded me that back in 40 BC, Marcus Vitruvius postulated that all buildings should have three attributes: firmitas, utilitas, and venustas, which could be translated as durability, utility and beauty.

The Many Names of Beauty

I use thesaurus.com every day, so I searched for the related meanings of the word beauty.  I found that one of the synonym tabs was labelled advantage.  That’s interesting, I thought.  I clicked on ‘advantage’ and a world of related meanings appeared: feature, importance, value, asset, attraction, benefit, blessing, boon, merit, and worth. 

These synonyms  and the video remind us that beauty is not just visual and can be found in the value that something brings. Isn’t that beautiful?

Beauty in the Intangible Creations

The designs that architects create for information systems and digital technology solutions are chiefly abstract, not visual.  Saying that what you see on your computer screen is just the tip of the iceberg is an understatement.  These designs are impossible for any user of the system to relate to. In fact, they are hard to connect with even for the majority of computer-literate geeks who work in one IT field or another. That’s why beauty in these types of designs is not just a hard sell; it is often viewed by the uninitiated as a ludicrous quest rooted in some form of designer’s vanity. 

But it’s there.  Some information technology designs bear beauty because they bring value, asset, attraction, benefit, resilience, intelligence or wisdom.   

Beauty in All Designs

I strongly believe that the quality criteria for architecture and design in information technology creations need to include beauty.  A corollary of this belief is that those who declare themselves designers or architects should understand the importance of beauty, know what beauty means for their design, and seek to achieve it… or else leave it to others who care.


Anything Missing When Measuring Corporate IT Performance?

Let me provide some reassurance about corporate IT: all the accountabilities that are linked to quantitatively gauged measures of performance are subject to rigorous management and are never neglected.

The two broad categories of clearly defined and clearly measured performance objectives are KTLO and OTOB, acronyms for Keep The Lights On and On-Time On-Budget, respectively.

The first category relates to IT operations. Corporate IT’s first and foremost responsibility is to make sure that whatever has been purchased, leased, built, installed, and has proven to work the first time actually continues to do so, continuously and for as long as your business runs. IT operations are less glamorous from an innovation point of view.  IT Ops —as it is often called— doesn’t invent new customer experiences. Neither does it re-architect your organization through radical business design.

But Ops is by far the most critical information technology function because its failure directly impacts the survival of your business in the very short term. If your organization cannot deliver services to your customers and partners, it literally ceases to exist.  As such, IT operations should be taken very seriously; everything IT does or manages is monitored and measured quantitatively, down to fractions of a percentage point. Expectations on the quality, stability, and performance of operations are quantitatively defined up front. Failure happens, but if the frequency or length of missteps rises above the agreed-upon performance levels, some people will get seriously nervous about their jobs.

“With the quantitatively measured performance objectives of IT Operations, if failure happens too often, people get nervous about their careers.”

The second category, OTOB, relates to the execution of business change endeavors. Over the past few decades, there have been many scholarly and trade discussions about the measurement of project performance, and how adequate —or not— the traditional triple evaluation scheme of cost-schedule-scope actually is. The model may have its limitations for those who are intimately involved in the execution of the endeavors that result in business change, but for those who command the change, assume the risks and reap the benefits —that is, you, the paying customer— this performance measurement triad makes a lot of sense. The cost is how much money you need to spend to get what you want or need. The schedule is the time required to get it. And the scope is the extent of what you get for your money.
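For the cost and schedule legs of this triad, one widely used quantification is PMI-style earned value management, where progress is expressed in money. A minimal sketch (the figures are illustrative):

```python
def cpi(earned_value: float, actual_cost: float) -> float:
    """Cost Performance Index: above 1.0 means under budget."""
    return earned_value / actual_cost

def spi(earned_value: float, planned_value: float) -> float:
    """Schedule Performance Index: above 1.0 means ahead of schedule."""
    return earned_value / planned_value

# A project that planned $100k of work by today, has completed
# $80k worth of it, and spent $90k getting there:
print(round(cpi(80_000, 90_000), 2))   # 0.89 -> over budget
print(round(spi(80_000, 100_000), 2))  # 0.8  -> behind schedule
```

Note that both indices still lean on an estimate of the scope completed (the earned value), which is where the imprecision the triad suffers from creeps in.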

Scope can be subject to much discussion, since what you want and what you really need in the end may differ quite a bit between the pre-project and end-of-project phases. To further complicate matters, there are as yet no universal units of measure for the scope of IT change projects. This imprecision contrasts with the universally understood measuring of cost and schedule.

That’s why many business people fall back on the sole use of on-time on-budget as a comprehensive tool for assessing IT’s performance in delivering change, assuming that what is delivered (the scope) is roughly what it ought to be for some business value stream to transmute to its new state.

“Scope of what is delivered by digital change projects is hard to measure and compare.  That’s why most business people will fall back on what they can grasp: on time and on budget.”

Managing change is not as acute a necessity as IT operations. Failure to be on time or on budget doesn’t have the same impact on personal and team performance evaluations, but performance is fathomed nonetheless, and delivery dates are being managed.

So What’s Missing?

The major issue is that there are very few other quantitatively measured signs of excellence. The rest of IT is either subject to non-standard and qualitative evaluations or simply not measured at all. Non-quantified evaluations are debatable and easy to challenge on contextual differences. Non-standardized gauges are hard to compare.

In the end, IT measures itself for only a portion of what it does, focusing on improving what literally counts: where there are unchallengeable numbers with universally understandable units of measure. The rest is left to good intentions, or to how it is believed to positively impact OTOB or KTLO.

Notice that both KTLO and OTOB are measures of either immediate (KTLO) or short-term (OTOB) performance. ‘Keeping the lights on’ means continuous operations, or short transactional tasks. Change projects are by definition temporary endeavors with a beginning and an end. What happens after the project is finished is completely irrelevant. Even the major transformation programs are split into manageable chunks that often fit into a calendar year.

The IT management repercussion of this short-termism is that the lasting impact of IT’s work on your organization is veiled by short-range prerogatives.

The IT aspects that take the hit from short-term measures are quality and assets. More precisely, it is a hit on the quality of the work done, which in turn impacts the quality of the assets you get as a result.

Your organization’s capacity to adapt itself or respond quickly to changes in its environment is highly dependent on the quality of those assets. Asset readiness for change will suffer from lower-quality work done in previous projects.

Get the bigger picture in this book about the things executives need to know about IT – it will help you understand how most IT teams are evaluated today. These typical metrics have a direct impact on what gets improved, but also on what isn’t being taken care of.  Enjoy!

Small, Autonomous and Fast-Forward to Lower Quality

I am a jack-of-all-trades.  Admittedly —and proudly— I realize that a lifelong series of trial and error, crash courses, evening reading and incurable curiosity has resulted in this ability to do many things —especially things involving some manual work. I feel self-satisfaction thinking about all the situations that could arise in which I would know what to do, and how to do it.  I can help someone fix a kitchen sink on a Sunday afternoon.  I can drive a snowmobile, sail a catamaran, or connect a portable gasoline generator to your house.  My varied skill-set affords me a serene feeling of power over the random hazards of life.  That, and it’s also lots of fun to do different things.

There is currently an interesting trend in many organizations to favour highly autonomous teams.  The rationale is quite simple: autonomy often translates into accrued operational leeway that offers a better breeding ground for innovation.  By not being weighed down by other teams, there’s hope that the group will perform better and yield more innovative ideas.  There is also the expectation that the team, using an Agile™ method, will produce tangible implementations much faster if it can be left alone.  The justification is founded in the belief that small teams perform better.  Makes sense: the smaller the team, the easier the communication, and we all know that ineffective communication is a major source of inefficiency —in IT as well as in any other field.  And if you want your team to be autonomous and composed of as few individuals as possible, then there is a very good chance that you need multi-skilled resources. 

Jacks-Of-All-Trades and Interchangeable Roles

You need jacks-of-all-trades; otherwise, either the number of individuals will increase or you will need to interact with other teams that have some of the required skills to get the job done. Either way, you will not be as autonomous as you’d like.  

But there is more: the sole presence of multi-skilled individuals is not enough to keep your team small and efficient in yielding visible results at an acceptable pace.  You must have an operating rule that all individuals are interchangeable in their roles.  If Judy —a highly skilled business analyst— is not available in the next two days to work on sketching the revamped process flow, then Imad —a less skilled business analyst, but a highly motivated jack-of-all-trades nevertheless— needs to take that ball and run with it.   You need multi-skilled resources and interchangeable roles.  That’s all pretty basic and understandable, and your organization might already have these types of teams in action.

For a small and autonomous team to keep its small size and independence from others, it needs to be made of jacks-of-all-trades and its roles must be interchangeable, or else it will either grow in size or depend on outsiders.

Conflicts of Roles in Small Autonomous Teams

Before you declare victory, or rush into hiring a bunch of graduates with a major in all-trades resourcefulness and let them loose on a green field of innovation turf, read what follows so that you also put the proper boundaries in place.   If you want to ensure maximum levels of quality and sustainability in what comes out of small, autonomous, multi-skilled teams, you need to ensure that no conflicting roles are put on the shoulders of the individuals who need to juggle them.

Conflicts of roles occur when the same person needs to do work that should normally be assigned to different persons.  The most obvious —and, in corporate IT, the most abused— combination of conflicting roles is creating something and quality-controlling that same thing.  This can be said of any field, really —not just IT.  Industrial production process designers have understood for centuries now that the one who checks quality should never be the one being checked.  Easily solved, you might think: just have Judy check Imad’s work in two days when she’s available, and the issue is solved!  Maybe —but there’s a catch. 

No Accountability and No Independence

Proper quality control requires at least one of these two conditions: (a) the person being checked and the controller must both be accountable for the level of quality of the work, or else (b) the person doing the quality control must be able to perform the reviews independently.  If Imad and Judy are both part of a team that is measured on the speed at which it delivers innovative solutions that work, then there is a good chance that quality is reduced to having a solution that works, period.   Other quality criteria are undoubtedly agreed-upon virtues that no one is against, but they are not as important as speed.  As described in another article, in IT more than in any other field, a working solution might be, under the hood, a chaotic technical collage barely holding itself together with haywire and duct tape —but it can still work. 

These situations often occur when IT staff are put under pressure and forced to cut corners.  Speed of delivery comes into direct competition with quality when assigning the person-hours required to deliver excellence.  If the small, autonomous, multi-skilled team’s ultimate success criterion is speed, then Judy’s check on Imad’s work is jeopardized if the quality of his work has no impact on speed.  In this case, because Judy and Imad are both part of a group that must deliver with speed, neither of them is really accountable for any quality criterion other than simply having the thing work. As long as it doesn’t impede delivery pace, any other quality criterion is just an agreeable and desirable virtue, but nothing more. Judy is not totally independent in her quality control role and, worse, there is no accountability regarding quality.

When a small and autonomous team’s main objective is to deliver fast, any quality item that has no immediate impact on speed of delivery becomes secondary, and no one is accountable for it.

And it doesn’t stop there: considering that quality control takes time, the actual chore of checking for quality comes into direct conflict with speed, since valuable time from multi-skilled people is needed to ensure quality compliance.  After two days, when she becomes available, Judy could check Imad’s work, yes, but she could also start working on the next iteration, thus helping the team run faster.  If no one is accountable for quality, Judy’s check will soon be forgotten.  Quality is continuously jeopardized, and your autonomous teams become fertile soil for the systematic creation of lower-quality work.  

There’s No Magic Solution: Make Them Accountable or Use Outsiders

So, what precautions must be taken to ensure maximum levels of quality in multi-skilled, autonomous teams?   The answer is obvious: either (1) the whole team must be clearly held accountable for all aspects of the work —including quality— or (2) potentially conflicting role assignments have to be given to individuals who are independent; that is, accountable for and measured on the work they do, not on the team’s performance.  

If you go with the first option, beware of getting trapped in conflicting accountabilities again, and read this article to understand how quality can be challenged by how it is measured.  To achieve independence (the second option), you will need team members to report to some other cross-functional team, or allow an infringement of your hopes of total autonomy by relying on outsiders.  Although multi-skilled and autonomous teams are an enticing prospect for jacks-of-all-trades, the agility they bring should not be embraced at the expense of the quality of the assets you harvest from them.

Lower Quality at Scale

If you want to understand how and why unwanted behaviors such as those depicted above affect not only small autonomous teams, but are also transforming the whole of corporate IT into a mass complexity-generating machine that slows down business, read this mind-changing book.  It will help you understand why lower-quality work products are bound to be created, not only in small, autonomous, innovation-centric teams, but almost everywhere in your IT function.

Innovation: Where IT Standards Should Stand

The use, re-use or definition of standards when implementing any type of IT solution has very powerful virtues. I’m going to outline them here so you can see how these standards play into the (often misunderstood) notion of innovation in corporate IT. We’ll then see where IT innovation truly happens in this context, while underscoring the importance of using or improving IT standards to support overall innovation effectiveness.

The Innate Virtues of IT Standards

  • Sharing knowledge.  Without standardization, each team works in its own little arena, unaware of potentially better ways of doing things and not sharing its own wisdom.  It is much easier to make all IT stakeholders aware of practices, tools or processes when they are standardized. Systematic use and continuous improvement of IT standards act as a powerful incentive for knowledge sharing.
  • Setting quality targets. Standards minimize errors and poor quality through the systematic use of good practices.  They encompass many facets, from effectiveness to security, to adaptability, to maintainability, and much more.
  • Focusing on what counts.  A green field with no constraints and no prior decisions to comply with might entice your imagination, but it can also drive you crazy if everything has to be defined and decided.  IT standards allow you to focus on what needs to be changed, defaulting all other decisions to the use of the existing standards.  
  • Containing unnecessary complexity.  The proliferation of IT technologies, tools, processes and practices in your corporate landscape is a scourge that impedes business agility.  Absence of standards interferes with knowledge sharing and mobility of IT resources.  Multiplicity of similar technologies makes your IT environment more difficult to comprehend, forcing scarce expert resources to focus on making sense out of the existing complexity rather than building the envisioned business value.

The use and continuous improvement of IT standards is one of the most effective cross-enterprise safeguards for IT effectiveness, IT quality, and, in the end, your business agility.

Despite all these advantages, an emerging trend in many organizations puts these virtues at risk.

The Lab Trend

In the last few years, it has become a mainstream strategy for large, established corporations to create parallel organizations, often called "labs", that act as powerhouses to propel the rest of the organization into the new digitalized era of disruptive innovations.  This article is not about challenging this wisdom, which may be the only possible way —at least in the short-term— to relieve the organization from the burden of decades of organic development of IT assets and processes that slow down the innovation pace.

Unfortunately, there are people in your organization who associate standards with the ‘old way’ of doing things.  After all, aren’t all standards created after innovation, to support the repeated mainstream usage of innovative tools, processes or technologies that came before them?

Making the leap that IT standards should not be considered in the innovation process, not included in the development of prototypes or proofs of concept, or —more simplistically— not be part of anything close to innovative groups, is a huge mistake.

The decision to use or not use a given IT standard depends on what you are innovating, and at what stage of the innovation process you are.   The IT work required to implement business innovations is rarely wall-to-wall innovative.  Standards cannot —and should not— be taken out of the innovation process from start to finish.  I'd go a step further: standards should always be used except when the innovation requires redefining them.  But the latter case is exceptional.  To help you grasp the difference between true business innovation and its actual implementation, here's a simple analogy:

The Nuts and Bolts of Innovation

In the construction industry, there are well-known standards that determine when to use nails, when to use screws, and when to use bolts in building a structure.  They stipulate the reasons to choose one over the other (e.g. because nailing is much faster to execute and cheaper in material costs). The standards also spell out how to execute: how many nails to drive, the size and spacing between them, safety precautions, etc.

Now suppose that your new business model is about building houses that can be easily dismantled and moved elsewhere; let's say, to support a niche market of temporary housing for the growing number of climate-related catastrophes.   You decide to build whole houses without ever using nails or screws, bolting everything instead.  You would make this decision to simplify dismantlement, easily moving the house and rebuilding it elsewhere.  The technical novelty here lies in the systematic use of bolts where the rest of the industry normally uses nails.  Bolts are slower to install and more expensive, but they would allow you to easily disassemble the house.

But when a worker bolts two-by-six wood studs, the actual execution of bolting is not an innovation; it has been known for centuries and the execution standard can be used as is.  In other words, when a worker is on the site and bolting, the innovation has already occurred when the choice was made not to use nails or screws. The market disruptive strategy was determined before, and it is now time to apply bolting best practices and good craftsmanship.

No Ubiquitous IT Innovation in Corporate IT

For IT-based business solutions, by the time the teams are implementing the processes, systems and technologies, most of the business innovation has probably already occurred in the previous phases.

When IT staff are actually building the technical components of your new modes of operation, the business innovation part has already occurred: it lies in the prior choices made during design.

The techies might be testing the innovation through some sort of a prototype, but it doesn’t make their work innovative. When you look at it from a high enough viewpoint, isn’t implementing a new business process with information technologies what corporate IT has been doing for decades?  

When building the IT components of innovative business solutions, where is the actual innovation?  Is it in the new business processes or in the way they are technically implemented?  Chances are that the real value is in the former, not the latter, because your initial intention was to aim for business value, not technical prowess.

It may very well be that, at the IT shop-floor level, what needs to be done is to apply good practices and standards that have been around for years, if not decades.

In our era of multi-skilled, cross-functional, autonomous, self-directed and agile teams —which are all busy growing new solutions that support constantly evolving business processes— there is a line that should not be crossed: thinking that innovation applies to everything, including the shop-floor-level definition of good craftsmanship.

Don’t Pioneer Without IT Standards

My observation is that when IT practitioners are part of teams dedicated to innovative business solutions, they often become overzealous, abandoning standardization and tossing tried-and-true practices out the window.   I've seen IT people make a clean sweep of all established standards and proclaim every part of a solution as innovative.   I've seen technical staff blindly pull so-called innovative technologies into the equation with little understanding of their real contribution to business value.  This has a direct impact on the quality of the resulting work. Here's how:

  1. IT staff end up using bolts where nails would be fine, or using nails where they should have used bolts;
  2. New platforms are built with no standards used or defined.

In both cases, the impact on your future change projects is catastrophic: lack of shared knowledge, unknown quality levels, lost time and effort reinventing the world, and most importantly, creation of more unnecessary IT complexity.  The resulting assets will be hard to integrate, impossible to dismantle, incomprehensible to anyone but those who created them, and costly to maintain.  In other words, your business agility will be seriously jeopardized.

The results from innovation without standards will fast-track you to the same burdensome position you tried to free yourself from with your old, outdated platforms.

The only way to avoid this unhealthy pattern is to make sure that the mandate is not just about innovating at any cost.  It must include the use and creation of standards, and limit the scope of change to what creates business value.

Set the Standard

First, your innovation team should not only devise new ways to do business: it must make it a priority to use and reuse standard practices and technologies, except where innovation truly requires otherwise. When a given standard is not applicable, their job should include defining its replacement.  The idiom "to set the standard" takes on its full significance: re-inventing business models that others will now run to catch or match, and defining the standards for your organization and future projects to use and leverage.  Your future business agility heavily depends on the systematic application of good craftsmanship in your current innovations.

New Technologies Need to Bring Value, Not Novelty

Secondly, your new parallel 'lab' organization should bear the onus of justifying the use of any new or different technology. How will it contribute to the innovative, business-oriented end-result that you seek?   When technologists are presented with the enticing prospect of having no obligation to use any of the standards in place in your organization, they will jump at it.  This will often lead to the introduction of new technologies for their own sake, based on no other justification than hunches, hearsay, or how attractive they may look on a resume.

The use, reuse, and redefinition of IT standards should always be part of your innovation team’s mandate.  If not, your future business model will be made of foundational assets built as if there was no tomorrow.

Beware of falling into the trap of catching the contagious over-excitement about the scope of innovation.  Most of the IT processes and components that result from business innovation can use mainstream practices and standard technologies. The legitimately innovative portion —the one that really makes a difference— is just a fraction of the whole undertaking, and very often, the truly novel part is simply not technological.

Provide Leeway But Set Quality Expectations

So, even if you rightfully decide to go down the path of creating parallel organizations, don't allow these organizations too much leeway when it comes to standards.  Do not sign the cheque without a minimal set of formal expectations regarding sustainability, which must include standards compliance.

The key is in clear accountabilities and coherent measures of performance. If you want to learn more about how poorly-distributed roles can sabotage the work of your corporate IT function, read this short but mind-changing business strategy book.

IT Project Failures Are IT Failures

While conducting research for Volume 1 of my first book[1], I wanted to investigate the root causes of IT project failures. I was completely convinced –and still am– that these failures are significantly related to the quality of the work previously done by the teams laboring on these endeavors. In other words, the recurring struggle that IT teams face, often leading to their inability to deliver IT projects on time, is directly linked to the nature (and the quality) of the IT assets already in place. I found a wealth of information relating to project failures, as well as a disappointing revelation.

The Puzzling Root Cause Inventory

This disconcerting realization was that the complexity of existing IT assets is rarely mentioned. By far, technological issues do not appear frequently in the majority of literature on project failure. Just for the sake of it, I performed an unscientific and unsystematic survey of professional blogs and magazines, and came up with a list of 190 determinants of failure. The reasons range from insufficient sponsor involvement to faulty governance, communications, engagement, etc. I found nothing really surprising, albeit depressing in some ways.  Of these reasons, a mere 11 were related to the technology itself, while one, and only one, referred to the underestimation of complexity.

This number inaccurately reflects reality.  It doesn't make sense that, for technology-based projects, there is such a thin representation of technology-related issues. The proportions don't match.  It doesn't fit with the reality in the corporate trenches on a day-to-day basis. If your platforms are made of too many disjointed components, or were built by siloed teams; if their design and implementation were poorly documented to cut costs, or standard compliance practices were ill-controlled, then they are bound to contribute to failure. If your internal IT teams have a hard time understanding their own creations, or frequently uncover new technical components that were never taken into account, how can you be surprised when schedule slippages occur in their projects?  The state of what is in place plays a major role —and it's definitely not in a proportion of 1:190.

A Definite Project Management Skew

This gap in the documented understanding is due to a project management bias in the identification of root causes of IT project failure.   This is quite understandable, since the project management community is at the forefront of determining project success and failure. Project managers are mainly assessed on on-time and on-budget project delivery[2]. They take underperformance seriously, and that is why available knowledge on root causes is disproportionately skewed toward non-technical sources.

Project managers tackle failure as a genuine project management issue, and the solutions they find are consequently colored by their project management practice and knowledge.

I wouldn’t want to undervalue the importance of the skills, the processes and good practices of project management. But we need to recognize the foundational importance of the assets that are already in place. They are are not just another risk management plan variable to take into account.  They the base matter from which an IT project starts from, along with business objectives and derived requirements. On any given workday, IT staff are not working “on a project”; they are heads down plowing through existing assets or creating new ones that need to fit with the rest.

The Base Matter Matters a Lot

If IT projects were delivering houses, the assets in place would be the geological nature of the lot, the street in front of the lot, the water and sewage mains under the pavement, the posts and cables delivering electricity, and the availability of raw materials. Such parameters are well known when estimating real estate projects.  If you did not take into account that the street was unavailable at the start date of the construction project, that there was no supply of electricity, that the lot was in fact a swamp, or that there was no cement factory within a 400 mile radius of your construction site, you can be sure that the project would run over-schedule and over-budget.  The state of your existing set of assets creates “surprises” of the same magnitude as the construction analogies above.  When your assumptions about the things in place are confounded because quality standards weren’t followed or up-to-date documentation was unavailable, your estimates will suffer.

Any corporate IT project that doesn’t start from a clean slate[3] —and most aren’t— runs into issues related to the state of the assets already in place.

The unnecessary complexity induced by poorly documented or contorted solutions is not a figment of the imagination.  It is the harsh reality that corporate IT teams face on a daily basis.  It is the matter that undermines their capacity to estimate what has to be done, and that cripples their ability to execute at the speed you wish they could deliver.

IT Quality Is an IT Accountability

Although project success is, by all means, a project management objective, the state of an IT portfolio isn’t.

The quality of what has been delivered in the past, and how it helps or impedes project success is not a project management accountability. It’s a genuine corporate IT issue.

So tossing it all to project management accountabilities is an easy way out. If important business projects are bogged down by an organization’s inadequate IT portfolio, it’s primarily an IT problem, and secondly a project risk or issue. Project Managers with slipping schedules and blown up budgets took failures seriously enough to identify 190 potential root causes, and devise ways to tackle them.  Nobody in Corporate IT has ever done anything close to that concerning IT complexity or any other quality criteria applicable to IT assets.

This vacuum has nothing to do with skills, since IT people have all the expertise required to identify the root causes and work out ways to reduce unwanted complexity.

It’s all about having the incentives to fix the problem.  Reasons to solve are not just weak, but outweighed by motivations to not do anything about it[4].


[1] More details on the book available on my blog’s book page.

[2] Also detailed in the book, or in this recent article.

[3] See this other article on the clean slate myth.

[4] For more details on this, take a look at my latest book.

Let’s Start Fresh with a Digital Platform!

A dichotomy seems to be emerging in corporate IT strategy and enterprise architecture. Let's take a closer look at a seemingly promising strategy to propel your business into the digital era.

A Reliable but Inflexible Set of Operational Assets

In the right corner, we have the technology backbone of an organization's operations. This platform must be robust and standardized, and allow the business to shift toward new paradigms as seamlessly as possible. These types of platforms have been around for decades; some of them outlasting many foundational technology changes. The ability of these operational backbones to support transactional operations effectively and speedily is, in my view, a success for the IT world. Unfortunately, their flexibility and agility in the face of changing business needs are mediocre at best. Decades of chaotic, short-sighted design deployments have transformed these backbones into liabilities. Such backbones are not always able to sustain change, being highly sensitive to heterogeneous, non-standard, or stove-piped solutions that find their way to production status through the loose mesh of ineffective quality governance.

In the right corner, your operational backbone: rapid, robust, but inflexible yet indispensable.

A Fresh New Platform for Digital Integrations

In the left corner we have a newer concept, often labeled the Digital Platform, envisioned as an innovative way to achieve flexibility and quick turn-around times. The central idea is smart: let’s put in place an IT infrastructure that allows the rapid development of new business integrations at the level of the extended enterprise.  By extended enterprise, I am referring to an obvious focus on external partnerships, including opportunities created by social media, Internet of Things, or the entire array of cloud-based services that relentlessly expands every month.

One of the most appealing features of the platform in the left hand corner is that it provides a clean slate for a project to start from. Absent is the burden imposed by legacy systems like those constituting the technology backbone we have in the right corner.  The novelty provided by the Digital Platform is bound to create a fertile soil for agility to blossom, inspired by the bold, highly publicized start-ups known as Market Disruptors.

In the left corner, your digital platform: new, nimble, source of promising business value.

The second promising characteristic of this paradigm lies in its function as a foundation where things can be not only rapidly developed, but easily removed. When something developed last year doesn’t make business or technical sense anymore, you can unplug it and work on a more promising integration. With no legacy artifacts slowing your momentum and your ability to continually apply and reapply integrations, you will surely be able to provide enhanced customer experiences or new amalgamated products, positioning yourself as the disrupting player.

So are Digital Platforms the way to go? Yes of course! I strongly recommend our contender in the left corner.  That being said, I also feel compelled to share a few very important words of caution on how to introduce it, as well as some caveats regarding your expectations.

I never believed in miracles – at least not in corporate IT.  Most of the great things therein come from discipline and hard work.

Parallel for a Long Time

My first piece of advice relates to the right hand option. Do not think for a second that the sudden wave of hype and excitement stimulated by your new digital platform will cause your operational backbone to disappear.  Your new products, new markets, and new ideas, however promising, will not replace all the current products and existing markets overnight. The base of older technology already in place might very well continue to produce the bread and butter used to feed new ideas.

However much you may wish it were the case, your business is not a start-up, and your new platform will co-exist with the older ones for a long time.

Don’t assume that the latest technology replaces the previous one entirely; they must exist in parallel. Anything you do to position your business on the cutting edge of the market is a step in the right direction, but you must take responsibility for the amalgamation of your platforms. Remember: he who wills the end wills the means.  Your new platform absolutely needs the old one.

If your new digital platform were a vehicle, it would be a lightweight 4×4 truck. But you mustn't forget that, if this were the case, your operational backbone would be a train on rails with a tanker car containing the fuel for your 4×4. The implementation of a new digital platform comes nowhere near any form of rationalization.

The Old Impacts the New

My second warning involves the state of your right-corner assets. The apparent separation between the two sides of corporate IT strategy, and the expected leeway provided by a clean-slate solution, may not last that long.  Sooner than you think, or maybe right from the onset, you will need data or functions provided by your operational backbone; this requires an integration point like any other that your IT has created in the past. The fact that it takes its source in your new digital platform will make little difference regarding speed or limitations.

Any dependency between the old and the new will be as easy to implement as the state of the weakest link in the chain allows.

Depending on the flexibility and agility of your right-hand-corner backbone, the new link may be a breeze to implement, or become the boat anchor that slows everyone down.

Magical Thinking is of Little Help

My last point concerns the business agility of the new digital platform. The new infrastructure's ability to adapt gracefully to changing business requirements is based on two factors. Firstly, the fact that it's new means that it hasn't deteriorated into the state of entropy found in older assets; the clean slate is a blessing.  Secondly, the architecture patterns used to develop the new digital solutions offer evolutionary possibilities.

But declaring that a new platform supports nimbleness by easily adding, replacing, or removing components is no guarantee whatsoever that it will actually happen. Why? Because easily adding, replacing, or removing portions of a solution, an application, or a platform has been a desired outcome since the onset of the operational backbone sitting in the right hand corner! Creating malleable products has been a central focus of IT architecture for longer than I've been working in the IT field.

Malleability has been a desired characteristic for as long as IT solutions have been designed.  This wish hasn’t shielded your current platforms from becoming what they are today.

It takes more than wishes and strategic statements to ensure that you get the agility that you expect from the new digital platform.  For that to happen consistently beyond the first 18 months of the new platform's introduction, you need talented and forward-thinking IT architects designing assets that can be quickly rolled out, easily replaced, and painlessly removed. These disciplined IT teams must also define and abide by strict quality standards. Finally, you need healthy governance processes to guide your decisions and determine whether or not you have successfully achieved the coveted agility ideal.

Careful design and quality work in your new digital platform are as needed as ever.

If you don’t have all the right safeguards in place, your new digital platform may organically grow into an inextricable tangle that will eventually collapse under its own weight.  It’s been often witnessed before[1], and nothing suggests that the left corner is shielded from unwanted complexity.

[1] To understand how it systematically happens, see this easy-to-read, non-technical book.

Perennial IT Memory Loss

There is a strange thing happening in corporate IT functions; a recurring phenomenon that makes the IT organization lose its memory. I’m not talking about a total amnesia, but rather a selective one afflicting corporate IT’s ability to deal with the current state of the technical assets it manages. This condition becomes especially acute at the very beginning of a project focussed on implementing technical changes to drive business evolution. Here’s how it happens:

It all starts with project-orientation. As we discussed in another article, the management of major changes in your internal IT organization is probably project oriented. Projects are a proven conduit for delivering change. Thanks to current education and industry certification standards of practice, managed projects are undoubtedly the way to go to ensure that your IT investment dollars and the resulting outputs are tightly governed. Unfortunately, things start to slip when project management practices become so entrenched that they overshadow all other types of sound management, until the whole IT machine surrenders to project-orientation.

The Constraints of Project Scope

As you may know, by definition, and as taught to hundreds of thousands of project managers (PMs) worldwide, a project is a temporary endeavor. It has a start date and an end date. Consequently, what happens before kickoff and after closure is not part of the project.

The scope of the project therefore excludes any activity leading to the current state of your IT portfolio. The strengths or limitations of the foundational technical components that serve as the base matter from which business changes are initiated are considered project planning inputs. The estimation of the work effort to change current assets, or the identification and quantification of risks associated with the state of the IT portfolio, will always be considered nothing more than project planning and project risk management.

Further excluded from project management are considerations that will apply after the project finish date. These factors encompass effects on future projects or consequences for the flexibility of platforms in the face of subsequent changes. Quality assessments are common project-related activities, likely applied as part of a quality management plan. But a project being a project, any quality criterion whose impact falls exclusively beyond the project boundaries will have less weight than those within a project's scope – and by a significant margin. Procedures directly influencing project performance – that is, being on-time and on-budget (OTOB) – will be treated with diligence. All other desired qualities, especially those that have little to do with what is delivered within the current project, become second-class citizens.

Any task to control a quality criterion that does not help achieve project objectives (OTOB) becomes a project charge like any other, and an easy target for cost avoidance.

This ranking is more than obvious when a project is pressured by stakeholder timelines or when shortages of all sorts become manifest. Keep in mind that the PM is neck-deep in managing a project, not managing the whole technology asset lifecycle. Also remember that the PM has a budget for processes happening within the boundaries of the project. After the project crosses the finish line, the PM will work on another project, or may look for a new job or contract elsewhere.

When all changes are managed by a PM within a project, with little counter-weight from any other type of management, corporate IT surrenders to project-orientation.  When no effective cross-cutting process exists independently from project management prerogatives, your IT becomes project-oriented.  I strongly suspect that your corporate IT suffers from this condition, unless you have already made the shift to the new age of corporate IT.

Project Quality vs. Asset Quality

Project orientation has a very perverse effect on how technology is delivered: all radars are focussed on projects, with their start and end dates, and as such the whole machine becomes bounded by near-term objectives. These short-term project goals in turn directly impact quality objectives and the means put in place to ascertain compliance. Again, since quality control is project-funded and project-managed, the controls that directly impact project performance will always be favored, especially when resources are scarce.

In project-oriented IT, quality criteria such as the ability of a built solution to sustain change, or the complexity of the resulting assets don’t stand a chance.

The result is plain to see: a web of complex, disjointed, heterogeneous, and convoluted IT components that become a burden to future projects.

It’s here that the amnesia kicks in.

All IT Creations Are Acts of God

When the next project dependent on the previously created or updated components commences, everyone acts as if the state of these assets was just a fact of life.

Whatever the state of the assets in place, at the beginning of a new project it's as if some alien phenomenon had put them in place; as if they were the result of an uncontrollable godly force external to IT.

Everyone in IT has suddenly forgotten that the complexity, heterogeneity, inferior quality, inflexibility, and any other flaws come from their own decisions, made during the preceding projects.

This affliction, like the spring bloom of perennial plants, repeats itself continuously. At the vernal phase of IT projects, when optimism and hopes are high, everybody looks ahead; no one wants to take a critical look behind. This epidemic has nothing to do with skills or good faith; it can instead be traced to how accountabilities are assigned and how performance is measured.

When all changes are subject to project-oriented IT management, the assets become accessory matter. Your corporate IT team delivers projects, not assets.

The Latest Change in Vocabulary Doesn’t Turn Liabilities into Assets

In last week's article we saw that you should be very prudent concerning IT Tactical Solutions. They are often presented by your IT teams as temporary situations; sidesteps that must be taken before the envisioned strategic situation can be reached. But more often than not, these patches are permanent. Since these makeshift solutions work, most business people aren't keen to invest in further revisions to develop an optimal design. Hence, these enduring fixes lower the quality of your digital platforms and compromise agility and speed in future business projects.

The effect of the repeated production of sub-par assets – regardless of the name they're given – is nothing less than the continuous creation of unnecessary complexity, leading to the progressive decline of your IT platforms.

Let’s Get Financially Disciplined

The cumulative detriment to IT assets has recently inspired some smart IT people to come up with a new term: Technical Debt. If an IT colleague has ever uttered a sentence to you including that pair of words, you should read the following.

The Technical Debt idea entails that an IT person will document cases of sub-optimally built solutions into some sort of a ledger. Each individual occurrence, as well as the sum of everything in the register, is referred to as a technical debt. With each new IT hiccup added to the books, an official process makes the paying business sponsor officially aware of the added technical debt. The message from IT sent to the client in such situations means something like this:

  1. “For technical reasons, the project cannot be delivered according to the original blueprint and/or customary good practices within the allotted time and budget.
  2. This may impede the agility of the platform, or create additional costs in future projects. Hence there is a technical debt recognized.
  3. We all acknowledge that this debt should be corrected.”
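To make the bookkeeping metaphor concrete, here is a minimal sketch of what such a register could look like in code. The structure, field names, and dollar figures are my illustrative assumptions, not a standard; real technical-debt tooling varies widely.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DebtEntry:
    """One sub-optimally built solution, recorded when the shortcut is taken."""
    description: str
    recorded_on: date
    remediation_cost: float   # estimated cost of the eventual rework (the "principal")
    annual_interest: float    # estimated extra yearly cost while the debt stands
    project: str

@dataclass
class DebtLedger:
    """The register: the sum of all entries is the organization's technical debt."""
    entries: list = field(default_factory=list)

    def add(self, entry: DebtEntry) -> None:
        self.entries.append(entry)

    def principal(self) -> float:
        return sum(e.remediation_cost for e in self.entries)

    def yearly_interest(self) -> float:
        return sum(e.annual_interest for e in self.entries)

# Hypothetical entries, for illustration only.
ledger = DebtLedger()
ledger.add(DebtEntry("Hard-coded gate-assignment rules", date(2024, 3, 1),
                     80_000, 12_000, "Gate Waiting Times"))
ledger.add(DebtEntry("Skipped integration layer", date(2024, 6, 15),
                     150_000, 30_000, "Gate Waiting Times II"))
print(ledger.principal(), ledger.yearly_interest())  # 230000 42000
```

The point of the sketch is the discipline it implies: every shortcut gets a recorded principal (rework cost) and interest (recurring drag), so the totals can actually be tracked.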

Technical Debts are Fine for Communicating

This is great from a communications point of view. There are, however, caveats regarding such a well-intended message:

  1. The project will deliver something anyway, and it will work[1].
  2. But you won’t have a clue about the problematic “technical reasons” used to justify inferior quality; you’re held hostage by a single IT desk, holder of all technical knowledge.
  3. The debt is declared, but the impact is not evaluated. There is no reliable forecast suggesting the amount of the added deficit to write off.
  4. There is probably no transparent process in place to check the ledger at the end of a project in order to track and contain the global deficit.

Loans 2.0

This whole concept of indebtedness in IT doesn’t make sense from the start. It leads business people to falsely believe that the deficit is managed. So you have a debt? As a businessperson, the following questions probably come to mind:

  1.  Who is the lender?
  2. Who is the debtor?
  3. What is the interest made of?
  4. What is the interest rate?
  5. How and when is the principal being reimbursed?

The answers are brutal:

  1. You.
  2. You.
  3. Budgetary increases or lost speed pertaining to future business projects.
  4. Nobody knows.
  5. At an undefined date, when you ditch your platform and pay for another one.
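The loan metaphor can even be given simple arithmetic. If deferred rework behaves like compound interest, a minimal sketch shows how fast the bill grows; the 15% yearly “rate” below is an entirely assumed figure standing in for lost agility and workaround costs, not something any IT ledger actually provides.

```python
def debt_after(principal: float, rate: float, years: int) -> float:
    """Deferred rework cost growing like compound interest: principal * (1 + rate)^years."""
    return principal * (1 + rate) ** years

# Illustrative only: a $100k rework deferred 5 years at an assumed 15% yearly rate
# roughly doubles before anyone pays it down.
cost = debt_after(100_000, 0.15, 5)
print(round(cost))  # 201136
```

The figures are invented; the real lesson of the five brutal answers above is that in corporate IT, nobody can actually state them.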

Call ‘em Whatever You Want – You Pay for Everything

Short term management, conflicting accountabilities, or any other good or bad reasons to cut corners will foster the creation of lower quality assets by your IT team.

Your IT staff can call these situations fixes, patches, tactical solutions, or technical debts, but the result is always the same: the customer pays for everything, now or in the future, in hard cash or in reduced business agility.

As for the assets in question, you will always keep them for a longer time than you’d want to, whether they are true assets or debt-ridden liabilities[2].

Measuring Quality

The gloomy outcome I’ve been describing is not inevitable – there is hope. But only if you work to change how accountabilities are distributed. In this book you will have the opportunity to look more closely at the reasons why accountability for IT asset quality is missing, and why that absence is so damaging.


[1] For more details on why it will always work, refer to this other article.

[2] The IT Liability idiom is borrowed from the work of Peter Weill & Jeanne Ross from MIT Sloan’s Center for Information Systems Research, and refers to the fact that IT investments may create liabilities rather than assets if these so-called assets become a burden under changing business conditions.

The Tactical Steps Sideways That Keep You On the Sidelines

Things happen in IT projects.  At times, some quality elements will be sacrificed in order to offset the vagaries of the project delivery scene.  The solution works, of course.  But as discussed in a previous article, a working solution brings no comfort regarding its quality, since almost anything can be made to work in the virtual dimensions of software and computers. And when issues arise and put pressure on IT teams, a suboptimal alternative will be presented as a fix, a patch, a temporary solution, or as the most wickedly named of all: the tactical solution.

In circles of experienced IT managers and practitioners, the ‘tactical solution’ sits somewhere between fairy tale and sham.

The word suggests to the non-IT stakeholder that the chosen tactic is a step sideways, and that once the applicable steps are taken, the product should attain the desired state, which is often labelled as the strategic or target solution.

Because the tactical solution works (since anything in IT can be made to work), it could be viewed as a small step in the right direction.  After this stopgap solution is implemented, we simply need to perform a few extra steps to reach the strategic state, right?

Not really.

Tactical Solutions Waste Work

The solution does work, and common wisdom says “If it ain’t broke, don’t fix it”. Besides, how could it be broken if it works? Unfortunately, and I know that I am repeating myself, the fact that it works is no guarantee of anything.

Tactical solutions are never presented to you as a step in the wrong direction or a step back, but most of the time they are, and here’s the logic:

Once a tactical solution is delivered, the next step is not a move forward, but rather a revision of the sub-optimally designed part. The system will often have to be partly dismantled and then rebuilt, throwing away portions of the previous work. That’s not a step in the right direction.  That’s not tactical.  That’s wasted work.

Assets Built on Hope Aren’t Enough

Not many business people are keen to pay for throwing away something that works, and as such, when money for the next phase becomes available, there is a good chance that the sponsor will want to invest in an effort that brings more business value, rather than redoing what’s already completed. Moreover, in many cases the bewildered customer will need to pay an additional fee for the removal of something they already paid to put in place. That’s a stillborn path to the strategic state.

Hence, to get there, the IT team has to hope for luck or fall back on secrecy: hope that, in a lucky turn of events, the tactical solution breaks and provides a reason to correct it, or count on a forthcoming major project to openly (or discreetly) administer the needed rework effort.

Next time you hear a friendly IT person confidently talk about a tactical solution or any of its synonymous labels, don’t jump too fast to the conclusion that it will elegantly be transmuted into a strategically positioned investment based on a greater plan to get there.

Most of the time, a so-called tactical solution is in reality a permanent solution that sacrifices agility and becomes an IT liability¹ for many years to come.

If you know – or have vaguely heard of – the technical debt concept and hope that it will prevent the sideways steps that keep your IT assets on the sidelines of the strategic investment field, stay tuned for next week’s article.  You will realize that processes designed for the continuous development of software sold directly to customers don’t always apply well to the delivery of business solutions in support of what your organization makes a living from.


[1] The IT Liability idiom is borrowed from the work of Peter Weill & Jeanne Ross from MIT Sloan’s Center for Information Systems Research, and refers to the fact that IT investments may create liabilities rather than assets if these so-called assets become a burden under changing business conditions.

The Unmeasured and Inconsequential Aren’t Getting Any Better

In part 1 of this article, we saw that what really counts in corporate IT is not only measured, but also metered quantitatively, with standardized gauges that leave as little space as possible for misinterpretations. Through exploring a parallel with the pizza delivery business, I attempted to show that anyone can be assigned conflicting accountabilities, such as delivery speed on one hand, and driving regulation compliance or fuel consumption mindfulness on the other. The only way to juggle these clashing duties is through the application of control measures, and the establishment of personal or team-based incentives linked to the resulting indices.

Incentive-Based Performance

Now, if one of the controlled expectations is quantified and directly linked to next year’s bonus, but the other anticipated behavior is not numerically evaluated, what will happen? The result will be the same as in our pizza delivery example. If you don’t measure the time it takes each driver on your team to deliver a pizza, then respecting driving rules (because the controls are already in place) and minimizing fuel burnt (assuming this is metered) become the top priorities. When the time comes for yearly performance reviews, delivery time will be left to the manager’s memory of the past 12 months and the drivers’ egos. You already know that the manager’s memory will be focused on the most recent weeks, and that the drivers will naturally overstate their delivery speed.  This just wouldn’t work; you would get safe, low-carbon, law-abiding driving, but slower delivery times that would jeopardize customer experience – and your competitive edge.

What’s Measured and What’s Important

In Part 1, I presented a table illustrating the usual assessments of performance for the IT function. These indicators are measurable and precise. They also represent the true gauges of personal performance.  Failure to perform adequately in the KTLO (Keep The Light On) category can rapidly lead to dismissal.  Underperformance in the OTOB (On-Time On-Budget) category may take more time to notice, but will eventually translate into career changes. I’ve charted this reality in a simple but eloquent figure.

At the end of Part 1, I posed a simple question: “What about all the other good things you should expect from your corporate IT function?”  You should now grasp that any such remaining features will fall in the lower left-hand quadrant of this figure. They are not measured quantitatively, or not gauged at all, and they have little impact on IT staff keeping their jobs.  If you believe that IT’s performance should cover much more than KTLO or OTOB accountabilities, then I strongly suggest that you scale back your expectations concerning behaviors unassociated with the upper right-hand categories.


The next burning question is obviously: “What falls under ‘The Rest’?”  As its name implies, this category encompasses all other desired duties: the mundane and less significant ones, as well as the crucial virtues that seriously impact the quality of corporate IT’s output.

Another Problem For IT to Solve?

In several upcoming articles you will discover that the perception of quality and the means of its control are significantly related to its position in the chart above. Quality controls specifically associated with quantitatively measured KTLO performance objectives will be defined and applied.  I can safely bet that your IT function is pretty good at those tasks. I can also confidently speculate that the quality controls which play an active role in delivering products on-time and within budget are taken seriously and applied systematically.

The remaining controls are mostly subjective, or plainly nonexistent, given how few repercussions inefficiencies in these areas have on people’s jobs.

Unfortunately, many missing measures can have a direct impact on your organization’s capability to react promptly in an ever-changing environment.  Important areas such as compliance to your own standards, ease of maintenance of platforms, reuse of existing assets, adaptability, or documentation have little impact on people’s jobs, and are, at best, qualitatively measured, if measured at all. These areas fall under “The Rest”, and are probably poorly managed.

But if you think that you simply need to demand that your IT organization be better at those things, you are mistaken.  The performance criteria in The Rest have been neglected for decades.

All attempts that I have seen or heard of were either weak, unevenly applied, or didn’t last very long.  As long as the current hierarchy of rewarded behaviors reigns, it won’t happen.

But expanding what really counts above and beyond KTLO and OTOB requires removing the conflicting accountabilities.  As described in a previous article, your IT function is stuck in an engagement model where, for convenience and historical reasons, a single desk is given all accountabilities.  As you will see in my upcoming book, your IT function has little means for implementing a healthy segregation of duties, and has cashable incentives to remain mediocre in several key areas.

Joseph’s Machine and the Unnecessary Complexity of Business IT Solutions

The best non-technical analogy to explain the extent of the complexity of corporate IT assets, and by the same token why a working IT solution doesn’t prove anything about its quality (the subject of a previous article) appeared on my LinkedIn feed last week:  https://www.youtube.com/watch?v=auIlGqEyTm8.

After watching this two-minute video, your first reaction is probably like mine: amusement and awe over Joseph’s ingenuity. But once I was over the toddler’s cuteness, it came to me that Joseph’s machine can teach a lot about IT solutions.

Am I insinuating that your IT solutions are like Joseph’s machine?  You bet!

Yes, IT business solutions’ engines often look like this under the nice, shiny hood of sleek user interfaces.  What you see is the final product, the cake you want to eat.  What you don’t see are the contorted paths taken to get it to you.

So why are we IT people making things so complicated?

There are many reasons. My first book will give you a broader view of the problem and a deeper understanding of the non-tech root causes. In the meantime, here are three key pointers:

First, Joseph is dealing with the laws of physics – in a brilliant way I should add. In the virtual world of software-based solutions, such laws don’t apply. Furthermore, I suspect that Joseph had to go to a dozen stores to buy all this apparatus and spend a lot of time finding the right gizmos to fit his process.

In software-based solutions, you just click, download, resize, or copy and paste ad infinitum if you wish.  It is usually simple, often effortless.

It can also sprawl in all directions and augment the overall complexity, yet your IT staff will still find a way to make it work.

In other words, the drawback of computer-based solutions is that it is easy to “clog your kitchen” as in the video.

Second, after Joseph is done with video-making, he cleans the kitchen before the in-laws come for dinner. Your IT-based solutions support your business and they stay there as long as you’re operating. As easy as it is to fill the kitchen with software-based components, it is proportionately as difficult to empty the room – unless it was planned for.

Role distribution and performance indicators do not promote designs that make your systems easy to remove. Most of the time, you’re stuck with them.

Finally, Joseph’s machine works and it delivers the cake. The same can be said about your IT business solutions. The current hierarchy of performance measures for corporate IT is dominated by short-term focus with the sempiternal “Keep-the-Light-On” (KTLO) and “On-Time-On-Budget” (OTOB) efficiency gauges.

If your sole expectation is to get the cake on your plate before the competition gets it, then you’ll receive your pastry all right, but do not hope for more. 

It doesn’t have to be this way.

With a more balanced distribution of accountabilities and performance measures that extend beyond short-term expectations to the intrinsic quality of what is built, you can earn a significant competitive edge with your IT solutions. The added benefit?  The next time you need pudding or ice cream instead of a cake, you’ll reduce the probability of your IT team telling you that you need to buy a whole new kitchen. The kitchen-building industry is a prosperous one these days, but it takes your investment money and the precious time you need to beat your rivals.

What Drives Quality

Making parallels between corporate IT work products and those of other fields is adventurous. Nevertheless, I need to find a way to explain what quality control means for corporate IT without getting technical.

Imagine for a moment that your corporate IT team was not delivering technology solutions to your business, but rather automobiles.  Also assume, for the sake of the parallel, that your usual corporate IT quality controls would be applied to these cars.

The car would be put on a tarmac track and a test driver would start the car, accelerate, turn right, turn left, and brake. She would also open all doors and windows, check the fuel gauge, engage the lights, turn on the radio, tilt the seats – everything.  In short, all features would be tested for their practical effectiveness. The car would then be handed off to its owner. That’s it.

Are you tempted to say that it’s enough? If all features and functions are operational, then the quality is where it should be? Of course not.

Fortunately for car makers and owners, some important points are missing from the quality control plan I’ve outlined above; especially determining how well the car is built and assessing its ability to handle sustained use over a period of time – long after its sale to a customer.  In the automotive industry, these procedures address a world of additional concerns, such as: will the car be plagued with rust holes in 12 months?  Will the brakes require changing every 1,000 miles? Will the corner garage mechanic need to drill a hole in the oil pan to make an oil change?

Carmakers understood long ago that features are not enough if the product does not exhibit many other qualities, like longevity, safety, maintainability or reliability. But corporate IT is a strange beast whose behaviors often defy common sense.

So strange that the IT equivalent of drilling a hole in the oil pan is not that farfetched.

Project-Oriented IT and Quality Control

The scope of quality control on technology solutions can be described as business-requirements-centric. Far be it from me to downplay the extent of the tests required to ensure that all requirements are fulfilled, but that’s far from enough. The resulting output can only suffer when certain areas aren’t duly inspected. It’s true for cars, and it applies universally to any situation where there’s a mix of human beings and tight schedules.

How simple will it be to expand the solution? How much effort will it take to retire that solution? Will future generations of IT staff have crystal clear technical documentation at their fingertips? Can this solution easily integrate with other systems or technologies? These questions cannot be answered by controlling the correctness of features and functions.

To understand the dynamics responsible for deficient quality control of corporate IT output, one must first recognize that any change to existing assets, or any new asset creation, is made within the context of a project. This makes sense, since nobody wants multi-million-dollar endeavors governed by anything less than good project management practices.

The issue doesn’t lie with the use of project management wisdom. The problem is that corporate IT decision-making processes are heavily skewed toward the use of project management logic, even in cases where different rationales should be applied. I call this ubiquitous pattern Project-Oriented IT.

Remember that a project is, by definition, a temporary endeavor[1]; it must have a start date and an end date, or else it’s not a project. This also means that anything happening before the project start or after its finish will not be considered part of the project.

So, within our carmaker analogy, the project end date will be when the automobile is delivered to the customer with all promised features functional.  An IT project will be deemed complete when the solution and all of its components are successfully tested to make sure that every feature works properly.

These tests do not acknowledge issues that may (or may not) arise months or years later. A few moons after the IT solution is delivered, the project will have long been closed. Long-term quality does not fit easily into a project.  In project-oriented IT, considerations equivalent to car maintenance costs, body rust, or the premature wearing of parts are rarely a concern.

QA Skills and Independence

“Aren’t corporate IT quality control processes intended to check all these things?” you might be tempted to ask.  The sad but true answer is: not really. For all aspects of quality to be checked systematically and consistently, there needs to be a certain degree of separation between those that build quality and those that control its presence. In most cases, the independent quality controls cover only business features and are carried out by the only unconnected parties in the equation: non-IT folks working for the business sponsor.

These individuals will conduct checks according to their skill sets, which don’t include the technical knowledge required for looking under the hood.  Those that have the skills to inspect the engine and the cabling are probably busy welding another car (working on another project). Even when the internals of the solution are checked, the reviewers are rarely independent enough because they are working under the auspices of project-oriented IT where many quality concerns are of a lesser importance.

In an upcoming article, I show that conflicting roles lead stakeholders to quickly push back against any quality criterion that doesn’t directly help a project within its immediate lifecycle. You will also discover that these same accountability issues are killing the independence required to perform quality controls covering all aspects of the value of what is delivered.

Your takeaway from this article is simple: when it comes to controlling the quality of what you get from your IT investment, you hardly get anything better than a test drive.

To change this, the distribution of measured accountabilities must change in such a way that all aspects of quality are evaluated, not just those that directly impact a project’s delivery. In this book, I dive into all the aspects of IT that impede the creation of quality assets, all of them rooted in the distribution of roles, the accountabilities given to those roles, and the associated measures of performance.

[1] As defined by the Project Management Institute and applied by its hundreds of thousands of certified professionals.

No One is Accountable for What Is Not Measured

In a previous article on the construction industry’s distribution of roles, I demonstrated that centuries of cumulative trials and errors have led to a clear delineation between the main stakeholders’ responsibilities, all to the benefit of the paying customer and the public in general. In corporate IT, as we saw in the article that followed, things are quite different: the paying customer deals with a single desk that plays all roles.

The healthy segregation between those that define the solution and those that build it, those that set standards and those that use them, those that deliver excellence and those that control that quality, is unquestionably absent. 

It would be a mistake to believe this is due to the nature of the solutions being built, as segregation of roles was not always present in the construction industry either. Role definitions were once an issue, as we can see from this quotation from Philibert Delorme [1514–1570], architect and thought leader of the Renaissance:

“Patrons should employ architects instead of turning to some master mason or master carpenter as is the custom, or some painter, some notary or some other person who is supposed to be qualified but more often than not has no better judgment than the patron himself […]”[1]

In my career in IT, I have seen it all: projects without architects, improvised architects with skills issues, true architects without any architecting accountability, architects left to themselves with no organizational support, IT managers architecting, project managers architecting, customers architecting, programmers architecting. These cases are not exceptions, but rather the norm, in one form or another.

There are two main reasons for so much laxity in the execution of such an important function as IT architecture: conflicting roles and lack of measures.

First, there is the conflicting placement of the architect, who often sits in a quarter where he or she cannot truly defend the customer’s interests, subordinate to line managers or project managers that have higher priorities than architecting solutions the right way.

Second, expectations towards the quality of the architecture are neither set nor gauged, again, because there are more urgent and measured accountabilities hanging in the balance.

With so few consequences for wrongdoing, it’s no wonder the architect’s role is so easily hijacked by whoever wants to have a say in that area.

IT architecture is a field where anyone can be elected, or self-elected, to the status of an architect, as long as he/she can make things work. But as we saw in a previous article, a working solution doesn’t prove much. Everyone can have an opinion on the right way to design but is never held accountable for the quality of it.  Opinions without accountability on the subject are as relevant as any other conversation around the coffee machine.

Fortunately, by balancing the distribution of roles with healthy segregation, measures of performance can move toward a healthier equilibrium, so that coffee machine discussions don’t become IT strategies that put at risk million-dollar projects.  The architect’s role will stop being usurped, for doing so will then entail being accountable for it.  An in-depth analysis of these insights and more will be available in my first book.


[1] Catherine Wilson, “The New Professionalism in the Renaissance,” in The Architect: Chapters in the History of the Profession, University of California Press, 1977, p. 125.

The Impossible Polygon Behind the Single Desk

In Part 1 of this series of two articles, I presented a high-level description of the engagement model used in the construction industry, and how the three main stakeholders share accountabilities and duties. Although these three poles have diverging concerns which often lead to conflicting viewpoints, the system works because (a) the roles are clearly defined, and (b) there are institutionalized mechanisms in place to safeguard the stakeholders from potentially detrimental misbehavior.

Let’s look at the most interesting part of this comparison, focusing on the relationship between stakeholders[1].

The Construction Industry Engagement Model

In the construction industry, a customer hires an architect to define the specifics of the structure to be built. The customer then hires a builder, often collaborating with the architect during the selection process. It is quite customary for the architect to perform worksite inspections within the construction engagement model, in order to ensure that the builder has conformed to the drawings and specifications.

The Turn-key Alternative

There is an alternate engagement model in the construction industry called a “turnkey” project. In this model, a customer hires a builder (usually a general contractor) to take care of everything, including architecture, engineering, building, landscaping, and even the procurement of permits. There are two major advantages for the customer in a turnkey project: engaging with a single point of contact, and getting a single price that includes all costs.

There are, however, major risks for the customer choosing this type of project: he is placing complete trust in a single party, while forfeiting the independent quality control available through an architect’s worksite inspections.

Industry Safeguards at Play

Most customers are aware of these potential liabilities, which is why many of them choose the standard A-B-C engagement model. But if one chooses to go with a turnkey arrangement, there are many structural mechanisms to protect the customer in a standardized industry such as civil construction, as described in Part 1 and depicted in the figure below.

Even when the customer deals with only one provider who monopolizes project operations, the city inspectors, trades certifications, building codes, and professional orders remain independent. As for standards compliance, an architect can lose her license to practice if building codes aren’t respected; professional order disciplinary committees or judges will demonstrate little empathy for the fact that she was working for a general contractor who signed directly with that customer.

This variation of the construction engagement model (turnkey) is very important because it mirrors the usual relationship between your organization’s business sponsors and corporate IT. This similarity only exists on the surface, however. There is a huge difference:

there are no external, independent bodies that oversee, standardize, or control the activities and the outputs of your IT department.

The IT Engagement Model Applied to Construction

If we were to apply the IT engagement model to the construction industry, it would resemble the figure below:

The IT systems builder who you engage with is, in fact, responsible for literally everything: gathering requirements, designing the architecture, engineering, managing all the various specialty skills, and of course delivering the solution that you need.

But that’s not all. Your IT builder takes care of the (not so) independent controlling bodies in our construction parallel. The IT counterparts of the construction industry safeguards described above are embedded in that same team.

The corporate IT function determines all standards, establishes mid and long-term plans, baselines the required skills for all IT trades, assesses the adequate knowledge level of staff, delineates roles and their respective accountabilities, and last but not least, oversees its own compliance to the quality standards it defines.

It’s Worse Than You Think

If you think it’s already too much for your definition of segregation of duties, there’s more. Corporate IT is not just responsible for building technology-based business solutions; the same team takes care of everything under the IT sun.

If we were talking about the construction industry, your builder would also be responsible for supplying water, power, gas, road maintenance and emergency services. To top it all off, you are left with a single builder, and very little leeway to shop for alternatives.

I’ll let you call this model what you like. The detrimental effects caused by this monopolization of roles are significant and serious. It increases costs, slows the speed of delivery, and of course lowers the quality of deliverables. In the upcoming articles and book, I will describe in more relatable detail the repercussions of this engagement model. The source of these woes can be traced to its most fundamental root cause: ill-distributed roles. And that’s promising news, because it has nothing to do with technology, and non-IT business people can shift the model to a healthier, more balanced geometry where the paying customer’s interests will be better served.


[1] For a deeper dive into the construction industry, its structure, and the wisdom it can impart, take a look at the soon-to-be-published Volume 1 of An Executive Guide to the New Age of Corporate IT. This article is, in fact, an elevator pitch for Chapters 1 and 2.

Do Not Assume Anything From IT Solutions That (Always) Work

This is where we start: a foundational revelation that will help you understand many of the persistent issues plaguing corporate IT. This truth is one of the most important drivers of lower quality in the work products of the corporate IT function.

Business IT solutions are mainly made of software, and software is highly flexible and malleable, characteristics that are difficult to find elsewhere. Fundamentally, software is a series of electrical impulses representing numbers. All a computer does is add numbers, nothing else. The images on your screen, the voice you hear on your phone, and any other seemingly magical digital phenomenon can be reduced to zeroes and ones. These numbers are then fed to and processed by an immensely powerful number-crunching machine the size of your thumbnail.

Limitless IT Possibilities

Software exists in a virtual world where the laws of physics, as well as most constraints found in other fields, don’t apply. Of course, applications must remain compatible with the physical characteristics of the human beings or machines that will use them. If an IT business solution interacts with production machinery, perhaps opening and closing garage doors, you can expect it to abide by the laws of physics, and probably some standards and regulations.

But apart from these specific cases, it is fair to say that when the IT experts of most businesses are challenged with questions such as “…but is it doable? Can you make it work?”, they cannot honestly answer “no”, because there is always a way to make an IT solution work.

Why Does Your IT Team Say “No”?

You may have painful memories of instances where you were told “no” by your IT teams. Let me assure you that, excluding extreme cases, the reasons for these negative answers were probably that the budget was exhausted, the remaining time was too short, compliance with standards was problematic, or the teams in place were busy doing other things, but not that it wasn’t doable. There is always a way to make it happen when you’re dealing with the intangibles of software and the immense capabilities of computing hardware.

That’s the good news.

Beware of Alternate Solutions That Cut Corners

Often, making programs work simply requires doing things differently. Since software is so malleable, the options available are usually numerous. Unfortunately, doing things differently does not always mean finding a totally innovative, out-of-the-box paradigm.

Most of the time, being imaginative means finding ways to cut corners and still make it work.

The range of options is further extended by the relative inconsequence of errors. In the virtual world of corporate IT, there is little risk of human injury or casualties. Thus far in my career, I’ve never seen anyone dragged into a court of law for a botched design. External bodies will never audit a project down to its technical details. Instances of skimping on quality never get publicized outside the corporation, and often not even outside the project team.

Quality Issues That Translate Into More Complexity

Your IT team will find a way to make a solution work: I can guarantee it.

They will get it to work, whether with little effort or a heroic tug, and whether through best practices or haywire fixes. But heroism and best practices require more time and labor.

Hence the end result will most probably require more maintenance, run slower, have stability issues, present learning challenges to future employees, need replacement sooner, or increase costs in other projects, but it will work.

And if the expected quality levels are not achieved at the finish line, it will be called a fix, a patch, or my favorite, a tactical solution, to convey recognition that it could have been designed and built in a better way. But these idioms don’t express the truth: such solutions add unnecessary IT complexity, which in turn impedes the agility of the very team that created them.

Does this mean that the great powers of information technology, with their almost limitless applications, can also be a hindrance? I’m afraid so. We’re dealing with the archetypal double-edged sword.

Not Proving Much

Your most important takeaway is the following:

The fact that a solution works proves nothing other than that it works. Do not assume for a second that it says anything about the quality of the end product.

However deep your sorrow about this depressing statement, you might be tempted to think that, given all the virtual flexibility of IT, sub-optimally designed solutions can easily be corrected in subsequent projects. But that’s not the way it works, so don’t hold your breath for quality issues to be fixed. In an upcoming article, I will present another unpublicized truth about corporate IT that will lower your expectations about IT’s capacity to realign after sub-optimal solutions are delivered.

Before you do anything hasty, let me reassure you: there is light at the end of the tunnel, and there is a way to reach higher levels of quality that promote nimbleness. The good news is that it has nothing to do with technology and is within the reach of non-IT business executives. If you’re interested, take a minute to subscribe and you will get an automated reminder when new posts are published.

Suspicious About Your Corporate IT’s Speed?

Are you left perplexed when you compare the technological quantum leaps that humanity has witnessed in recent decades to their net effect on the efficiency and speed of your corporate IT function?

Does your mood range from remotely curious to downright fed up when you assess your IT department’s inability to keep up with the pace of your business?

Are you coming to the same conclusion as me: that when it comes to responding to changing business needs, corporate IT has been, at best, steadily mediocre throughout the years?

Do you have the unending impression that your corporate IT continues to show signs of an immature field, even after decades of experience?

Are you suspicious that behind the curtains of technological know-how lies a monstrous amalgamation of old and new technologies, one that baffles the very corporate IT team that created it?

If you’ve answered yes to any or all of these questions, then I have two pieces of good news for you.

The Good News

Firstly, you’re not paranoid. After three decades of working in corporate IT, I can assure you that these are matters you should be concerned about. Across different times, industries, and countries, every IT professional I consulted before publishing acknowledged seeing these issues again and again.

Secondly, you will soon have a refreshingly different perspective on the sources of these issues. The approaches used so far to explain or deal with corporate IT’s performance issues lack a deeper understanding of their non-technological root causes.

Ignore the Technobabble

The answer is not yet another miracle IT solution, vendor, system, or new technology. You’re probably already under a deluge of technobabble sales pitches, each implying in its own way that big data, disruptive innovation, artificial intelligence, the Internet of Things, machine learning, augmented reality, DevOps, micro-services, or the acronym of the year will propel you into another sphere, beyond your current issues.

To get to a new age of corporate IT, a different approach is required: understanding how corporate IT’s underachievement in certain crucial areas, notably the quality of deliverables, is unrelated to technologies and methods. You’ll discover the true culprits: basic management and governance issues leading to unwanted behaviors which, in turn, diminish agility. We will get to the bottom of the cause-and-effect sequence and reach a point where everything looks much simpler, for the root causes are all areas that non-IT business executives can act upon.

Lead the Next Phase in IT

By changing role distribution, transferring accountability, and reviewing the measures of performance, business leaders can bring about a profound and lasting movement to the next phase in corporate IT maturity.

You can shift the center of mass in your business and then let the technically savvy execs and managers take care of the detailed processes and logistics required to complete the transition to the next level. Don’t change the players; change the game. Don’t engage with the details yourself; change the engagement model.

What’s Coming Up

Over the coming weeks, you will learn why your suspicions are far from groundless. You will also gain priceless knowledge about how corporate IT operates behind the closed doors of technical expertise. I will provide a fair, and at times brutal, investigation of these concerns, unveiling issues such as the systematic creation of pointless complexity, the overuse of project management principles, the corporate IT “amnesia syndrome”, and many other quality issues that hinder attempts to speed up IT delivery. It should also get you primed for the new book, available now.