

Square peg in a round hole

Products, Projects and Failure

Why most products fit in digital projects like a square peg in a round hole, leading to systematic failure.

The hurdles of introducing Agile methodologies into organizations are now things of the past. Any new customer I deal with has had an Agile practice in place for many years, often decades, at least for their technology-centric endeavours. Despite this wide success —duly earned and founded in genuine improvements— digital transformations and technology initiatives continue to show embarrassing levels of failure. This lack of success manifests itself both in failing to achieve time and cost targets and in failing to provide real —or perceived— business value.

Common sense dictates that if a good idea doesn’t show positive results, it’s not such a good idea. When this happens, one usually goes back to previous ways of working or searches for alternatives. Rest assured, that’s not going to happen with Agile. But then the question remains and, given the billions of dollars enterprises spend on information technologies, is worth answering: why haven’t Agile methods solved this decades-long chronic failure issue? The rock-bottom root cause is that IT investments are left to a single unit —usually called the IT function, or just IT. What’s wrong with this is that most IT units are project-oriented while Agile methods are product-oriented. Applying Agile methods to project-oriented goals is like trying to fit a square peg into a round hole. As a result, teams working in Agile mode are straitjacketed by the way technology teams are organized to provide their capabilities to your organization.

Single-Desk IT units are project-oriented while Agile methods are product-oriented, and that’s like trying to fit a square peg in a round hole.

What you will learn below is that, as long as the typical engagement model for IT talent in enterprises is not radically changed, digital teams will struggle to deliver value. But first, let’s recap the fundamentals of Agile product centricity.

Solving Genuine Issues

The acceptance of Agile methods in organizations is no surprise. They solve issues that once plagued the progress of digital projects. The classical approach was to analyze everything, then design everything, then build everything that was designed. This type of development approach is called waterfall. A huge proportion of projects conducted this way ended up canned. Too many projects burned lots of gas before organizations realized that dozens of people were not working on the right thing. The consequences were devastating: white elephant projects that lasted for years before finally being stopped, as well as thousands of smaller projects that lasted much longer than expected and acted as business boat anchors, slowing down your ability to change.

Iteration to the Rescue

Agile ways of working take a different approach from waterfall sequencing: iterating. When there is uncertainty about an end product, developing iteratively reduces the risk. You design and build a smaller version or a portion of the end product to test its adequacy. Then you build another part, or a bigger version, and test again. That way, you burn less gas before realizing you weren’t building the right solution. You can then adjust the target you’re aiming for, or stop burning gas altogether until you find the right target.

It is simple, smart and proven effective. In other fields, such as the automotive industry, they call it prototyping. You analyze, then design a mock-up car using a chassis from another car, with a body made of wood or putty. You then show the resulting prototype to groups of people to get their reactions. Only when you think you know exactly what needs to be done do you embark on the rest of the costly process.

The Product-Project Clash

Without going into the technical details of Agile methods, it’s important to understand that iterative development revolves around a product. The product is whatever you are developing or improving. There are two categories of such products, and the difference is very important to grasp.

Front-End Products

In the first category, a product is easy to picture because your organization markets it or it is one of the core services you provide: a software shop that creates phone apps, a publisher that provides a platform for viewing educational videos, or a governmental tribunal that settles residential lease matters and offers a hearing-scheduling platform. In these first-category cases, especially when the products generate revenue, the concept of a product is quite similar to any other product such as mortgage loans, life policies, airline tickets or dishwasher soap. These products have brand names, target customers, cost structures, selling price structures, profitability, life cycles, etc.

Back-End Products

There’s also a second —and widespread— category. More often than not, the product in an Agile context is something more obscure, hidden behind your organization’s main reason for being: an order entry application for a phone operator, a monitoring system for railway convoys, or an automated loan risk evaluator. These systems are trickier for a product owner (PO) to own. In the order entry example, the system may be used to sell dozens of ‘real’ phone products, which means its revenue stream is difficult to evaluate. Whether they belong to the first or the second category, digital teams working with Agile methods treat them all as products.

Product Management

One very important role, in force across all organizations that use Agile ways of working, confirms the centrality of the product: the Product Owner (PO). The product owner has weighty responsibilities, mainly providing guidance on priorities, making sure that requirements are well understood, and ensuring that the product evolves according to the known strategies. A PO is supposed to be a permanent role, lasting the whole lifespan of the product. Products are imagined, designed, developed, launched, monitored, measured, maintained, modified and eventually replaced or retired. As long as there is a product, there should be a busy PO close by, pampering the baby. For the first category of products, that comes naturally.

When it comes to the less visible digital systems of the second category, it’s another story. The PO role doesn’t blend in as naturally, because the baby isn’t sold to real paying customers and is often not well understood due to its technical nature. The product becomes a tech component that digital teams know and maintain, and for which it’s difficult to find a PO outside the technical ranks. For this second category of products, I’ve seen nonexistent POs, acting POs, named-but-absent POs, POs with little time left for the role, POs with insufficient product knowledge, managers as POs, business analysts as POs, and too few knowledgeable POs with enough time to play such an important role.

“Real” products —the ones that you make revenue from— are easy to manage as products. But all the rest are obscure internal or technical creations that are difficult to manage as products.

Measured Performance Drives the Show

The reason behind these role shortcomings is simple: non-IT business people do not see these systems as products, and technical teams are not geared for managing them as products. There’s a void. Most technical teams are part of a bigger, single organizational unit, usually headed by a CIO whose performance is never measured by how well these products are managed. Still, in 2022, delivering digital change endeavours on time and on budget remains among the topmost expectations business executives have of their IT arm. Managing Agile’s products is not even on the list.

Projects Are Not Products

The preferred conduit for delivering technology-dependent change remains the project, and all IT departments are well geared to manage projects. Since most organizations delegate all IT responsibilities to one functional unit, it becomes a project-oriented department, meaning that it is organized, staffed and prepared for delivering projects. A central aspect of projects is that they are, by definition, temporary endeavours. Anything that happens before or after a project is deemed out of its scope. Projects are driven by project managers (PMs) who have in mind the temporary nature of the endeavour they are responsible for. POs, on the other hand, manage permanent assets. In the end, project managers are not product managers.

Project-Driven Agile

Going Agile doesn’t stop your IT function from being project-oriented. Using Agile methods has never replaced the existence of investments and projects. Agile or not, your organization has an investment cycle, supported by known processes, schedules, gates and committees. These processes have been put in place for good reasons, not the least being that any enterprise has limited financial and human resources. It must therefore choose what to invest in and carefully follow that investment through projects. However, as paradoxical as it may seem, good project management practices do not guarantee project success. In the specific case of corporate IT, they are a setup for failure.

Organized for Failure

Project orientation —with or without Agile methods— has perverse effects on organizations: many aspects of the good management of digital technologies are left behind because they deal with longer-than-project-term concerns. The consequences are disastrous. Project-oriented digital teams are mediocre at managing long-term performance. They are dunces at managing the systems they create as assets, since those systems outlive the projects themselves. They are not very good at understanding the tangible value that digital systems bring. Nor do they shine at controlling quality against criteria that could positively affect other projects or some future position. These shortcomings inevitably boomerang back onto subsequent projects and products alike. They create extraneous complexity, more costs and more delays that make projects fail.

Radical Change Needed

We’ve just come full circle within a vicious cycle that creates complexities which, in turn, make projects fail. But how can this cycle be broken? The answer lies in the original, rock-bottom cause behind all this: the organizational model used to engage digital teams in your enterprise. The fact that all IT-related chores are pushed under a single umbrella creates conflicts of roles that are impossible to juggle. One has to choose between irreconcilable responsibilities such as delivering projects or owning and managing products. And when performance measures put on-time, on-budget project delivery at the top of the list, project delivery wins. Field observations concur: short-term project objectives always win.

Hence, the only way out is to completely rethink the way digital teams provide their services and who’s responsible for what. The possible areas of change vary, but one thing is sure: choosing the status quo and waiting to see will not work. If you’re interested in following the topic, subscribe to my newsletter for upcoming articles, books, conferences and educational events.

Episode Nine – Lisa Woodall on Radical Change in Corporate IT

If you had a magic wand and you could radically modify the way corporate IT engages itself in organizations, what would you change?

That’s the question we asked Lisa Woodall, a very active expert and senior business leader with a track record in shaping, leading and implementing portfolio management, enterprise architecture and business transformation initiatives, and an impressive career of important and strategic roles, such as Chief Enterprise Architect, in large organizations across several industries.

She is currently the Global IT Transformation and Value Assurance Lead for WPP, a creativity company focussed on building better futures for clients, people, communities and the planet. She also participates in the Intersection Group, a multi-disciplinary community and platform for creating better enterprises. 

Lisa provides smart answers, based on years of field work, about the need to change how IT works in organizations.  Her insights cover topics such as:

  • IT people need to be more curious about the business of the organization they work in;
  • Digital teams are spending too much time with the wrong people;
  • The business analyst role faces a lack of respect in digital project teams;
  • One of the things to fix: solutions looking for a problem;
  • Businesspeople are underwhelmed by the net result of digital transformations;
  • We ought to learn from venture capitalists on how to look at investment in tech.

The Thread of Your Organization

Digital technology is woven into the very fabric of your organization’s processes. Tossing it into a separate department is like trying to separate the weft from the warp of a fabric: it doesn’t work.

The All-in-One Corporate IT

Where are your digital teams positioned within your corporate organizational chart? If you’re like all customers that I know, they are probably grouped under the oversight of a chief executive. That senior exec is likely part of the executive management committee and their title is most likely Chief Information Officer (CIO), or some variation of it.  

Not Much Change on the Horizon

To be on the safe side, I combed through the publicly available financial statements of a dozen major organizations in both public and private sectors on three continents and in eight different industries.  I took the time for this due diligence to make sure that I hadn’t missed a major shift over the past few years. As I expected, there was no change: my research showed that all medium to large organizations are still—as they were forty years ago—clustering all information technology skills and responsibilities under a single umbrella.

Then, as it often happens, I started to doubt my findings. What if the organizational charts don’t show that the IT skills and responsibilities are, in fact, spread across the organization? What if the CIO has delegated some portions of the digital pie to those accountable for delivering business value through the use of technology?  

All-Inclusive Processes Too

But it seems that isn’t the case either. Below is not an organizational chart, but a list of the highest-level processes of a major financial institution. If you are acquainted with the day-to-day business of a major bank, the list will look familiar. What’s most interesting about it is that the major processes related to information technologies (highlighted in blue) are all grouped into only two processes, labelled Provide, maintain and support IT services and Manage IT modes of operation. This is typical of what I refer to as single-desk IT.

Process list of financial services enterprise

This organizing model doesn’t work very well, and hasn’t for a long time now. 

The Single-Desk IT Model

The delegation of all IT-related tasks and responsibilities to one group may have worked in the olden days when the digital folks wearing white coats were operating machines in an air-conditioned, glass-fronted room in the basement. But in the 21st century, information technologies are now part of the fabric of enterprises. There are many issues with dumping too many digital responsibilities onto the same team, as I described in my first book.

The single-desk IT model is not the result of power-hungry geeks wanting to have it all and control every part of an organization. In fact, more than three decades of field observation have led me to believe that it’s actually the other way around: business people in most organizations are more than happy to toss any IT-related concern, task, or responsibility to someone else.  

This is caused, in part, by a knowledge gap between people who have chosen different career paths. That gap, of course, will never close: most people in your organization aren’t attorneys either, which is why you have a legal department.

But at the end of the day, it is not the legal department that signs on the dotted line of a contract. If there’s a lawsuit against your organization, someone owns the causes behind it and the effects it has on your enterprise; the friendly lawyers on the 7th floor just support the process. 

Mass Delegation of Digital to One Team

You can find parallels with other fields like HR or Finance, but these have their limits, because neither of those departments has become so critical to the daily operations of all other departments over the last 30 years. The knowledge gap can be bridged to a certain extent, but most importantly, the mass delegation of all IT concerns to one department has to be recognized as dangerous to the health of your enterprise. Something has to change. And the larger your organization, the greater the danger to your business agility of delegating all IT tasks, challenges and responsibilities to one team.

Reading: Digital Ethics

My children, like the majority of people who aren’t aware of what’s behind it, tend to take the digital applications they see, touch and use as something that just “is what it is”, as if it were picked from a tree.
Nothing could be further from reality. Behind all this digital ‘stuff’, there are humans making decisions on a daily basis.

I found this very interesting quote in an article on digital ethics by Luciano Floridi. It is about any computer program (called an artificial agent) that seems to have some sort of intelligence:

[…] artificial agents have no intentions, motivations, mind and so on. Therefore, they are part of an ethical discussion centered upon the choices, made by humans, that occur as these systems are built and allowed to operate.

Those (like me) who have studied and worked in the development of digital systems are very much aware that behind any shiny application, there are flesh-and-blood humans who worked on it.
Humans with moods, and humans with their own set of ethics.

How long of a leash are we willing to give them? Is the sacrosanct quest for innovation an acceptable waiver? Once a program is allowed to be used by one, two or millions of people, who’s accountable for its behavior?

In the construction industry, you cannot use a new technique or a new material to build a house, a school, or a high-rise building, without that innovation receiving some sort of approval from independent authorities. The same could be said for the food or pharmaceutical industries.

Questions: Who is vetting the functionality provided by web browsers used by hundreds of millions of individuals? Who’s checking the code behind websites that have become an integral part of your daily life? Answer: the companies that built them.

Does that make you feel any better about the ethics applied, knowing that it’s being taken care of by limited liability corporations focused on revenue and share value?

Episode Eight – Chris Potts on Radical Change in Corporate IT

If you had a magic wand and you could radically modify the way corporate IT engages itself in organizations, what would you change?
That’s the question we asked Chris Potts, an enterprise architect and designer, who provides work and career mentoring, online, for people all around the world.  Chris is also the author of a series of great books on enterprise architecture.

Use the audio reader below, or click here to follow the transcript.

Chris provides great answers, based on decades of field work, about how IT should be viewed, and what organizations need to do to improve value.  His insights cover many topics such as:

  • The four core capabilities of corporate IT;
  • How to find out why you really need your IT department;
  • How communities of interest across the organization can make a huge difference;
  • Why monitoring and managing the value of technology investments shouldn’t be left to those that implement;
  • Change requires that some people let go of some of their beliefs.

Episode Seven: Annika Klyver on Radical Change in Corporate IT

If you had a magic wand and you could radically modify the way corporate IT engages itself in organizations, what would you change?
That’s the question we asked Annika Klyver, a teacher, an innovator in business architecture, the inventor of the Milky Way Enterprise Mapping technique, and co-author of a book on enterprise design patterns.   Currently she is a Business Architect/Designer at Scania and an active member of the enterprise design community of the Intersection Group.  

Annika provides great answers, based on years of field work, about the need to change how IT works in organizations.  Her insights cover many topics such as:

  • Raising expectations towards flexibility of IT systems and designing for change rather than being forced by it.
  • Going away from the handover of deliverables and to the handover of trust.
  • Why value flow roles and product portfolio managers have a weaker position for inducing change.  
  • The need to make people aware of how you progress from an idea to actual change.
  • How corporate IT is often viewed as shadow business.

Here’s the link to the transcript.

Book Review: Enterprise Design Patterns

There are lots of books about design, design patterns, business architecture, solution architecture, enterprise architecture, etc. They often require lots of personal investment in time and concentration as they usually are thorough and lengthy.

Do not be fooled into thinking that this one, with only 121 quite airy pages, was lightly conceived. Do not believe for a second that this opus lacks the depth you need.

It is packed with wisdom gathered through decades of experience. I personally know the authors. I can guarantee that each sentence must have been subject to animated debate on content, form and purpose.

Those who are new to the subject will get valuable advice on ways to proceed. Seasoned practitioners will get friendly reminders that succeeding at designing better enterprises calls for staying aware of a myriad of important aspects to nurture and care for.

Enjoy the read: W. Göbl, M. Guenther, A. Klyver, B. Papegaaij, Enterprise Design Patterns: 35 Ways to Radically Increase Your Impact on the Enterprise, Intersection Group, 2020.

Value of Technology Part 3 – Corporate IT’s Value Is in the Crafting

Using business value as a gauge of performance is not only wrong, it is too advantageous for corporate IT teams. What a predicament: to be measured on something that you have little control over and that is created by others! As long as the other party performs well, you get a free ride to prosperity.

Everyone who contributed to, let’s say, a 20% reduction in waiting times at airport gates should celebrate the business value of this achievement. However, it is by no means a sign that the corporate IT staff involved in the project were any good at doing their part of the job.

Successfully using technology to make a valuable business achievement doesn’t mean anything about the performance of the IT teams involved in the accomplishment.

As a supporting capacity to the real creation of value by your organization, the IT function needs to be measured on other things. Within the current typical engagement model, the corporate IT function shouldn’t be made accountable for creating business value. We saw in Part 2 that corporate IT’s business is not banking, insurance, car manufacturing or offshore drilling. In Part 1 we covered the importance of not confusing the value of an investment in technology with the performance of the tech teams that create the technology.

I’m sure that you’d agree that corporate IT teams should be accountable for adequately supporting your business endeavors.  And what should you expect from them?  The answer is not that simple because corporate IT is —and always has been— a two-headed beast!

One brain is dedicated to operating the IT assets, the other to changing them. The ‘operate’ and ‘change’ halves are very different, and their respective performances cannot be evaluated collectively. One side is dedicated to continuity, short-term actions and transactional speed. The other is dedicated to change and is judged on very different criteria.

The First Half: Quantitatively Measured and Standards-Based

IT operations —as they are usually named— are devoted to keeping computer systems running smoothly. They work with existing assets: the solutions put in place by the second half, the one dedicated to change.

The good news about operations is that, over the last few decades, this half has implemented quantitative measures of performance that leave little space for interpretation. System failures, downtime, response times and others are indisputable measures of a job well done. Furthermore, the major part of IT operations costs comes from purchased, standards-based technology resources. Most of the costs to operate the IT assets are thus traceable to vendor invoices, leaving auditable evidence, and can be compared with the same commodities provided by other vendors. This opens the door to frequent optimization efforts. Operations measure themselves, and they genuinely improve.

The Ill-measured Other Half

The other side of corporate IT lives and breathes change. The development, software factory or solution engineering function —or any of the many other names it’s given— is dedicated to understanding new requirements and changing the systems to adequately support the evolution of your enterprise. The expectation toward this second function should be that it does the right thing, at the right pace. In other words, its performance should be evaluated on the speed at which it provides its contribution and the quality of the work done.

Quality and Speed

Both velocity and excellence of work are important, and they are interdependent. Corporate IT could produce quality outputs at an unacceptable pace or, inversely, speedily provide poorly engineered deliverables. It is not without embarrassment that I’ve witnessed, much too often, poor-quality outputs delivered at a slow pace.

Quality and speed allow your organization to respond more rapidly to market changes or, better, to provoke the disruptive changes that will give you a profitable business edge.

Cost is deliberately left out of the equation, since most corporate IT costs for the development half are directly linked to speed. Being slower or delivering poor quality usually increases costs, sooner or later. Additionally, I suspect that from a purely business point of view, you need more rapid delivery of change, not lower costs —although that wouldn’t hurt.

Poor Measures

The bad news about the development half is that it has no reliable measure for speed and an incomplete grasp of quality. Refer to this article to discover why speed in delivering change cannot improve. The reason is simple: it is not measured. It never has been, and unless the engagement model is changed, the second half will continue to be unmeasured on speed —and remain mediocre on that front.

You may also want to learn why the same faulty engagement model leaves unattended a whole section of quality that later boomerangs back onto your business endeavors and slows IT turnarounds.

Summing Up Value for Corporate IT

Regardless of the state of your organization’s processes for assessing corporate IT’s performance, keeping business value in its right place, as an investment assessment tool, is a first step toward clarity about expected results.

Business value should never be used for appraising corporate IT’s work, and it is no replacement for adequate measures of performance that foster accountability.

The next step is to adopt quantitative measures of performance aligned with your enterprise’s expectations of IT’s work. Beware, however: new meters need to be put in place, because the current ones leave too many important areas unattended, which inevitably leads to underperformance.

Value of Technology Part 2 – Corporate IT’s True Business Is Not Your Business

Business value is not an adequate measure of the value of corporate IT work.  As described in Part 1, business value is great for gauging the value of an investment in technology, but should not be applied to gauge IT’s contribution to your business.

It would make sense if your corporate IT team were spun off into a separate business that serves you and other customers in a truly competitive market. Then the value of IT staff work would have a direct link to business value, since it would be the business. But for now, in the majority of organizations, IT remains a support function of the business —the one that makes the money to fund all investments, including IT.

Business Metrics Rarely Apply to IT Work

You could try —as many others have— to relate IT work excellence to the quantitative measures of efficacy applicable to your industry, the indicators the rest of your organization uses: sales revenue, customer attrition, surgery waiting list length, square footage built, average waiting time at the gate. All these gauges are used to determine how good you are at your business. Using them for IT would be a waste of time and energy. These measurements are too unconnected to IT work for any IT staff —or their managers— to relate to them. On a given workday, it is not their job to improve the sample metrics above, or any dozens of others you could find. IT staff have their plates full of mundane technical chores that need to be accomplished just to keep their heads above water.

For IT professionals, the “business value of their work” is little more than an abstraction; at best an interesting viewpoint that bears little to no practical substance.

If a stranger at a social event asks one of your IT employees what business they’re in, they will most likely answer IT-something, not banking, insurance, offshore drilling, health services, retail or whatever business you’re in. I’ve done it my entire career, and I don’t recall any fellow geek claiming to be in any business besides information technology.

IT Totally Relies on Business to Create Value

If business leaders tell IT staff that the organization needs to steer left to gain new market share, they will do what they can to steer left. In other words, they completely rely on their non-IT, business-savvy colleagues to make the business decisions that lead to success. IT’s contribution resides in executing the IT activities that ensue from the business endeavors that provide the value.

Correlating IT Work to Business Value Is No Help for Betterment

If you nevertheless go down the path of linking the delivery of technical platforms, solutions, applications, etc. to the business value they provide to your organization, then beware that it will remain an accounting exercise, not an accountability one. The lucky ones who worked on highly profitable endeavors will rejoice. The others will be saddened, but all of them will feel quite remote from the concept of business value and how it relates to their achievements.

Since this is an accounting exercise, apart from the CIO and a handful of executives, most IT staff will never be aware that someone has developed spreadsheets enabling financial analysts to correlate yearly IT spending levels to the organization’s revenue or operating costs. And it’s just as well not to tell them, since it would be of little help in devising any course of action for improvement.

The Business Value Comes from Your Business, not IT

That is why you should not be asking —or hoping for— your corporate IT to provide more business value, or worse, to demonstrate the business value of IT. Continue to gauge the business value of business endeavours. Do not lower your guard in assessing the business value of the investments you make in your organization.

Leave the so-called “business value of IT” within your funding arbitrage practices and continue to focus on the business value of your business. 

It’s your business that generates the real value, not IT.

But How Good Is Your IT Team?

That said, it’s still a sound business question to fathom how effective your IT function is at doing what it does.  Corporate IT should be asked to demonstrate its effectiveness at supporting your quest for growth or any other business value you may seek. 

In part 3 of this series, I will reveal that within the current and typical engagement model of IT knowhow in organizations, performance measurement is flawed and doesn’t allow IT to truly improve its contribution.

Value of Technology Part 1 – Investment Value and IT Work

Let’s Get Some Business Value Out of IT

There is a strong belief that corporate IT’s performance should be linked to the business value it provides to the organization. It is wise to want to bind all members of the organizational family to the common goals that provide value. But we must be careful not to cross a very important line: the line between investment value and work value.

If IT is the business, and the work products of the corporate IT function can be directly linked to sales and customer satisfaction, then yes, the linkage is healthy. But in many cases information technology is not your core business, and IT acts as an enabler or a support function. If you’re in entertainment, travel, financial services, or healthcare, you can’t assess the value of IT the same way. In short, you cannot —and should not— use business metrics as a means of assessing IT’s contribution to your business.

Do Not Assess Corporate IT Performance With Business Metrics 

Business metrics are for business.  Unless your organization sells IT products or services, IT’s performance cannot and should not be assessed with business success.

There are two reasons underpinning this proscription. The first is that business value does not connect well to corporate IT work. The second is that it creates distracting noise around the subject of IT performance, pushing corporate IT away from true accountability.

Assessing business value in itself is a healthy practice, as long as it is used to evaluate the business returns of IT expenses from an investor’s point of view.

Value Is In the Investment

The need to find metrics that relate information technology to business value is not new. You can find a steady flow of scholarly and trade articles from the ’80s up to a few months ago that shows the continued interest in the subject.

This infatuation is well-founded. Each organization must allocate a finite amount of resources over a number of initiatives. It must arbitrate the distribution of limited investment dollars and scarce human resources to maximize business returns. This usually starts 12 to 18 months before the actual IT work begins, as part of what is often labelled the Investment Governance Cycle.

Whatever the name given to it, this is the formal process of assessing the relevance of proposed endeavors that require funding. The archetypical technique used to assess investment worthiness is the Cost Benefit Analysis (CBA), but there are several other techniques to help guide funding decisions.
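As a purely illustrative sketch of the arithmetic behind a CBA, the snippet below discounts hypothetical yearly costs and benefits to a net present value. All figures and the discount rate are invented; a real analysis would also weigh risk, intangibles, and alternative uses of the funds.

```python
# Minimal cost-benefit arithmetic: discount yearly cash flows to today's value.
# A negative year-0 flow is the investment; later flows are projected benefits.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value of one cash flow per year, starting at year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

# Hypothetical project: $500k up front, net benefits over four years.
flows = [-500_000, 150_000, 180_000, 180_000, 160_000]
print(f"NPV at 8%: {npv(0.08, flows):,.0f}")  # a positive NPV favors funding
```

A positive result suggests the endeavor returns more than the discount rate; a negative one suggests the funds are better deployed elsewhere.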

Some business projects have a clear intent of changing business processes —and their underlying technologies. These can be revamped customer experiences, new business models, or just good old optimizations of current ways of working. In these cases, the business value of IT should be quite simple to grasp. The need to change the technology should be based on business outcomes with clear benefits that justify the whole endeavour, including its technology parts.

Some changes are cross-functional, cross-project, and often initiated by corporate IT as a means of optimizing its technical operations. Think for example of a knowledge-sharing platform or a VPN infrastructure that better supports work-from-home. For these projects, the task of identifying business value is much more difficult.

The Knowledge Gap

Try to answer this question: what is the business value of moving all your business applications to a cloud-based Docker operational model?

As a business person, you’re probably clueless about the actual meaning of this Docker thing. You’re likely only somewhat aware of the ubiquitous but still mysterious cloud. In all honesty, you might not be 100% sure about the real meaning of the word application.

There’s an obvious knowledge gap. 

Your response to the risks of making a decision about these types of projects is to fall back on standard cost and benefits management practices. You’d probably ask for quantitatively measured benefits, preferably in dollars. You may add a multi-year benefits tracking process to ensure that the declared benefits are indeed reaped.

That’s great, but remember that all of this is investment wisdom, not technology value assessment. The assessment of the value is ‘detached’ from the technology and the teams that work on it. If you couldn’t understand what a ‘cloud-based Docker model’ is, there is also a high probability that you will not grasp the true meaning of the CBA that was produced to justify the investment.

Identification of the business value of IT should be kept where it makes sense: assessing the value of the investment.  Do not make the mistake of transposing a funding governance practice to the assessment of the performance of your IT teams.

Investment Worthiness Is Not IT Excellence

As long as you are linking business value solely to the worth of IT investments, business value is being used within a healthy and advisable practice in your quest to identify the right funding choices. However, business value is not a valid proxy for assessing the actual work done by your corporate IT function once it all starts, 12 to 18 months after the financing decision is made.

Business value of a technology investment is no proxy for assessing the work done with that investment money.

In an upcoming article (part 3), we’ll dive into what performance means for corporate IT. But before that, in part 2 of this series, we need to clarify that for most corporate IT professionals, creating business value is difficult to attain and hard to relate to.

Constructing Digital for Deconstruction

The Citation

“The information revolution is sweeping through our economy.  No company can escape its effects.  Dramatic reductions in the cost of obtaining, processing, and transmitting information are changing the way we do business.”

How do you relate to the statements above?  True? False?  Haunting you day and night?  Excited about endless opportunities?    Need help?  All of the above?

Here’s the interesting detail: it comes from a landmark article by Porter and Millar published in July 1985. Yes, nineteen-eighty-five. How old were you that year? That’s 35 years ago!

Don’t re-read the quote to try to find a flaw related to its age. There isn’t one.

Change, the Only Certitude

There is wisdom to be gained from realizing that disruptive technologies like big data, the internet of things, or artificial intelligence are just the latest in a long series of tech drivers that change the way business is done. That was true in 1985, and it will be true in 2055. Change is the only certainty.

How prepared is your organization for that? More specifically, how well equipped are your digital teams —and the digital assets they have created over the years— to sustain constant change for the next 35 years?

If there is one thing that will not change, it’s the certainty that whatever corporate IT does to support a given business shift will need to be changed again and again. Sooner rather than later, what they’ve created will need to be replaced or retired.

Keep that in mind for a moment as we sidetrack to a personal experience.

Summer Festivals

In fall 2014, I attended an annual symposium organized by the Montreal Chapter of the Project Management Institute. One of the speakers gave a candid presentation on how projects are managed in his business: the logistics and physical installation of the infrastructure required for summer festivals. His job was to transform villages, parks, beaches, or city streets into giant entertainment complexes, with performance stages, parking areas, restaurants, kids’ amusement gear, etc. That’s pretty cool. To the IT guy that I am, his business domain seemed both remote and refreshing. One portion of his talk struck me.

It was about timing pressures. Not the very common pressure of having very little time allotted for complex endeavors. Nothing new there. Not the fact that the start dates of these festivals are cast in stone, publicized at least a year in advance, with no possible way to delay delivery. That made me think about the hundreds of delayed deliveries I have witnessed in corporate IT projects, and it made me feel both privileged and ashamed. But that’s not what struck me.

Guns and Hoses

In order for these happy summer events to occur, streets are blocked. That’s a nightmare for police and fire departments. In case of an unhappy event such as a fire, they have to race onsite without hitting pedestrians on their way there. As such, these civil servants impose notably stringent requirements: not only must the stages be set up in very little time, but the crews must also get the hell out of there ASAP and have the streets clear and clean before the Monday morning rush hour.

The speaker explained that many of the techniques, materials, and processes used for setting up the festivals were not chosen just to do the job: they were designed to favor quick dismantling. Having firefighters and cops breathing down your neck is a powerful incentive. They’re very serious about it —and they mean it.

That’s when it struck me.

Building Stone Monuments

I realized that the assets built by your digital teams are never built with dismantling in mind. The mindset is more along the lines of building pyramids or century-defying monuments. Most systems I have dealt with were never designed to be removed. Neither were they made so that their constituent parts could easily be replaced by new ones.

The first explanation that comes to the mind of most IT experts is that it takes more time and effort to design for easy removal. That’s true.

But haven’t we agreed that change is the only certainty? That any asset created to support your business is bound to be changed or replaced, sooner rather than later? Then why can’t a whole industry that knows very well that change is inevitable create things that are easily removable and replaceable?

Incentives for Doing It

The rock-bottom reason is simple: there are no incentives to do any better. Why would this brand-new system be built to be easily dismantled? Isn’t it the newest and best thing, with the hottest technologies ever, that is going to propel the business to new heights for years to come? Are you asking your IT team to envision the removal of their new baby when it is not even born yet? Without strong incentives, it just won’t happen. That’s why special effort and care are rarely put into all the little details that make the difference for rapid dismantlement.

Incentives for Not Doing It

You might think that acquiring third-party software creates these situations. But vendors do not create solutions that are easily dismantled. They lack any inducement to put easy-to-remove solutions in place. Furthermore, they have hard cash incentives to do the opposite. They are in business to make money, and they have no interest in dismantling their very source of income.

For internal IT, aren’t the maintenance and the removal of IT assets also a source of income? In corporate IT, when the time comes to pull something out, it often has to be done by the same staff that built it. And you pay them by the hour.

Against the Grain

No, IT builders do not think about dismantlement, and asking for it would be going against the grain.

That is, unless there were nervous cops or firefighters breathing down the necks of corporate IT staff about rapid removal and replacement. For that to happen, a radical change in the corporate IT engagement model has to occur.

Designing Your Stairway to Heaven

Standing the Test of Time

I’ve been an unflagging fan of Led Zeppelin since my early teens, and a worshiper of their founder and lead guitarist Jimmy Page. That’s probably why YouTube’s algorithm presented me with this 17-minute video from the BBC in which Mr. Page describes the intent and the result of Zep’s most iconic composition: Stairway to Heaven. Saying that this piece has enduring popularity is an understatement. Today, teenagers whose parents weren’t yet born when this opus was written are still fascinated by the creation.

Jimmy’s Architecture

There are certainly a series of reasons why Stairway to Heaven is so good, and not being a musician, I’m not cognizant enough to comment on all of them.  However, at 4:38 into the video, Jimmy said something that struck me:

“All this stuff was planned.  It was not an accident, or everyone chipping in.  It really was a sort of design.”

Jimmy Page

If you listen to the whole video, there will be no possible doubt: Stairway to Heaven is the result of conscious design. The magnum opus was architected, from the beginning, with a clear vision of the sequence of movements, the textures, the build-up of tempo, and the unfolding of the majestic finale.

Innovation Is Not Design — It Feeds It

Another clear lesson from Master Page: this was not the result of some brainstorming session, an unplanned mashup, or a random amalgamation in hopes of finding a gem. Unknowingly, Jimmy brought more fuel to a conviction that has been building in my mind over the years: innovations and epiphanies emerge before the actual design of digital solutions begins. These pieces of enlightenment are then embedded into the greater creation. The innovations —if any— reside in specific areas of the final product, but they are not the final achievement.

Architecture and Design Make the Masterpiece

This leads to another observation, which is supported by decades of scrutiny and involvement in the world of information systems design: brainstorming sessions, focus groups, innovation dives —and all the good practices that encourage seeing things differently— will not yield a masterpiece.  They will nourish the subsequent process of architecting a creation that uses the innovative gems, but the master work comes from intentional design.

Randomly searching for innovation may lead to interesting designs; but masterpieces that stand the test of time are architected.

If you’re tempted to think that great business systems emerge from innovation, beware that it’s far from enough.  Don’t put all your marbles on the lateral thinking side of things.  Save a few for conscious design.

Beauty in All Creations

In the world of buildings, the importance of architectural beauty is rarely questioned. Well-designed buildings inspire us, comfort us, and ignite seldom-felt emotions. The widely recognized merit of beauty is founded, in part, on the fact that human constructions are tangible creations. We live in them, work in them, and look at them. We can relate the design to what we see or know and understand the value of beauty.

The Many Faces of Beauty

This very interesting video, sent to me by Wolfgang Göbl, is an emotionally compelling reminder that beauty can take many forms. But just because beauty can take many forms does not mean that it can be anything. If beauty can be anything, it loses its significance. Something repulsive or ugly is not beautiful just because someone, somewhere may find beauty in it.

Beauty is Important

This video is also a reminder that beauty is far-reaching. Having been designing for a few decades now, I feel compelled to make a bold statement: beauty should be a sought-after attribute in everything that is worth the time and effort of being designed.

And that’s not just me saying that after some sort of epiphany. Business architecture expert Mike Rosen once reminded me that back in the 1st century BC, Marcus Vitruvius postulated that all buildings should have three attributes: firmitas, utilitas, and venustas, which can be translated as durability, utility, and beauty.

The Many Names of Beauty

I use thesaurus.com every day, so I searched for the related meanings of the word beauty. I found that one of the synonym tabs was labelled advantage. That’s interesting, I thought. I clicked on ‘advantage’ and a world of related meanings appeared: feature, importance, value, asset, attraction, benefit, blessing, boon, merit, and worth.

These synonyms and the video remind us that beauty is not just visual; it can be found in the value that something brings. Isn’t that beautiful?

Beauty in the Intangible Creations

The designs that architects create for information systems and digital technology solutions are chiefly abstract, not visual. Saying that what you see on your computer screen is just the tip of the iceberg is an understatement. These designs are impossible for any user of the system to relate to. In fact, they are hard to connect with even for the majority of computer-literate geeks who work in one IT field or another. That’s why beauty in these types of designs is not just a hard sell; it is often viewed by the uninitiated as a ludicrous quest rooted in some form of designer’s vanity.

But it’s there. Some information technology designs bear beauty because they bring value, advantage, attraction, benefit, resilience, intelligence, or wisdom.

Beauty in All Designs

I strongly believe that the quality criteria for architecture and design in information technology creations need to include beauty. A corollary of this belief is that those who declare themselves designers or architects should understand the importance of beauty, know what beauty means for their designs, and seek to achieve it… or else leave it to others who care.

“They Don’t Know What They Want!” and a Few Ruthless Questions About Estimation in Corporate IT

Estimating how much effort is required for digital transformation projects is not an easy task, especially with incomplete information in your hands. If one doesn’t know in sufficient detail what the business solution to be built has to do, how can one estimate correctly? In the face of such an unchallengeable truth, my only recommendation is to look at the problem from another angle by asking these simple but ruthless questions:

Q1: Why are there so many unknowns about the requirements when estimation time comes?

Instead of declaring that requirements are too vague to perform reliable estimation, couldn’t we simply get better requirements? My observation is that the technical teams that need clear requirements aren’t pushing the requesting parties hard enough. This could be rooted in a lack of direct involvement in core business affairs, an us-and-them culture, an order-taker attitude, or all of the above. Whatever the reason, there is a tendency to take this lack of clarity as an ineluctable fact of life rather than asking genuine questions and doing something about it.

Q2: Why do IT people need detailed requirements for estimation?

There are industries that get pretty good estimates from very rough requirements. In the construction world, with half a dozen questions and a square footage number, experts can give a range that’s pretty good —compared to IT projects, that is. I can hear from a distance that IT projects are far more complex, that “it’s not comparable”, etc. These facts are true, but they do not justify the laxity with which your corporate IT teams tackle the estimation process. The construction industry has worked hard to get to that point, and it relentlessly seeks to improve its estimation performance.

Couldn’t IT teams develop techniques to assess what has to be done from rough requirements, then refine those requirements, re-assess the estimates, and learn from the discrepancies between rough and detailed to improve their estimation technique? Read the last sentence carefully: I did not write ‘improve their estimates’ but rather ‘improve their estimation techniques’. Digital teams are good at the former but mediocre at the latter. IT staff know how to re-assess when more detailed requirements are known, but they are clueless about refining their estimation techniques.
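To make the idea tangible, here is a minimal sketch (all names and figures are hypothetical) of what improving the estimation technique, rather than the individual estimates, could mean in practice: record the ratio of actual cost to rough estimate for each finished project, then use the observed bias to calibrate the next rough estimate.

```python
# Calibrating the estimation technique, not individual estimates:
# learn the systematic bias between rough estimates and actuals,
# then correct future rough estimates by that bias.

class EstimationCalibrator:
    def __init__(self) -> None:
        self.ratios: list[float] = []  # actual / rough, one per finished project

    def record(self, rough_estimate: float, actual: float) -> None:
        """Capture the discrepancy from a finished project."""
        self.ratios.append(actual / rough_estimate)

    def calibrated(self, rough_estimate: float) -> float:
        """Correct a new rough estimate by the average observed bias."""
        if not self.ratios:
            return rough_estimate  # no history yet, nothing to learn from
        bias = sum(self.ratios) / len(self.ratios)
        return rough_estimate * bias

cal = EstimationCalibrator()
cal.record(rough_estimate=100, actual=150)  # ran 50% over
cal.record(rough_estimate=200, actual=260)  # ran 30% over
print(cal.calibrated(120))  # next rough estimate, corrected for past bias
```

The point is not this particular formula; it is that discrepancies are captured and fed back, so the next rough estimate starts from evidence rather than optimism.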

Q3: Is IT the only engineering field where customers don’t know in details what they want at some point? 

Of course not! All engineering fields where professionals have to build something that works face the challenge of customers not knowing what they want, especially at the early stages. Rough requirements can be as vague as “A new regional hospital”, “A personal submarine”, “A multi-sports stadium”, “A log home”, or “A wedding gown”. Professionals in these other fields genuinely work at improving their estimation skills and techniques even with sketchy requirements. But not so in corporate IT.

Q4: Who’s accountable for providing the requirements? 

The standard answer is that requirements should come from the user or the paying customer, and that’s fair. The problem is that IT folks have pushed such a statement too far and distorted it to the point where requirements should fall from the sky, detailed enough for precise estimation, or else be rejected! This has led to the over-used statement that “Users don’t know what they want!” And that’s not fair, especially when it is used to declare that estimating is a useless practice. Which leads to the next question.

Q5: Who’s accountable for getting clear requirements?

That’s the most interesting question. It is different from the previous one, so read carefully. It’s about getting the requirements and being accountable for getting clear requirements. Digital systems are not wedding gowns or log homes. Non-IT people often have a hard time understanding how and what to ask for. Whose responsibility is it to help them? If the requirements aren’t clear enough, who’s accountable for doing something about it? The answer to all these questions should be: those who have the knowledge, and that’s generally the IT folks. What I observe in the field is that IT staff too often nurture an us-versus-them culture in which “they don’t know what they want” is the refrain. Let’s turn that statement around for a moment: “We don’t know what to do”. Isn’t that an interesting way to see things? It’s no longer that they don’t know what they want, but rather that the IT teams don’t know what to build to provide the outcome the organization needs.

Q6: Who’s accountable for knowing what to do? 

We all know who they are. Seeing the problem from that end and with another lighting may substantially reduce the cases when “they don’t know what they want” is a valid point.

Agile™ and Iterative Development to the Rescue! Or is it?

The requirements clarity issue has led smart IT people to use iterative prototyping to solve it for good. The idea is ingenious and simple: build smaller pieces of the solution within a short period of time, show that portion to the users, and let them determine if that’s what they thought they wanted. That’s great, and it’s one reason why Agile™ methods have had such widespread acceptance. However, iterative prototyping doesn’t solve everything, and it certainly avoids a few important issues:

Q7: Are users getting better at understanding their requirements with Agile™?

Are sponsors and users getting any better at knowing what they need before they get any technical team involved? Of course not. Things haven’t improved on that front with Agile™ methods or any iterative prototyping technique for that matter.

Q8: Could prototyping be used as a means of improving how people define requirements?

It certainly could, but that is not being taken care of. Worse, prototyping encourages laxity in the understanding of the requirements. After all, if we’re going to get something every 3 weeks that we can show our sponsor, why should we spend time comprehending the requirements and detailing them? That’s a tempting path of least effort for any busy fellow. The problem is that thinking a bit more, asking more questions, writing down requirements, and having others read them and provide comments takes an order of magnitude less effort than mobilizing a whole team to deliver a working prototype in 3 weeks. The former option is neglected in favor of having fun building something on the patron’s dime.

The False Innovation Argument

Iterative prototyping is used across the board for all kinds of technology-related change endeavors, including those with little to no innovation at all. Do not get fooled into thinking that everything IT teams do is cutting-edge innovation.

In fact, I posit that for the vast majority of the work done, the real innovation has occurred in the very early stages, often at a purely business level, totally detached from technology. What I see in most endeavors is IT teams building mainstream solutions that have been built dozens or hundreds of times, within your organization or in others. Why then is iterative prototyping required? In those cases, iterative development methods are used less to clarify requirements than to manage the uncertainty of teams not knowing how to build the solution or not understanding the systems they work on.

In many cases, using Agile™ is a means for managing the uncertainty around IT folks not knowing how to do it.

Did I ask this other cruel question: who’s accountable for knowing the details of the systems and technologies in place? You know the answer, so it’s not in the list of questions. It’s more of a reminder.

And finally, the most important question related to estimation:

Q9: Is iterative prototyping helping anyone get better at estimating?

Of course not. The whole topic is tossed aside as irrelevant, when not squarely labelled as evil, by those who believe that precious time should be spent developing the next iteration of the product rather than guessing the future.

The Rachitic (or Dead) Estimation Practice

The consequence is that no serious estimation practice has developed within corporate IT. Using the above impediments about ‘not knowing what they want’ to explain why estimations are so often off the mark is one thing. Using these hurdles as an excuse not to get better at estimating is another. IT projects are very good at counting how much something actually cost and comparing it to how much was budgeted. But no one in IT has any interest in comparing actual costs with what was estimated, with the genuine intent of producing better estimates the next time.

This flabbiness in executing what should be a continuous and relentless quest for improvement in estimating takes root in a very simple reality: corporate IT is the one and only provider serving your needs, supplying your organization with everything under the IT sun. On the infrastructure side of IT, competitors have long been aggressively offering your organization alternatives to your in-house function. But the other portion of corporate IT —the one driving change endeavors and managing your application systems— operates in a dream business model: one locked-in customer that pays for all expenses, wages and bonuses, and pays by the hour. When wrong estimates make you lose neither your shirt nor any future business opportunity, the effort of issuing better ones can safely be put elsewhere, where the risks are imminent.

Don’t Ask for Improvement, Induce It

These behaviors cannot be changed or improved without providing incentives for betterment. Unfortunately, the current, typical engagement model of corporate IT in your organization is a major blocker. Don’t ask your IT teams to fix it: they’re stuck in the model. The ones that can change the game are not working on the IT shop floor.

Want some sustainable improvement? Start your journey by understanding the issues, and their true root causes.

Episode Six – Joe Peppard on Radical Change Ideas for Corporate IT

If you had a magic wand and you could radically modify the way corporate IT engages itself in organizations, what would you change?
That’s the question we asked Joe Peppard, Principal Research Scientist at the Center for Information Systems Research of MIT Sloan School of Management.
Joe provides awesome answers based on years of field research about the need to change the IT structure. His insights cover many areas such as:

  • Why the classical IT department holds back organizations in their quest for digital success;
  • Where the real opportunities lie;
  • Managing information technologies vs organizing for success with information technology;
  • Comfortable positions and longstanding behaviors;
  • The inwardly focussed self-management regime of IT departments.

Listen with transcript at this link.

Episode Five – Bard Papegaaij on Radical Change Ideas for Corporate IT

If you had a magic wand and you could radically modify the way corporate IT engages itself in organizations, what would you change?
That’s the question we asked Bard Papegaaij, Chief Change Facilitator at Transgrowth, and former Research Vice-President at Gartner.

Bard gives awesome answers about the need to change corporate culture when it comes to IT.  He also provides insights in many other areas, such as:

  • The great divide between IT and the rest of the organization;
  • How silos are nurtured by the people inside them;
  • The socializing geeks;
  • The IT knowledge gap;
  • The social responsibility of the IT community;
  • How social change can replace a magic wand and create a joyful experience.


For complete transcript, use this link.

Note on the Notes

Notes on the Synthesis of Form

This book is, in my view, the equivalent of the Old Testament for designers and architects. It dates from 1964. Although another Alexander book, The Timeless Way of Building (1979), has been raised to quasi-cult status because it paved the way for very important principles in software design, I believe that this seminal work from the same author is more profound.
In its 1971 preface, Alexander wrote this:

“No one will become a better designer by blindly following this method, or indeed by following any method blindly. On the other hand, if you try to understand the idea that you can create abstract patterns by studying the implication of limited systems of forces, and can create new forms in free combination of these patterns – and realize that this will only work if the patterns which you define deal with systems of forces whose internal interaction is very dense, and whose interaction with the other forces is very weak – then, in the process of trying to create such diagrams or patterns for yourself, you will reach the central idea which this book is all about.”

That’s the high-cohesion-low-coupling principle in its earliest form. The fact that I can just read the preface and grasp what he meant in this dense sentence is a sign of both the influence he has had on later generations and the importance of the principle.
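For readers closer to code, here is a toy illustration of that principle, with invented names: each class keeps its densely interacting parts together (high cohesion) and touches the other only through a narrow interface (low coupling).

```python
# High cohesion, low coupling, in miniature.

class TaxCalculator:
    """Cohesive: all knowledge of tax rules lives in one place."""
    def __init__(self, rate: float) -> None:
        self.rate = rate

    def tax(self, amount: float) -> float:
        return amount * self.rate

class Invoice:
    """Coupled to TaxCalculator only through its single tax() method,
    so tax rules can change without touching invoicing code."""
    def __init__(self, subtotal: float, calculator: TaxCalculator) -> None:
        self.subtotal = subtotal
        self.calculator = calculator

    def total(self) -> float:
        return self.subtotal + self.calculator.tax(self.subtotal)

print(Invoice(100.0, TaxCalculator(rate=0.25)).total())  # 125.0
```

The internal workings of each class interact densely; the interaction between them is deliberately weak, which is exactly the property Alexander describes.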

You will also note the wise recommendation about following methods without thinking.

The man was born in the same year as my father: 1936.

Episode Four – Mike Rosen on Radical Change Ideas for Corporate IT

If you had a magic wand and you could radically modify the way corporate IT engages itself in organizations, what would you change?
That’s the question we asked Mike Rosen, Chief Scientist at Wilton Consulting Group, and co-founder of the Business Architecture Guild.

In answering the question, Mike provides loads of wisdom on:

  • The new drivers of the Digital Economy;
  • Accountability as a means of achieving cross-enterprise results;
  • The Learning Organization: what you have to become, or succumb to;
  • The IT knowledge gap;
  • The enterprise architect’s role and the true value of their work.

To view the transcript, use this link.

Episode Three – Scott W. Ambler on Radical Change Ideas for Corporate IT

If you had a magic wand and you could radically modify the way corporate IT engages itself in organizations, what would you change?
That’s the question we asked Scott W. Ambler, author and Chief Scientist of Disciplined Agile at the Project Management Institute.

Scott shares his wisdom on many important topics, including:

  • Integrated roles in organizations;
  • Collaboration and the great IT-Business divide;
  • Leadership;
  • Outsourcing;
  • Cultural gaps and teamwork;
  • Measurements;
  • Humility as a means of improvement…

You can also read the transcript with this link.

Episode Two – Wolfgang Göbl on Radical Change Ideas for Corporate IT

If you had a magic wand and you could radically change the way corporate IT engages itself in organizations, what would you change?
That’s the question we asked Wolfgang Göbl, founder and President of the Architectural Thinking Association®.

Wolfgang provides great answers to the question, with a focus on accountability.
In addition, he shares his opinions on several important topics, including:

  • The CIO, a role that is bound to become irrelevant;
  • Why certain key tasks should not be left to a distinct IT team;
  • The death of the corporate IT department;
  • The importance of the vocabulary when designing business change;
  • Why the devil may hide in too much detail;

You can also follow the episode with the transcript.


Anything Missing When Measuring Corporate IT Performance?

Let me provide some reassurance about corporate IT: all the accountabilities that are linked to quantitatively gauged measures of performance are subject to rigorous management and are never neglected.

The two broad categories of clearly defined and clearly measured performance objectives are KTLO and OTOB, acronyms for Keep The Lights On and On-Time On-Budget, respectively.

The first category relates to IT operations. Corporate IT’s first and foremost responsibility is to make sure that what has been purchased, leased, built, installed, and has proven to work the first time actually continues to do so, continuously and for as long as your business runs. IT operations are less glamorous from an innovation point of view. IT Ops – as it is often called – doesn’t invent new customer experiences. Neither does it re-architect your organization through radical business design.

But Ops is by far the most critical information technology function because its failure directly impacts the survival of your business in the very short term. If your organization cannot deliver its services to your customers and partners, it literally ceases to exist. As such, IT operations should be taken very seriously; everything IT does or manages is monitored and measured quantitatively, down to fractions of a percentage point. Expectations on the quality, stability, and performance of operations are quantitatively defined up-front. Failure happens, but if the frequency or length of missteps exceeds the agreed-upon performance levels, some people will get seriously nervous about their jobs.

“With the quantitatively measured performance objectives of IT Operations, if failure happens too often, people get nervous about their careers.”
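To make the contrast concrete, here is a hedged illustration of how mechanically such operational targets can be checked. The 99.9% target and the downtime figure are invented for the example; real SLAs and measurement windows vary.

```python
# Illustration only: comparing measured availability to an agreed SLA target.
# The 99.9% target and the 50 minutes of downtime are invented numbers.

def availability(total_minutes, downtime_minutes):
    """Percentage of the period during which the service was up."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

MINUTES_PER_MONTH = 30 * 24 * 60       # 43,200 minutes in a 30-day month
SLA_TARGET = 99.9                      # percent, agreed up-front

measured = availability(MINUTES_PER_MONTH, downtime_minutes=50)
breached = measured < SLA_TARGET       # 50 min down breaches a 99.9% target
```

A 99.9% monthly target tolerates only about 43 minutes of downtime; this is the kind of unchallengeable arithmetic, with universally understood units, that makes KTLO objectives so rigorously managed.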

The second category, OTOB, relates to the execution of business change endeavors. Over the past few decades, there have been many scholarly and trade discussions about the measurement of project performance, and how adequate – or not – the traditional triple evaluation scheme of cost-schedule-scope actually is. The model may have its limitations for those who are intimately involved in executing the endeavors that result in business change, but for those who command the change, assume the risks and reap the benefits – that is you, the paying customer – this performance measurement triad makes a lot of sense. The cost is how much money you need to spend to get what you want or need. The schedule is the time required to get it. And the scope is the extent of what you get for your money.

Scope can be subject to much discussion, since what you want and what you really need may differ quite a bit between the start and the end of a project. To further complicate matters, there is as yet no universal unit of measure for the scope of IT change projects. This imprecision contrasts with the universally understood measures of cost and schedule.

That’s why many business people fall back on the sole use of on-time-on-budget as a comprehensive tool for assessing IT’s performance in delivering change, assuming that what is delivered (scope) is roughly what it ought to be for some business value stream to reach its new state.

“The scope of what is delivered by digital change projects is hard to measure and compare. That’s why most business people fall back on what they can grasp: on-time and on-budget.”
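The asymmetry of the triad can be shown in a few lines. A toy sketch with invented figures: cost and schedule variances reduce to simple arithmetic with universal units (dollars, days), while there is deliberately no third function for scope, since no universal unit of measure exists for it.

```python
# Toy illustration of the measurable two-thirds of cost-schedule-scope.
# All figures are invented for the example.
from datetime import date

def cost_variance_pct(budget, actual_cost):
    """Positive result means over budget."""
    return 100.0 * (actual_cost - budget) / budget

def schedule_variance_days(planned_end, actual_end):
    """Positive result means late."""
    return (actual_end - planned_end).days

# Note there is no scope_variance() here: unlike dollars and days,
# the scope of an IT change project has no universally agreed unit.

over_budget = cost_variance_pct(budget=500_000, actual_cost=575_000)
days_late = schedule_variance_days(date(2024, 6, 30), date(2024, 8, 15))
```

Anyone can read “15% over budget” and “46 days late” without a word of explanation; that legibility is precisely why these two measures dominate.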

Delivering change is not as acute a necessity as keeping the lights on. Failure to be on time or on budget doesn’t have the same impact on personal and team performance evaluations, but performance is gauged nonetheless and delivery dates are managed.

So What’s Missing?

The major issue is that there are very few other quantitatively measured signs of excellence. The rest of what IT does is either subject to non-standard, qualitative evaluations or simply not measured at all. Non-quantified evaluations are debatable and easy to challenge on contextual differences, and non-standardized gauges are hard to compare.

In the end, IT measures itself for only a portion of what it does, focusing on improving what literally counts: where there are unchallengeable numbers with universally understandable units of measure. The rest is left to good intentions, or to how it is believed to positively impact OTOB or KTLO.

Notice that both KTLO and OTOB are measures of either immediate (KTLO) or short-term (OTOB) performance. ‘Keeping the lights on’ means continuous operations, or short transactional tasks. Change projects are by definition temporary endeavors with a beginning and an end. What happens after the project is finished is completely irrelevant. Even major transformation programs are split into manageable chunks that often fit into a calendar year.

The IT-management repercussion of this short-termism is that the lasting impact of IT’s work on your organization is veiled by short-range prerogatives.

The IT aspects that take the hit from short-term measures are quality and assets. More precisely, it is the quality of the work done that suffers, which in turn degrades the quality of the assets you get as a result.

Your organization’s capacity to adapt or respond quickly to changes in its environment depends heavily on the quality of those assets. Asset readiness for change suffers from lower-quality work done in previous projects.

Get the bigger picture in this book about things executives need to know about IT – it will help you understand how most IT teams are evaluated today. These typical metrics have a direct impact on what gets improved, but also on what isn’t being taken care of. Enjoy!

NYC Marathon

Is Your Corporate IT Good Enough? You Get What You Measure For.

I jog.

But honestly, I’ve never liked jogging; I only do it because I need to exercise and there are times when other physical activities aren’t feasible or require too much engagement to get started. Cycling when it’s pouring rain or cross-country skiing with bad snow conditions are not good ideas. Jogging, on the other hand, usually requires no more preparation than a quick change of clothes and lacing up running shoes. Since I don’t really like jogging, I just run and that’s it. Don’t ask me how many minutes it takes me to run a kilometer. Don’t ask me how many Ks I ran this morning or this week. Don’t ask me if I’m improving either. I don’t know because that’s not important to me. I don’t bring my smartphone running. My watch doesn’t calculate my heart rate. I just run for forty-some minutes until it makes me feel good and then I’m done.

If ever someone succeeded in convincing me to enroll in a marathon running club where members have to successfully finish at least one official 42.2-kilometer race per year—or else they lose their membership— things would be different. I would wear a smartwatch, track all my runs, plot my progress on a chart and be very serious about measuring.

“When it’s really important, you measure it. The same thing can be said about paid work.”

If the attainment of a certain goal is critical enough to be linked to a bonus, a promotion or keeping your job, chances are it is appraised with numbers —or will be soon enough. Conversely, failure to attain an expected performance level that is gauged quantitatively is more likely to be treated as a performance problem. We all get the hidden message when a boss gives us numerically metered objectives: these goals are undoubtedly important.

This is universal enough to apply to your digital teams as well. Applying this common wisdom to corporate IT, three questions should be asked and answered:

  1. What are the outcomes expected from the work performed by corporate IT? Can you associate a set of objectives to these outcomes?
  2. Is the attainment of these objectives assessed? And if so, is it gauged quantitatively?
  3. And finally, do these measures of performance relate to the actual work performed and lead to empowered improvements?

The first question is the most crucial one because it directly impacts the answers provided to the next two. There’s nothing wrong with defining strategic objectives such as “driving business value through digital excellence” or any other objective that can be shared between IT folks and the rest of the organization. Bridging the great divide between technical teams and business stakeholders is certainly an objective that many —including yours truly— crave.

“At a very high level, setting business objectives for IT teams to spur a culture of cooperation is fine. But to drive performance improvement, that’s far from enough.”

Given the huge distance between business objectives and the actual services provided by your IT function, such a link becomes arbitrary and out of reach from an IT viewpoint. Although it may be wise to link CIO performance to the organization’s success as a whole, the chosen business criteria would have to be translated into other measures of performance that IT teams can relate to and know they can improve upon. A market-share or customer-satisfaction index gives IT staff no clue about what to improve.

Then, whatever the chosen set of objectives, is it measured quantitatively? It’s not jogging, but it nevertheless needs to be metered. And as discussed above, metrics that are too far from what technical staff are actually working on won’t drive much improvement, and you may well get debatable numbers.

In this previous article —or better, by looking at the bigger picture in this book about things executives need to know about IT— you will see what most IT teams are evaluated on today. These typical metrics have a direct impact on what gets improved, but also on what isn’t being taken care of. Enjoy!

Silo Generator #3

The two previous silo types could be labeled structural silos. They are almost permanent and vary only after major reorgs or when applications are introduced or retired. The third one, the project silo, is the most damaging type of silo.
Although projects and their rigorous management are an absolute must for any organization to govern change endeavors, their very nature and the absence of strong counterbalancing mechanisms make projects temporal silos.
Because all projects are temporary endeavors with a start and an end date, anything that happened before is not managed within the project, and anything that happens after is not taken into account either.

Learn more about how silos of all sorts hamper business agility: https://rmbastien.com/book-summary-the-new-age-of-corporate-it-volume-1/
#CorporateITGameChange #ITMeasures #ITQuality #Sustainability #BusinessAdaptability

Silo Generator #2

The IT function is usually structured in a way to align itself with lines of business, which extends the LOB silo into equivalent silos within corporate IT.

Furthermore, IT teams are also aligned around applications, an ensemble of IT components that serves a LOB or a major business process.

Technology and knowledge walls are nurtured by those IT application teams that slowly build little fiefdoms based on the uniqueness of their application and a privileged relationship with the sponsoring business unit.

Learn more about how silos of all sorts hamper business agility: https://rmbastien.com/book-summary-the-new-age-of-corporate-it-volume-1/
#CorporateITGameChange #ITMeasures #ITQuality #Sustainability #BusinessAdaptability

Silo Generator #1

In most organizations of a certain size, IT budgets are allocated by business unit, roughly following the organizational chart. Business units need to present their investment projects, which invariably require purchasing, developing or modifying some information-system component. In many industries, business projects are often quasi-exclusively made of IT activities. The investment projects presented do need to show positive business returns, if not clearly proven financial returns on investment (ROI).

But very often, the costs and the benefits of a given project are tied to a single business unit. This impedes the sharing of IT resources across lines of business. It also hampers the creation of IT assets that could eventually be shared with other units.

Regardless of the benefits they provide, the more IT assets you have, the more you will spend in future change endeavors.

Learn more about how silos of all sorts hamper business agility: https://rmbastien.com/book-summary-the-new-age-of-corporate-it-volume-1/

Episode One – Milan Guenther on Radical Change Ideas for Corporate IT

If you had a magic wand and you could radically change the way corporate IT engages itself in organizations, what would you modify?
That’s the question we asked Milan Guenther, founding partner at Enterprise Design Associates, author of the Enterprise Design Framework found in his book Intersection.

Not only does Milan provide a great answer to the question by positioning the leadership role that corporate IT has to play, but he also shares great insights on many important topics affecting organizations:

  • Collaboration
  • The IT mess
  • Start-ups and scale-ups
  • Opportunity seeking
  • The true nature of innovation

You can also follow the episode with the transcript.

Here’s the link to Milan’s favorite design book, also mentioned in the podcast: Design Driven Innovation

Complex But Not-So-Adaptive

How the Usual Engagement Model Doesn’t Foster Quick Self-Adjustment in Corporate IT

Your organization is a complex, open system[1]. Open, because it needs to interact with its environment to exist. Complex because it is made of a great number of interacting components, is hard to understand, is difficult to change and often yields unpredictable results. 

The General Systems Theory and its adaptive cousin Cybernetics have been around since the mid-20th century and still provide a useful, high-level understanding of what systems are and how they work. At the most distilled level possible, adaptive systems —such as your organization— can be viewed with as little as a box and a few arrows, as shown in figure 1.  


Figure 1

The grey box is your organization, a complex system. What’s inside the grey box is of little importance at this point; it simply represents the daily interactions happening within your business. Your organization is a system, with its inputs and its outputs. Inputs are anything that gets in: the obvious resources (e.g. financial, natural, human) required to produce the output, but also all other environmental inputs that your organization must take into account (e.g. legal, social, cultural, and more). Outputs are what your organization aims to provide to justify its existence. These outputs are targeted at customers, or whoever benefits from what you produce.

This highly simplified view allows me to draw your attention to something very important: your organization is an adaptive system. It adjusts itself, as most adaptive systems do, or it would have died long ago. Adapting means changing the internals of the system —the grey box— so that it continues to receive its inputs and produce its outputs. In order to survive through adaptation, your organization benefits from a feedback loop, which provides useful data about how well the system is doing.

For most organizations, this response takes the form of customer feedback, sought through mechanisms such as surveys, focus groups and other tools that gauge how happy the customer is with what your organization is doing. But the most important type of feedback of all —the most effective one at triggering actual adaptation of your organization— is sales.

Sales can be called market-segment penetration, gross revenue, property taxes, or traveler miles, but let’s use that word to represent revenues coming back from your organization’s outputs. Positive feedback of this nature signals your system to continue operating in the same fashion. Negative results in sales trigger rapid change. An important detail about sales: not only does it provide very effective feedback, it also has a short- to mid-term impact on one crucial input to your system —namely, financial resources. If sales plummet, so do revenues, and revenues from sales represent, for most organizations, the sole source of financial inputs. Sales feedback is directly linked to the survival of the system, to the viability of your organization. This is one of the reasons why this type of feedback is so effective at effecting change.

The feedback loop allows your business to adapt. One of the most effective types of feedback is revenue (under whatever name it goes), since it has a direct impact on the existence and survival of your organization.
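The loop can even be caricatured in a few lines of Python. This is purely an illustration of the cybernetic idea, not a model of any real business; all numbers are invented. The system keeps changing its internals only for as long as the sales feedback threatens its financial input.

```python
# A deliberately tiny cybernetic sketch: the grey box adapts only because
# the sales feedback loop threatens its financial input. Numbers invented.

def run_system(quality, periods=5):
    """Each period, output generates sales; poor sales force internal change."""
    history = []
    for _ in range(periods):
        sales = quality * 100            # output drives revenue (the feedback)
        if sales < 60:                   # negative feedback: a survival threat
            quality += 0.2               # adapt the internals of the grey box
        history.append(round(sales))
    return history

# A system starting with weak output adapts until the feedback turns positive.
print(run_system(quality=0.3))
```

Remove the `if` branch (the survival-linked feedback) and the system happily produces weak output forever, which is the situation described for corporate IT in what follows.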

What about corporate IT in this simple but effective scheme? One could rightfully point out that the IT function is simply one more component within the complex system. That’s true, but it would not serve the demonstration well. Furthermore, IT’s business is not your business —unless of course your business is IT, in which case this whole demonstration does not apply to you. Corporate IT’s business is to provide your organization with the goods and services that compose the technological solutions supporting the value streams and capabilities that keep your organization in business. Corporate IT has always been, and still is, a support function to the greater whole, regardless of the levels of collaboration and teamwork between IT and non-IT staff. This statement might look harsh or even outdated in light of the current trend of meshing business and IT and declaring out loud that it’s all ‘one team’. That trend has great value at the shop-floor level to create processes and instill a culture that promotes efficiency. But whether you’re in the road construction industry and make very little use of information technologies, or in the banking industry with operations totally meshed with IT, I nevertheless insist: it does not change IT’s position as a support function to the greater whole. Corporate IT teams are composed of individuals with clearly different education, training, work culture and career paths than the rest of your organization. They are not in the business of off-shore drilling, commodity investment, communications, banking, or social caretaking.

Now let’s add to this simple scheme another complex system representing corporate IT. This function of yours can be viewed as a provider to your business within a larger value chain. IT’s inputs are likewise resources: technical apparatus, skilled individuals, and projects that provide requirements and funds. Its outputs are the technological solutions that mesh into your larger business’s value streams. This interaction is depicted in figure 2.

Figure 2

So far so good, right? This figure looks like a copy-paste of figure 1, and it is indeed very similar, at least from a cybernetics point of view. But there is a catch, and it hides in the feedback loop. The most effective feedback mechanism, sales, is not part of the equation.

In the case of your own business, customer feedback through focus groups or surveys is collected, analyzed and acted upon because it will help improve sales, or at least help you understand why sales are not what they should be. If you don’t make money, your business imperatively has to change, or it will soon die. But in the case of the IT function, something crucial is absent: there is no survival component, no presence of the ultimate incentive for improvement.

Figure 3

There isn’t any survival component that could keep your IT staff on alert. But there is more: they witness, from year to year, a steady flow of IT investments coming from your business. Projects come and go, priorities fluctuate, business strategies evolve, but the level of IT investment is generally correlated with the financial health of your business as a whole. It is certainly not tied in any way to your level of satisfaction with the corporate IT subsystem, or to the feedback given.

Corporate IT’s business success is totally dependent on your own business success.  In three decades of working in corporate IT, I’ve seen budgets vary, waves of layoffs, staff optimizations, outsourcing, offshoring and nearshoring.  But never have I seen IT-budget variations based on pure IT performance.

That is why the IT complex system can delay adaptation for quite a long time. As long as the mother system —your business— can adapt itself and survive in its industry, there is little to no survival twist to the feedback corporate IT receives.

How effective would your organization be at listening to your customers, collecting feedback, or analyzing their behavior if it wasn’t linked, directly or indirectly, to sales increase?  How good would you be at rapidly adapting your business if you had witnessed, for the past decades, an uninterrupted flow of yearly funding, always commensurate with your client’s financial health, regardless of its satisfaction towards your products or services? Probably a pale imitation of what your organization can do today to improve customer experience or optimize revenue streams. 

And pale it is, indeed.   In this article, I describe the truly important measures of performance in corporate IT.  You will discover that job-keeping accountabilities are always paired with quantitative measures of performance.  There are two broad categories of accountabilities for corporate IT teams. One represents the run-of-the-mill responsibilities for which there is little space left for interpretation. These include the availability of systems, application response time, pace of deployment of new versions, call-center wait time, new employee set-up delay, etc.  This category is definitely the one for which corporate IT is best equipped, tooled and prepared. It is no coincidence that the run-of-the-mill chores of IT are also the ones that are best supported by cross-industry standards, are regularly purchased from third parties, can be outsourced more easily, are the most easily auditable and are supported by the most comprehensive set of benchmark services.  When performance issues in your work can directly lead to dismissal or when the provided service can be purchased from external sources, you’ve got the survival component mentioned above.

The second category of corporate IT accountabilities relates to its capacity to deliver business change through new or revamped IT-based solutions. This includes the realm of investment projects, deployment of new platforms, digitalization, and all the names given to endeavors that require mobilizing business and IT to deliver something new that will make your business thrive. This category is supported by little-to-no cross-industry standards, is highly customized to your company, doesn’t get outsourced easily, is costly and difficult to audit, and is supported by benchmarks that are too high-level or don’t fit the peculiarities of your organization. That’s why most business people fall back on the only quantitative feedback they can understand: on-time and on-budget metrics. But there is a severe limitation to the effectiveness of these enshrined measures; namely, the budgetary and temporal targets are set by the same team that is measured on their attainment. IT people do lose their jobs for not attaining such targets, but it is fair to say that these are rare cases. The impact of failure in this second category is nowhere near the acuteness of the repercussions of a faux pas in the first category. There is no survival component in the latter.

The rest of the feedback for change comes in the form of qualitative appreciations that are admittedly useful but do not make the cut, since their impact on triggering adaptation within the IT function rarely represents a threat to people or teams. Moreover, because of the huge knowledge gap between tech-savvy team members and the rest of your organization, most of these qualitative feedback items require substantial effort to be translated into actionable improvements at the technical level. No one is against improvement, but when the time comes to put in the effort, it competes directly with project priorities and the short-term objectives of on-time and on-budget delivery.

The feedback loop that helps corporate IT to adapt and improve the delivery of change projects is weakened by the absence of a component that links it to true survival. Compared to most businesses, corporate IT has little skin in the game and not much to lose by perpetuating the status quo or making only small changes to its modes of operation.

The engagement model between corporate IT and the rest of your business is at the deepest root of many issues that impede its ability to provide more value to your organization. It is also one of the fundamental reasons why you may have the impression that the IT function is in an everlasting state of immaturity. To get a better understanding of how your corporate IT works (or isn’t working), I invite you to take a quick read of this book: What You Should Know About Corporate IT But Were Never Told.

You will realize that changing these patterns requires radical change in the way corporate IT engages with the rest of the business, and more specifically, how accountabilities are distributed and measured.  Nothing less than a major revolution, triggered by business people, will allow IT to become a true adaptive system that can change itself to provide what you deserve.



[1] The working definition at MIT for complex systems is: “A system with numerous components and interconnections, interactions or interdependencies that are difficult to describe, understand, predict, manage, design, and/or change.”
– Magee, C.L., de Weck, O.L., Complex System Classification, Fourteenth Annual International Symposium of the International Council On Systems Engineering (INCOSE), June, 2004


Small, Autonomous and Fast-Forward to Lower Quality

I am a jack-of-all-trades. Admittedly —and proudly— I realize that a lifelong series of trial and error, crash courses, evening reading and incurable curiosity has resulted in this ability to do many things, especially things involving some manual work. I feel a certain self-satisfaction thinking about all the situations that could arise in which I would know what to do, and how to do it. I can help someone fix a kitchen sink on a Sunday afternoon. I can drive a snowmobile, sail a catamaran, or connect a portable gasoline generator to your house. My varied skill set affords me a serene feeling of power over the random hazards of life. That, and it’s also lots of fun to do different things.

There is currently an interesting trend in many organizations to favour highly autonomous teams. The rationale is quite simple: autonomy often translates into increased operational leeway, which offers a better breeding ground for innovation. By not being weighed down by other teams, there’s hope that the group will perform better and yield more innovative ideas. There is also the expectation that the team, using an Agile™ method, will produce tangible implementations much faster if it is left alone. The justification is founded in the belief that small teams perform better. Makes sense: the smaller the team, the easier the communication, and we all know that ineffective communication is a major source of inefficiency, in IT as in any other field. And if you want your team to be autonomous and composed of as few individuals as possible, then there is a very good chance that you need multi-skilled resources.

Jacks-Of-All-Trades and Interchangeable Roles

You need jacks-of-all-trades; otherwise, either the number of individuals will increase or you will need to interact with other teams that have some of the required skills to get the job done. Either way, you will not be as autonomous as you’d like.

But there is more: the sole presence of multi-skilled individuals is not enough to keep your team small and efficient in yielding visible results at an acceptable pace.  You must have an operating rule that all individuals are interchangeable in their roles.  If Judy —a highly skilled business analyst— is not available in the next two days to work on sketching the revamped process flow, then Imad —a less skilled business analyst, but a highly motivated jack-of-all-trades nevertheless— needs to take that ball and run with it.   You need multi-skilled resources and interchangeable roles.  That’s all pretty basic and understandable, and your organization might already have these types of teams in action.

For a small and autonomous team to keep its small size and its independence from others, it needs to be made of jacks-of-all-trades with interchangeable roles; otherwise it will either grow in size or come to depend on outsiders.

Conflicts of Roles in Small Autonomous Teams

Before you declare victory, or rush into hiring a bunch of graduates with a major in all-trades resourcefulness and let them loose on a green field of innovation, read what follows so that you also put the proper boundaries in place. If you want to ensure maximum levels of quality and sustainability in what comes out of small, autonomous, multi-skilled teams, you need to ensure that no conflicting roles are put on the shoulders of the individuals who have to juggle them.

Conflicts of roles occur when the same person must do work that should normally be assigned to different people. The most obvious —and, in corporate IT, the most abused— combination of conflicting roles is creating something and quality-controlling that same thing. This can be said of any field, really, not just IT. Industrial production process designers have understood for centuries that the person performing quality checks should never be the one whose work is being checked. Easily solved, you might think. You just need to have Judy check Imad’s work in two days when she’s available, and the issue is solved! Maybe —but there’s a catch.

No Accountability and No Independence

Proper quality control requires at least one of these two conditions: (a) the person being checked and the controller must both be accountable for the level of quality of the work, or else (b) the person doing the quality control must be able to perform the reviews independently. If Imad and Judy are both part of a team that is measured on the speed at which it delivers innovative solutions that work, then there is a good chance that quality is reduced to having a solution that works, period. Other quality criteria are undoubtedly agreed-upon virtues that no one is against, but they are not as important as speed. As described in another article, in IT more than in any other field, a working solution might be, under the hood, a chaotic technical collage barely holding together with haywire and duct tape— but it can still work.

These situations often occur when IT staff are put under pressure and forced to cut corners. Speed of delivery then competes directly with quality for the person-hours required to deliver excellence. If the small, autonomous, multi-skilled team’s ultimate success criterion is speed, then Judy’s check on Imad’s work is jeopardized whenever the quality of his work has no impact on speed. Because Judy and Imad are both part of a group that must deliver with speed, neither of them is really accountable for any quality criterion other than simply having the thing work. As long as it doesn’t impede delivery pace, any other quality criterion is just an agreeable and desirable virtue, but nothing more. Judy is not totally independent in her quality-control role and, worse, there is no accountability regarding quality.

When a small and autonomous team’s main objective is to deliver fast, any quality item that has no immediate impact on speed of delivery becomes secondary, and no-one is accountable for it.

And it doesn’t stop there: since quality control takes time, the actual chore of checking for quality comes into direct conflict with speed, because valuable time from multi-skilled people is needed to ensure quality compliance. After two days, when she becomes available, Judy could check Imad’s work, yes, but she could also start working on the next iteration, thus helping the team run faster. If no one is accountable for quality, Judy’s oversight will soon be forgotten. Quality is continuously jeopardized, and your autonomous teams become fertile soil for the systematic creation of lower-quality work.

There’s No Magic Solution: Make Them Accountable or Use Outsiders

So, what precautions must be taken to ensure maximum levels of quality in multi-skilled, autonomous teams? The answer is obvious: either (1) the whole team must be clearly held accountable for all aspects of the work —including quality— or (2) potentially conflicting role assignments must be given to individuals who are independent: that is, accountable for and measured on the work they do, not on the team’s performance.

If you go with the first option, beware of getting trapped into conflicting accountabilities again, and read this article to understand how quality can be challenged by how it is measured. To achieve independence (second option), you will need team members to report to some other cross-functional team, or accept an infringement on your hopes of total autonomy by relying on outsiders. Although multi-skilled and autonomous teams are an enticing prospect for jacks-of-all-trades, the agility they bring should not be embraced at the expense of the quality of the assets you harvest from them.

Lower Quality at Scale

If you want to understand how and why unwanted behaviors such as those depicted above are not only affecting small autonomous teams, but are also transforming the whole of corporate IT into a mass complexity-generating machine that slows down business, read this mind-changing book.  It will help you understand why lower quality work products are bound to be created, not only in small, autonomous and innovation-centric teams, but almost everywhere in your IT function.

Innovation: Where IT Standards Should Stand

The use, re-use or definition of standards when implementing any type of IT solution has very powerful virtues. I’m going to outline them here so you can see how these standards play into the (often misunderstood) notion of innovation in corporate IT. We’ll then see where IT innovation truly happens in this context, while underpinning the importance of using or improving IT standards to support overall innovation effectiveness.

The Innate Virtues of IT Standards

  • Sharing knowledge.  Without standardization, each team works in its own little arena, unaware of potentially better ways of doing things and not sharing its own wisdom.  It is much easier to make all IT stakeholders aware of practices, tools or processes when they are standardized. Systematic use and continuous improvement of IT standards act as a powerful incentive for knowledge sharing.
  • Setting quality targets. Standards minimize errors and poor quality through the systematic use of good practices.  They encompass many facets, from effectiveness to security, to adaptability, to maintainability, and much more.
  • Focusing on what counts.  A green field with no constraints and no prior decisions to comply with might entice your imagination, but it can also drive you crazy if everything has to be defined and decided.  IT standards allow you to focus on what needs to be changed, defaulting all other decisions to the use of the existing standards.  
  • Containing unnecessary complexity.  The proliferation of IT technologies, tools, processes and practices in your corporate landscape is a scourge that impedes business agility.  Absence of standards interferes with knowledge sharing and mobility of IT resources.  Multiplicity of similar technologies makes your IT environment more difficult to comprehend, forcing scarce expert resources to focus on making sense out of the existing complexity rather than building the envisioned business value.

The use and continuous improvement of IT standards is one of the most effective cross-enterprise safeguards for IT effectiveness, IT quality, and, in the end, your business agility.

Despite all these advantages, a trend emerging in many organizations puts these virtues at risk.

The Lab Trend

In the last few years, it has become mainstream strategy for large, established corporations to create parallel organizations, often called “labs”, that act as powerhouses to propel the rest of the organization into the new digitalized era of disruptive innovations.  This article is not about challenging this wisdom, which may be the only possible way —at least in the short-term— to relieve the organization from the burden of decades of organic development of IT assets and processes that slow down the innovation pace. 

Unfortunately, there are people in your organization who associate standards with the ‘old way’ of doing things.  After all, aren’t all standards created after innovation, to support the repeated mainstream usage of innovative tools, processes or technologies that came before them?

Making the leap that IT standards should not be considered in the innovation process, not included in the development of prototypes or proofs of concept, or —more simplistically— not be part of anything close to innovative groups, is a huge mistake.

The decision to use or not use a given IT standard depends on what you are innovating, and on what stage of the innovation process you are in. The IT work required to implement business innovations is rarely wall-to-wall innovative. Standards cannot —and should not— be taken out of the innovation process from start to finish. I’d go a step further: standards should always be used, except when the innovation requires redefining them. And the latter case is exceptional. To help you grasp the difference between true business innovation and its actual implementation, here’s a simple analogy:

The Nuts and Bolts of Innovation

In the construction industry, there are well-known standards that determine when to use nails, when to use screws, and when to use bolts in building a structure. These standards stipulate the reasons to choose one over the other (e.g. because nailing is much faster to execute and cheaper in material costs). They also spell out how to execute: how many nails to drive, their size and the spacing between them, safety precautions, etc.

Now suppose that your new business model is about building houses that can be easily dismantled and moved elsewhere, say to support a niche market of temporary housing for the growing number of climate-related catastrophes. You decide to build whole houses without ever using nails or screws, by bolting everything. You would make this decision to simplify dismantlement, easily moving the house and rebuilding it elsewhere. The technical novelty here lies in the systematic use of bolts where the rest of the industry normally uses nails. Bolts are slower to install and more expensive, but they would allow you to easily disassemble the house.

But when a worker bolts two-by-six wood studs, the actual execution of bolting is not an innovation; it has been known for centuries and the execution standard can be used as is.  In other words, when a worker is on the site and bolting, the innovation has already occurred when the choice was made not to use nails or screws. The market disruptive strategy was determined before, and it is now time to apply bolting best practices and good craftsmanship.

No Ubiquitous IT Innovation in Corporate IT

For IT-based business solutions, by the time the teams are implementing the processes, systems and technologies, most of the business innovation has probably already occurred in the previous phases.

When IT staff are actually building the technical components of your new modes of operation, the business innovation part has already occurred: it lies in the prior choices made during design.

The techies might be testing the innovation through some sort of a prototype, but it doesn’t make their work innovative. When you look at it from a high enough viewpoint, isn’t implementing a new business process with information technologies what corporate IT has been doing for decades?  

When building the IT components of innovative business solutions, where is the actual innovation? Is it in the new business processes or in the way they are technically implemented? Chances are that the real value is in the former, not the latter, because your initial intention was to aim for business value, not technical prowess.

It may very well be that, at the IT shop-floor level, what needs to be done is to apply good practices and standards that have been around for years, if not decades.

In our era of multi-skilled, cross-functional, autonomous, self-directed and agile teams —all busy growing new solutions that support constantly evolving business processes— there is a line that should not be crossed: thinking that innovation applies to everything, including the shop-floor-level definition of good craftsmanship.

Don’t Pioneer Without IT Standards

My observations are that when IT practitioners are part of teams dedicated to innovative business solutions, they often become overzealous, abandoning standardization and tossing tried-and-true practices out the window.   I’ve seen IT people making a clean-sweep of all established standards and proclaiming every part of a solution as innovative.   I’ve seen technical staff blindly pulling so-called innovative technologies into the equation with little understanding of their real contribution to business value.  This has a direct impact on the quality of the resulting work. Here’s how:

  1. IT staff end up using bolts where nails would be fine, or using nails where they should have used bolts;
  2. New platforms are built with no standards used or defined.

In both cases, the impact on your future change projects is catastrophic: lack of shared knowledge, unknown quality levels, lost time and effort reinventing the world, and, most importantly, the creation of more unnecessary IT complexity. The resulting assets will be hard to integrate, impossible to dismantle, incomprehensible to anyone but those who created them, and costly to maintain. In other words, your business agility will be seriously jeopardized.

The results from innovation without standards will fast-track you to the same burdensome position you tried to free yourself from with your old, outdated platforms.

The only way to avoid this unhealthy pattern is to make sure that the mandate is not just about innovating at any cost.  It must include the use and creation of standards, and limit the scope of change to what creates business value.

Set the Standard

First, your innovation team should not only devise new ways to do business: it must make it a priority to use and reuse standard practices and technologies unless the innovation requires otherwise. When a given standard is not applicable, their job should include defining its replacement. The idiom “to set the standard” earns its full significance: re-inventing business models that others will now run to catch up with, and defining the standards for your organization and future projects to use and leverage. Your future business agility heavily depends on the systematic application of good craftsmanship in your current innovations.

New Technologies Need to Bring Value, Not Novelty

Secondly, your new parallel ‘lab’ organization should bear the onus of justifying the use of any new or different technology. How will it contribute to the innovative, business-oriented end result that you seek? When technologists are presented with the enticing prospect of having no obligation to use any of the standards in place in your organization, they will jump at it. This often leads to the introduction of new technologies for their own sake, based on no other justification than hunches, hearsay, or how attractive they may look on a resume.

The use, reuse, and redefinition of IT standards should always be part of your innovation team’s mandate.  If not, your future business model will be made of foundational assets built as if there was no tomorrow.

Beware of catching the contagious over-excitement about the scope of innovation. Most of the IT processes and components that result from business innovation can use mainstream practices and standard technologies. The legitimately innovative portion —the one that really makes a difference— is just a fraction of the whole undertaking, and very often, the truly novel part is simply not technological.

Provide Leeway But Set Quality Expectations

So, even if you rightfully decide to go down the path of creating parallel organizations, don’t allow them too much leeway when it comes to standards. Do not sign the cheque without a minimal set of formal expectations regarding sustainability, which must include standards compliance.

The key is in clear accountabilities and coherent measures of performance. If you want to learn more about how poorly-distributed roles can sabotage the work of your corporate IT function, read this short but mind-changing business strategy book.

IT Project Failures Are IT Failures

While conducting research for Volume 1 of my first book[1], I wanted to investigate the root causes of IT project failures. I was completely convinced —and still am— that these failures are significantly related to the quality of the work previously done by the teams laboring on these endeavors. In other words, the recurring struggle that IT teams face, often leading to their inability to deliver IT projects on time, is directly linked to the nature (and the quality) of the IT assets already in place. I found a wealth of information relating to project failures, as well as a disappointing revelation.

The Puzzling Root Cause Inventory

This disconcerting realization was that the complexity of existing IT assets is rarely mentioned. Technological issues barely appear in the majority of the literature on project failure. Just for the sake of it, I performed an unscientific and unsystematic survey of professional blogs and magazines, and came up with a list of 190 causes of failure. The reasons range from insufficient sponsor involvement to faulty governance, communications, engagement, etc. I found nothing really surprising, albeit depressing in some ways. Of these reasons, a mere 11 were related to the technology itself, while one, and only one, referred to the underestimation of complexity.

This number inaccurately reflects reality. It doesn’t make sense that, for technology-based projects, there is such thin representation of technology-related issues. The proportions don’t match the reality of the corporate trenches on a day-to-day basis. If your platforms are made of too many disjointed components, or were built by siloed teams; if their design and implementation were poorly documented to cut costs, or standard-compliance practices were ill-controlled, then they are bound to contribute to failure. If your internal IT teams have a hard time understanding their own creations, or frequently uncover technical components that were never taken into account, how can you be surprised when schedule slippages occur in their projects? The state of what is in place plays a major role —and it’s definitely not in a proportion of 1:190.

A Definite Project Management Skew

This gap in the documented understanding is due to a project-management bias in the identification of root causes of IT project failure. This is quite understandable, since the project management community is at the forefront of determining project success and failure. Project managers are mainly assessed on on-time and on-budget delivery[2]. They take underperformance seriously, and that is why the available knowledge on root causes is disproportionately skewed toward non-technical sources.

Project managers tackle failure as a genuine project management issue, and the solutions they find are consequently colored by their project management practice and knowledge.

I wouldn’t want to undervalue the importance of the skills, processes and good practices of project management. But we need to recognize the foundational importance of the assets that are already in place. They are not just another variable in the risk-management plan. They are the base matter from which an IT project starts, along with business objectives and derived requirements. On any given workday, IT staff are not working “on a project”; they are heads-down plowing through existing assets or creating new ones that need to fit with the rest.

The Base Matter Matters a Lot

If IT projects were delivering houses, the assets in place would be the geological nature of the lot, the street in front of the lot, the water and sewage mains under the pavement, the posts and cables delivering electricity, and the availability of raw materials. Such parameters are well known when estimating real estate projects. If you failed to take into account that the street was unavailable at the start date of the construction project, that there was no supply of electricity, that the lot was in fact a swamp, or that there was no cement factory within a 400-mile radius of your construction site, you could be sure that the project would run over schedule and over budget. The state of your existing assets creates “surprises” of the same magnitude as the construction examples above. When your assumptions about the things in place are confounded because quality standards weren’t followed or up-to-date documentation was unavailable, your estimates will suffer.

Any corporate IT project that doesn’t start from a clean slate[3] —and most aren’t— runs into issues related to the state of the assets already in place.

The unnecessary complexity induced by poorly documented or contorted solutions is not a view of the mind.  It is the harsh reality that corporate IT teams face on a daily basis.  It is the matter that undermines their capacity to estimate what has to be done, that cripples their ability to execute at the speed you wish they delivered.

IT Quality Is an IT Accountability

Although project success is, by all means, a project management objective, the state of an IT portfolio isn’t.

The quality of what has been delivered in the past, and how it helps or impedes project success is not a project management accountability. It’s a genuine corporate IT issue.

So tossing it all into project management accountabilities is an easy way out. If important business projects are bogged down by an organization’s inadequate IT portfolio, it is primarily an IT problem, and only secondarily a project risk or issue. Project managers with slipping schedules and blown budgets took failures seriously enough to identify 190 potential root causes and devise ways to tackle them. Nobody in corporate IT has ever done anything close to that concerning IT complexity or any other quality criteria applicable to IT assets.

This vacuum has nothing to do with skills, since IT people have all the expertise required to identify the root causes and work out ways to reduce unwanted complexity.

It’s all about having the incentives to fix the problem. The reasons to solve it are not just weak; they are outweighed by motivations to do nothing about it[4].


[1] More details on the book available on my blog’s book page.

[2] Also detailed in the book, or in this recent article.

[3] See this other article on the clean slate myth.

[4] For more details on this, take a look at my latest book.

The Word is Out… and It’s in a Book!

Ready to read, share, and think about.   Available at Amazon in various formats.

If you believe these ideas should be shared, please write a quick review on Amazon, so that the book has more chances to appear on search results for potential readers.

Special thanks to all those that helped or supported me in this voyage.

I will now try a totally new thing before diving into Volume 2: summertime leisure…

Available on July 24th

In less than two weeks, all loose ends should be tied up and Volume 1 ready to read, share, shock or mull over.  Get it on amazon.com, either as eBook, paperback or hardcover version.

The Inconsequential Repercussions of Poor Estimation in Project-Oriented IT

Estimating —the art of making educated guesses about how much time and money are required to perform something— is a difficult task, particularly in corporate IT. I have provided estimates, collected them, validated them, compiled them, suffered from them and abided by them, and let me assure you that this whole estimation business is far from trivial. Being a difficult task is one thing, but that should not be a reason to push the subject aside.

So let’s look at a classic scenario that I have seen in all corporate IT projects that I’ve been involved with:

  • The first estimations are made with very little knowledge about the requirements during the IT investment budgeting cycle, starting six months to more than a year before the project is effectively launched.
  • The budgeting cycle directly involves the IT managers who will be responsible for building the solution. It is their opinion that carries the most weight in the balance.
  • In the best-case scenario, technical experts, designers and architects will be involved in a quick tour of the requirements and a high-level design of the solution. In other, less ideal cases, the managers will make the estimates.
  • Estimates are made with very little time allotted for the exercise, with managers and experts busy delivering current-year projects and dozens of other projects to evaluate within just a few weeks.
  • No quantitative method is used because the IT team has never developed such methods. There is little usable historical data, apart from the actuals of past projects. The identification of analogous projects is left to the memory of people rather than a rigorous classification of past expenses.
  • After several rounds of investment prioritization, the remaining investment projects will be challenged on estimates.
  • Based on the same limited knowledge of the requirements, and with still very little quantitative data to back up their argument, IT managers, sometimes with the help of their experts, will come up with more stringent assumptions in order to reduce the estimates and fit the expected budget.
  • At this point, the fear of having a given project cut from the investment list will have a definite effect on the level of optimism of the involved parties, both on the business sponsoring side and the IT team.
  • If the project makes it through the cuts, then in the next fiscal year a project team will be assembled. Only then will the true requirements be fleshed-out with the help of business experts, leading to a more complete IT architecture.
  • This detailed knowledge will lead to re-estimation of the cost and schedule. Most of the time, the new estimates will be higher than the ones from the budgeting cycle estimates. If the budget cannot be trimmed, then features will be cut.
  • In some organizations, a gating process may be put in place to reassess the net business value of the IT investment in view of the more accurate costs and schedule. The project may not pass the gate, at which point it is cancelled.
  • However, in many organizations, IT investment gating is avoided – or is nonexistent – and the business sponsor, project manager and IT managers will work on the expected scope and schedule in order to deliver something of value within the current year.
  • If the business value cannot be achieved within the available budget/schedule, a change request may be issued, frequently justified by the falsehood of one or more of the original estimation assumptions.
  • Since there is no formal quantitative estimation model in place, there is no process to assess if the change requests are caused by flaws in the estimation practice, nor is there a way to address how it could be improved for future projects.
  • Upon completion, the project may deliver fewer functions or less business value than expected, but since the original requirements were pretty vague, it is difficult to assess the delta.

This sequence of events is typical and classical; it is one of many variations that occur in IT organizations. Estimation-wise, its most important characteristic is that the estimation duty and its accompanying tools and data suffer from little rigor, no repeatability, an absence of relevant data collection, and archaic tools.

In short, the corporate IT estimation discipline is so immature that it can’t be called a practice.  Things are mostly left to good intentions and experience.   
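To make the idea of a "quantitative method" concrete, here is a minimal, purely illustrative sketch in Python of the least an IT shop could do with data it already has: the actuals of past projects. It derives an empirical correction factor from historical estimate-versus-actual ratios and applies it to a new raw estimate. The figures and names are invented for illustration; a real practice would segment by project type and carry uncertainty ranges, not a single multiplier.

```python
from statistics import median

# Hypothetical historical data: (initial estimate, actual cost), in $K.
# Most IT functions already hold this in their project actuals.
history = [
    (500, 740), (1200, 1510), (300, 450), (800, 1290), (250, 310),
]

def correction_factor(history):
    """Median ratio of actual cost to initial estimate across past projects."""
    return median(actual / estimate for estimate, actual in history)

def calibrated_estimate(raw_estimate, history):
    """Adjust a raw, hunch-based estimate with the historical factor."""
    return raw_estimate * correction_factor(history)

factor = correction_factor(history)
print(f"historical correction factor: {factor:.2f}")
print(f"calibrated figure for a $600K guess: ${calibrated_estimate(600, history):.0f}K")
```

Even this crude loop —record, compare, recalibrate— would give estimators something better than memory to argue from, and it is precisely the feedback loop the scenario above shows to be missing.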

Even the Agile™ tidal wave isn’t bringing much improvement in that area. An iterative development method is a blessing for preventing large projects from becoming white elephants. It is also a benediction for eliciting requirements when complexity, unknowns, or ignorance significantly raise risk levels. But the Agile deployments I have seen mislead many actors into thinking that the need to know in advance how much something is going to cost has suddenly become obsolete. There is always someone investing some amount to get some result. I have yet to see, read or hear about any improvement in the rigor and effectiveness of the estimation process and its results provided by any development method, Agile or other. The agile way of tackling IT-related change has taken the ignominious waterfall method and sliced it to shorten delivery times and allow work to be reoriented. But work still has to be estimated before action, and calling it Planning Poker or T-shirt Sizing doesn’t make it more rigorous than any other technique I’ve witnessed in the past 30 years.

Agile™ methods have brought tangible improvements in corporate IT’s delivery effectiveness.  But from an estimation point of view, apart from cool names, the techniques are still based on good intentions and experience.

Corporate IT is nowhere close to mature in its estimation practice. If someone in your IT function ever tries to excuse the difficulty of building a reliable estimation process by the newness of IT, spare your tears and start with this interesting quote:

False scheduling to match the patron’s desired date is much more common in our discipline than elsewhere in engineering. It is difficult to make a vigorous, plausible, and job-risking defense of an estimate that is derived by no quantitative method, supported by little data, and certified chiefly by hunches of the managers. […] Until estimating is on a sounder basis, individual managers will need to stiffen their backbones, and defend their estimates with the assurance that their poor hunches are better than wish-derived estimates.

This may look like an excerpt from a blog or a recent report from one of the IT observatories, and may appear quite apropos and contemporary. But here’s the embarrassment: this quote is from a landmark book, The Mythical Man-Month[1], published in 1975!

Does this mean that the estimation practice in corporate IT has been at a standstill for 40 years?  I’m afraid so. 

This standstill has occurred despite research on the subject, textbooks, and the development of estimation software. It has happened in the face of corporate IT’s pitiful track record for being on-time and on-budget, all while some organizations spend hundreds of millions of dollars on IT projects over multiple investment cycles. To make it short: accuracy of estimates is secondary, and that explains the generalized laxity on this topic across organizations and over decades.

How can such a serious weakness with such considerable monetary consequences not be the driver of a relentless quest for improvement? The answer is simple: there are no incentives to get any better.

There are very few consequences in corporate IT for bad estimates. Worse, there are tangible benefits to not improving. As I explain in my first book, there is no Machiavellian IT plan to entrench in your organization a system to milk your hard-earned funds. There is simply an engagement model that doesn’t foster improvement in several key areas, estimation being one of them. By changing the game, IT will need to improve, and will adapt and develop what it needs to get much better at estimating.

[1] F.P. Brooks Jr., The Mythical Man-month: Essays on Software Engineering, Addison-Wesley, 1975.

Let’s Start Fresh with a Digital Platform!

A dichotomy seems to be emerging in corporate IT strategy and enterprise architecture. Let’s take a closer look at a seemingly promising strategy to propel your business into the digital era.

A Reliable but Inflexible Set of Operational Assets

In the right corner, we have the technology backbone of an organization’s operations. This platform must be robust and standardized, and allow the business to shift toward new paradigms as seamlessly as possible. These types of platforms have been around for decades, some of them outlasting many foundational technology changes. The ability of these operational backbones to support transactional operations effectively and speedily is, in my view, a success for the IT world. Unfortunately, their flexibility and agility in the face of changing business needs are mediocre at best. Decades of chaotic, short-sighted design deployments have transformed these backbones into liabilities. They cannot always sustain change, weakened as they are by the heterogeneous, non-standard, or stove-piped solutions that find their way to production status through the loose mesh of ineffective quality governance.

In the right corner, your operational backbone: rapid, robust, but inflexible yet indispensable.

A Fresh New Platform for Digital Integrations

In the left corner we have a newer concept, often labeled the Digital Platform, envisioned as an innovative way to achieve flexibility and quick turn-around times. The central idea is smart: let’s put in place an IT infrastructure that allows the rapid development of new business integrations at the level of the extended enterprise.  By extended enterprise, I am referring to an obvious focus on external partnerships, including opportunities created by social media, Internet of Things, or the entire array of cloud-based services that relentlessly expands every month.

One of the most appealing features of the platform in the left hand corner is that it provides a clean slate for a project to start from. Absent is the burden imposed by legacy systems like those constituting the technology backbone we have in the right corner.  The novelty provided by the Digital Platform is bound to create a fertile soil for agility to blossom, inspired by the bold, highly publicized start-ups known as Market Disruptors.

In the left corner, your digital platform: new, nimble, source of promising business value.

The second promising characteristic of this paradigm lies in its function as a foundation where things can be not only rapidly developed, but easily removed. When something developed last year doesn’t make business or technical sense anymore, you can unplug it and work on a more promising integration. With no legacy artifacts slowing your momentum and your ability to continually apply and reapply integrations, you will surely be able to provide enhanced customer experiences or new amalgamated products, positioning yourself as the disrupting player.

So are Digital Platforms the way to go? Yes of course! I strongly recommend our contender in the left corner.  That being said, I also feel compelled to share a few very important words of caution on how to introduce it, as well as some caveats regarding your expectations.

I never believed in miracles – at least not in corporate IT.  Most of the great things therein come from discipline and hard work.

Parallel for a Long Time

My first piece of advice relates to the right hand option. Do not think for a second that the sudden wave of hype and excitement stimulated by your new digital platform will cause your operational backbone to disappear.  Your new products, new markets, and new ideas, however promising, will not replace all the current products and existing markets overnight. The base of older technology already in place might very well continue to produce the bread and butter used to feed new ideas.

However much you may wish it were the case, your business is not a start-up, and your new platform will co-exist with the older ones for a long time.

Don’t assume that the latest technology replaces the previous one entirely; they must exist in parallel. Anything you do to position your business on the cutting edge of the market is a step in the right direction, but you must take responsibility for the amalgamation of your platforms. Remember: he who wills the end wills the means.  Your new platform absolutely needs the old one.

If your new digital platform were a vehicle, it would be a lightweight 4×4 truck. But you mustn’t forget that, if this were the case, your operational backbone would be a train on rails with a tanker car containing the fuel for your 4×4. The implementation of a new digital platform comes nowhere near any form of rationalization.

The Old Impacts the New

My second warning involves the state of your right corner assets. The apparent separation between the two sides of corporate IT strategy, and the expected leeway provided by a clean-slate solution, may not last that long.  Sooner than you think, or maybe right from the outset, you will need to use data or functionality provided by your operational backbone; this requires an integration point like any other that your IT has created in the past. The fact that it takes its source in your new digital platform will make little difference regarding speed or limitations.

Any dependency between the old and the new will be as easy to implement as the state of the weakest link in the chain allows.

Depending on the flexibility and agility of your right hand corner backbone, the new link may be implemented in a breeze, or become the boat anchor that slows everyone down.

Magical Thinking is of Little Help

My last point concerns the business agility of the new digital platform. The new infrastructure’s ability to adapt gracefully to changing business requirements is based on two factors. Firstly, the fact that it’s new means that it hasn’t deteriorated into the state of entropy found in older assets; the clean slate is a benediction.  Secondly, the architecture patterns used to develop the new digital solutions offer evolutionary possibilities.

But declaring a new platform’s ability to support nimbleness by easily adding, replacing, or removing components is no guarantee whatsoever that it will actually happen. Why? Because easily adding, replacing, or removing portions of a solution, an application, or a platform has been a desired outcome since the inception of the operational backbone sitting in the right hand corner! Creating malleable products has been a central focus of IT architecture for longer than I’ve been working in the IT field.

Malleability has been a desired characteristic for as long as IT solutions have been designed.  This wish hasn’t shielded your current platforms from becoming what they are today.

It takes more than wishes and strategic statements to ensure that you get the agility that you expect from the new digital platform.  For that to happen consistently beyond the first 18 months of the new platform’s introduction, you need talented and forward-thinking IT architects designing assets that can be quickly rolled out, easily replaced, and painlessly removed. These disciplined IT teams must also define and abide by strict quality standards. Finally, you need healthy governance processes to guide your decisions and determine whether or not you have successfully achieved the coveted agility ideal.

Careful design and quality work in your new digital platform are as needed as ever.

If you don’t have all the right safeguards in place, your new digital platform may organically grow into an inextricable tangle that eventually collapses under its own weight.  It has often been witnessed before[1], and nothing suggests that the left corner is shielded from unwanted complexity.

[1] To understand how it systematically happens, see this easy-to-read, non-technical book.

Perennial IT Memory Loss

There is a strange thing happening in corporate IT functions; a recurring phenomenon that makes the IT organization lose its memory. I’m not talking about a total amnesia, but rather a selective one afflicting corporate IT’s ability to deal with the current state of the technical assets it manages. This condition becomes especially acute at the very beginning of a project focussed on implementing technical changes to drive business evolution. Here’s how it happens:

It all starts with project-orientation. As we discussed in another article, the management of major changes in your internal IT organization is probably project oriented. Projects are a proven conduit for delivering change. Thanks to current education and industry certification standards of practice, managed projects are undoubtedly the way to go to ensure that your IT investment dollars and the resulting outputs are tightly governed. Unfortunately, things start to slip when project management practices become so entrenched that they overshadow all other types of sound management, until the whole IT machine surrenders to project-orientation.

The Constraints of Project Scope

As you may know, by definition, and as taught to hundreds of thousands of project managers (PMs) worldwide, a project is a temporary endeavor. It has a start date and an end date. Consequently, what happens before kickoff and after closure is not part of the project.

The scope of the project therefore excludes any activity that led to the current state of your IT portfolio. The strengths or limitations of the foundational technical components that serve as the base matter for business changes are considered mere project planning inputs. The estimation of the work effort to change current assets, or the identification and quantification of risks associated with the state of the IT portfolio, will always be considered nothing more than project planning and project risk management.

Further excluded from project management are considerations that apply after the project finish date. These factors encompass effects on future projects and consequences for the flexibility of platforms in the face of subsequent changes. Quality assessments are common project-related activities, likely applied as part of a quality management plan. But a project being a project, any quality criterion whose impact lies exclusively beyond the project boundaries will carry less weight than those within a project’s scope – and by a significant margin. Procedures directly influencing project performance – that is, being on-time and on-budget (OTOB) – will be treated with diligence. All other desired qualities, especially those that have little to do with what is delivered within the current project, become second-class citizens.

Any task to control a quality criterion that does not help achieve project objectives (OTOB) becomes a project charge like any other, and an easy target for cost avoidance.

This ranking becomes more than obvious when a project is pressured by stakeholder timelines or when shortages of all sorts become manifest. Keep in mind that the PM is neck-deep in managing a project, not the whole technology asset lifecycle. Also remember that the PM has money only for processes happening within the boundaries of the project. After the project crosses the finish line, the PM will work on another project, or may look for a new job or contract elsewhere.

When all changes are managed by a PM within a project, with little counter-weight from any other type of management, and no effective cross-cutting process exists independently from project management prerogatives, corporate IT surrenders to project-orientation.  I strongly suspect that your corporate IT suffers from this condition, unless you have already made the shift to the new age of corporate IT.

Project Quality vs. Asset Quality

Project orientation has a very perverse effect on how technology is delivered: all radars are focussed on projects, with their start and end dates, and as such the whole machine becomes bounded by near-term objectives. These short-term project goals in turn directly impact quality objectives and the means put in place to ascertain compliance. Again, since quality control is project funded and managed, the controls that directly impact project performance will always be favored, especially when resources are scarce.

In project-oriented IT, quality criteria such as the ability of a built solution to sustain change, or the complexity of the resulting assets don’t stand a chance.

The result is patent: a web of complex, disjointed, heterogeneous, and convoluted IT components which become a burden to future projects.

It’s here that the amnesia kicks in.

All IT Creations Are Acts of God

When the next project dependent on the previously created or updated components commences, everyone acts as if the state of these assets was just a fact of life.

Whatever the state of the assets in place, at the beginning of a new project it’s as if some alien phenomenon had put them there; as if they were the result of an uncontrollable godly force external to IT.

Everyone in IT has suddenly forgotten that the complexity, heterogeneity, inferior quality, inflexibility, and any other flaws come from their own decisions, made during the preceding projects.

This affliction, like the spring bloom of perennial plants, repeats itself continuously. At the vernal phase of IT projects, when optimism and hopes are high, everybody looks ahead; no one wants to take a critical look behind. This epidemic has nothing to do with skills or good faith; it can instead be traced to how accountabilities are assigned and how performance is measured.

When all changes are subject to project-oriented IT management, the assets become an accessory matter. Your corporate IT team delivers projects, not assets.

The Latest Change in Vocabulary Doesn’t Turn Liabilities into Assets

In last week’s article we saw that you should be very prudent concerning IT Tactical Solutions. They are often presented by your IT teams as temporary situations: sidesteps that must be taken before the envisioned strategic situation can be reached. But more often than not, these patches are permanent. Since these dodgy solutions work, most business people aren’t keen to invest in further revisions to reach an optimal design. Hence, these enduring fixes lower the quality of your digital platforms and compromise the agility and speed of future business projects.

The effect of the repeated production of sub-par assets – regardless of the name they’re given – is nothing less than the continuous creation of unnecessary complexity, leading to the progressive decline of your IT platforms.

Let’s Get Financially Disciplined

The cumulative detriment to IT assets has recently inspired some smart IT people to come up with a new idiom: Technical Debt. If an IT colleague has ever uttered a sentence to you including that pair of words, you should read the following.

The Technical Debt idea entails that an IT person documents cases of sub-optimally built solutions in some sort of ledger. Each individual occurrence, as well as the sum of everything in the register, is referred to as a technical debt. With each new IT hiccup added to the books, an official process makes the paying business sponsor formally aware of the added technical debt. The message IT sends to the client in such situations means something like this:

  1. “For technical reasons, the project cannot be delivered according to the original blueprint and/or customary good practices within the allotted time and budget.
  2. This may impede the agility of the platform, or create additional costs in future projects. Hence there is a technical debt recognized.
  3. We all acknowledge that this debt should be corrected.”

Technical Debts are Fine for Communicating

This is great from a communications point of view. There are, however, caveats regarding such a well-intended message:

  1. The project will deliver something anyway, and it will work[1].
  2. But you won’t have a clue about the problematic “technical reasons” used to justify inferior quality; you’re held hostage by a single IT desk, holder of all technical knowledge.
  3. The debt is declared, but the impact is not evaluated. There is no reliable forecast suggesting the amount of the added deficit to write off.
  4. There is probably no transparent process in place to check the ledger at the end of a project in order to track and contain the global deficit.
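To make these caveats concrete, here is a deliberately naive Python sketch of what such a ledger amounts to in practice; every name and field below is hypothetical. Note which fields typically stay empty:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical technical-debt ledger entry. The fields that usually
# remain None are precisely the ones a real debt instrument would require.

@dataclass
class DebtEntry:
    description: str                        # which corner was cut
    project: str                            # where it happened
    acknowledged_by_sponsor: bool = False   # the communication part works
    estimated_cost_of_delay: Optional[float] = None  # rarely quantified
    repayment_plan: Optional[str] = None             # rarely exists

ledger = [
    DebtEntry("Bypassed the standard integration layer", "CRM refresh",
              acknowledged_by_sponsor=True),
    DebtEntry("Hard-coded partner credentials", "Portal revamp",
              acknowledged_by_sponsor=True),
]

# The ledger records and communicates the debt, but nothing here
# quantifies the interest or schedules the repayment.
unquantified = [e for e in ledger if e.estimated_cost_of_delay is None]
print(f"{len(unquantified)} of {len(ledger)} entries have no quantified impact")
```

The acknowledgment flag is dutifully set; the amount and the repayment plan, the parts that would make it a debt in any financial sense, are left blank.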

Loans 2.0

This whole concept of indebtedness in IT doesn’t make sense from the start. It leads business people to falsely believe that the deficit is managed. So you have a debt? As a businessperson, the following questions probably come to mind:

  1.  Who is the lender?
  2. Who is the debtor?
  3. What is the interest made of?
  4. What is the interest rate?
  5. How and when is the principal being reimbursed?

The answers are brutal:

  1. You.
  2. You.
  3. Budgetary increases or lost speed pertaining to future business projects.
  4. Nobody knows.
  5. At an undefined date, when you ditch your platform and pay for another one.

Call ‘em Whatever You Want – You Pay for Everything

Short term management, conflicting accountabilities, or any other good or bad reasons to cut corners will foster the creation of lower quality assets by your IT team.

Your IT staff can call these situations fixes, patches, tactical solutions, or technical debts, but the result is always the same: the customer pays for everything, now or in the future, in hard cash or in reduced business agility.

As for the assets in question, you will always keep them for a longer time than you’d want to, whether they are true assets or debt-ridden liabilities[2].

Measuring Quality

The gloomy outcome I’ve been describing is not inevitable – there is hope. But only if you work to change how accountabilities are distributed. In this book you will have the opportunity to look more closely at the reasons why accountability for IT asset quality is missing, and at the harm this causes.


[1] For more details on why it will always work, refer to this other article.

[2] The IT Liability idiom is borrowed from the work of Peter Weill & Jeanne Ross from MIT Sloan’s Center for Information Systems Research, and refers to the fact that IT investments may create liabilities rather than assets if these so-called assets become a burden under changing business conditions.

The Tactical Steps Sideways That Keep You On the Sidelines

Things happen in IT projects.  At times, some quality elements will be sacrificed in order to offset the vagaries of the project delivery scene.  The result is a solution that works, of course.  But as discussed in a previous article, a working solution brings no comfort regarding its quality, since almost anything can be made to work in the virtual dimensions of software and computers. And when issues arise to put pressure on IT teams, a suboptimal alternative will be presented as a fix, a patch, a temporary solution, or, most wickedly named, the tactical solution.

In circles of experienced IT managers and practitioners, the ‘tactical solution’ sits somewhere between fairy tale and sham.

The word suggests to the non-IT stakeholder that the chosen tactic is a step sideways, and that once the applicable steps are taken, the product should attain the desired state, which is often labelled as the strategic or target solution.

Because the tactical solution works (since anything in IT can be made to work), it could be viewed as a small step in the right direction.  After this dodgy solution is implemented, we simply need to perform a few extra steps to reach the strategic state, right?

Not really.

Tactical Solutions Waste Work

The solution does work, and common wisdom says “If it ain’t broke, don’t fix it”. Besides, how could it be broken if it works? Unfortunately, and I know that I am repeating myself, the fact that it works is no guarantee of anything.

Tactical solutions are never presented to you as a step in the wrong direction or a step back, but most of the time they are, and here’s the logic:

Once a tactical solution is delivered, the next step is not a move forward, but rather a revision of the sub-optimally designed part. The system will often have to be partly dismantled and then rebuilt, throwing away portions of the previous work. That’s not a step in the right direction.  That’s not tactical.  That’s wasted work.

Assets Built on Hope Aren’t Enough

Not many business people are keen to pay for throwing away something that works. As such, when money for the next phase becomes available, there is a good chance that the sponsor will want to invest in an effort that brings more business value rather than redoing what’s already completed. Moreover, in many cases the bewildered customer will need to pay an additional fee to remove something they already paid to put in place. That’s a stillborn path to the strategic state.

Hence, to get there, the IT team has to hope for luck or fall back on secrecy: hope to correct the situation in the lucky event that the tactical solution breaks, or count on a forthcoming major project for the opportunity to openly (or discreetly) administer the needed rework effort.

Next time you hear a friendly IT person confidently talk about a tactical solution or any of its synonymous labels, don’t jump too fast to the conclusion that it will elegantly be transmuted into a strategically positioned investment backed by a greater plan to get there.

Most of the time, a so-called tactical solution is in reality a permanent solution that sacrifices agility and becomes an IT liability[1] for many years to come.

If you know – or have vaguely heard of – the technical debt concept and hope that it will prevent the sideways steps that keep your IT assets on the sidelines of the strategic investment field, stay tuned for next week’s article.  You will realize that processes designed for the continuous development of software sold directly to customers don’t always apply well to the delivery of business solutions in support of what your organization makes a living from.


[1] The IT Liability idiom is borrowed from the work of Peter Weill & Jeanne Ross from MIT Sloan’s Center for Information Systems Research, and refers to the fact that IT investments may create liabilities rather than assets if these so-called assets become a burden under changing business conditions.

The Unmeasured and Inconsequential Aren’t Getting Any Better

In part 1 of this article, we saw that what really counts in corporate IT is not only measured, but metered quantitatively, with standardized gauges that leave as little room as possible for misinterpretation. Through a parallel with the pizza delivery business, I attempted to show that anyone can be assigned conflicting accountabilities, such as delivery speed on one hand, and compliance with driving regulations or mindfulness of fuel consumption on the other. The only way to juggle these clashing duties is through the application of control measures, and the establishment of personal or team-based incentives linked to the resulting indices.

Incentive-Based Performance

Now, if one of the controlled expectations is quantified and directly linked to next year’s bonus, but the other anticipated behavior is not numerically evaluated, what will happen? The result will be the same as it would within our pizza delivery example. If you don’t measure the time it takes for each driver in your team to deliver pizza, then respecting driving rules (because the controls are already in place) and minimizing fuel burnt (assuming this is metered) become the top priorities. When the time comes for yearly performance reviews, delivery time will be left to the manager’s memory of the past 12 months and the driver’s ego. You already know that the manager’s memory will be focused on the most recent weeks, and that the drivers will naturally overstate their delivery speed.  This just wouldn’t work; you would get safe, low-carbon footprint, legally respectful driving, but slower delivery times that would jeopardize customer experience –and your competitive edge.

What’s Measured and What’s Important

In Part 1, I presented a table illustrating the usual assessments of performance for the IT function. These indicators are measurable and precise. They also represent the true gauges of personal performance.  Failure to perform adequately in the KTLO (Keep The Lights On) category can rapidly lead to dismissal.  Underperformance in the OTOB (On-Time On-Budget) category may take more time to notice, but will eventually translate into career changes. I’ve charted this reality in a simple but eloquent figure.

At the end of Part 1, I raised a simple question: “What about all the other good things you should expect from your corporate IT function?”  You should now grasp that any such remaining features fall into the lower left-hand quadrant of this figure. They are not quantitatively measured, or not even gauged at all, and they have little impact on IT staff keeping their jobs.  If you believe that IT’s performance should cover much more than KTLO or OTOB accountabilities, then I strongly suggest that you scale back your expectations concerning behaviors unassociated with the upper right-hand categories.

I strongly suggest that you scale back your expectations concerning behaviors associated with anything else but KTLO or OTOB accountabilities.

The next burning question is obviously: “What falls under ‘The Rest’?”  As its name implies, this category encompasses all other desired duties: the mundane and less significant ones, as well as the crucial virtues that seriously impact the quality of corporate IT’s output.

Another Problem For IT to Solve?

In several upcoming articles you will discover that the perception of quality and the means of its control are significantly related to its position in the chart above. Quality controls specifically associated with quantitatively measured KTLO performance objectives will be defined and applied.  I can safely bet that your IT function is pretty good at those tasks. I can also confidently speculate that the quality controls which play an active role in delivering products on-time and within budget are taken seriously and applied systematically.

The remaining controls are mostly subjective, or plainly nonexistent, thanks to the few repercussions that inefficiencies in these areas have on people’s jobs.

Unfortunately, many of the missing measures have a direct impact on your organization’s capability to react promptly in an ever-changing environment.  Important areas such as compliance with your own standards, ease of maintenance of platforms, reuse of existing assets, adaptability, or documentation have little impact on people’s jobs and are, at best, qualitatively measured, if measured at all. These areas fall under “The Rest”, and are probably poorly managed.

But if you think that you simply need to demand that your IT organization get better at those things, you are mistaken.  The performance criteria in The Rest have been neglected for decades.

All attempts that I have seen or heard of were either weak, unevenly applied, or didn’t last very long.  As long as the current hierarchy of rewarded behaviors reigns, it won’t happen.

But expanding what really counts above and beyond KTLO and OTOB requires removing the conflicting accountabilities.  As described in a previous article, your IT function is stuck in an engagement model where, for convenience and historical reasons, a single desk is given all accountabilities.  As you will see in my upcoming book, your IT has little means for implementing a healthy segregation of duties, and has cashable incentives to remain mediocre in several key areas.

IT’s Quantitatively Measured Duties Are What Really Matter

Despite corporate IT’s renowned penchant for solving complex problems, there are some issues for which you should not count on them: the ones that involve conflicting accountabilities.  Finding the clashing duties is not obvious, but this series of two articles will guide you to them. The first step is to understand what really counts.

Clashing Accountabilities in Pizza Delivery

In order to help you grasp the type of conflict at stake, let’s look at a simple example: pizza delivery.  Let’s say you’re the proud owner of a high-end pizza restaurant and your delivery team is accountable for delivering orders in the shortest time possible.  This makes sense, since your customers expect prompt service.

How do you make sure that speed is the top priority?  Easy: with measured controls.  Each driver is equipped with a wireless device that the customer signs upon receiving the ordered food. But this speed-related accountability could conflict with two other goals: minimization of fuel consumption and compliance with driving regulations. The faster the driver gets the awaited meal to its destination, the more fuel she burns while traveling the same distance.  In addition, the shortest and fastest route to her destination would require the driver to ignore one-way streets, left turn allowances, and speed limits. So how do you address these additional factors while keeping customer experience at its highest level with short waiting times?  The answer is once again with measured controls.

Controls are already in place for driving regulations: there are well known law enforcement bodies that will catch a driver ignoring these rules and give her a ticket or suspend her driver’s licence, leaving her jobless.  Respecting the driving code may slow down the delivery process, but the non-compliance risks are such that everyone in the community agrees that all vehicles should conform.  I’ll refer to this as an independent control mechanism.  Street patrols cannot make the driver’s performance objectives theirs.  Moreover, their own performance measures are at stake if they do not encourage or enforce strict observance of driving rules.  As human beings, they may have compassionate feelings about the driver, but they have a job to do that is very well delineated from the pizza delivery business.  Hence, from the point of view of attaining this objective, albeit conflicting with customer satisfaction, your pizza delivery process is adequately covered.

For the fuel usage objective, the situation is slightly different.  You cannot count on an external body to take care of this.  You’d probably put in place physical devices to continuously monitor fuel consumption in delivery cars.  This device would show live consumption rates on the dashboard to help drivers learn good habits, and you’d get weekly reports cross-referenced to each driver’s on-duty periods.

With both delivery times and consumption ratios in hand, you and your drivers have what it takes to balance these conflicting objectives:  (1) measured delivery times for each run, (2) legal safeguards for careful driving, and (3) gauged fuel consumption for each driver’s work shift.
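The balancing act reduces to a few lines of arithmetic. A Python sketch with invented shift data, assuming delivery times and fuel figures are captured per driver:

```python
# Hypothetical per-driver shift data: delivery minutes per run,
# litres of fuel burnt, and kilometres driven. All figures invented.
shifts = {
    "driver_a": {"delivery_minutes": [12, 15, 11], "litres": 4.2, "km": 38},
    "driver_b": {"delivery_minutes": [9, 10, 8], "litres": 5.9, "km": 36},
}

def kpis(shift):
    """Return (average delivery time in minutes, fuel use in L/100 km)."""
    avg_time = sum(shift["delivery_minutes"]) / len(shift["delivery_minutes"])
    consumption = 100 * shift["litres"] / shift["km"]
    return avg_time, consumption

for name, shift in shifts.items():
    avg_time, consumption = kpis(shift)
    print(f"{name}: {avg_time:.1f} min/run, {consumption:.1f} L/100 km")
```

In this made-up data, driver_b’s faster runs visibly cost more fuel per kilometre, which is exactly the trade-off an incentive scheme has to arbitrate.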

The one thing left to do is to find a way to motivate your drivers to harmoniously juggle these conflicting targets.  I’ll let you imagine how you’d do it.  There is a wide range of options, from warmly felt pats on the back to annual hard-cash bonuses.

There is one last point to draw your attention to: data on the attainment of these objectives are both independently gathered and quantitatively measured.

Now that we’re warmed-up, let’s drop the pizza delivery industry for a moment and get down to the corporate IT business.

Corporate IT Accountabilities Made Simple

Corporate IT is made up of a wide variety of roles.  If we took all of these jobs and analyzed how performance translates into measures, we’d fill hundreds of pages.  Furthermore, these duties are, for the most part, fairly technical and non-IT people have a hard time relating to what achievement or efficiency practically mean.

But this is not your field of expertise, and your expectations from IT sit at another level.  So we need to elevate ourselves to the highest level of expectation toward the IT function, the one where what IT does makes sense to a business person. Incidentally, this exercise allows us to prune through an intricate mesh of techno duties and get to the real, business-rooted measures of achievement.

What kind of technology-related achievement will make your IT executives shine?  What type of counter-performance would be career-threatening for senior IT staff?  Easy! Find the measurable indices for which data is systematically collected and that use standardized units.

The typical performance indicators and their accompanying measures are summarized in the table below.  Take a moment to have a good look:

Recognized Performance Indices

There are in fact only two sets of standardized, quantitatively measured duties: they are labelled Keep The Light On (KTLO), which deals with operational stability and efficiency, and On-Time On-Budget (OTOB), which covers efficiency in managing major changes.

There are of course other counter-performance issues that could lead to dismissal, such as skill retention issues or leadership problems, but the table above deals only with the core accountabilities that apply exclusively to the IT function.

One striking point about the table is that the accountabilities are quantitatively measured; not by approximate measures, but rather by highly precise gauges, in some cases down to three decimal places! Also remarkable, all of these metrics use standardized units of measure applicable to all possible cases.  They are easy to understand, both from the side that delivers (IT) and the side that pays (you).  Universality and quantification of the measures of performance both indicate the importance of any given accountability. One last important observation, albeit less obvious, is that these measures are easily auditable.  You could decide to have them metered by independent parties to ensure that the counting party isn’t also the one being evaluated.

In your organization, there are certainly other gauges in place, but how do they measure up against the ones above in terms of business criticality?  Are they qualitatively evaluated or hard-numbered?  Are they related to IT accountabilities, or are they general measures applied to all functions?

My guess is that the really important stuff is what is closely related to the table above: flawless execution in support of the operations, and managing change within planned budgets and time frames.

Are You Satisfied?

Now, what about all the other good things that you should expect from what your corporate IT function delivers? For example, what about adaptability to change, compliance to standards, or maintainability of delivered assets? How about speed?  What about quality?  Isn’t IT delivering tangible “stuff” that should be counted, trended and compared, like any other corporate function? Why aren’t these other elements represented in the table above?

They are absent, along with the many other expectations that you may have in mind, because the conflicting accountabilities of the usual corporate IT engagement model push them to third place, far behind these two categories, regardless of their innate virtues.  That’s what we shall see in Part 2 of this article.  And we’ll come back to pizza delivery too!

Corporate IT’s Non-Speed Formula

A crucial aspect of your organization’s agility lies in the speed at which your IT function can deliver change.  Not the small, run-of-the-mill types of change, but the mission-critical delivery of the new enabling technologies, digital platforms and IT solutions that your business needs to thrive.  Speed gives you a competitive edge in your respective markets, and as such the momentum of your corporate IT team stands as a key strategic enabler.  Let’s be honest, however: corporate IT is often branded with all sorts of depreciatory qualifiers related to the pace at which it can deliver.

But what is corporate IT speed?  And how is it measured?  The answers you’ll find below are probably not what you thought, and are certainly not what you’d want them to be.

In the case of cars, trains or marathon runners, the formula is the one we’ve learned at school: distance traveled divided by the time it takes to travel that distance.

That’s why we often use kilometers per hour to gauge the speed of travelling things.  All of this is obvious.  It is evident because we all have a sense of what distance means, since it’s part of the tangible world we live in.  Same for time: even if some of us (you know them!) have an elastic conception of time, there are standardized measures and tools, such as the clock.

That’s fine for transportation, but speed can be so many other things.  The “speed” at which an automobile factory produces cars is measured by the number of cars built, divided by the time it takes to build them.  In the end, speed can be viewed as the measurement of some achievement divided by the time taken to reach it.

Now that we have a formula applicable to any situation, let’s try to answer the questions above (what is corporate IT speed and how is it measured).  The divisor is always time, so we can forget about it for now and focus exclusively on the dividend.

To assess IT speed, you need to know what an achievement is and be able to measure it.  But to be eligible, achievement measures must have certain characteristics:

  1. They have to be measurable quantitatively; and
  2. Their units of measure must be standardized.

That’s sensible since measures of speed should not be left to qualitative interpretations and should be applicable to all solutions yielded by IT.  Same for the standardization of the units of achievement, an absolute must if you want to compare speeds.  After all, what’s the point of measuring speed if you cannot draw comparative conclusions?

That’s where the whole corporate IT speed thing collapses.  In the case of the car factory, you count cars, but in the case of corporate IT, what are the units?  There are documented units of productivity for some types of IT work, but that’s not sufficient because:

  1. these units vary from one work product to the other;
  2. they also vary from one part of your IT to the other;
  3. they do not cover the whole process that yields what you pay for; and
  4. I suspect that the processes to systematically measure them aren’t implemented.

So what is the equivalent of the cars that you count on the shipping dock of the automotive factory?  The sad but true answer is that there is likely no such equivalent in your IT shop. Hence, everyone falls back on project delivery or the tangible outputs delivered through them.  Speed gauges become statements such as: “We delivered the new version of the CRM in 14 one-month sprints,” or “Release 3 of system XYZ took four months to deliver, compared to six months each for releases 1 and 2.”

But you cannot fairly compare the new version of the CRM with the preceding one.  What you delivered in releases 1, 2 and 3 may be quite different in their nature and size.  Neither can you compare anything between system XYZ, your CRM application and the majority of the hundreds of disparate business solutions you own.  Thus, this gauge of speed is not sufficient either, because the units are not standardized.

When units of achievement vary from one project or one team to the other, that’s not usable as a valid measure of speed. That’s anecdotal evidence, nothing more.

Regardless, someone still needs to show that something has been provided at a certain speed.  Since IT deliverables vary so much in size and nature, the only thing left to assess speed is money.  You have to make the leap of faith that on average, higher-priced projects (or phases, releases, or whatever units of delivery you choose) yield more throughput.  By doing so, cost actuals become a proxy to measure what has been delivered.

Assuming you can bear with that assumption, the result is disconcerting: speed of delivery becomes the budget size of what has been delivered, divided by the time it took to deliver it.  When we factor that into the formula above, it yields the following:
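Stated symbolically (my shorthand, not a standard metric), substituting cost actuals for achievement in the general speed formula gives:

```latex
\text{speed} \;=\; \frac{\text{achievement}}{\text{time}}
\qquad\Longrightarrow\qquad
\text{corporate IT ``speed''} \;=\; \frac{\text{budget actuals (\$)}}{\text{elapsed time}}
```

Dimensionally, this is nothing more than a burn rate: dollars per unit of time.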

In other words, corporate IT speed is measured by the speed at which money is burnt.

Which also means that if you ask your corporate IT function to get any faster, the only thing they can do is spend your money sooner, leaving you the onus of believing that more was achieved per unit of time.  This is far from a valid measure of speed.

Corporate IT’s unenviable reputation with respect to pace is not unrelated to the formula above.   You have within your organization a function for which speed of delivery is a critical competitive element, but it is not measured adequately.

We all know that what is not measured will not improve, and measuring it in such a grotesque way as in the formula above is like not measuring it at all.

This is the reality of corporate IT today because no one has ever had enough motivation to develop better and more accurate ways to measure throughput. Do not get sweet-talked into believing that developing such measures is too difficult.  It’s not that such measures can’t exist, nor that your IT staff lacks the skills to make them happen.  Furthermore, it has nothing to do with technology; it has everything to do with how accountabilities are distributed and how team or personal performance measures are defined.

Next week’s article will provide more insights on what performance really means in corporate IT.  My book gives a broader view of the problem and a deeper understanding of the non-technological root causes behind the poor state of speed in corporate IT.

Joseph’s Machine and the Unnecessary Complexity of Business IT Solutions

The best non-technical analogy to explain the extent of the complexity of corporate IT assets, and by the same token why a working IT solution doesn’t prove anything about its quality (the subject of a previous article), appeared on my LinkedIn feed last week:  https://www.youtube.com/watch?v=auIlGqEyTm8.

After watching this two-minute video, your first reaction is probably like mine: amusement and awe over Joseph’s ingenuity. But once I was over the toddler’s cuteness, it came to me that Joseph’s machine can teach a lot about IT solutions.

Am I insinuating that your IT solutions are like Joseph’s machine?  You bet!

Yes, IT business solutions’ engines often look like this under the nice, shiny hood of sleek user interfaces.  What you see is the final product, the cake you want to eat.  What you don’t see are the contorted paths taken to get it to you.

So why are we IT people making things so complicated?

There are many reasons. My first book will give you a broader view of the problem and a deeper understanding of the non-tech root causes. In the meantime, here are three key pointers:

First, Joseph is dealing with the laws of physics – in a brilliant way I should add. In the virtual world of software-based solutions, such laws don’t apply. Furthermore, I suspect that Joseph had to go to a dozen stores to buy all this apparatus and spend a lot of time finding the right gizmos to fit his process.

In software-based solutions, you just click, download it, resize it, or copy and paste it ad infinitum if you wish.  It is usually simple, often effortless.

It can also sprawl in all directions and augment the overall complexity, but your IT staff will still find a way to make it work.

In other words, the drawback of computer-based solutions is that it is easy to “clog your kitchen” as in the video.

Second, after Joseph is done with video-making, he cleans the kitchen before the in-laws come for dinner. Your IT-based solutions support your business and they stay there as long as you’re operating. As easy as it is to fill the kitchen with software-based components, it is proportionately as difficult to empty the room – unless it was planned for.

Role distribution and performance indicators do not promote designs that make your systems easy to remove. Most of the time, you’re stuck with them.

Finally, Joseph’s machine works and it delivers the cake. The same can be said about your IT business solutions. The current hierarchy of performance measures for corporate IT is dominated by short-term focus with the sempiternal “Keep-the-Light-On” (KTLO) and “On-Time-On-Budget” (OTOB) efficiency gauges.

If your sole expectation is to get the cake on your plate before the competition gets it, then you’ll receive your pastry all right, but do not hope for more. 

It doesn’t have to be this way.

With a more balanced distribution of accountabilities, and performance measures that extend beyond short-term expectations to the intrinsic quality of what is built, you can earn a significant competitive edge with your IT solutions. The added benefit?  The next time you need pudding or ice cream instead of a cake, you’ll reduce the probability of your IT team telling you that you need to buy a whole new kitchen. The kitchen-building industry is a prosperous one these days, but it takes your investment money, and precious time that you need to beat your rivals.

What Drives Quality

Making parallels between corporate IT work products and those of other fields is adventurous. Nevertheless, I need to find a way to explain what quality control means for corporate IT without getting technical.

Imagine for a moment that your corporate IT team was not delivering technology solutions to your business, but rather automobiles.  Also assume, for the sake of the parallel, that your usual corporate IT quality controls would be applied to these cars.

The car would be put on a tarmac track and a test driver would start the car, accelerate, turn right, turn left, and brake. She would also open all doors and windows, check the fuel gauge, engage the lights, turn on the radio, and tilt the seats – everything.  In short, all features would be tested for their practical effectiveness. The car would then be handed off to its owner. That’s it.

Are you tempted to say that it’s enough? If all features and functions are operational, then the quality is where it should be? Of course not.

Fortunately for car makers and owners, some important points are missing from the quality control plan I’ve outlined above, especially determining how well the car is built and assessing its ability to handle sustained use over a period of time – long after its sale to a customer.  In the automotive industry, these procedures address a world of additional concerns, such as: will the car be plagued with rust holes in 12 months?  Will the brakes require changing every 1,000 miles? Will the corner garage mechanic need to drill a hole in the oil pan to make an oil change?

Carmakers understood long ago that features are not enough if the product does not show many other qualities, like longevity, safety, maintainability or reliability. But corporate IT is a strange beast whose behaviors often defy common sense.

So strange that the IT equivalent of drilling a hole in the oil pan is not that farfetched.

Project-Oriented IT and Quality Control

The scope of quality control on technology solutions can be qualified as business-requirements-centric. Far be it from me to downplay the extent of the tests required to ensure that all requirements are fulfilled, but that’s far from enough. The resulting output can only suffer from inferior levels of excellence when certain areas aren’t duly inspected. It’s true for cars, and it applies universally to any situation where there’s a mix of human beings and tight schedules.

How simple will it be to expand the solution? How much effort will it take to retire that solution? Will future generations of IT staff have crystal-clear technical documentation at their fingertips? Can this solution easily integrate with other systems or technologies? These questions cannot be answered by controlling the correctness of features and functions.

To understand the dynamics responsible for deficient quality control of corporate IT output, one must first recognize that any change to existing assets, or any new asset creation, is made within the context of a project. This makes sense, since nobody wants multi-million-dollar endeavors governed by anything less than good project management practices.

The issue doesn’t lie with the use of project management wisdom. The problem is that corporate IT decision-making processes are heavily skewed toward the use of project management logic, even in cases where different rationales should be applied. I call this ubiquitous pattern Project-Oriented IT.

Remember that a project is, by definition, a temporary endeavor[1]; it must have a start date and an end date, or else it’s not a project. This also means that anything happening before the project start or after its finish will not be considered part of the project.

So, within our carmaker analogy, the project end date will be when the automobile is delivered to the customer with all promised features functional.  An IT project will be deemed complete when the solution and all of its components are successfully tested to make sure that every feature works properly.

These tests do not acknowledge issues that may (or may not) arise months or years later. A few moons after the IT solution is delivered, the project will have been closed for a long time. Long-term quality does not fit easily into a project.  In project-oriented IT, considerations equivalent of car maintenance costs, body rust, or the premature wearing of parts are rarely a concern.

QA Skills and Independence

“Aren’t corporate IT quality control processes intended to check all these things?” you might be tempted to ask.  The sad but true answer is: not really. For all aspects of quality to be checked systematically and consistently, there needs to be a certain degree of separation between those that build quality and those that control its presence. In most cases, the independent quality controls cover only business features and are carried out by the only unconnected parties in the equation: non-IT folks working for the business sponsor.

These individuals will conduct checks according to their skill sets, which don’t include the technical knowledge required for looking under the hood.  Those that have the skills to inspect the engine and the cabling are probably busy welding another car (working on another project). Even when the internals of the solution are checked, the reviewers are rarely independent enough because they are working under the auspices of project-oriented IT where many quality concerns are of a lesser importance.

In an upcoming article, I show that conflicting roles lead stakeholders to quickly push back against any quality criterion that doesn’t directly help a project within its immediate lifecycle. You will also discover that these same accountability issues kill the independence required to perform quality controls covering all aspects of the value of what is delivered.

Your takeaway from this article is simple: when it comes to controlling the quality of what you get from your IT investment, you hardly get anything better than a test drive.

To change this, the distribution of measured accountabilities must change in such a way that all aspects of quality are evaluated, not just those that directly impact a project’s delivery. In my book, I dive into all the aspects of IT that impede the creation of quality assets, all of them rooted in the distribution of roles, the accountability given to those roles, and the associated measures of performance.

[1] As defined by the Project Management Institute and applied by its hundreds of thousands of certified professionals.

No One is Accountable for What Is Not Measured

In a previous article on the construction industry’s distribution of roles, I demonstrated that centuries of cumulative trial and error have led to a clear delineation between the main stakeholders’ responsibilities, all to the benefit of the paying customer and the public in general. In corporate IT, as we saw in the follow-up article, things are quite different: the paying customer deals with a single desk that plays all roles.

The healthy segregation between those that define the solution and those that build it, those that set standards and those that use them, those that deliver excellence and those that control that quality, is unquestionably absent. 

It would be a mistake to believe this is due to the nature of the solutions being built, as segregation of roles was not always present in the construction industry either. Role definitions were once an issue, as we can see in this quotation from Philibert Delorme [1514-1570], architect and thought leader of the Renaissance:

“Patrons should employ architects instead of turning to some master mason or master carpenter as is the custom or some painter, some notary or some other person who is supposed to be qualified but more often than not has no better judgment than the patron himself […]”[1]

In my career in IT, I have seen it all: projects without architects, improvised architects with skills issues, true architects without any architecting accountability, architects left to themselves with no organizational support, IT managers architecting, project managers architecting, customers architecting, programmers architecting. These cases are not exceptions, but rather the norm, in one form or another.

There are two main reasons for so much laxity in the execution of such an important function as IT architecture: conflicting roles and lack of measures.

First, there is the conflicting placement of the architect, who often sits in a quarter where he or she isn’t able to truly defend the customer’s interests, subordinate to line managers or project managers who have higher priorities than architecting solutions the right way.

Second, expectations towards the quality of the architecture are neither set nor gauged, again, because there are more urgent and measured accountabilities hanging in the balance.

With few consequences for wrongdoings, it’s no wonder the architect’s role is so easily hijacked by whoever wants to have a say in that area.

IT architecture is a field where anyone can be elected, or self-elected, to the status of an architect, as long as he/she can make things work. But as we saw in a previous article, a working solution doesn’t prove much. Everyone can have an opinion on the right way to design but is never held accountable for the quality of it.  Opinions without accountability on the subject are as relevant as any other conversation around the coffee machine.

Fortunately, by balancing the distribution of roles with healthy segregation, measures of performance can move toward a healthier equilibrium, so that coffee machine discussions don’t become IT strategies that put at risk million-dollar projects.  The architect’s role will stop being usurped, for doing so will then entail being accountable for it.  An in-depth analysis of these insights and more will be available in my first book.


[1] Catherine Wilson, “The New Professionalism in the Renaissance,” in The Architect: Chapters in the History of the Profession, University of California Press, 1977, p. 125.

The Impossible Polygon Behind the Single Desk

In Part 1 of this series of two articles, I presented a high-level description of the engagement model used in the construction industry, and how the three main stakeholders share accountabilities and duties. Although these three poles have diverging concerns which often lead to conflicting viewpoints, the system works because (a) the roles are clearly defined, and (b) there are institutionalized mechanisms in place to safeguard the stakeholders from potentially detrimental misbehavior.

Let’s look at the most interesting part of this comparison, focusing on the relationship between stakeholders[1].

The Construction Industry Engagement Model

In the construction industry, a customer hires an architect to define the specifics of the structure to be built. The customer then hires a builder, often collaborating with the architect during the selection process. It is quite customary for the architect to perform worksite inspections within the construction engagement model, in order to ensure that the builder has conformed to the drawings and specifications.

The Turn-key Alternative

There is an alternate engagement model in the construction industry called a “turnkey” project. In this model, a customer hires a builder (usually a general contractor) to take care of everything, including architecture, engineering, building, landscaping, and even the procurement of permits. There are two major advantages for the customer within a turnkey project: engaging with a single point of contact, and getting a single price that includes all costs.

There are, however, major risks for the customer choosing this type of project: he is placing complete trust in a single party, while forfeiting the independent quality control available through an architect’s worksite inspections.

Industry Safeguards at Play

Most customers are aware of these potential liabilities, which is why many of them choose the standard A-B-C engagement model. But if one chooses to go with a turnkey arrangement, there are many structural mechanisms to protect the customer in a standardized industry such as civil construction, as described in Part 1 and depicted in the figure below.

Even when the customer deals with only one provider who monopolizes project operations, the city inspectors, trades certifications, building codes, and professional orders remain independent. As for standards compliance, an architect can lose her license to practice if building codes aren’t respected; professional order disciplinary committees or judges will demonstrate little empathy for the fact that she was working for a general contractor who signed directly with that customer.

This variation of the construction engagement model (turnkey) is very important because it mirrors the usual relationship between your organization’s business sponsors and corporate IT. This similarity only exists on the surface, however. There is a huge difference:

there are no external, independent bodies that oversee, standardize, or control the activities and the outputs of your IT department.

The IT Engagement Model Applied to Construction

If we were to apply the IT engagement model to the construction industry, it would resemble the figure below:

The IT systems builder who you engage with is, in fact, responsible for literally everything: gathering requirements, designing the architecture, engineering, managing all the various specialty skills, and of course delivering the solution that you need.

But that’s not all. Your IT builder takes care of the (not so) independent controlling bodies in our construction parallel. The IT counterparts of the construction industry safeguards described above are embedded in that same team.

The corporate IT function determines all standards, establishes mid and long-term plans, baselines the required skills for all IT trades, assesses the adequate knowledge level of staff, delineates roles and their respective accountabilities, and last but not least, oversees its own compliance to the quality standards it defines.

It’s Worse Than You Think

If you think it’s already too much for your definition of segregation of duties, there’s more. Corporate IT is not just responsible for building technology-based business solutions; the same team takes care of everything under the IT sun.

If we were talking about the construction industry, your builder would also be responsible for supplying water, power, gas, road maintenance and emergency services. To top it all off, you are left with a single builder, and very little leeway to shop for alternatives.

I’ll let you call this model what you like. The detrimental effects caused by this monopolization of roles are significant and serious. It increases costs, slows the speed of delivery, and of course lowers the quality of deliverables. In the upcoming articles and book, I will describe in more relatable detail the repercussions of this engagement model. The source of these woes can be traced to its most fundamental, foundational root cause: ill-distributed roles. And that’s promising news, because it has nothing to do with technology, and non-IT business people can shift the model to a healthier, balanced geometry where the paying customer’s interests will be better served.


[1] For a deeper dive into the construction industry, its structure, and the wisdom it can impart, take a look at the soon-to-be-published Volume 1 of An Executive Guide to the New Age of Corporate IT. This article is in fact an elevator pitch for Chapters 1 and 2.

The Geometry of the Corporate IT Engagement Model

Part 1 – Century-Old Wisdom

If there’s one business where they’ve got their angles set right, it’s the construction industry. I am not referring to the fact that the workers all carry carpenter’s squares in their tool boxes, but to their organized management of roles, responsibilities, and accountabilities*. I came to this realization a few years ago while digging through construction contractual agreements, governmental regulations, and professional order bylaws. The structure of stakeholders in the construction industry and their relationships can be summarized and abstracted as follows:

  1. The customer (C). His responsibilities are to provide the requirements, supply the funding, and approve the specifications.
  2. The architect (A). Her duty is to define what has to be built, based on the customer’s requirements, in compliance with codes and regulations. The customer hires the architect.
  3. The builder (B). His job is to build the conception based on the architect’s specifications, while adhering to codes and regulations. The customer hires the builder.


The Three Poles Dance

In general, the ABC business model works fine, despite the following natural antagonisms that exist between these polarized roles:

  1. The customer often knows exactly what needs to be done, and feels he could manage without the high-priced services of an architect. He also feels that the builder, especially if the chosen one was the lowest bidder, is trying to transform any unspecified detail into a change order and will do everything to get his profit back. The customer’s objective is to maximize his return on investment, whether it be sales, rent, property value, family happiness, or social pride.
  2. The architect often has the impression that the customer changes his mind too often, doesn’t have the means for his ambitions, or is inaccessible when the time comes to sign off on blueprints. She is also convinced that the builder will cut corners to raise the profit margin, or consult directly with the customer and deviate from her specifications. The architect’s objective is to maximize customer satisfaction while remaining profitable, given the time spent on the project. The architect works in a market where reputation and past experience are crucial.
  3. The builder is convinced that he knows how to build with or without the architect: an extraneous player who often can’t drive a nail into a 2X4. He also believes that customers take advantage of him by constantly requesting incidental extra work while remaining resistant to official change orders. The builder’s objective is to remain profitable by maximizing effectiveness, minimizing rework, and optimizing material usage. The builder operates in a highly standardized and competitive market where price is the main differentiator.

Fences Around the Construction Playground

The polarized roles described above could lead to discord or confrontation. But the construction industry, through centuries of feuds and casualties, has developed a series of safeguards to control the behavior of the stakeholders. The main defenses against misconduct are:

  • Building codes that define standards required to ensure minimum levels of quality and safety.
  • Town planning departments that provide additional guidance regarding building sites, optimized use of shared infrastructures, and development expectations of community residents.
  • Municipal construction inspectors who independently examine the project’s compliance with the approved plans, building codes, and local regulations.
  • Trades certifications that legally enforce safe standards of practice for builders.
  • Professional orders that endorse the right to practice for architects and engineers, enforce a code of conduct, and investigate alleged or confirmed misconduct.
  • Duly voted laws and published regulations that legally enforce all of the above, from building code compliance to the letters patent of professional orders.

All of these safeguards share one obvious objective: protecting the customer and the public at large.

When Things Go Awry, Roles Are Never Questioned

The construction industry has a heavy legacy of disputes that often end up in court. These range from petty misunderstandings to serious worksite casualties and building collapses. As in any other business, when things go awry in construction, fingers point in all directions until guilt is determined and restitution is achieved.

But in such cases, one thing is never questioned: the engagement model and the respective duties of the three roles. The customer owns, pays, and approves. The architect understands, designs, and complies. The builder constructs and complies. Each stakeholder knows what the others have to do, and can recognize when one of them steps out of bounds. These crystal-clear roles leave little room for interpretation, and are often spelled out explicitly in contracts.

Century-Old Wisdom

Houses are buildings, but a software-based business application is not a building. Corporate IT is a very different business from the civil construction industry; the processes, the tools and, most importantly, the raw material worked with are worlds apart. I nevertheless strongly believe that corporate IT could benefit from focusing on the applicable similarities with this model from another field, rather than on the technical differences. Centuries of trial and error have yielded highly reusable practices:

  1. Clearly delineated roles that minimize misunderstandings and quickly expose conflicts.
  2. Defined standards, which are understood by all.
  3. Independent bodies that control compliance of the work.
  4. Incentives and penalties that are directly linked to responsibilities.

In corporate IT, these four elements are either absent, neglected, or ill-defined. In my opinion, these deficiencies can be traced to the engagement model used internally for corporate IT. And this is the subject of the second part of this article.


* For a deeper dive into the construction industry, its structure, and the wisdom it can impart, I invite you to read my book, Volume 1 of An Executive Guide to the New Age of Corporate IT.  This article is in fact an elevator pitch for Chapter 1.

Do Not Assume Anything From IT Solutions That (Always) Work

This is where we start: a foundational revelation that will help you understand many of the persistent issues plaguing corporate IT. This truth is one of the most important drivers of lower quality in the work products of the corporate IT function.

Business IT solutions are mainly made of software, and software is flexible and malleable to a degree found almost nowhere else. Fundamentally, software is a series of electrical impulses representing numbers. At bottom, all a computer does is arithmetic on numbers, nothing else. The images on your screen, the voice you hear on your phone, and any other seemingly magical digital phenomenon can be reduced to zeroes and ones. These numbers are then ingested and processed by an immensely powerful number-crunching machine the size of your thumbnail.
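For readers who want to see this claim made concrete, the reduction of everyday digital artifacts to plain numbers can be sketched in a few lines of Python (an illustration of the principle, not material from the original article):

```python
# The word "IT" is stored as numbers: Unicode code points.
text = "IT"
codes = [ord(c) for c in text]
print(codes)   # [73, 84]

# Those numbers are themselves just zeroes and ones.
bits = [format(n, "08b") for n in codes]
print(bits)    # ['01001001', '01010100']

# A pixel on your screen is the same story: three numbers,
# one each for red, green, and blue intensity.
pixel = (255, 165, 0)  # an orange pixel
print(pixel)
```

Text, images, and sound differ only in how the numbers are interpreted; the machine itself never handles anything but the numbers.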

Limitless IT Possibilities

Software exists in a virtual world where the laws of physics, as well as most constraints found in other fields, don’t apply. Of course, applications must remain compatible with the physical characteristics of the human beings or machines that will use them. If an IT business solution interacts with production machinery, perhaps opening and closing garage doors, you can expect it to abide by the laws of physics, and probably some standards and regulations.

But apart from these specific cases, it is fair to say that if the IT experts of most businesses are challenged with questions such as “… but is it doable? Can you make it work?” they cannot honestly answer “no”, because there is always a way to make an IT solution work.

Why Does Your IT Team Say “No”?

You may have painful memories of instances where you were told “no” by your IT teams. Let me assure you that, excluding extreme cases, the reasons for these negative answers were probably that the budget was exhausted, the time left was too brief, compliance with standards was problematic, or the teams in place were busy doing other things, but not that it wasn’t doable. There is always a way to make it happen when you’re dealing with the intangibles of software and the immense capabilities of computing hardware.

That’s the good news.

Beware of Alternate Solutions That Cut Corners

Often, making programs work simply requires doing things differently. Since software is so malleable, the options available are usually numerous. Unfortunately, doing things differently does not invariably mean finding a totally innovative, out-of-the-box paradigm.

Most of the time, being imaginative means finding ways to cut corners and still make it work.

The range of options is further extended by the relative inconsequence of errors. In the virtual world of corporate IT, there is little risk of human injury or death. Thus far in my career, I have never seen anyone dragged into a court of law for a botched design. External bodies will never audit a project down to its technical details. News of skimped quality never travels outside the corporation, and often not even outside the project team.

Quality Issues That Translate Into More Complexity

Your IT team will find a way to make a solution work: I can guarantee it.

They will get it to work, whether with little effort or a heroic tug, and whether through best practices or baling wire. But heroism and best practices require more time and labor, so they are usually the first things sacrificed.

Hence the end result will most probably demand more maintenance, run slower, suffer stability issues, present learning challenges to future employees, require earlier replacement, or inflate costs in other projects, but it will work.

And if the expected quality levels are not achieved at the finish line, it will be called a fix, a patch, or my favorite, a tactical solution, to convey recognition that it could have been designed and built in a better way. But these idioms hide the deeper truth: such solutions add unnecessary IT complexity, which in turn impedes the agility of the very team that created them.

Does this mean that the great powers of information technology, with their almost limitless applications, can also be a hindrance? I’m afraid so. We are facing the archetypal double-edged sword.

Not Proving Much

Your most important takeaway is the following:

The fact that a solution works proves nothing other than that it works. Do not contemplate for a second the idea that it says anything about the quality of the end product.

Whatever the depth of your sorrow about this depressing statement, you might be tempted to think that, given all the virtual flexibility of IT, sub-optimally designed solutions can easily be corrected in subsequent projects. That is not how it works, so don’t hold your breath waiting for quality issues to be corrected. In an upcoming article, I will present another unpublicised truth about corporate IT that will lower your expectations about IT’s capacity to realign after sub-optimal solutions are delivered.

Before you do anything hasty, let me reassure you: there is light at the end of the tunnel, and there is a way to achieve higher levels of quality that promote nimbleness. The good news is that it has nothing to do with technology and is within the reach of non-IT business executives. If you’re interested, take a minute to subscribe and you will get an automated reminder when new posts are published.