Essay No. 02

What Was Promised, and What Was Done

A note on the transformations that look like they landed, and the question nobody quite asks afterward.

12 May 2026 · 11 min read · Transformation

There is a moment, eighteen months after a major transformation finishes, when somebody asks a question the room is not quite ready for. It is usually a board member, or a new executive who wasn't in the building when the programme started. The question is some version of: did we actually do what we said we'd do? The room goes a little quiet. The four-column report on the screen says yes. The savings landed. The engagement scores are up. The data quality is materially better. The capability targets for the new operating model were hit. But the person asking knows, the same way the room knows, that none of that is the answer to the question.

What the question is really asking is whether the organisation remembers the version of itself it was trying to become. Whether the thing that was promised — to the board, to the people, to whoever wrote the original case — has actually been done. The standard ledger says the deliverables shipped. The harder question is whether anything has happened. And the longer the room sits with it, the more obvious it becomes that nobody can quite say.

This is the moment I have come to recognise across twenty years of HR transformation work, and the moment this firm exists to address. It is not a moment of failure on the standard measures. By every metric anyone has, the transformation succeeded. It is a moment of absence — the absence of the felt thing the work was supposed to leave behind. The organisation has the new system, the redesigned roles, the rationalised processes, the higher engagement scores. It does not have the feeling that something was made. There is no memory in the building of what the transformation was for.

I want to say something direct about what this absence means, because the standard frame quietly resists it. Hitting all four of the headline metrics does not mean the transformation succeeded. It means the transformation succeeded at the four things the ledger was designed to measure. Whether the transformation actually did what it set out to do is a separate question, and one the four-column ledger cannot answer.

The verdict on this separate question rarely arrives at go-live. It arrives months later, sometimes years later, in the form of attrition the engagement scores never predicted, or a regulatory event nobody saw coming, or a competitor who has done something distinctive while the organisation was busy hitting its metrics. The four-column win is the snapshot at the finish line. What determines whether the transformation actually lasted is a fifth thing the snapshot does not include.

· · ·

The conventional ledger has four columns. Cost efficiency — did we save what we said we would. Employee experience — does the new system feel better than the old one. Future readiness — do we have the capabilities the operating model demands. Data quality — do we now have a single source of truth we can defend to a regulator. These are real measures of real things, and most organisations track them carefully.

The transformations are still failing. McKinsey's research, across multiple survey waves over more than a decade, finds that only about thirty percent of organisational transformations sustain their intended outcomes. The figure has not moved. Bain's 2024 work pushes the number further: eighty-eight percent of business transformations fail to achieve their original ambitions. The gap is not between competent execution and incompetent execution. It is between hitting the headline metrics and actually doing what was promised. The four columns are being hit. The transformations are still failing on the question that matters.

Microsoft's own 2026 Work Trend Index lands in the same territory from the technology side. Organisations are deploying AI tooling faster than any previous wave — fifteen-fold year-on-year increase in active agents in their ecosystem, rising to eighteen-fold in large enterprises — and the value is still not arriving. Microsoft's commentary on the gap is direct: employees are ready to reinvent how they work, but the system around them — metrics, incentives, and norms — continues to reinforce the old way. The technology is not the bottleneck. The measurement architecture is.

The problem is not the four columns. The problem is what is not in them. There is no column for what the organisation is still capable of doing afterward — not in the future-readiness sense of named capabilities, but in the deeper sense of what we can still do that nobody else can. There is no column for what the organisation has lost. There is no column for the unwritten practices that left with the long-tenured HRBP, or the informal network that thinned when the reorganisation redrew the boxes, or the rituals that quietly disappeared because the new system did not surface them, or the language for what was happening that the organisation has stopped using.

I have started calling this missing measure the fifth column: what the organisation is still distinctively able to do, after the transformation, that it was distinctively able to do before. The phrase is not the point of this essay. The point is that the four-column ledger can say with confidence that the transformation succeeded, and the organisation can simultaneously have lost the ability to be itself. The fifth column is what determines whether what was promised has been done.

· · ·

The harder version of the same observation applies when one of the four metrics has not been hit.

When cost overruns, or engagement disappoints, or data quality slips, the temptation is enormous to attribute the disappointment to the lagging metric. The programme failed because the project ran thirty percent over. The transformation didn't land because engagement stayed flat. The case for the next investment hinges on getting that one metric right next time. The lagging metric becomes the explanation and the focus of the remedy.

The fifth column is at work there too. The lagging metric is almost always a symptom, not a cause. Cost overran because the workarounds compounded faster than anyone counted, and the workarounds compounded because the relational network that used to spot them early had been thinned in the previous reorganisation. Engagement stayed flat because the language for naming what was happening had gone, and the survey was picking up the silence rather than the satisfaction. Data quality slipped because the why behind the original architecture was lost, and the new system was being asked to enforce rules nobody could remember the reason for.

None of those underlying losses is in the four columns. All of them are in the fifth. And when the remedy is built against the lagging metric without addressing what is actually producing it, the next transformation produces the same shape of failure — sometimes the same metric lags, sometimes a different one, but the pattern recurs because the underlying loss has not been counted, has not been named, has not been remedied. The four columns offer the wrong post-mortem of their own failures. The fifth column is what would have told the room why.

· · ·

A few years ago I worked with a large global retail organisation that had been running transformation programmes more or less continuously for a decade. Each programme had hit its four-column metrics. The latest had introduced a new HR architecture across multiple disconnected legacy systems. The four columns reported a successful programme, on time, broadly on budget. The board had moved on to the next thing.

What had not been reported was that nobody had ever examined whether the existing workflows fit the new ways of working. The transformation introduced a new architecture without asking whether the work itself matched the new shape. So teams kept doing what they had always done and added the new processes on top — with no one remembering what the original intent of any of it had been. Each successive transformation layered new processes onto older ones without retiring either, or the rationale behind them. Long-tenured HRBPs had been re-scoped, made redundant, or had quietly disengaged in their seats. The relational network that used to move work cross-functionally had been disrupted with each restructuring and never deliberately rebuilt. The organisation had lost the ability to trace why anything was being done in any particular way. None of this appeared in the four columns of any post-implementation review. Nobody had been asked to count it.

The cost surfaced, eventually, in the form of regulatory fines running into the millions. The regulator was the first person in many years to be told what the organisation could no longer do — track the root cause of a compliance failure, account for the data lineage in its own systems, explain why a particular process existed. The fines were the visible bill. The losses that produced them — the voice that stopped being heard, the knowledge that stopped being captured, the connectors who stopped being replaced — had been invisible because no column had ever asked for them.

The four columns, across the years, had reported success. The fifth column, across the same years, had been quietly red. Nobody had been looking.

· · ·

It is not a failure of intent. Every executive I have worked with would, if asked directly, say that they care about what the organisation is still capable of after a transformation. It is not a failure of resource. The organisations that produce these absences are routinely well-resourced, well-staffed, served by competent consultancies. It is a failure of measurement architecture, which is to say a failure of the conversation. The board pack does not have a fifth column. The post-implementation review template does not have a fifth column. The consultancy engagement does not have a fifth column. The people who would have noticed the loss — the long-tenured HRBP, the connector who held two parts of the organisation together, the line manager who could feel the temperature changing — are often the ones the optimisation programme has removed, or quieted, or moved.

What is left in the room, when the board member finally asks the question, are the four columns. And the four columns can only answer questions they were designed to answer.

The alternative is not a heroic new measurement framework. The organisations I have seen come out of major transformations with the feeling of having made the thing they intended to make are not the ones with the best dashboards. They are the ones that kept the question of what are we becoming alive throughout the programme, in a small number of named conversations, with a small number of named owners, willing to be honest about what was being lost as well as what was being gained. The fifth column, in those organisations, was not a column. It was a discipline.

That discipline is harder to build than a column on a board pack. It requires the steering committee to want the answer — which means accepting that the answer might be uncomfortable: the organisation may be losing something it cannot afford to lose. It requires the people closest to the work to have permission to name what they are noticing. And it requires somebody — a CHRO, a transformation lead, an internal sponsor with enough seniority to be heard — to own the question across the duration of the programme. That ownership is rare, because the owners of most programmes are paid against the four columns and not against the absence.

· · ·

I have built a short diagnostic that probes for the patterns that produce this absence. It takes about ten minutes. It does not score the organisation; it produces a reading. It asks fourteen questions about the kinds of things that tend to be quietly leaving the building during transformation: concerns going sideways, processes running without their rationale, networks thinning, tools proliferating faster than the organisation can absorb them, language for loss disappearing. At the end, it tells you which two or three patterns your answers point toward, and what they tend to cost when they are not addressed.

The diagnostic will not tell a CHRO whether their transformation has actually done what was promised. That is a harder question, and the honest answer is that it usually takes a conversation with the people closest to the work, not a quiz. What the diagnostic will tell a CHRO is whether the patterns that produce the absence are present in their organisation right now — and whether the next eighteen months are likely to be ones in which the board member's question becomes harder to answer, or easier.

If the patterns the diagnostic surfaces feel familiar, the firm exists to help. If they do not, the diagnostic itself will have been useful. Either way, the question of did we actually do what we said we'd do deserves to be asked in advance of the moment when a board member asks it, in front of a room that is not quite ready.

Take the diagnostic.

Fourteen scenario questions drawn from twenty years of observation across HR transformations. A personalised reading at the end. About ten minutes. No score, no dashboard, no follow-up sequence. The reading is yours either way.

It surfaces the patterns that quietly produce the absence this essay describes — the things that leave the building without anyone counting them.

Begin the diagnostic →