Psst. If your boss won’t invest in training you in Test-Driven Development, I’m running out-of-hours workshops on April 7 and 11 specifically for self-funding learners. £99 + UK VAT.
Once upon a time, in a land far, far away (just north of London Bridge), there was a medium-sized financial services company with a problem. A big problem.
For 9 months, they had been trying to deploy a new release of their core platform, and every single attempt had exploded in their – and their customers’ – faces.
Teams of testers worked round the clock trying to find the bugs. Developers worked 12-hour days, 6 days a week trying to fix them. But the software just kept getting more and more broken. (As did the teams.)
Because, while the developers were fixing the bugs, the changes they were making were introducing new bugs – along with reintroducing some old favourites – that the testers weren’t finding until weeks later, if they found them at all. Users were 2x more likely to find a bug than the testers, and the stability of the platform was so bad that every release ended up being rolled back within a couple of days.
They were beached.
I’ve mentioned before that the DevOps Research & Assessment classifications of software delivery performance need a new level below “Poor”, which I called “Catastrophically Bad”. This is when every deployment fails, problems can’t be fixed – just reverted – and lead times are effectively infinite. Nothing’s getting delivered – well, nothing that sticks – and there’s no light at the end of that tunnel.
In desperation, an army of consultants and coaches was brought in who – at vast expense – set about “fixing” the process. Backlogs were groomed. Daily meetings were attended. Velocities were tracked. Estimates were tweaked. Test plans were optimised. Graphs and charts were prominently displayed.
And at the end of all that, the consultants and coaches confidently concluded, “Yep, you’re basically f***ed.” They exited the building with… well, let’s just say there wasn’t a leaving card. At least now they had the graphs to prove it. Money well spent, I’m sure we’d all agree.

That’s where I came in. I politely declined to engage with management on their “agile process” – as I have for many years now with all my clients.
Instead, I asked the developers to change one thing about how they worked. I took them into a meeting room, and showed them how to write NUnit tests.
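A first test of that kind is tiny. The client’s tests were NUnit (C#), but the shape is identical in any xUnit-style framework – here’s a minimal sketch in Python’s built-in unittest, with a made-up domain function standing in for the platform’s real logic:

```python
import unittest

# Hypothetical example - the client's tests were NUnit/C#, but a first
# unit test has the same shape in Python's unittest. The function below
# is an invented stand-in, not the client's actual code.
def apply_interest(balance, annual_rate):
    """Toy domain function: apply one year of simple interest."""
    return round(balance * (1 + annual_rate), 2)

class ApplyInterestTests(unittest.TestCase):
    def test_positive_balance_accrues_interest(self):
        self.assertEqual(apply_interest(100.00, 0.05), 105.00)

    def test_zero_rate_leaves_balance_unchanged(self):
        self.assertEqual(apply_interest(100.00, 0.0), 100.00)
```

Run with `python -m unittest` (or the NUnit runner, in their case). The point wasn’t sophistication – it was having an executable statement of what the code is supposed to do.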
In the next couple of weeks, we wrote some system-level smoke tests that they could run a few times a day just to provide some basic assurance that the thing wasn’t totally borked. Which, for a long time, it was.
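A smoke suite at that level needs very little machinery: a handful of named checks against the running system, and a report of which ones failed. A rough sketch of the idea – the check names and stand-in bodies here are hypothetical, not the client’s actual checks:

```python
# A minimal smoke-check runner sketch. The real checks exercised the
# platform's critical paths (think: ping, login, a key transaction)
# a few times a day; these lambdas are hypothetical stand-ins.

def run_smoke_checks(checks):
    """Run each (name, check) pair; return the names of any that failed."""
    failures = []
    for name, check in checks:
        try:
            check()
        except Exception:
            failures.append(name)
    return failures

checks = [
    ("service responds", lambda: None),  # stands in for an HTTP ping
    ("login works", lambda: None),       # stands in for a scripted login
    ("pricing sane", lambda: 1 / 0),     # a deliberately failing check
]
print(run_smoke_checks(checks))  # -> ['pricing sane']
```

Crude, but enough to answer the only question that mattered at the time: is the thing totally borked right now, or not?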
Then I instructed them to write an automated “unit” test before any change they made to the code. Fixing a bug? Write a test for it first. Making a change to the logic? Write a test for it first. Refactoring the architecture? Write a test for it first. (And if you can’t, then refactor it with careful manual testing until you can, and then write a test for it.)
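Applied to a bug fix, “write a test for it first” looks like this in miniature. The bug and names below are invented for illustration (and sketched in Python rather than their C#): first a test that reproduces the defect and fails against the broken code, then the smallest change that makes it pass:

```python
import unittest

# Hypothetical bug: negative amounts were silently accepted by a
# payment validator. Step 1: write a test that reproduces the bug
# and watch it fail. Step 2: make the smallest fix that passes it.

def is_valid_amount(amount):
    # Fixed version - the buggy original omitted the `amount > 0` check.
    return isinstance(amount, (int, float)) and amount > 0

class PaymentValidationTests(unittest.TestCase):
    def test_rejects_negative_amounts(self):
        # Written *before* the fix; it failed first, then passed.
        self.assertFalse(is_valid_amount(-10))

    def test_accepts_positive_amounts(self):
        self.assertTrue(is_valid_amount(10))
```

Every change paid for itself twice: the bug got fixed, and the test stayed behind to stop it – or any of its old favourite cousins – from coming back unnoticed.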
It took a couple or three months, but as unit test coverage gradually increased – concentrated in the areas of the code that were changing and breaking most often – the software stabilised enough to finally produce a release that stuck; a first foot forward in over a year.
The second foot forward – the next stable release – came a month after that, and a third a month after that. The platform – and the business built around it – was walking again.

This is where interesting things start to happen to the overall development process. With releases now happening fairly incident-free once a month instead of, well, never, the “agile process” adapted around that. Product managers were now thinking about next month more than they were thinking about next year. (Specifically, where they might be working next year.)
Again, I was invited to engage with them, and again I politely declined. I wasn’t done with this innermost feedback loop yet.
Tests covered about 40% of the code – basically, the code that had been changed in those 3 months – but that coverage was growing every day. By the time it got to around 80% six months later, release cycles had accelerated from monthly, still with a significant reliance on manual regression testing, to weekly, with a small amount of manual testing. And the backlog had been paid down to the point where lead times were in the order of a couple of weeks.

And while the test assurance grew, and the architecture became more modular to accommodate it – which significantly reduced the “blast radius” of changes and merge conflicts – lead times shrank even further.
And, again, the wider process adapted to this new reality, shrinking user feedback loops and becoming more willing to gamble and experiment, knowing that they were now placing much smaller bets.
The management process at this point looked dramatically different to how they were doing things when I arrived. And I hadn’t engaged with it at all, beyond recommending a couple of books about goals.
The organisation had evolved from plan-driven, big-bang releases (that always went bang), to iterative, goal-driven releases that enabled experimentation and learning.
And that all happened in response to shrinking lead times, enabled by accelerating release cycles, made possible by very fast, very frequent automated testing.
We didn’t need to ask for extra budget. We didn’t require software licenses. We didn’t need management to engage or give permission. We didn’t ask anybody outside of the dev teams to change anything. We just f***ing did it – this one little change to how the teams wrote code – and the entire organisation adapted around it.
In the years that followed, that innermost feedback loop got tightened even more, with more investment in skills and automation. And, of course, the wider processes changed, as did the makeup of the teams, but not because we demanded they should. The delivery cycle accelerated, and the business adapted around that. They became more goal-oriented, more feedback-driven, and more experimental, and the organisation started to reflect that.
They now release changes multiple times a day, testing one change in the market at a time, and rapidly feeding back what they learn. Or, as you may know it, actual agility.

Sure, it took a few years for them to get there. But consultants and coaches had spent a year effecting no change at all at a very high cost. I spent a few days a month with them for a couple of years.
They made the mistake that almost all agile transformations make: trying to improve software delivery capability without actually engaging with it. You can’t plan or manage your way to daily releases.
You’re looking at the wrong feedback loop – the wrong cog in the clockwork. The big cogs aren’t driving the little cogs. The little cogs are driving the whole system.
