#39: Improving predictability in Software Development by asking the team to do less

I find that quite often business owners and leaders desire a level of predictability that isn't being achieved by their software development teams.

They want reliable costs and timelines for the delivery of software. But the software development teams fail to achieve them.

In this episode I will discuss how, to improve that predictability, you need to ask your teams to do less.

I will also question whether that desire for predictability is even an appropriate goal when talking about getting better ROI from your software development.


Published: Wed, 06 May 2020 16:06:22 GMT

Transcript

Predictability in software development is a false idol - one which too many people chase. But I'll come back to that in a moment.

If you do, however, want to increase predictability, you need to do one thing: give your teams less work.

Let's start by framing the problem.

If you are investing money - generally in the form of your staff's time - you want to know when it will be done - and, by extension, how much it will cost.

Maybe you have a specific event by which you want this software.

Maybe you've agreed something with a client.

Maybe you are trying to establish if there is ROI in the work.

Maybe your team is often quoting shorter times than the work actually takes.

Maybe it's simply a trust issue - you struggle to trust your development team.

Ultimately you are feeling a level of frustration at not having any level of confidence or control.


I can understand why you may want predictability.

For me however, it always has been, and always will be, one of the biggest sources of dysfunctional behaviour and poor ROI in software development.

I've talked previously about how any software work we do is surrounded by guesses:

• We think this change will generate this much revenue
• We think this change will save this much cost
• We think this change will take this long
• We think this change will cost this much

Everything around the work is guessing.

There are simply too many variables involved for any of those answers to be anything more than educated guesses.

And yet somewhere along the line we turned them into something akin to commandments written onto stone tablets.


And the dysfunction and poor ROI are generated by attempting to turn those guesses into certainties.

We have traditionally tried to wrap extra processes and bureaucracy around our guesses.

We produce documents to detail exactly what is being done at every stage.

With every stage having sign offs and approval gates.

We have invested considerable time, money and effort into ... Well, producing documents.

And have our guesses achieved any greater level of certainty for it?

Generally not.

And, if they have been improved, has the benefit achieved been enough to justify all that additional effort?

No, not at all.

I've talked about this previously; the act of producing documents and having hand offs between teams is simply a massive waste of investment.

It's just fraught with issues:

People misinterpret the documents ... If they read them at all.

Weeks, months, years can be lost in the production and agreement of those documents.

Its just such a massive waste of investment.


And this is why I'd push back against an over-focus on "predictability".

It often pushes us back down that traditional route.

The well-meaning, but misguided, belief that the more we write something down, the more predictable it becomes.

And when it doesn't turn out to produce any greater level of predictability, do we question the process?

No, we assume the delivery teams are at fault.

They haven't worked hard enough or they didn't get it right or they are failing to meet their commitments.

We are fundamentally misunderstanding the problem and blaming the symptoms of a process that was destined to fail.

And that is quickly where trust starts to fall down.


So rather than the traditional approach of more documentation;

I would personally always recommend the bite-size experimental approach that I've discussed frequently in past episodes.

We create a hypothesis on what small thing will bring us benefit, we do that thing, we assess if it did what we expected, and we repeat.

This creates a constant stream of change that we expect to benefit us much sooner.

Which, with small bite-size changes, can be daily or weekly.

Rather than the traditional approach being closer to quarterly, yearly or even greater.

And with this constant stream of change, our question becomes more about the predictability of how many bite size experiments we can run.

We still may not have predictability on how correct our hypothesis is - but we can have a bit more in how many experiments we can run.


And to obtain that level of predictability in those experiments we need to do two things:

Firstly, split the work into similarly small sizes.

Secondly, ask our development team to do less.


Getting them to a similar "size" allows us to compare apples with apples.

It allows us to produce a historical trend of how many of those bite size experiments we typically achieve within a given period.

We build this historical trend over months, observing the team in how they deliver these experiments. The longer and more stable the team is, the better the trend we gather.

In Software Development we call this the team's velocity.

It is a prediction of how many of those bite-size experiments we can perform in a given period, based on historical performance.

Note that I use the word prediction.

There is still guessing in this.

But by keeping the experiments small - ideally somewhere between a day and a week - we have a greater chance of our guesses being accurate.
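To make that concrete, here is a minimal sketch in Python of deriving a velocity baseline from the history of completed, similar-sized experiments and using it as a rough forecast. The weekly counts and period length are made up purely for illustration - in practice they come from observing your own team.

```python
# Illustrative only: weekly counts of completed, similar-sized experiments.
completed_per_week = [4, 5, 3, 5, 4, 4]

# The velocity baseline is simply the historical average per period.
velocity = sum(completed_per_week) / len(completed_per_week)

# It remains a prediction, not a promise: use it to roughly forecast
# how many similar-sized experiments fit into the next few weeks.
weeks_remaining = 4
forecast = velocity * weeks_remaining

print(f"Velocity: ~{velocity:.1f} experiments/week")
print(f"Forecast for the next {weeks_remaining} weeks: ~{forecast:.0f} experiments")
```

The forecast is only ever as good as the history behind it - which is why the size and stability of those experiments matters so much.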

To get the experiments to a similar size, we need to decide on the size - a day, say - then we need to hold a conversation with the whole team to talk through the experiment.

We need it to be a team conversation to arrive at what realistically represents that size.

Without doubt, many experiments will start out being too big for your standard size.

This is when the team need to break the work down into smaller experiments until such point that they do fit.

And as each smaller experiment is completed, then the team should assess if they are heading in the right direction.

Each chunk of work should help us establish if we are on the right path.

One of the most valuable things we can do is fail early because our experiment showed us that our hypothesis was wrong.

So much better for ROI to prove a hypothesis incorrect after a day's work than after six months.


The second one, asking the team to do less, may seem at odds with common management thinking.

If the team is failing to be predictable, then common management thinking would be to push them to do more.

To catch up with whatever expectation is on them.

This has however proven time and time again to be the wrong approach with Software Development.

It's too easy for leadership, middle management and the development team themselves to try to do too much.

And, in doing so, actually fail to achieve the core objective.

If the team are consistently missing predictability, the first action should always be to reduce the work.

The team will, naturally, find its own velocity. It may not be as high as you would like, but that's a different conversation.

The velocity is what it is.

Almost certainly, to increase it there will need to be investment in process, practices or tooling.

To start with though, we need to get an accurate measure of what that velocity currently is.

It's important that no-one tries to game the system by artificially inflating the velocity.

As with any measurement, it has to be honest to be useful.


After a few weeks, you should get an idea if the velocity is feeling consistent.

If it is, then great - you have now achieved predictability.

If it isn't feeling consistent yet, then reduce the workload further.

And keep doing that until the consistent velocity is achievable.
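As a rough way of judging whether the velocity "feels consistent", one illustrative heuristic - my own sketch, with an entirely arbitrary threshold - is to compare how much the weekly counts vary relative to their average:

```python
from statistics import mean, stdev

# Made-up weekly counts of completed experiments during the measurement stage.
completed_per_week = [4, 5, 3, 5, 4, 4]

average = mean(completed_per_week)
variation = stdev(completed_per_week) / average  # coefficient of variation

# An arbitrary threshold, purely for illustration: low relative variation
# suggests a usable baseline; otherwise reduce the workload and keep measuring.
if variation < 0.25:
    print(f"Velocity looks consistent at ~{average:.1f} experiments/week")
else:
    print("Velocity still varies too much - reduce the workload and keep measuring")
```

However you judge it, the aim is the same: keep reducing the workload until the numbers settle down.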

Nobody turns up to work thinking "I'm going to do a poor job today".

So let's treat the team as adults and reflect that in our understanding.


During this measurement stage - which I'd advise running for about 3 months - don't try to make improvements to "increase" that velocity.

While I certainly encourage small iterative changes further down the line to help the velocity, it really shouldn't be done too quickly.

Too quickly and you will never get an idea of the velocity baseline.

Constantly adding change before you can establish that baseline will mean that you never really establish a baseline - the data will be too messy.

This takes time ... And patience.


In this episode I've talked about how to improve predictability in our software development.

I've talked about establishing the team's velocity baseline - the historical trend of how many activities of a certain size can be performed in a given period.

I've talked about the steps to achieve this - splitting the work into small, similar-sized tasks and reducing the team's workload until you measure a consistent value.

I've advised patience in establishing that historical baseline. While you may be eager to find ways to improve it, you need good empirical data to understand your starting point.

And I've also warned about how chasing predictability can cause us to look to traditional techniques, which in turn lead us into dysfunctional processes and poor ROI.