Measuring and defining risk with Cynefin

A common way of describing risk is with a likelihood vs. impact metric. For example, there is a very low likelihood of an asteroid big enough to wipe out the Earth actually doing so; the impact, however, would be very high.

Many tools give you a lot of granularity when describing risk. One tool I used recently, called Kivue (which is a crock of shit), used integers: Likelihood = [1, 2, 3, 4, 5] (what is a likelihood of 4, and how is it different from a 3?). We used actual words and threw away two of the integers to make it more useful:

Likelihood:

  • 1 – Unlikely,
  • 3 – Likely,
  • 5 – Definite

There were also integers for the impact, so we used actual descriptions for those instead:

  • 0 – No impact (why would this be important?)
  • 1 – Low Impact
  • 3 – Medium Impact
  • 5 – High Impact

Once you have your likelihood and impact, you can then work out your exposure:

Likelihood x Impact = Exposure

If you then rank your risks by exposure, with the highest at the top, this will give you a good idea of the risks you should be looking at first. However, having this still didn’t give us a real handle on how to assign a likelihood or impact to a risk – one person’s 3 might be a slightly more anxious and pessimistic person’s 5!
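As a sketch, the exposure calculation and ranking above might look like this (the risk names and scores here are invented examples, not from any real register):

```python
# Hypothetical risks, scored on the reduced scales described above.
risks = [
    {"name": "Key hire falls through",   "likelihood": 3, "impact": 5},
    {"name": "Asteroid impact",          "likelihood": 1, "impact": 5},
    {"name": "Server runs out of disk",  "likelihood": 5, "impact": 3},
]

# Likelihood x Impact = Exposure
for risk in risks:
    risk["exposure"] = risk["likelihood"] * risk["impact"]

# Highest exposure first: these are the risks to look at before the others.
for risk in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{risk["exposure"]:>2}  {risk["name"]}')
```

Note that two very different risks (likely/medium-impact vs. unlikely/high-impact) can end up with the same exposure, which is exactly why the number alone wasn’t enough for us.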

I’d recently been reading (again, it seems to leak out of my head whenever I read about it) about Cynefin and it occurred to me that you can use it to define risks. If you’re not sure what Cynefin is, this post may not be useful to you – you should start here and then here. Basically, Cynefin is a decision-making framework, where the problem spaces are simple, complicated, complex and chaotic. There’s a lot more to it than my pithy one-liner and I’d encourage you to take a look at the work of Liz Keogh to find out more.

Anyway, once you’re up to speed…

Defining Risk

Simple

This risk is so obvious it might not even be a risk. We know the inputs and the outputs, it’s not complicated (as in, there are few moving parts and no dependencies) and it has two or fewer steps to complete the mitigation.

Impact: 1,3
Likelihood: 0, 1

Example:

  • Someone is on holiday. Someone else covers.
  • Hardware is needed. Hardware is bought.

Complicated

This risk has more than two steps, but those steps are well known. The inputs and outputs are also well known. There may also be dependencies, but these too are well known.

Impact: 0, 1, 3
Likelihood: Any

Example:

  • A team member doesn’t know how to use a technology. They need training from a peer.
  • Availability of a resource is unknowable. Seek an alternate resource.

Complex

Multiple steps, multiple inputs and outputs, multiple dependencies, some or all of which may be unknown. Work is required to move this kind of risk to “Complicated” – although this may not always happen, if we choose not to mitigate the risk or there are simply too many unknowns.

Impact: 3,5
Likelihood: Any

Example:

  • A choice of technology is unconfirmed, work needs to happen to confirm the choice is appropriate.
  • Industry-wide definition of product format is changed.

Chaotic

Unknown steps, inputs and dependencies. This is more of a gut feel used as an early warning system. Chaotic risks should probably never be added to Kivue, although they could be added to a separate RAI register. A chaotic risk should be worked on until it can be moved to complex or complicated.

Impact: Any
Likelihood: Any

Example:

  • Competitor is releasing a new, secret product.
  • Annual C-Level board meeting in which the entire company strategy is usually changed.
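The scoring bounds in the sections above can be expressed as a small lookup, so a tool could flag a risk whose scores don’t fit its Cynefin domain. This is a hypothetical sketch, not a Kivue feature – the rules mirror the text, including the quirk that Simple allows a likelihood of 0:

```python
# Full scales, as described at the top of the post.
IMPACT_ANY = {0, 1, 3, 5}
LIKELIHOOD_ANY = {1, 3, 5}

# Per-domain bounds, copied from the sections above.
DOMAIN_RULES = {
    "simple":      {"impact": {1, 3},     "likelihood": {0, 1}},
    "complicated": {"impact": {0, 1, 3},  "likelihood": LIKELIHOOD_ANY},
    "complex":     {"impact": {3, 5},     "likelihood": LIKELIHOOD_ANY},
    "chaotic":     {"impact": IMPACT_ANY, "likelihood": LIKELIHOOD_ANY},
}

def check_risk(domain, impact, likelihood):
    """True if the scores are consistent with the given Cynefin domain."""
    rules = DOMAIN_RULES[domain]
    return impact in rules["impact"] and likelihood in rules["likelihood"]
```

For example, a “complicated” risk scored with an impact of 5 would be flagged for review – either the score or the classification is wrong.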

Conclusion

This worked for us and might work for you too. If not, then I’d implore you to come up with a common way of describing risk: it makes it easier to assess the relative “riskiness” of each risk and to understand where best to apply your mitigation efforts.

You may have other ideas on how to describe your risk impact and likelihood, and, if so, please comment below!

Should you have a set number of acceptance criteria?

A conversation came up today in which a statement was made about having a set number (three, in this case) of acceptance criteria per user story. My gut reaction was that I didn’t agree – how can you define that? But then I got to thinking that maybe it wasn’t such a bad idea.

I turned to the internet and asked Twitter; the replies were heartening.

I agree with these replies; however, I’m not convinced about having a fixed, arbitrary limit. I think it’s a case of being vigilant and for someone to say “Hey, that’s a lot of A/C, maybe we should split this?”

It’s also relatively simple to sidestep the limit with loopholes. For example, your A/C could be described as Given, When, Then. Within this, you could add “and then… and then… and then…”. It’s still one A/C, but it has multiple steps.
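That vigilance could even be nudged along by tooling. A minimal sketch, assuming acceptance criteria arrive as plain strings – the function name and the limit of three are invented for illustration, not a real tool:

```python
def review_story(criteria, limit=3):
    """Return warnings for a list of acceptance criteria strings."""
    warnings = []
    # Too many A/C on one story suggests it should be split.
    if len(criteria) > limit:
        warnings.append(
            f"{len(criteria)} acceptance criteria - consider splitting the story"
        )
    # Catch the loophole: one A/C chaining many "and then" steps.
    for i, ac in enumerate(criteria, start=1):
        steps = 1 + ac.lower().count("and then")
        if steps > limit:
            warnings.append(
                f"A/C {i} chains {steps} steps - it may be hiding several criteria"
            )
    return warnings
```

A check like this only raises a flag; the conversation about whether to split still has to happen between people.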

As with everything in the world of building software, it’s about balance and discipline.

Nothing is immutable

“But they’re the project goals!”
“So what? They’re not appropriate any more.”
“But they’re the project goals, you can’t change them!”
“Watch me.”

Challenge everything and remember, nothing is immutable. If you make your project goals redundant (and you should always try and prove the negative hypothesis), you’ve made a huge discovery.

Nothing is above the laws of science – hypothesise, test, learn.