A great retrospective!

I’ve blogged before about remote teams at my current company and how well we manage to make it work, even with people in other timezones. Because we’ve got people working remotely, our retrospectives are naturally done over video chat (Hangouts in our case) and we use webwhiteboard (by none other than Henrik Kniberg!) to make them as interactive as possible.

Today, we had just such a meeting and I really wanted to show you the output. Nothing much to learn from this post (except how we retro), but I thought the webwhiteboard worth sharing – just for fun.

[Screenshot: our webwhiteboard retrospective board]


Are you intelligent? Do you do intelligence?

Well, are you? Are you intelligent? Or perhaps you do intelligence. Do you? You do? No.

No, you don’t do intelligence just as you don’t do clever and you certainly don’t do agile|Agile|aGiLe.

I read an article today, Kanban Is Not Agile, and I’ve read a few others like it in the past. No, Kanban isn’t agile. But neither is scrum or DSDM, or crystal or SAFe or whatever.

You cannot DO principles and values; you are principled and you hold values. When someone says “We’re agile!”, challenge them and ask “What does that mean?”, because I can almost guarantee they’ll say “We do scrum|kanban!”. You’re following practices that stem from upholding the values and principles of the manifesto, but you’re not doing agile.

What’s more, and what will hold businesses back until they realise it, is that while these values and principles are software-centric, they certainly aren’t software-dependent. If you distil them, you get things that anyone can uphold, regardless of industry.

You don’t DO agile, you ARE agile.

BDD: Starting With Quality

By Shannon Kringen @ flickr
Quality with a star and stuff

Quality isn’t defined by a lack of defects or bugs. You wouldn’t get a cup of barista coffee and exclaim, “This coffee is quality because it doesn’t have a problem with it.” (No bugs in my coffee). You can get a cup of coffee from a petrol station that doesn’t have any problems with it. It’ll be hot, wet and coffee flavoured – there are no defects with this coffee. So, where does this idea of the “quality” of something come from? It’s a subjective thing. A quality cup of coffee to me may not be the same quality cup of coffee to you. If you favour a black americano, your definition of quality is not going to involve a milky heart drawn into the top of it, which is how I like my coffee.

When we talk about the quality of a feature, the fact that it doesn’t have any defects is an implicit part of that quality, but it’s not where quality starts or stops. What makes the quality of a feature is whether or not it does what it’s supposed to do and whether it provides the amount of value the user expects. Again, this is different depending on your context: what your application does, who it does it for and why.

Quality Assurance (QA) plays a fairly key role in any software development team. I know some schools of thought suggest that there shouldn’t be a QA role, and while this is probably the subject of a separate blog post, I feel that this is wrong. We have a QA in the team, just the same as we have a designer in the team. It’s a specialist role that requires certain skills I don’t expect engineers to necessarily have.

That said, I’ve always been troubled by the way the QA role is executed in a team. Let’s suppose that we’ve got a scrum team that performs well. They commit to a given number of independent stories and work on them sequentially, so they finish the first story before starting the second and so on. Once a feature has been completed, the work of the QA starts in earnest (until that point, the QA will have been putting together a test execution plan and a strategy for dealing with the tests during the sprint). They begin exploratory testing and creating or updating automated tests. This is all well and good and will ensure that the feature meets the minimum, implied level of quality. In most cases, that just means it’s free of defects.

For me, this is where the problem lies: quality gets reduced to the absence of defects. But how do we solve that problem?

We realised that actually, we never really discussed what quality meant to a particular story or sprint. We had made assumptions about the quality based on the content of the story and the acceptance criteria. As long as the story met those acceptance criteria and didn’t have any defects, we assumed we were done. In reality, we weren’t really thinking about what constitutes quality but just what constitutes the feature.

So we decided to start with quality. It made sense to talk about what we thought quality meant to any particular story before we talked about anything else. At the beginning of planning a story in sprint planning, we would spend some time discussing what quality meant to this feature. Using the example of a login screen, the story might be:

  • As a user,
  • I need to log in to the site,
  • to access all the features.

Before we chose to start with quality, we might discuss what the feature looked like, or we may already have a design for it. But then we’d jump straight into the technical planning: how do we implement it, what code do we need, database schemas – that kind of thing. Instead, we now talk about the feature from the user’s point of view:

  • What happens if they get their password wrong?
  • How do they reset their password?
  • How long should the password be? Should it require special characters?
  • What happens if a user doesn’t have an account, how should we direct them to sign up?
  • What kind of error messages do we want to show?
  • Etc.

This opened up a whole new discovery phase. Product Owners cannot think of everything when writing stories, and this discovery allowed us to offer our insight into how the feature works and to ask questions about how it should work, often based on technical knowledge of the platform that the Product Owner may not have. We began by adding these new requirements to the conditions of satisfaction, but they soon became long and arduous to check. So we looked for a better solution than acceptance criteria.

The solution we chose was to use a new tool. BDD (Behaviour Driven Development) is a method of functional testing which allows you to describe the functionality of a feature in a “scenario” file in plain English:

  • Given I am on the login page
    • When I enter ‘mikepearce’ into the username field
    • And I enter ‘butteryballs’ into the password field
    • And I click “login”
    • Then I should see my dashboard.

We can then use a BDD framework to run these scenarios against our web application. We use Behat as a framework and it works really well (apart from our problem of data fixtures…). It pretends it’s a browser (and will use Firefox if it needs to work with JavaScript) and does what your user would do with your system. This allows us to have automated functional testing too. Awesome.
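To make that binding concrete, here’s a minimal sketch of how a plain-English step gets wired to code. We use Behat (which is PHP), so the following is not our actual implementation; it’s the same idea expressed with Python’s behave library and Selenium, and the URL, field names and driver setup are assumptions for illustration only.

    # features/steps/login_steps.py - a hedged sketch, not our real Behat (PHP) code.
    # Assumes behave + selenium, with context.driver created in features/environment.py.
    from behave import given, when, then
    from selenium.webdriver.common.by import By

    BASE_URL = "http://localhost:8000"  # hypothetical

    @given("I am on the login page")
    def step_open_login(context):
        context.driver.get(BASE_URL + "/login")

    @when("I enter '{value}' into the {field} field")
    def step_fill_field(context, value, field):
        # Maps the scenario's "username"/"password" wording onto form field names.
        context.driver.find_element(By.NAME, field).send_keys(value)

    @when('I click "{label}"')
    def step_click_button(context, label):
        context.driver.find_element(By.XPATH, f'//button[text()="{label}"]').click()

    @then("I should see the message '{message}'")
    def step_see_message(context, message):
        assert message in context.driver.page_source

The point isn’t the specific library: each Given/When/Then line in the scenario matches exactly one small function, so the specification file stays readable by anyone while still being executable.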

So, when we’re doing this extra discovery step, we record our findings as scenario steps like these, instead of acceptance criteria:

  • Given I am on the login page
    • When I enter ‘mikepearce’ into the username field
    • And I enter ‘wrongpassword’ into the password field
    • Then I should see the message ‘Sorry, your password is wrong’
    • And I should see the link ‘Did you forget your password?’

We slowly build up a specification file for this feature, mostly centred on the “happy path”, and add edge cases or problem scenarios as we think of them. It’s important to note that we don’t expect to think of EVERYTHING in this session, as we time-box it to ten minutes and expect other features or ideas to emerge during the sprint.

Once we’ve finished, we’ve got a specification file that we can run against the web app with Behat. The first time we run it, it will fail, because the feature isn’t there – but this is good! This is Test Driven Development for the masses! As the team slowly builds the feature and keeps running the Behat tests against it, it will slowly become more and more green. If new things emerge during the sprint, we add extra steps to the scenario file. By the end of the sprint, all Behat tests will be green and we can have confidence that, not only is the feature defect-free, but it also does what the user expects it to and provides them value.

So, now we have a way of assuring that our software has quality. Not only do we have a slick set of automated functional tests, but we’ve also added a low-friction, low effort step of discovery that allows us to really understand AND define what quality means to us in the context of this feature for our product.

I’d encourage you to try this in your teams. Actually having Behat (or any other BDD framework) isn’t really a requirement to get started. You can start by just writing your scenario file as the first step of your team planning a story and storing it somewhere for future reference. The value is in the discussion you have to define the quality. The artefact you create is useful afterwards for checking that you meet those requirements. The added benefit is that it’s written in a language that anyone can read – from your team and product owner, to stakeholders and anyone else in the business who is interested to learn more about your product.

(The benefits of using BDD beyond what I’ve described are outside the scope of this article; there are plenty of articles on the web to suit your style, language and infrastructure, and I would encourage you to learn more about it. Start with this presentation from Gojko Adzic and this from Dan North.)

How to estimate, pitch for and charge for a project in an agency environment

When I talk about estimation and planning, people often ask me the above question. The styles of estimating I talk about are more frequently used in a non-agency environment; that is, somewhere maintaining a single, existing product (or creating one from scratch) for your own company, rather than the agency model of delivering software per project, per client.

The problem

The problem with this kind of work is that of estimates. Typically, as an agency, you’ll be bidding or pitching to a client to get the work. The client will probably be soliciting bids from multiple agencies, so you need to either stand out from the crowd in some way, or be the cheapest (or, preferably, both).

The client will generally be looking for a fixed price too, so they can go back to whoever is commissioning the work and get funding, raise a PO or whatever.

Sometimes you’ll be given a brief, which you might then turn into a functional and/or technical spec, which then goes to the client along with the quote, then there’s some to-ing and fro-ing, an agreement and then work commences and oh shit, we forgot to add the widget to the spec and oh crap we underestimated the amount of data we’d need to use and have to re-engineer to use MongoDB.

So, with this in mind, how do you estimate, quote, pitch or bid when agile methods are so fluid and you have to work to a fixed cost? You know the least about how long a project will take and what is involved right at the beginning. According to the Cone of Uncertainty, your estimate of an end date at that point could be out by anything from 0.5x to 4x (e.g. at the beginning of a project that you estimate will take a month, it could actually take anywhere between two weeks and four months).

Some metrics

There are a few metrics you’ll need in order to derive costs from estimates:

  • Day Rate: how much a team costs per day. This is only ever going to be a rule of thumb (there are holidays, illness, training and so on), but you should make it as accurate as possible, because it becomes a key metric in working out costs from estimates. Make sure your client understands that, for the purposes of estimating and pitching, this is an accurate but rough figure.
  • Best, worst and average velocity: these are important metrics. Your average velocity is simple: add all your sprint velocities together and divide by the number of sprints. You could make this more realistic by using a rolling four-sprint average, which takes into account changes in team size, tooling or process. However, for the purposes of estimating the cost of a project up front, the average will do.
    The best and worst velocities are derived by taking an average of your three most successful sprints and your three least successful sprints. Why you need these will become clear later.
    Should you have multiple teams and you’re not sure which team will be working on your project, take an average across all the teams.

If you don’t have this data, you have two options: guesstimate, or use the cone of uncertainty.
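If it helps to see the arithmetic, here’s a rough sketch of those metrics as a calculation; every number below is invented purely for illustration.

    # A sketch of the metrics described above (all figures hypothetical).
    sprint_velocities = [21, 25, 18, 30, 24, 27, 19, 23]  # story points completed per sprint

    average_velocity = sum(sprint_velocities) / len(sprint_velocities)
    best_velocity = sum(sorted(sprint_velocities, reverse=True)[:3]) / 3  # three most successful sprints
    worst_velocity = sum(sorted(sprint_velocities)[:3]) / 3               # three least successful sprints

    # A rolling four-sprint average, if you want something that tracks recent changes.
    rolling_velocity = sum(sprint_velocities[-4:]) / 4

    team_day_rate = 2000  # pounds per day for the whole team - a rule of thumb, not an invoice line

    print(average_velocity, best_velocity, worst_velocity, rolling_velocity, team_day_rate)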

Honesty and transparency

The fundamental tenet of all the strategies below is complete honesty and transparency with your client. You’ll also need to invest three or four days in your client (maybe less; it depends on how practised you are at this process, and it becomes quicker). That might sound like a lot, and there is a risk there might not be a contract at the end of it, but it is worth it. Once you’ve deeply involved your client in the process from the start, they’ll feel like they’ve already started the project, so it’s easy for them to transition to being a paying customer.

From the outset, you’ll need to explain that you do things a little differently to other agencies. Whatever framework you choose to use (scrum, kanban, etc.), spend some time explaining to the client what it is, how it works and why you use it. If you’re a little stuck, here are some ideas:

  • It gives you a realistic view of the progress and scope of the project.
  • It gives you, as the client and stakeholder, complete control over what features make it into the final product and which don’t.
  • It allows us to deploy features to live as soon as we’re ready to, meaning you can be getting value from our work and your product, even before we’ve finished it.
  • These early and frequent deployments allow you to test your assumptions about your own users and customers and, if necessary, change the features or product without additional cost.
  • You will be encouraged to be involved throughout the whole lifecycle of the project and can view each separate feature as soon as it’s finished to help guide and steer the project toward exactly what you want (which might not be what you originally asked for).

The best way to do this is to invite your client in for a half-day, high level training and workshop. Run some exercises to illustrate the key points and answer all their queries – this is a new thing for many companies who aren’t used to this kind of agile environment, so it’s definitely worth introducing your client to it if they haven’t experienced it before.

The next step is to create some artifacts. Once you have a brief in hand and have spent some time with your product team and analysts and have a really good understanding of the brief, the market, the competitors etc, invite the client in again in order to build a product backlog of user stories. Set the scene for them and explain that, unlike other agencies, you won’t write a functional spec, or a technical spec, because they’re a waste of time. What you will do is write user stories, which describe each feature from the users’ point of view and what value each of the stories will bring.

Labour the point too. Ensure your client *really* understands that you won’t be building ANYTHING that you can’t find the value for and WHY you take that stance. If a feature, or some functionality, has no value, then why would you build it?

Work quickly through the brief and your research and build a backlog of epics with your client. Don’t spend longer than an hour or two on this and, if you can, get someone to facilitate the meeting so you don’t get bogged down in ANY technicalities or long, drawn-out discussions. Once you’ve got your list of epics, have the client order them – which are the highest-priority things? What user/customer value do they want to deliver first? Don’t worry about dependencies right now, this is only rough.

Finally, break the highest priority epic into a few slightly smaller stories. This isn’t strictly necessary, but will help the client understand the process better. Again, this shouldn’t take longer than 30 mins.

The estimate and the pitch.

Now you’ve got your feature list of epics and some smaller stories for the highest priority, get some estimating done. I won’t cover it here, but you can read the article on white elephant estimating, or watch my presentation on agile estimation. Try and use the team that will actually be doing the work. If you haven’t got a team, or aren’t sure which of your teams will be doing it, then pick engineers, designers and QAs from multiple teams – you’re only looking for roughly accurate figures. If it’s proving too difficult because of the epics, then break some into smaller stories – getting epics and stories sized well enough to estimate will become easier with practice.

Finally, you’re in a position to pitch, but how do you do it? It all depends on your client’s requirements.

It’s very important that the client understands they have complete control over what goes into their product on a sprint-by-sprint basis. Explain that you expect them to be involved in the decision making about what goes into the product, the priority of the features to be added and the scope. So, once you’ve agreed on one of the below strategies and signed a contract, the success of the project will be partially down to their input.

Fixed deadline

If your client requires the project to be finished by a certain time, you can work towards that. It will be to your advantage to provide two estimates though, allowing the client to choose their own end goal:

All stories by the deadline

Estimate the number of teams or people required to deliver all the story points by the date, then, based on a day rate, create a figure. Remember, to be accurate (but not specific), you’ll need to provide a range. So, using your best, worst and average velocity, you can say to your client, “We ESTIMATE it will take four months, but it could take as little as three or as long as five and a half; therefore, it will cost between £40,000 and £55,000”.
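As a sketch of the arithmetic behind a range like that (the figures below are invented and aren’t meant to reproduce the quote above):

    # Turning a sized backlog into a duration and cost range (all numbers hypothetical).
    backlog_points = 240
    sprint_length_days = 10   # two-week sprints
    team_day_rate = 2000      # pounds per day

    def cost(points, velocity):
        sprints = points / velocity            # sprints needed at this velocity
        days = sprints * sprint_length_days
        return days * team_day_rate

    best_velocity, average_velocity, worst_velocity = 30, 24, 18
    print(f"Estimate:   £{cost(backlog_points, average_velocity):,.0f}")
    print(f"As low as:  £{cost(backlog_points, best_velocity):,.0f}")
    print(f"As high as: £{cost(backlog_points, worst_velocity):,.0f}")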

A valuable, shippable product by the deadline.

Do the same as above, but use a fixed-size team – you won’t be able to fit some stories in, but this is fine. Use the average, best and worst velocities to show the client the stories you will definitely do, will probably do, might do and won’t be able to do. This will reduce the scope of the project, but also the cost. Remind the client that what features make it into the product is based on their input.

Fixed cost

Working towards a fixed cost is quite rare, but it does happen, and it will likely mean that the client won’t be able to have all the features they’ve asked for. In order to work towards a fixed cost, you need to know it (obviously…). Once your client has offered this figure, you can use it to work out your strategy, which will be similar to working towards a fixed deadline. You know how long you have to deliver the project based on the day rate (fixed cost / day rate = number of days). So you have a fixed deadline to work to and can use the average, worst, best strategy again to indicate what you can definitely achieve, what you can probably achieve and what you definitely won’t achieve.
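The same arithmetic runs backwards from a budget; again, a small sketch with invented figures.

    # Fixed cost: work backwards from the budget to a deadline and a scope range.
    fixed_cost = 60000        # pounds the client can spend (hypothetical)
    team_day_rate = 2000      # pounds per day
    sprint_length_days = 10

    days_available = fixed_cost / team_day_rate              # 30 working days
    sprints_available = days_available / sprint_length_days  # 3 sprints

    # Scope bands: worst velocity = what you can definitely do,
    # average = probably, best = the most you could hope for.
    for label, velocity in [("definitely", 18), ("probably", 24), ("at best", 30)]:
        print(f"{label}: about {sprints_available * velocity:.0f} story points")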

Fixed scope

Finally, there is fixed scope. The client has to have everything they’ve asked for – although in practice this is rarely true once you start building, and the early, regular releases help steer the project.

Again, using your best, worst and average velocity, you can work out a range of how long it will take and then use your day rate to provide a figure.

Cost per iteration

(I wouldn’t suggest this method unless your client is particularly agreeable to trying new things but, assuming you did the setup I describe above correctly, they should be chomping at the bit to try this new, visible, transparent and honest method of commissioning software.)

Pitch at a cost per iteration, instead of a day rate. You can give rough estimates of how much it will cost, and these will become clearer after each iteration. Pitch for a minimum number of iterations, after which your velocity will stabilise and you can be more accurate about the remaining length of the project. This is a fairly ambiguous method of pitching, though; you’ll need to draw on the other strategies above to work out what features and functionality can be delivered. But if your client agrees to sign you up for a minimum number of iterations, and you deliver what you agreed to, then by that point your estimates should be pretty accurate and your velocity very stable, allowing you to accurately cost any stories you haven’t yet completed.

Using your average, best and worst velocity to predict a schedule

Honesty and transparency, again.

It’s worth reiterating here that you should be completely honest with your clients about how complicated and amorphous quoting for a software project is. If you think your client can understand it, then break out the Cone of Uncertainty to explain that 40 years’ worth of data prove it to be true.

Explain that being agile, and adopting a framework built around short delivery cycles and fast feedback, means that your company will very quickly start delivering visible value to them. It also means that they can control what that value is on an iteration-by-iteration basis. But probably the best arrow in your quiver is to explain that, throughout the project, they can see the progress on an almost daily basis and adjust both what you deliver and how much they pay you, should they need to.

If you deliver some valuable piece of software that causes them to change the way they think about their product, then they could pivot their entire offering and find more funding – and who is already in place to help them achieve that?

Hooray! We won the contract!

Congratulations! Now, don’t rest on your laurels. Remember the commitments you made to your client during the pitch process and keep them involved throughout the project. Begin with release planning to decompose the epics into more usable stories, uncovering more of the project as you go, with milestones and deadlines if necessary. Keep your client informed, involved and in the loop and they’ll be happy when things change (and they will).

Boo, we didn’t win the contract.

Sorry, that sucks – but use it to learn a lesson. Find out why you didn’t win. If it’s just down to cost, then perhaps review your costs, or spend more time breaking down and estimating the stories to provide a more accurate figure next time.

Send out a survey to the client, asking them what they thought of the process and whether there is anything they’d change, or something they didn’t like. Gather as much data as possible, then spend some time retrospecting with your teams about how you can do it better next time.

My client doesn’t want to be that involved or isn’t interested in how we do it.

My advice here would be not to take on the business. If your client isn’t interested in being part of the process of building their product, then you are setting yourself up to fail. You’ll write a functional and technical spec detailing exactly what they asked for, you’ll deliver against it, and then you’ll find out that, while you delivered what they asked for, it isn’t what they wanted.

Finally

The old black-box model of estimating and pitching for client work is difficult to manage, hard to sell and will bite you in the ass later down the line if you got it wrong at the start. By getting your client on board with how you do things, and making sure they understand that they can and should be involved from beginning to end, you put yourself, your company, your team and, importantly, your client in a position to succeed.

Mad, Glad, Sad retrospectives

I facilitated my first retrospective in a while recently (that’s not to say the team hasn’t been retrospecting, just that others have been doing it in my stead!) and the team wanted to do the Mad, Glad, Sad exercise.

Shockingly, I didn’t know it. Or at least, I didn’t think I did.

The exercise starts with everyone writing, on three different coloured post-its, the things that made them mad, glad or sad during the sprint. We then stuck them all up on the whiteboard under an appropriate picture.

We all studied it for a while, but I couldn’t really work out what the difference was between mad and sad. The idea of a retrospective is that you inspect and adapt: look at what you did and how you did it, and change things if they need improving. Mad and sad should both be things that affected us negatively and that we want to change in future sprints.

But what is the difference? I asked the team. They didn’t know either.

So, we set about trying to decide what each one meant. In many retrospectives in the past, we’ve realised that there are things that went wrong, didn’t work or just plain sucked that we could fix, and there are things that can be classed as ‘Shit Happens’: stuff that, while we might expend some effort in fixing it, is unlikely to occur again in the near future, so we draw a line under it and move on – there are always more pressing things to deal with.

So, using this idea, we renamed Sad to “Get stuff off my chest” and Mad to “I’m not going to take it ANYMORE”.


This means we can highlight the things that annoyed, frustrated or saddened us but that, realistically speaking, there isn’t a lot of value in dealing with, and then the stuff that REALLY got under our skin and that we have to deal with.

Once we’d renamed them, we re-sorted the post-its we’d stuck under the two columns and found that things we might otherwise have treated as a real problem, and spent time working on, were no more than people wanting to have a rant, while real problems might have been overlooked because they hadn’t been given enough weight. Perhaps the team member felt it was just something bothering them and didn’t want to make an issue of it.

We worked through all three columns and, during discussion, decided to move some from Sad to Mad and vice versa as we discussed the pros and cons of each. In the end, we had a good, solid list of things that ‘we’re not going to take ANYMORE’ which turned into a list of definite actions.

So, if you run this exercise with your team, check their assumptions and yours about what Mad and Sad mean – you may be met with blank looks – and make sure you all have the same understanding, to turn a good exercise into a great one.

How big is it? A guide to agile estimation

I gave a talk at LondonWeb a couple of weeks ago. It was a lot of fun. I tried to broadcast it with Google Hangout on Air, but the hotel wi-fi was rubbish. Luckily, @nathanlon recorded it too, so here it is:

and the slides:

Slides for ‘How big is it? A guide to agile estimation’

How to do appraisals: asking the team

Mini Marshal by mpeterke at flickr
Feedback comes from these - image by mpeterke

Part of my new role is to review and appraise the team. Given that there are a lot of them and I can’t spend enough time with each of them (nor would I want to) to be able to do a good review, I figured I’d have them do 360 reviews. There are multiple ways to do this, which I outline below, but I didn’t pick one for my team; I let them vote (I also let them post any new review methods they knew of so they could vote for those too) – can you guess which one they voted for?

Traditional

The traditional approach is an anonymous review: I pick several people to review the employee, they craft and submit reviews and then I deliver this feedback to the reviewee.

This sucks on multiple levels:

  • Suppose the reviewee disagrees with some of the feedback, how can they offer a rebuttal? To me? How does that help?
  • What happens if they don’t understand the context of the feedback?
  • What happens if, in the name of keeping the feedback anonymous, I remix the feedback and lose the actual message (but I don’t know I’ve done that)?
  • We work in tight scrum teams, which means the BEST people to offer feedback are the other people on the team (also the product owner and potentially stakeholders, but we’ll come to that). It also means that, after the review, the employee goes back to their desk, possibly seething or feeling dejected, put upon or just miserable because of the above, and they KNOW that someone on their team gave them shitty feedback.
Sucks, right?

This doesn’t suck because:

  • The shy, or sociopathic, might feel they can be more honest if they don’t have to do it face to face.
  • The feedback won’t be bland.
  • There is no fear of retribution (unless they find out who it was).

Not anonymous

So, another option is to have the reviewee choose the people they want to review them. No. This also sucks:

  • They might pick people who don’t really have much to do with them and who would offer good, although bland, feedback.
  • Again, they have no chance of rebuttal or dialogue there and then to discuss context of the feedback unless …
  • …they go back to their desk knowing that one of their team gave them shitty feedback and now aren’t sure how to broach the subject.

This doesn’t suck because:

  • Same as the above.

Team 360

The whole team goes to the pub (or cafe, whatever) and they take it in turns to offer feedback on each other, starting with me as a warm-up so they don’t feel shy when it’s their turn (this is great for me, I’ll get LOADS of feedback). You go round the table one at a time and everyone on the team feeds back to me – positive and negative – and I write it all down. Then someone else volunteers, and so on until everyone has had a go.

Developers drinking...

You need trust in the team for this and a good bond. This isn’t going to work with a new (or a ‘forming’) team and I’d advise something different (not the above, maybe just one-to-one coaching until the team are up to cruising altitude). But for established teams, or those stuck in a retrospective rut, I think this is a great idea.

I’ve run one trial of this method before putting it out to vote and the team had some positive feedback on the process (and each other!). It’s tough to do, but giving and receiving feedback is always tough, and the idea of doing it face-to-face with people you work with every day is challenging, but you should do it. Nut up and prove to your peers that you’re a grown up and can and need to learn something about yourself that you didn’t know before. This is about improving yourself in ways you didn’t know you could improve and making sure you’re not annoying your team. 😉

This idea isn’t new (although I wish I’d thought of it); I originally read it in Management 3.0 by Jurgen Appelo (the book is good, as are his talks, but his slides suck – well, he does draw them with MS Paint…)

What about … ?

Well, I mentioned product owners and stakeholders above. I’m undecided yet (but I’ll probably let the team decide) on whether to include product owners in a team 360. They do spend a lot of time with the team and can probably offer some good feedback – it does depend on the relationship with the PO. Even though we don’t foster the feeling (and Affiliate Window isn’t alone in this, I’m sure), there’s a little ‘them’ and ‘us’ between the developers and the product team – but maybe this is a good start in breaking down that status quo.

Also, having stakeholders in the team 360 would be pointless – they don’t have day-to-day dealings with the team; mostly it’s just input and output with the odd nudge in between – but we still need their input. This is what the release retrospective is for, but I’ll cover that in another post!

Estimating stories quickly and efficiently with ‘The Rules’

An Old Timer by hiro008
ticktickticktickBING - An Old Timer by hiro008 on Flickr

Estimating a backlog should be easy, especially if your Product Owner has looked after it, knows how to write good stories that mean something to the developers and the business, and is able to prioritise based on business value (or customer delight!). However, estimation meetings – planning poker, planning two or whatever you call them – can often be painful events that descend into chaos, anarchy and heated debate. While these things are all fun, estimation should be fast and simple; after all, applying arbitrary numbers, whose only meaning is relative size, to amorphous items of work can’t be rocket science, so why would you want to spend much time on it?

Trouble is, developers and engineers are paid to solve problems; that’s what they love to do, so they begin the moment the problem is presented! This is to be applauded, but it doesn’t really lend itself to what should be fast conversations about stories!

We’ve recently been coarse-estimating the next release’s worth of stories for each of our products; the backlogs for these products contain between eight and 38 stories, depending on the goal. When we started estimating these, it was clear that it was going to be painful, so I created ‘The Rules’ (to be clear, they’re guidelines – remember the Shu Ha Ri!):

  1. Reset the countdown timer to five minutes.
  2. Product Owner reads story and acceptance criteria.
  3. Team ask questions to clarify their understanding of the feature. No technical discussion.
  4. When no more questions, the team estimates.
  5. If estimates converge or there is consensus, GOTO 1 and start a new story.
  6. If no consensus, start more discussion. Technical discussion is OK here.
  7. When the conversation dries up, or the time ends, whichever is first, the team estimates again.
  8. If estimates converge or there is consensus, GOTO 1 and start a new story.
  9. If a consensus isn’t reached, reset the timer for another five minutes.
  10. When the conversation dries up, or the time ends, whichever is first, the team estimates again.
  11. If a consensus still hasn’t been reached after 10 minutes, put a question mark next to the story and GOTO 1 and start a new story.
  12. Optionally: create a spike story to discover more information in order to estimate the difficult story.

This means that the team will never take more than 10 minutes to estimate a story. Usually, I’ve found that the first estimate, right after the PO reads the story and the team clarify their understanding, is enough, and we rarely need the second five-minute timebox.

Remember, these are just estimates; they can be revised later if necessary. Really, the important part of this meeting is the conversation to clarify the requirements and, thus, ensure that business value is met.

Should I start with scrum, or kanban?

or, Should I transition from scrum to kanban?

Tellurian Post Apocalypse Roleplay
Adventure lies EVERYWHERE! Image by Daniel Voyager

We’ve been doing scrum at Affiliate Window for a couple of years now. We go through the inspect and adapt cycle regularly and we’ve made few, if any, changes to our process, only our engineering practices. Recently, however, we’ve been working on a couple of projects which have required some groundwork and lots of backend work. This kind of work doesn’t really lend itself to the ideal of having something to show the stakeholders at the end of every iteration. We still have the review, describe what we’ve done and then take a look at the backlog with the stakeholders to decide what the priorities are for the next iteration.

This has raised a question from one of the teams about moving over to kanban and it got me thinking about what organisations can do to start on an agile journey, should they start with scrum, or kanban? What about established teams? Can they transition from scrum to kanban, or vice versa? Why should they? Should they even do it?

Now, there’s no reason why you can’t continue doing scrum (which is what we’ve decided to do), even with backend-heavy work. After all, it’s simply a framework designed to offer constraints in order to create fast feedback (amongst other benefits). But kanban is attractive in that it allows you to decide when you demo/review, and so gives you the benefit of time in order to actually have something tangible to show the stakeholders. Kanban is also attractive for those coming to agile afresh. It allows you to start integrating some agile practices and get a view of your issues (in order to solve them) without changing your hierarchy or existing process.

For kanban to really work, you need to make sure you’re setting up a pull system that describes your entire process, from inception to delivery – not just a column-based to-do board (which, unfortunately, is a lot of what you see described as ‘kanban’). To do kanban well, and to retain the holistic view of your work that the product team needs to make educated guesses about delivery and suchlike, the team needs a certain maturity. This means: mapping out the different states in your system, working out WIP limits and, importantly, deriving cycle time (how long it takes an item on the board to get from left to right) in order to estimate delivery/scope/cost etc. Herein lies the rub (I’ve always wanted to put that in a blogpost!).
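The post doesn’t spell out the cycle-time arithmetic, but one common way to turn it into a forecast is Little’s Law (throughput = WIP / cycle time). A rough sketch, with every number invented:

    # Back-of-the-envelope kanban forecast using Little's Law (figures hypothetical).
    wip_limit = 6               # cards allowed in progress at once
    avg_cycle_time_days = 4     # average left-to-right time per card, measured from the board
    remaining_items = 45        # cards left in the backlog

    throughput_per_day = wip_limit / avg_cycle_time_days   # items finished per day, on average
    days_to_deliver = remaining_items / throughput_per_day

    print(f"Roughly {days_to_deliver:.0f} working days to clear the remaining backlog")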

If you’re not able to use scrum to help you deliver backend-heavy, large work items, then it’s unlikely that kanban can help either. The constraints of scrum help you stay on target, be accountable for what you have or haven’t done, and make tweaks and adjustments to perform better in the coming iterations. Once you’ve nailed scrum and you have a product backlog with well-defined stories that are easy to estimate, then you’re ready to mature into a really valuable, efficient and data-driven kanban implementation. Eventually, when you’re able to size your stories all the same, you can do away with estimating, as your cycle time will remain constant, allowing you to set regular reviews and inspect/adapt cycles. You’re one step closer to programmer anarchy!

So, should you start with scrum, or kanban? Kanban allows you to start your agile journey without upsetting the status quo; you don’t change any names, or add or remove roles. You simply illustrate your process on a big visible board and become accountable, as a team and a company, for the work you do. You can implement any agile practice you like (standups, backlogs, user stories, etc.) and see how they work for you without rocking the boat. Scrum, on the other hand, requires a reboot and the installation of a new process, one with constraints *built in*, but the benefits are numerous and valuable right off the bat.

Should you start with scrum, or kanban? The answer, I think, is ‘it depends’. If your organisation, including those at the top, is comfortable with scrapping your current process and starting something new, even with its inevitable lead time, then go for it. Use scrum and the constraints will help you become agile. If your organisation is reluctant to start, or you have buy-in from teams but not the business, then start with kanban. It’s lightweight and non-intrusive and will still allow you to raise your impediments and issues (but solving them often requires bold and honest action, beware!).

Should teams transition from scrum to kanban? Probably, but make sure it is for the right reasons and that your team is mature enough to make the most of it. If you’re not sure, then try transitioning via scrum-ban; it’ll help you see the benefits and enable you to get better at the things which make kanban most useful, without changing more than you are comfortable with.

I’d be interested to know how others started their agile journey, or whether many transition from scrum to kanban (or the other way). Drop me a mail, tweet or leave a note in the comments!

Scrummaster? We don’t need no steenking Scrummaster!

Sad Face :(
A Sad Face by Emmaline

I discovered something strange today. I recently left one development team to go and work for another (that’s for a separate blog post altogether) and I was chatting to some old team members today who had just had their Sprint Review and Retrospective. They mentioned that one of the things that had come out of the Retrospective was that they aren’t going to have a Scrum Master for this sprint or, it seems, any sprint.

“Huzzah!” I hear you cheer! “They must be self-managing!”

No, no they are not. It’s far, far worse than that. They’ve decided they don’t need a scrum master because, in their experience, the scrum master is useless. With no power to help the team make decisions, and one team member describing the Scrum Master as “a puppet of the business”, it’s quite clear that scrum, at this company, is broken.

It’s a sad day for me. Ever since I heard about scrum, about two and a half years ago, from a colleague (@garethholt), I’ve realised its potential. I’ve tried hard, arguing the point with colleagues, management and everyone who’ll listen. We’ve tried different length sprints, staggered sprints, digital backlogs, analogue backlogs, all sorts. We inspected and adapted, and now it’s all for nowt. At every turn, the business said “No, I’m not confident enough to try scrum properly” or, more accurately, “I don’t want to surface the organisational dysfunction”. The team realises that the business cannot let go of the reins.

It’s all about trust.

They all seem as though the wind has been taken out of their sails, and it’s sad and disappointing. Especially as they have an ‘Agile Manager’ and a ‘Product Owner’ who is also a director. Even the people employed to support scrum don’t seem to have confidence in it. Now THAT is a problem.

I might go home and cry.