Link dump! Agile is the new waterfall, from chaos to managed chaos, best practice and more!

Agile Is The New Waterfall
“Agile promised us engineer-driven development” is the first of many fallacious claims that the CTO of joinjune.com makes in this rant, which woefully misunderstands what agile is. He makes the odd good point, but then messes up the punch line (I especially like the bit about estimating). I suggest you read it with an open mind, otherwise you risk a hernia.

From chaos to managed chaos
Vasco Duarte tweeted this during the Coding Serbia conference. It’s one team’s journey from estimates to #noEstimates. An interesting read.

Four Ways to Kill Best Practice
I’ve blogged about best practice before, but Megan Louw from ThoughtWorks offers some practical advice on getting rid of the idea from your business entirely.

Hashtagagile
Some cheeky self-promotion of the new agile Slack channel I recently created. We’ve got a massive 23 people online now – come and join the conversation.

How Google Thinks About Hiring, Management And Culture
I really enjoyed Work Rules! by Laszlo Bock, who used to be Google’s SVP of People Operations – there are some excellent ideas and stories in there and plenty of SCIENCE! This has an interview with the man and some key points from what he was saying – worth a listen.

BDD: Starting With Quality

By Shannon Kringen @ flickr
Quality with a star and stuff

Quality isn’t defined by a lack of defects or bugs. You wouldn’t take a cup of barista coffee and exclaim, “This coffee is quality because there’s nothing wrong with it.” (No bugs in my coffee.) You can get a cup of coffee from a petrol station that doesn’t have any problems with it. It’ll be hot, wet and coffee-flavoured – there are no defects in this coffee. So where does this idea of the “quality” of something come from? It’s subjective. A quality cup of coffee to me may not be a quality cup of coffee to you. If you favour a black americano, your definition of quality is not going to involve a milky heart drawn into the top of it, which is how I like my coffee.

When we talk about the quality of a feature, the fact that it doesn’t have any defects is an implicit part of that quality, but it’s not where quality starts or stops. What makes the quality of a feature is whether or not it does what it’s supposed to do and whether it provides the amount of value to the user that they expect. Again, this is different depending on your context: what your application does, who it does it for and why.

Quality Assurance (QA) plays a fairly key role in any software development team. I know some schools of thought suggest that there shouldn’t be a QA role, and while this is probably the subject of a separate blog post, I feel that this is wrong. We have a QA in the team, just the same as we have a designer in the team. It’s a specialist role that requires certain skills I don’t expect engineers to necessarily have.

That said, I’ve always been troubled by the way the QA role is executed in a team. Let’s suppose we’ve got a scrum team that performs well. They commit to a given number of independent stories and work on them sequentially, finishing the first story before starting the second and so on. Once a feature has been completed, the work of the QA starts in earnest (until that point, the QA will put together a test execution plan and a strategy for dealing with the tests during the sprint). They begin exploratory testing and creating or updating automated tests. This is all well and good and will ensure that the feature meets the minimum, implied level of quality. In most cases, it’s enough that it’s free of defects.

For me, this is where the problem lies. But how do we solve the problem?

We realised that we never really discussed what quality meant for a particular story or sprint. We had made assumptions about quality based on the content of the story and the acceptance criteria. As long as the story met those acceptance criteria and didn’t have any defects, we assumed we were done. In reality, we weren’t thinking about what constitutes quality, just what constitutes the feature.

So we decided to start with quality. It made sense to talk about what we thought quality meant to any particular story before we talked about anything else. At the beginning of planning a story in sprint planning, we would spend some time discussing what quality meant to this feature. Using the example of a login screen, the story might be:

  • As a user,
  • I need to log in to the site,
  • to access all the features.

Before we chose to start with quality, we might discuss what the feature looked like, or we may already have a design for it. But then we’d just jump straight into the technical planning: how do we implement it, what code do we need, database schemas – that kind of thing. Instead, now we talk about the feature from a user’s point of view:

  • What happens if they get their password wrong?
  • How do they reset their password?
  • How long should the password be? Should it require special characters?
  • What happens if a user doesn’t have an account, how should we direct them to sign up?
  • What kind of error messages do we want to show?
  • Etc.

This opened up a whole new discovery phase. Product Owners can’t think of everything when writing stories, and this discovery let us offer our insight into how the feature works and ask questions about how it should work – questions often based on technical knowledge of the platform that the Product Owner may not have. We began by adding these new requirements to the conditions of satisfaction, but they soon became long and arduous to check. So we looked for a better solution than acceptance criteria.

The solution we chose was a new tool. BDD (Behaviour Driven Development) is a method of functional testing which allows you to describe the functionality of a feature in a “scenario” file in plain English:

  • Given I am on the login page
    • When I enter ‘mikepearce’ into the username field
    • And I enter ‘butteryballs’ into the password field
    • And I click “login”
    • Then I should see my dashboard.

We can then use a BDD framework to run these scenarios against our web application. We use Behat as a framework and it works really well (apart from our problem with data fixtures…). It pretends to be a browser (and will drive Firefox if it needs to work with JavaScript) and does what your user would do with your system. This gives us automated functional testing too. Awesome.
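To make this concrete, the scenarios above would normally live together in a single feature file that Behat picks up and runs. Here’s a sketch of what ours might look like – the exact step phrasing, field names and messages are illustrative, since what Behat actually understands depends on the step definitions you write:

```gherkin
Feature: Login
  As a user,
  I need to log in to the site,
  to access all the features.

  Scenario: Successful login
    Given I am on the login page
    When I enter 'mikepearce' into the username field
    And I enter 'butteryballs' into the password field
    And I click "login"
    Then I should see my dashboard

  Scenario: Wrong password
    Given I am on the login page
    When I enter 'mikepearce' into the username field
    And I enter 'wrongpassword' into the password field
    And I click "login"
    Then I should see the message 'Sorry, your password is wrong'
    And I should see the link 'Did you forget your password?'
```

Each step maps to a method in a context class, so the same “Given I am on the login page” line is written once and reused across every scenario that needs it.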

So, when we’re doing this extra discovery step, we record our findings as scenario steps like these, instead of acceptance criteria:

  • Given I am on the login page
    • When I enter ‘mikepearce’ into the username field
    • And I enter ‘wrongpassword’ into the password field
    • Then I should see the message ‘Sorry, your password is wrong’
    • And I should see the link ‘Did you forget your password?’

We slowly build up a specification file for this feature, mostly centred on the “happy path”, and add edge cases or problem scenarios as we think of them. It’s important to note that we don’t expect to think of EVERYTHING in this session, as we time-box it to ten minutes and expect other features or ideas to emerge during the sprint.

Once we’ve finished, we’ve got a specification file that we can run against the web app with Behat. The first time we run it, it will fail, because the feature isn’t there yet – but this is good! This is Test Driven Development for the masses! As the team builds the feature and keeps running the Behat tests against it, the suite slowly becomes more and more green. If new things emerge during the sprint, we add extra steps to the scenario file. By the end of the sprint, all the Behat tests are green and we can be confident that, not only is the feature defect-free, but it also does what the user expects and provides them value.

So, now we have a way of assuring that our software has quality. Not only do we have a slick set of automated functional tests, but we’ve also added a low-friction, low effort step of discovery that allows us to really understand AND define what quality means to us in the context of this feature for our product.

I’d encourage you to try this in your teams. Actually having Behat (or any other BDD framework) isn’t really a requirement to get started. You can start by just writing your scenario file as the first step of your team planning a story and storing it somewhere for future reference. The value is in the discussion you have to define the quality. The artefact you create is useful afterwards for checking that you meet those requirements. The added benefit is that it’s written in a language that anyone can read – from your team and product owner, to stakeholders and anyone else in the business who is interested to learn more about your product.

(The benefits of using BDD go beyond what I’ve described but are outside the scope of this article; there are plenty of articles on the web to suit your style, language and infrastructure, and I’d encourage you to learn more about it. Start with this presentation from Gojko Adzic and this one from Dan North.)

Mad, Glad, Sad retrospectives

I held my first retrospective for a while recently (that’s not to say the team hasn’t been retrospecting, just that others have been doing it in my stead!) and the team wanted to do the Mad, Glad, Sad exercise.

Shockingly, I didn’t know it. Or at least, I didn’t think I did.

The exercise starts with everyone writing, on three different coloured post-its, the things that made them mad, glad or sad during the sprint. We then stuck them all up on the whiteboard under an appropriate picture.

We all studied it for a while, but I couldn’t really work out what the difference was between mad and sad. The idea of a retrospective is that you inspect and adapt: look at what you did, how you did it, and change things if they need improving. Mad and sad should both be things that affected us negatively and that we want to change in future sprints.

But what is the difference? I asked the team. They didn’t know either.

So we set about deciding what each one meant. In many retrospectives in the past, we’ve realised that there are things that went wrong, didn’t work or just plain sucked that we could fix, and there are things that can be classed as ‘Shit Happens’: stuff that, while we might expend some effort in fixing it, is unlikely to recur any time soon, so we draw a line under it and move on – there are always more pressing things to deal with.

So, using this idea, we renamed Sad to “Get stuff off my chest” and Mad to “I’m not going to take it ANYMORE”.

This means we can highlight things that annoyed, frustrated or saddened us but that, realistically speaking, aren’t worth spending much effort on, as well as the stuff that REALLY got under our skin and that we have to deal with.

Once we’d renamed them, we re-sorted the post-its under the two columns and found that things we might have treated as real problems, and spent time working on, were no more than people wanting a rant, while real problems might have been overlooked because they hadn’t been given enough weight – perhaps the team member felt it was just something bothering them and didn’t want to make an issue of it.

We worked through all three columns and, during discussion, decided to move some from Sad to Mad and vice versa as we discussed the pros and cons of each. In the end, we had a good, solid list of things that ‘we’re not going to take ANYMORE’ which turned into a list of definite actions.

So, if you run this exercise with your team, check their and your assumptions about what Mad and Sad mean – you may be met with blank looks – and make sure you all have the same understanding, to turn a good exercise into a great one.

Have you read the manifesto?

I sometimes wonder whether people have actually read the Agile manifesto. It is made up of four core values and 12 principles; I won’t paste them all here – you can find them at the above link. Maybe the principles are hard to find, but you can read them here.

That’s all there is to it. Agile isn’t a “framework” or a “methodology”, if anything, it’s common sense. Common sense described in four values and 12 principles.

Agile is not a synonym for scrum, or kanban, or anything else. The term “Agile” shouldn’t be used as a description of an all-encompassing set of practices that it doesn’t describe. For example, what does “Agile adoption” even mean in this context? Does it mean you’ve finally decided to use common sense? No, it means a company is adopting a framework or methodology that is inspired by the Agile manifesto’s values and principles, such as Scrum.

Agile is a mindset, a way of being. So please, stop posting things about how “Agile has failed”, or “Agile does damage” unless you actually mean that common sense has failed, or common sense has caused damage. Which is unlikely.

Remember, you don’t DO agile, you ARE agile, it’s a way of being, not something you do.

How big is it? A guide to agile estimation

I gave a talk at LondonWeb a couple of weeks ago. It was a lot of fun. I tried to broadcast it with Google Hangout on Air, but the hotel wi-fi was rubbish. Luckily, @nathanlon recorded it too, so here it is:

and the slides:

Slides for ‘How big is it? A guide to agile estimation’

You don’t really want a faster horse!

French Officers watering horses
Horses, going slowly. Photo by The Library of Congress - http://flic.kr/p/a88fYb

Henry Ford is quoted as saying, “If I’d asked my customers what they wanted, they would have asked for a faster horse!” While this might be true, it’s a short-sighted view of product management.

Often, our customers will ask us for something they want, but they’ll frame it in the context of something they already have. This seems to them to be the right thing to do. It’s helping us out as the people who build their products, right? Kind of. If we were to take the view that customers always know what they want, then I suppose so. However, our customers will endeavour to give us a solution to the problem they have, instead of being clear about their problem and letting the problem solvers have at it instead.

If your customer says, “I want to be able to send an email to all my users informing them of X, Y and Z”, you’d think that what they wanted was to send an email to customers about X, Y and Z. But is that what they really want?

If you probe a little deeper, by asking open questions, you might find a different answer:

“What is X, Y and Z?”, you ask.
“Well, it’s the new promotions we offer. We want to tell our users about them as they might not find them in the promotion list,” replies your customer.
“How does the promotion list currently show them?” you probe.
“It’s ordered with the newest on top, but if our customers aren’t looking for promotions, they might miss them.”
“So, you need a way of highlighting your newest promotions?”
“Yeah, I guess so. But we’d like to be able to send them emails about other things as well, like changes to our terms, or new features.”
“OK, so am I right in saying you need some way of informing your customers of new features, promotions and other pertinent information?”
“Yes!”

Now you have the real problem, which turns out to have little to do with email and there are many ways to solve this kind of problem. It’s now a case of discussing these ideas further with the customer and the team in order to find the right solution … which might not be a complex email system.

So, if Henry Ford had asked his customers: “What do you want?” and they said “A faster horse!”, he might have asked, “What do you need a faster horse for?” and the answer might well have been, “Duhh! To go faster!” Which is the real requirement. One that he fulfilled quite well.

Originally published on Technorati.

The Foundations of a High Performing Software Development Team

Get Busy by Dusk Photography
Ants are high performing. Yes they are. Can you lift 100 times your own body weight? No. Get Busy by Dusk Photography

(Article first published as The Foundations of a High Performing Software Development Team on Technorati.)

Building high-performing teams requires a foundation of core values in order to channel the traits and skills needed to be successful. Sounds like marketing blurb, right?

Let’s break it down into manageable chunks.

What is a high performing team? The answer, I suspect, will differ depending on who you ask, but ask a team of software engineers or developers and they’ll tell you it’s a team that can do anything, do it quickly and do it right, while maintaining rhythm and a steady growth of both the team and its individuals. The question is, how do you build this team? Firstly, you need the right people. Hand-pick your teams, or don’t – it’s up to you. Great success can be had with teams of star players as well as teams of volunteers. After that, it’s about coaching the team to performance, and that needs to start with core values.

These values should contain at least the following: respect, trust, openness, focus and commitment. If you get your team to help craft these values, they may come up with a few more. Make sure the team understands, or comes to an agreement on, what each of these means (focus is a good one – what does that mean? Can you have focus if you’re having fun?).

With the foundation in place, the skills and traits build on top of it. These will differ depending on your team too, but some traits will always be there: the team will be able to self-organise, they’ll be totally committed, they’ll have an unshakable belief in themselves and what they can do, they’ll be responsible and own their own problems, and they’ll be motivated and engaged.

With the foundations in place and the traits emerging, a high performing team will follow: they’ll get stuff done – the right stuff – and they’ll do it with pride, skill and a devotion to quality.

‘Get Busy’ Photo by dusk-photography