Interviews

I prefer interviews that are a conversation as opposed to me asking questions and listening to answers. I want the candidate to engage with me like they would a colleague – it shows they’re taking the process seriously and are interviewing me just as much as I am interviewing them. I love it when I leave an interview thinking “I really want them to work here, I hope we impressed them enough!”.

What I want to know about a candidate is what experiences they have had and how they dealt with them. I want to hear about real-world examples. Questions will usually take the form of “Tell me about a time that X, what did you do?”

“X” could be:

  • you had a difficult team member.
  • you had a ridiculous deadline.
  • you had some tough politics.
  • you worked with remote stakeholders.
  • things really went bad.
  • things went really well for you and the team.
  • etc…

Usually these will be a conversation rather than a question/answer session, but still based on the candidate’s actual experience as opposed to “What would you do if?”. However, it’s OK to say “Actually, I’ve never had to deal with that, but if I did, I would …”

I also want to know about the actual projects themselves, as I’ll be trying to get a feel for team/project size and the position of influence the candidate felt they had within the team. The language people use is important – are they talking about they/them or we/us?

I’m looking for PM style – are they command and control, or servant leader? Or somewhere in between? The language they use is as important here as the actions they took. Do they use people’s names from previous projects, or just “the developer”? This is telling, although not conclusive.

Finally, I’m looking for evidence of autonomy and mastery. I want the successful candidate to be someone I can point at a problem and they’ll get on and deal with it with little interaction from me.

Confidential, not anonymous. Performance, not pay rise

(First posted on Medium)

Peer performance reviews are broken, but if you must have them, read on.

Traditionally, peer reviews are anonymous. Reviewers are asked a bunch of questions and possibly asked to give a grading of a peer or colleague. I’ve never seen this work particularly well anywhere I’ve worked, and there are countless articles decrying their use. There are two fundamental reasons, I believe, that cause them to suck.

  • The purpose is unclear: why am I reviewing this person? How do I know my comments will be taken seriously and in the context I wrote them in?
  • Mistrust: how do I know the process is fair? Are my reviewers spending as much time as me? Do they know me enough to give a balanced review? Will this mean I get less of a pay rise? Is it worth my time?

These lead to two secondary problems:

  • Procrastination: everyone leaves doing their reviews until the last minute, which means they’re rushed, low quality and bland.
  • Closure: what happens to my comments? How do I know they made a difference? I am apathetic about this.

I dislike the anonymity of these kinds of reviews chiefly because they do their part in fostering a culture that doesn’t value transparency. They make secrecy seem like a behaviour that’s sanctioned by the company, and secrecy shouldn’t be sanctioned. We demand openness, transparency and honesty, and anonymity inhibits these behaviours during one of your most crucial activities.


What can we do about it?

Performance, not pay rise

This one is easy(ish).

Dan Pink says the three things people need for high performance are autonomy, mastery and purpose. Of the three, purpose is the important one in this context. Give your reviewers a reason to complete a review. The reasons, of course, are up to you, but what is key is that, to get the most out of them, they should be unhitched from pay reviews. Peer and performance reviews are about helping the person being reviewed to improve in some way, either professionally (usually this is the case) or personally. Linking them to some kind of increase (or decrease) in remuneration will kill the value of these reviews faster than video killed the radio star.

Do your peer/performance review cycle at a different cadence to pay reviews
It’s easiest to do them all at once, I know, but staggering them (even by a couple of weeks) will be useful in this context. In Work Rules!, Laszlo Bock states that Google holds its pay review sessions a month after its performance reviews. This is enough time to decouple them in the minds of Google employees. He says that keeping the activities separate in everyone’s calendar reinforces the idea that they are separate.

Make it obvious what they are for
When you send out whatever missive you choose to kick off a review cycle, make its purpose absolutely explicit and keep repeating this. In The Advantage, Patrick Lencioni says that, to get the message of what is important across to an entire company effectively, you need to repeat it and repeat it and not stop, even if you think everyone gets it.

Follow up and close the feedback loop
This one is important and almost universally missed out.

As a conscientious reviewer, you spent time and effort on writing a good, balanced review of your peers, giving examples of behaviour and congratulations on a job well done. You hit the “send” button with a sense of a job well done and then … nothing.

What we’re missing is feedback for the reviewer. The whole process is (rightly) geared around the person being reviewed — we look to change behaviour through decent feedback to the person being reviewed and give them concrete reasons to improve, or continue down a certain path. There’s really no good reason why we can’t follow the same logic for the person doing the reviewing. Otherwise, they will only ever feel like their feedback is dropped into a black hole.

For example, let’s say that Bob was being reviewed, and he was informed that, while his work on the Acme project was excellent, his timekeeping could be improved so as not to keep his colleagues waiting around at the start of meetings. Bob discusses this with his manager and agrees that he’ll make sure he ends meetings five minutes early to be on time to the next one.

After the review process with his manager, Bob will pen a short email that is sent to his reviewers:

Dear Colleagues,

Many thanks for taking the time to give such honest and balanced feedback, I’m glad that my work on the Acme project was appreciated, I worked hard to make it a success.

I’m sorry that my poor timekeeping is causing a problem for some of you, I wasn’t aware it was having such an impact — I’m in too many meetings! I’ve committed to leaving meetings five minutes early to make sure I’m on time — please bear with me while I do some rescheduling, as it may take a week or two to get into the new routine.

Regards,

Bob

Without much more effort, Bob has closed the loop: for himself, as a means to thank his reviewers and apologise (if necessary), and for his reviewers, who can see their feedback was taken seriously and acted upon.

Next time they’re asked to complete a review for someone, they’ll remember the feedback loop was closed and will try to provide good, constructive feedback that can be acted upon.


Confidential, not anonymous

Making reviews and reviewers public leaves many people quaking in their boots. Complete transparency is not for the faint of heart (but it’s somewhere you should aspire to) and, consequently, it’s not something I would encourage jumping straight into for anyone with an already established process. What I would encourage, though, is moving from anonymous to confidential.

The difference is important on many levels. But what does it mean?

Anonymous means that nobody knows the identity of the reviewer, neither the manager nor the person being reviewed. Confidential, on the other hand, requires that the reviewer’s name is known, but only to the manager of the person being reviewed. The benefits of a confidential review system outweigh any potential risk that may come with one.

For the reviewer
Removing the comforting blanket of anonymity means that anything written down must now be carefully considered. If I know that the manager of the person I am reviewing will know what I’ve written, I can’t provide useless waffle, bland platitudes, backbiting or simple bullshit. The fact that my feedback can be sense-checked means my feedback will be more considered and, hopefully, of a higher quality, which will improve the process end-to-end. I also understand that, should my feedback require clarification, I’ll be asked, directly, to provide it. So, I feel some measure of shared responsibility for resolving any issues that I bring up.

For the manager
In the past, I’ve seen feedback that is useless. Sometimes it’s personal feedback (one employee has a personal problem with another); other times, it may not have an example accompanying it; on rare occasions, the feedback may border on gross misconduct, or touch on a sensitive problem that cannot be dealt with in a normal peer performance review process. When the feedback is anonymous, it stops there. There’s not much I can do about it and adding it to the performance review brings little value.

If, as a manager, I were to know the author of the feedback for one of my employees, I could do something about it. I can understand both sides of the issue and, when it’s not anonymous, I can take remedial action that may otherwise be impossible. For example, a personal problem could be fixed by encouraging the parties to discuss it — “Pete is frequently short with me in our Thursday afternoon meetings and it bothers me, have I done something to upset him?” — Pete has a call scheduled with the unreliable service vendor before the Thursday meeting and he is often arguing with them on the phone (and only I know this). I can encourage Pete to reschedule his vendor call and then spend some time with the feedback author to explain the situation and reassure them that they’ve done nothing to upset him. Of course, I would know both sides and can discuss this potential outcome with the author of the feedback and see if they’re happy to sit with Pete to discuss it — this would bring the issue to closure much more quickly than me saying “Pete, you’re grumpy in the Thursday meeting and someone thinks you’re upset with them”.

I can find clarity on any feedback that I don’t understand (“Sarah didn’t do a great job on that work for Bobs Books INC” — “What do you mean about Sarah? What was the problem?”).

Finally, I can deal with much deeper problems sooner than I could with anonymous feedback. When feedback is anonymous, the remedial action is broad; when it is confidential, I can seek clarity from the author and the remedial action can be precise.

Obviously, any outcome to a more serious problem would be discussed with the author beforehand, to ensure they are comfortable with the process and that confidentiality remains.

For the person being reviewed
If I know that my reviewers are not hiding behind a shroud of anonymity, I know that any feedback will be considered and of a higher quality. I also know that, because my manager knows who the feedback is from, I can assume it will be fair and balanced and that my manager will do what is required to make it fair and balanced if it isn’t. In short, with it being confidential and not anonymous, I know it will be fair and I know it will be meaningful in helping me to improve myself.


So…

Most of us don’t have the luxury of running HR or People Ops for the companies we work for and, for the most part, our demands for a better system of performance reviews (or NO performance reviews) will fall on hard-of-hearing ears. However, small changes like the ones I propose will move us closer to a better, fairer way of reviewing our staff and actually make sure it is a meaningful process that provides value for everyone involved.

Common vs. best practise

I’ve heard the term “best practise” a lot over the years. It’s always fascinated me as a quaintly ridiculous notion. If you’re not doing “best practice” then, I guess, you’re doing “common practice”.

But why would you do that? Why would you expend effort on doing the same thing as everyone else is doing, when there is clearly a better way of doing it, the “best practise”?

Perhaps it’s simply the fact that best practice slowly becomes common practice as everyone starts to do it. You have to invent new ways to practise your craft, or trade, in order to be better than those slogging along at the common pace.

The trouble is, if you start looking at “best practice” as a way to improve on your current “common practice”, you’re just exacerbating the problem by turning “best” into “common” quicker.

What can you do about it? Well, the first thing is to quit worrying about what best practice is. Focus on what is best in the context of what you do and not in the context of your industry. The next best thing for you to do probably isn’t best practice and it may even fly in the face of best practice, but if it helps you do better, reach a wider audience or climb higher, then who cares whether it’s best or not?

Stop following trends and start making your own path.

Annual/Half-yearly/Quarterly Performance reviews: if you have to do them, do them well.

I have mixed feelings about performance reviews. On one hand, I think they’re not a useful way of appraising an employee’s performance; on the other hand, I think they’re a useful way of appraising an employee’s performance.

It’s all about context.

If you’re in a company that requires you to do them, then do them – it’s part of the process in which the company is entrenched and, unless you are way up the food chain, all you’ll be doing is causing trouble for people by not doing them. And if you’re going to do them, do them well.

Don’t leave them until the last minute. Spend at least a couple of weeks going over your notes (you have notes about what you/your reports have done, right?), talking to peers (if you don’t have 360 reviews) and drafting and re-drafting the review. If you’re doing them for direct reports, then you damn well owe them this time and effort; a large part of their career is in your hands, and how else will they grow without this kind of feedback? If you’re doing it for your boss, then do the best job you can there too – this will give him or her the best opportunity of providing you the feedback you need, if you take it as seriously as you should.

Doing reviews well involves good, constructive (or destructive) feedback, using examples where available and appropriate and making sure you cover all the bases. Be honest, but supportive. Be clear and don’t waffle around the subject, especially with “How can this employee improve?” type of questions.

Finally, if any of your feedback comes as a surprise to those it is intended for, you suck as a manager – this stuff should be coming up long before performance reviews.

How to write a tech CV (that doesn’t force me to judge a book by its cover).

I read a lot of CVs. A lot.

Most of them are terrible, eye-watering piles of shite. This post is a guide on how to write a decent tech CV. I review and interview software engineers mostly, so this post will be focussed on that kind of CV, but there should be something useful for anyone wanting tips on writing a technical CV.

Formatting

First, an apology – all CVs I get are first triaged by my lizard brain, which means that I’ll make a judgement about you without even reading anything on the CV. This is unforgivable, but also unavoidable. First impressions are important, so please make sure that your CV is legible, sensibly laid out and, above all, not in Comic Sans.

There are some creative, wacky and alarming CVs out there. You’ll need yours to stand apart from a plethora of other, similarly written CVs. Spend some time thinking about what you want me to know about you and how you want to come across. I hire for culture rather than a set of specific skills, so make your personality shine through your CV.

I’m usually not the only one to read the CVs. I’ll pass it to others in my team (lead and senior devs) and, if you’re in engineering you’ll know this already, some engineers can be very cynical – so, get a good balance of great presentation and content. It’s a challenge, definitely, but I want to be able to get a good idea of who you are from your CV, as well as what you can do.

Ratings

While on the subject of formatting, some people give themselves star ratings on their CV, or marks out of five or 10. Please don’t do this. If you give yourself five out of five, or five stars, you are telling me that you are absolutely perfect at whatever you’ve rated yourself on. This is rarely true and, if it was, I’d be contacting you to work with me, not the other way around.

I will bring you in for an interview just for fun, to watch you squirm (well, I won’t, but my engineers will – they’ll enjoy watching you squirm under the intense laser beam of knowledge of bleeding-edge features that only a five star engineer would know).

What to put in your CV

So, we get to the content. Broadly speaking, I want to know what tools/languages you know and use well and what kind of problems you have solved with them, and how.

Summary/Description/Personal Statement

I’ve never seen one of these that was any good. Mostly it’s fluffy bullshit that I don’t need to know. Words like “hard working”, “trustworthy”, “mature”, “team player”, “versatile”, “creative” and “hands-on” are almost entirely useless. I will assume you are these things because, without them, it would be hard to hold down a job. They are “permission to play” virtues. This is a good summary paragraph:

With a strong focus on innovation, research and development of challenging solutions, I’m a firm believer in using the right tool for the job.

ONE list of languages/tools/technologies

Put a brief list (10 – 15 items long) of tools/technologies you’ve worked with that you know well. If you put PHP or Java on your CV, I’ll know that you can do Python, Ruby or any other similar language – I don’t need to know you wrote a Python script to sync your Flickr photos a couple of years ago. I want to know CORE language skills and tools and I can work out the rest based on what you put in the rest of your CV. If they match the ones I’m looking for, you stand a better chance of landing an interview. Huge lists make me think you’re a fraud or a liar (or both), unless your job history is long and varied and I can see evidence of a history that would allow for that many tools to be learnt in depth.

Huge lists of tools and tech are usually for the benefit of lazy recruiters who simply search CVs for keywords. If you really must put all your languages on a CV, then put them at the end so I can safely ignore them. Knowing which languages, and what kind of languages, you’re most comfortable in is more useful to me than knowing how many languages you’ve read about.

Things you’ve done, not technologies you’ve worked with

Now you’ve added the list of things you can do to your CV, you can stop worrying about stuffing each and every position with the technologies you used there. What I really want to see under each position is the problems you solved and how you solved them. I cannot stress enough how important this kind of information is. If I see this on your CV you will be 90% of the way through the door, because it usually shows that you’re agnostic to your choice of tools – the problem will dictate the tool, not the other way around.

For example, something like this will get me excited:

Company Inc – July 2010 – Jan 2011
At Company INC, we were asked to make the search functionality on the site faster and more accurate. Our search requirements were unique enough to not allow us to buy search from a 3rd party. After some research and prototyping, we settled on using Hadoop to allow distributing the keyword mining across clusters – something that we found MySQL wasn’t very good at for our needs.

This shows that you can think about a problem without worrying about tools. It also gives us something interesting to talk about in your interview.

Something like this will cause me to be sad:

Company Inc – July 2010 – Jan 2011
Hadoop, MySQL, scrum, PHP, kanban, javascript, jquery, angular, java, bananas, apples, pears, BBC, RAC, RACI. Built search engine from scratch with PHP and MySQL. Completed objectives. General maintenance of platform including x, y and z.

It’s not that this is a terrible way of representing your work at Company Inc. It just doesn’t give me any clues as to how you went about solving problems.

Accurate dates and explicit sabbaticals

Pay attention to the dates you put on your CV for the periods of time you held the positions. If positions overlap, it throws up a red flag for me – either there was some reason for the overlap, or your attention to detail sucks. I’m going to go with the latter because I don’t really know why you would have overlapping positions. The only exception to this is if you’re running your own company while working somewhere else, or you’re a contractor. If you’re a contractor, make that explicit in the position title.

If there’s a gap in your employment, it doesn’t alarm me, but it would be interesting to know what you’d done. Perhaps you’d gone to Tibet to train to be a monk, or taken three months off to learn a new skill – either way, it will help to add to the overall picture I have of you and where you might fit into my organisation.

Some prefer to see “solidity” in a job history – so, loyalty to a company, staying around for three or four years. I don’t. I’m aware that people do “tours of duty” now and loyalty is something that works both ways. That said, five jobs of two months each on a CV would ring some alarm bells (unless they were contract jobs).

URLs

There’s some contention over this; some people say you should add URLs to things and some people say you shouldn’t. To be clear, I don’t want your Facebook URL. I want your GitHub (or Bitbucket etc.) URL and your blog; those should tell me what I need to know about you. If you contribute to any software, throw that one in too.

Adding URLs to websites you’ve worked on is OK, if you tell me what you did on the site. Just a URL on its own doesn’t tell me much, but something like: http://www.somecompany.inc (worked on the search and the three homepage widgets) is useful.

Interests

Yes, I do want to see this. If you put “visiting parks and gardens, bird watching and making iced cakes” it tells me a tiny bit about you and, like the MBTI assessment, helps me understand what team you might end up in. It probably won’t be with the “play epic WoW sessions, pheasant shooting and gluten free savoury buns” guy (or it might be, depends on the rest of the team).

What you aspire to (optional)

(This is optional, because you may not have any aspirations, or any aspirations beyond getting a job at my company.) Do you want to run your own department or team? Does your future lie in management, architecture or systems? Adding this kind of detail allows me to see where you might fit and what kind of career path you may have. I need to work out where you will a) provide the most value and b) get the most value. I know that you’re interviewing us too and, if I can help you understand how a role with my company might positively affect your career, you’re more likely to want the job.

Tailoring your CV

If you get an interview, you’ll more than likely want to research the company so you can ask questions and understand what it is you’re trying for before you get there. Why not get ahead of the game and do it before you even send your CV? Spend some time researching my company, my website and its problems, the stack we use and anything else you can find, then tailor your CV to match. If you know Java, PHP, Scala and Ruby very well, but find that we do Java and PHP only, then put those first on the CV. Offer the problems you solved with PHP and Java over those with Scala and Ruby (that’s not to say you shouldn’t put the latter on your CV, only that you should favour the former).

Psychologically speaking, this works well.

Summary

Your CV is usually the first point of contact I will have with you. It needs to make a good first impression and, while I am able to sort wheat from chaff, it makes it easier for me if you’ve done the hard work for me. If I have to decode three terrible CVs only to find each candidate was useless, then by the time I get to your CV (and, in fact, you’re amazing), my threshold for decoding shit CVs is lowered and yours may just go in the bin because I’ve had enough.

I want to know what problems you’ve solved and how. I want to know what tools/technologies you’re great at. I want to understand your work history.

I want to get to know you a bit – think of it like online dating and try and woo me with your true self.

Working from home ‘more productive’ – The BBC

Working from home is beautiful.

I had a text from a friend tonight: “They’re talking about you on the BBC!”. I was momentarily excited until I got the link to the audio from this morning’s Radio 4 Today programme, talking about working from home; it’s here: http://news.bbc.co.uk/.

Apparently, 250 members of staff from a firm in China who worked from home were 12% more productive than the other 250 who worked from the office. The programme cites the study Does Working From Home Work? A Chinese Experiment (Bloom et al, 2012). While the Today programme states a 12% increase, the study says there was a 13% increase: 9.5% from more minutes worked per shift and 3.5% from an increase in the volume of calls made, due to a quieter working environment. However, once the nine-month trial was over, the company rolled out WFH to the entire company. Not everyone took the opportunity: about half of those in the randomly selected working-from-home group, and two thirds of the control group (those who worked from the office), chose to stay in the office. Interestingly, productivity went up further, and Bloom et al state that working from home as a modern working practise, combined with the employee having a choice in how they work (which harks back to Dan Pink’s Autonomy, Mastery and Purpose piece), is very beneficial to overall productivity.

The experiment wasn’t ROWE though; there was a heavy focus on the amount of hours worked. In order to get some good measurements, they required those WFH to work specific times (9-5), just like those in the office. As it was a call centre, measuring was fairly straightforward; they measured the number of calls, notifications sent, corrections etc. The team leaders still dictated when the employee had to be in work, as the experiment was only for four out of five days, with the team leader deciding on which day the worker had to come into the office. So, pretty far removed from ROWE, but still some good data on what it means to work from home.

ROWE in this context would be fairly simple to implement given the straightforward way they gathered and reported on the metrics.

However, the bottom line is: working from home is more productive … for the right kind of people (low performers generally chose to work in the office). How much the culture of the Chinese workers and their existing working practises affected this outcome gives pause for thought too. Working from home requires discipline, so this is where a focus on the results and how they can be measured is the most important thing about ROWE. If you know what you need to do and how you’re going to be measured, it’s much easier for you to do the thing, knowing that that’s all that counts.

Ready, steady, ROWE!

Today, my department begins a trial of a new culture. It’s a new way of thinking about work. Alright, it’s not *new*, but it is rare, especially in the UK. The culture can be summed up with one sentence: ‘Employees are free to do what they want, when they want, as long as they get the work done.’

There will be a whole bunch of blog posts on this subject, so consider this one a bit of a summary of all the main points.

History

I’m talking about a Results-Only Work Environment, or ROWE for short. It started with Best Buy in the States. Two employees were tasked with making things better and they began with flexible working programs, which quickly morphed into a Results-Oriented Work Environment before settling on Results-Only. They detail their journey in their book ‘Why Work Sucks and How to Fix It’. I devoured the book in one sitting, made copious notes and realised that this was what we needed.

The way business works now is outdated; why do we need to work between 9 and 5:30? Why do we need to go to one particular place to do it? It doesn’t matter if you’re a knowledge worker, or in sales, it’s the same thing: the focus should be on results, not when or where you are.

The trial

It’s not been easy getting this trial started. There are lots of questions around holiday allocation and remaining on the correct side of UK employment law and legislation. Essentially, we still need to allocate people the appropriate amount of holiday and then ensure that they remember to take it. This mostly covers the company: if someone leaves on a bad note, they can’t claim to have not taken holiday, because, in ROWE, you take holiday whenever you need it; there’s no allocation really. Because employees get to choose when, where and how they work, the rules governing the Working Time Directive don’t apply either. There’s no 48-hour waiver or whatever.

We’re starting with *almost* the exact ROWE for a three month trial, I say almost because we’ve made two concessions, which I detail in the guideposts below. After three months, if it’s still working, we’ll extend the trial. The idea is to see if vanilla ROWE as detailed in the book works in the UK. UK and US cultures are different and employment law and employee rights are also different, so we need to make sure it fits properly for us and our business. I’m 100% confident that this will work.

Freedom and culture

Ultimately, working in a ROWE is purely focussed on the results. But it gives employees absolute freedom to manage their lives the way they need to. We all have to work, there’s no way around that (unless you’re a millionaire playboy or whatever), so we should be free to fit that work into our lives as we see fit. You don’t need to take a half-day holiday to visit the doctor’s, you don’t need to phone in sick unless you’re going to be letting someone down over a meeting or conversation, and you don’t need to worry about where anyone is; you and your colleagues are available 24/7 by phone, voicemail, email, SMS, Skype, IRC, Google Hangout – lots of options!

ROWE isn’t an activity or an action, it’s a culture, and a new one too. It’s about changing the attitudes people have towards work and challenging the long-held belief that time plays an important role in measuring someone’s value (time is still relevant for deadlines and in some lines of work, but not in the web industry).

Productivity

I’m expecting to see productivity improvements too – I have no idea what they’ll be or in what form, but I’m sure they’ll be there. For one, I find I can get a lot of work done in the wee small hours, because I’m not being distracted and there’s nothing to procrastinate about when the kids and wife are in bed: I can’t play PS3 or watch a movie, so I work, and I get more done in less time because I’m so focussed on what I’m doing.

The results

Our teams use Scrum to build our software, so we have a built-in results measure. We’ll be looking at velocity for each team and the department as a whole, as well as defects, engagement and the acceptance rate of stories (the percentage of story points accepted at the end of the sprint). On top of that, all the people in my team have individual goals and objectives; a mix of skills acquisition and platform or performance goals, so I can measure individuals as well as teams.

The Guideposts

ROWE is based on a set of principles called ‘Guideposts’, which enable the change of culture to happen with a purpose. The most contentious of these are ‘Unlimited Paid Holidays’ and ‘Every Meeting is Optional’. The holiday one is easy to cover in principle – it’s irrelevant how much holiday you take, or when/where you take it, as long as the work gets done. That said, we still need to make sure that we’re adhering to UK legislation and, in order to do that, we still need to allocate and record holiday taken. It’s a small price to pay for that much freedom though!

‘Every Meeting is Optional’ is also difficult for people new to the culture to get their heads round. It doesn’t mean ‘flip a coin to decide whether or not to go to a meeting’. It does mean: find out whether you can get value from, or give value to, the meeting; find out, or define (if it’s your meeting), the outcomes; and then decide whether you need to be there in person, whether you can dial or Skype in, or whether you’re just required to give information which you can email to the organiser. As long as the work gets done and you’re meeting the goals, objectives and targets set, it’s up to you whether you attend meetings and how you attend them.

The one concession we made to this guidepost was that ALL meetings are optional, but pay particular attention to external client meetings or group-wide meetings. We’re only one department in one company doing a ROWE trial; we can’t expect others to change the way THEY work … just yet. It’s a small concession and, to be fair, will barely affect our department.

Metrics and the win condition

How do we know the trial is successful? If nothing changes. Like I said, we’re currently tracking:

  • Velocity: The average rolling department velocity over the last 4 sprints (and long term velocity, but this is less volatile and less useful)
  • Defect rates: how many defects are opened per two-week period and how long those defects stay open (as an aggregate and also by priority)
  • Acceptance rate: What percentage of story points is ‘accepted’ by the product owner at the end of the sprint.
  • Engagement: we’re using Murmur to track employee engagement with the company on many levels.

We’re also looking to measure the perception of the department from across the business.

The win condition will be if nothing changes. If none of the metrics change over the next three months, then the trial will be considered successful; the net benefits of ROWE will be over and above just the impact on those metrics. So, as long as nothing gets worse, we’ve proved ROWE as a culture in the organisation.

On top of the team metrics, we’re also setting goals and objectives on an individual level – skills acquisition or just ‘stuff that needs doing’ for the platform, our tools or whatever. So, we can watch everything that’s happening and see if it’s making a difference.
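To make the ‘if nothing changes’ check concrete, here is a minimal sketch in Python of how the velocity and acceptance rate numbers could be computed and compared against a pre-trial baseline. The sprint figures and field names below are made up for illustration; our real numbers come from our Scrum tooling (and Murmur for engagement), not from this script.

from dataclasses import dataclass

@dataclass
class Sprint:
    delivered_points: int   # story points completed in the sprint
    accepted_points: int    # story points the product owner accepted

def rolling_velocity(sprints, window=4):
    # Average delivered story points over the last `window` sprints.
    recent = sprints[-window:]
    return sum(s.delivered_points for s in recent) / len(recent)

def acceptance_rate(sprint):
    # Percentage of delivered story points accepted by the product owner.
    return 100.0 * sprint.accepted_points / sprint.delivered_points

# Hypothetical history: four pre-trial sprints followed by two trial sprints.
history = [Sprint(60, 55), Sprint(58, 58), Sprint(64, 60), Sprint(61, 59),
           Sprint(63, 61), Sprint(62, 58)]

baseline = rolling_velocity(history[:4])  # rolling velocity before the trial
current = rolling_velocity(history)       # rolling velocity including trial sprints

print(f"Baseline velocity: {baseline:.1f}, current velocity: {current:.1f}")
print(f"Latest acceptance rate: {acceptance_rate(history[-1]):.0f}%")

# The win condition is simply that nothing gets worse.
print("Win condition holding:", current >= baseline)

Defect rates and engagement would be compared the same way: take the numbers from before the trial, take the numbers during it, and check that nothing has got worse.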

Next?

Next, I sit back and wait for a sprint or two and see what’s happening. I need to keep my eye on the metrics and the individual objectives, but really, it’s business as usual … or not, it depends on how you look at it!

I’ll be blogging more as the trial progresses in the hope that others in the UK who are already in a ROWE, or those thinking about going ROWE can share their progress, problems, failures and successes.

What is best practise?

 

Phoebe practices
Practise, just that.

Wikipedia sez:

best practice is a method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark. In addition, a “best” practice can evolve to become better as improvements are discovered. Best practice is considered by some as a business buzzword, used to describe the process of developing and following a standard way of doing things that multiple organizations can use.

This is my problem with ‘best practise’ – why is it called that? If it’s the ‘best’ way of doing things, why isn’t it just ‘practise’? Why do we have ‘good’ practise and ‘best’ practise? Would you use ‘good’ practise? Probably not, especially if there’s a better way of doing it, which is ‘best’ practise. So, if you’re only ever going to do ‘best’ practise, then it becomes ‘practise’, right? Then, if it’s just ‘practise’ then you wouldn’t refer to it that way, would you?

“How do you do stuff?”
“Oh, you know, with practise.”

I guess you can have ‘bad practise’, but then that implies the opposite is ‘good practise’, which we just agreed you can’t have, didn’t we? Moreover, ‘developing and following a standard way of doing things that multiple organizations can use’ is silly: why would you do things the way other companies do them? This would just squash any chance of innovation; ‘That’s not best practise! Other companies aren’t doing that!’ Dumb, dumb, dumb.

Let’s stop worrying about ‘best practise’ and just get on and make the way we do things better through regular reflection.

How to do appraisals: asking the team

Feedback comes from these – Mini Marshal image by mpeterke at Flickr

Part of my new role is to review and appraise the team. Given that there are a lot of them and I can’t spend enough time with each of them (nor would I want to) to be able to do a good review, I figured that I’d have them do 360 reviews. There are multiple ways to do this, which I outline below, but I didn’t pick one for my team; I let them vote (I also let them post any new review methods they knew of, so that they could vote for those too) – can you guess which they voted for?

Traditional

The traditional approach is an anonymous review: I pick several people to review the employee, they craft and submit reviews and then I deliver this feedback to the reviewee.

This sucks on multiple levels:

  • Suppose the reviewee disagrees with some of the feedback, how can they offer a rebuttal? To me? How does that help?
  • What happens if they don’t understand the context of the feedback?
  • What happens if, in the name of keeping the feedback anonymous, I remix the feedback and lose the actual message (but I don’t know I’ve done that)?
  • We work in tight scrum teams, which means that the BEST people to offer feedback are the other people on the team (also the product owner and potentially stakeholders, but we’ll come to that). It also means that, after the review, the employee goes back to their desk, possibly seething or feeling dejected, put upon or just miserable because of the above, and they KNOW that someone in their team gave them shitty feedback.

Sucks, right?

This doesn’t suck because:

  • The shy, or sociopathic, might feel they can be more honest if they don’t have to do it face to face.
  • The feedback won’t be bland.
  • There is no fear of retribution (unless they find out who it was).

Not anonymous

So, another option is to have the reviewee choose the people they want to review them. No. This also sucks:

  • They might pick people who don’t really have much to do with them and who would offer good, although bland, feedback.
  • Again, they have no chance of rebuttal or dialogue there and then to discuss context of the feedback unless …
  • …they go back to their desk knowing that one of their team gave them shitty feedback and now aren’t sure how to broach the subject.

This doesn’t suck because:

  • Same as the above.

Team 360

The whole team goes to the pub (or cafe, whatever) and they take it in turns to offer feedback on each other, starting with me as a warm-up so they don’t feel shy when it’s their turn (this is great for me, I’ll get LOADS of feedback). You go round the table one at a time and everyone on the team feeds back to me – positive and negative – and I write it all down. Then someone else volunteers, and so on until everyone has had a go.

Developers drinking...

You need trust in the team for this and a good bond. This isn’t going to work with a new (or a ‘forming’) team and I’d advise something different (not the above, maybe just one-to-one coaching until the team are up to cruising altitude). But for established teams, or those stuck in a retrospective rut, I think this is a great idea.

I’ve run one trial of this method before putting it out to a vote and the team had some positive feedback on the process (and each other!). It’s tough to do, but giving and receiving feedback is always tough, and the idea of doing it face-to-face with people you work with every day is challenging, but you should do it. Nut up and prove to your peers that you’re a grown-up and that you can, and need to, learn something about yourself that you didn’t know before. This is about improving yourself in ways you didn’t know you could improve and making sure you’re not annoying your team. 😉

This idea isn’t new, although I wish I’d thought of it; I originally read it in Management 3.0 by Jurgen Appelo (the book is good, as are his talks, but his slides suck – well, he does draw them with MS Paint…).

What about … ?

Well, I mentioned product owners and stakeholders above. I’m undecided as yet (but I’ll probably let the team decide) on whether to include product owners in a team 360. They do spend a lot of time with the team and can probably offer some good feedback – it does depend on the relationship with the PO. Even though we don’t foster the feeling (and Affiliate Window isn’t alone in this, I’m sure), there’s a little ‘them’ and ‘us’ between the developers and the product team – but maybe this is a good start in breaking down that status quo.

Also, having stakeholders in the team 360 would be pointless – they don’t have day-to-day dealings with the team; mostly it’s just input and output with the odd nudge in between. But we still need their input. This is what the release retrospective is for, but I’ll cover that in another post!