Give Thimbl the Open Web award!

//////////////////////////////////////////////// http://www.thimbl.net/award.html

Why Thimbl should win the Transmediale/Mozilla Foundation Open Web Award

Thimble by zimpenfish @ flickr

The Telekommunisten Collective thinks that people should finger each other as often as possible. Maybe even several times a day, hell, why not once an hour? As often as you like!

People thrive on interaction with other people. Mutual stimulation is a deeply felt human need, a key characteristic of what makes us human. Imagine that instead of reading your status updates on Twitter or Facebook, your friends would just finger you instead.

The Finger protocol was originally developed in the 1970s as a way to publish user and status information, such as who you are, what you’re working on, and what you’re doing now. This is how the relatively few folks with access to networks posted pithy personal bios. From when colourful polyester pants were still groovy until the 90s, people used to Finger each other all the time! Finger evolved into a completely decentralized system, where any user could finger any other user as long as they were both on the Internet. There were no big companies in the middle to control these users, or monitor them, or try to turn their personal data into money. Fingering was a personal matter between users, direct and unmediated, and nobody really knew exactly who was fingering whom. Promiscuous, right?

Sadly, these heady days of open relationships slowly came to an end. Finger software was developed before the Internet had many users, and before development was driven by commercial interests. The idea was bold, but the software was primitive. Capitalists, with their desire for profit, have no interest in such freedom and promiscuity, and chose instead to fund centrally controlled systems in which they are the intermediaries. Investors wanted control, so that they could commodify and monetize these relationships. Instead of users fingering each other with reckless abandon, people are now stuck with centralized, privately owned services like Facebook, which chaperone their relationships, impose user policies on them, and monitor and monetize their conversations.

Back in June 2010, Telekommunisten had had enough! “People must be freed from these puritanical, controlling, consumerist, profit-seeking cults”, they thought. If witchcraft, wet shaving, rocker hair and skinny jeans could make comebacks, why not Finger? The Thimbl project was born, and immediately started working on giving the project an online identity and releasing tools to create a microblogging platform built on Finger, that groovy 70s protocol.

In October, Telekommunisten received the news that Thimbl was one of three projects nominated for Transmediale/Mozilla Foundation Open Web Award and almost immediately, Thimbl broke on Hacker News and the project started to attract significant interest. Thimbl started popping up all over the place: P2Pfoundation, ecopolis, alt1040, O’Reilly Radar, OneThingWell, Ecrans, reboot.fm… Evan Prodromou from competing service identi.ca even took a playful swipe at us!

Finger was becoming cool again. The masses were longing to finger each other!

In a few short months, without much in the way of a marketing strategy and with a budget that could be stored in a matchbox, Thimbl has managed to gather over 250 followers on Twitter – the very service it someday hopes to compete with – and has been the subject of hundreds and hundreds of tweets. Thimbl even has a small following on identi.ca, which is closer to the heart of Thimbl than the service with birds and whales. The thimbl.net website has over 300 ‘Likes’ with its Facebook button and the Telekommunisten Facebook fan page is abuzz with talk of Thimbl. The project has even gathered over 100 votes on the Drumbeat platform. Not bad for a project that was completely unknown to all but a handful of people when the award nominations were announced!

Still, the problem remains: Capital will not fund free platforms like Thimbl. Even with the buzz Thimbl has, building a community big enough to actually create a viable platform without financing is a major challenge.

Wouldn’t it be great if Thimbl could actually win the Open Web award? The endorsement of Transmediale and the Mozilla Foundation would be a tremendous boost for the project, perhaps enough to give the community the needed escape velocity to break free from centralized social media like Twitter and Facebook and make Finger the once and future king of personal status updates! Transmediale and The Mozilla Foundation had a great idea: instead of having a jury decide the winner of the award, present three projects to a community engaged with the open web and its technical, political and artistic dimensions. Mozilla had recently launched the Drumbeat project, just for this purpose, as a hub for projects that embrace the open web to get support and find contributors. So it made perfect sense for Drumbeat to host the voting for the award.

Drumbeat is a fantastic initiative from Mozilla and has a really promising future. However, Drumbeat is a relatively new platform. As a result none of the projects received much attention from existing Drumbeat users or from the Transmediale community jumping on to Drumbeat to participate. The idea that an impartial community would consider the three projects and select a winner didn’t quite work out. Instead, it has become a competition to rally the existing supporters of the three projects to sign up to Drumbeat and vote for them specifically, without genuinely considering voting for the others. This means that, honestly, the vote count is about as impartially meaningful as a Florida election run by Diebold.

Thimbl is up against two cool projects as candidates for the Open Web Award: Booki, the book publishing platform behind FLOSSManuals and many great book writing sprints, and Graffiti Markup Language, a project to enable analysis and archiving of graffiti writing, which has the support of many awesome, large and active communities like F.A.T. Lab and eyebeam. If the Open Web Award is really meant to give well-earned support to existing, successful projects like Booki and GML, then we will celebrate their success with them at the award ceremony in a few days. We readily concede that Thimbl has not yet achieved anywhere near what these projects have and that our community is much, much smaller and far less known.

Unless we succeed in our desperate bid to convince Lady Gaga to dump Polaroid and instead dedicate her star power to the cause of ushering in a new golden age of rampant fingering, we are very unlikely to win based on the Drumbeat vote count. But if the Open Web Award aspires to “clearly demonstrate the unbound potential of the open web in ways that can spark new thinking and practices,” as stated, then, damn it, Thimbl is the most about the open web!

We live and breathe the open web, directly addressing the technical and social issues facing the open web in every aspect of the project, in the code, and in our manifestos. We talk to anyone who will listen about how the open web is not just critical to the future of the Internet, but to society itself. And people are beginning to take notice.

Selecting Thimbl for the Open Web Award at Transmediale would be one heck of a powerful spark. Igniting the new thinking and practice that led to the idea of Thimbl with a clear and bold statement of support for an open web that is truly open! The multitudes are trapped and frustrated, clinging to their social interactions within sterile, commercial platforms, longing for wanton, unbridled realms of contact.

Join us in inscribing upon our banners the revolutionary slogan, “Don’t be a Twit, it feels good to be fingered!”

Give Thimbl the Open Web Award!

With Kind Regards, Your Telekommunisten.

http://www.thimbl.net /////////////////////////////////////////////////////////////////

(psst… pass it on)


No holiday allowance, company credit card AND I’m the boss?

REWORK

37signals, their staff and, specifically, Jason Fried and David Heinemeier Hansson are a constant source of inspiration to me. I devoured their book ‘REWORK’ in one sitting and always enjoy the freedom their staff have to post on the company blog. Today, I found this article on NFIB.com called Un-Manage Your Employees and, frankly, it’s both awesome and a bit crazy.

In summary, they have a different member of staff manage the teams every week. That person becomes the go-to guy and decision maker, works out an agenda and writes the company update. It’s an interesting approach and one which, I’m sure, keeps everyone in the company engaged. I imagine no one says ‘I can’t do it, management won’t let me!’ … especially when they ARE the management.

The other interesting things from the post (and stuff I knew already) are that every member of staff gets a company credit card: they simply email receipts to a group inbox for auditing purposes, there’s no sign-off and no one is required to get permission to buy stuff. Also, they don’t track holidays or sick days. In fact, David Heinemeier Hansson suggests that they have to REMIND people to go on holiday. No one abuses either of these perks. 37signals staff are passionate and dedicated to their work, partly because of the ethos of the company, I guess.

Working for 37signals must be very cool, but I imagine it would take you a while to get used to it. ‘What? I have no holiday allowance? I can take holiday when I like AND I get a credit card?’. New starters must be constantly looking over their shoulders, especially if they’ve come from a more traditional, corporate background!

I hope 37signals continue in the same vein. They make some good products and stand as an example of just how great a company can be to work for.

 

CAN HAZ NOO JOB?

A coach (not me!). (101 years ago: the regicide by starrynight on flickr)

Well, it wasn’t long ago that I was posting about having a new job. It turns out that I can post it again now. Affiliate Window, the lovely people with whom I am currently employed, are about to start rapidly expanding. One of the reasons I joined AWin was because they’re agile and they do scrum – and I love scrum.

It’s not a perfect implementation of scrum (where is there such a thing?), but it’s pretty good. However, with 6 new teams coming on board and one potentially being international, getting a process down that works for everyone and across teams has become fairly key.

One of the reasons Affiliate Window chose to employ me over some other developer was that I love scrum (did I say that already?). So, with the expansion ahead and a not-quite-right process, something needed to be done! That’s where I come in. I was offered the role of Agile Coach, and I nearly bit my boss’s hand off to accept (well, almost, eh Karol?). It’s something I’ve thought about doing before, coaching or training, but I never really had a clear idea of how to get into it.

So, from here on in you can call me Coach Mike, or just Coach. I’m scrum master of two teams (three when there’s more), researching processes, removing impediments, training and ensuring that we’re continually improving our process while making sure we stay agile. So, I would imagine there will be more agile/scrum posts in the future – stay tuned!

Help! Speaking at the PHP London Conference 2011

Hopefully my illustrious readership have gleaned at least one or two useful things from my blog here. I’ve enjoyed writing the posts and I just hope you’ve enjoyed reading them.

I’ve also enjoyed giving the two talks at PHP London (How Not To Write A RESTful API and Writing Effective User Stories). It’s because of this that I’ve decided to submit a paper to talk at the PHP London Conference 2011. I’ve got a few ideas of things I’d like to talk about, including past talks (for a wider audience, natch. Doesn’t harm that I don’t have to write a new presentation either!), but I thought I’d ask my tiny readership if there was anything you’d like to see.

Well, is there? Leave a comment below, or email/tweet me: mike@mikepearce.net and @MikePearce.

Writing Effective User Stories

This month’s PHPLondon took on a different format. Usually there is one speaker who gives a talk, but this month, the new President and the committee decided they’d take a different approach, in the form of lightning talks. Whoever wanted to could stand up and talk for five minutes on any subject relevant to PHP or development.

I offered to do a talk on Writing Effective User Stories (which I’d given before at my place of work) and I thoroughly enjoyed it; I just hope that the audience did too! As well as the video above, you can view the slides by clicking here and the handout (which I didn’t hand out!) here. Or you can see them both after the jump. Contact me if you have any questions or would like to know more.

Continue reading “Writing Effective User Stories”

Namespaces, unit testing and dependency injection (with typehinting)

Bad Medicine by Vermin Inc @ Twitter
Injection Dependency STRAIGHT INTO YOUR OCCIPITAL LOBE!

I’ve struggled with the concept of unit testing and how to deal with dependencies a lot in the past. The solution seemed to be just out of reach. I knew there WAS a solution and not a hacky one either, one that was elegant and worked well. I never really found it. PHPUnit seemed to be well written but not particularly well documented and a lad like myself, one who is a few sandwiches short of a picnic at times, almost gave up.

Almost.

I know and understand the benefits of unit testing and I understand the disadvantages. Unit testing is a weapon that can mete out retribution and vengeance if not used properly. There are plenty of posts on the web about the wrong way to unit test and the hazards involved, so I’ll leave you to find them.

Aaaaaanyway, writing your application so that it uses the dependency injection pattern is the best method for making sure it’s decoupled and modular enough to write some efficient unit tests (ie: tests that test just a unit of code), especially when classes depend on objects which connect to databases, or web services, or other crazy stuff that you don’t really want or need to instantiate in your test suite. The problem that I encountered time and time again was this: once you had mocked an object to pass into one test, you couldn’t then test the REAL version of the object you’d previously mocked or stubbed. PHP would throw an error:

Code.php:

<?php
    class badger {
    }
    class badger {
    }

%> php code.php
Fatal error: Cannot redeclare class badger

See? It’s a predicament, especially when you want to do some unit testing.

<?php
require_once 'PHPUnit/Framework.php';
class weather {
    /**
     * @param string $postcode
     * @return object
     */
    public function getWeatherFromPostcode($postcode) {
        //.. connect to a web service
        $weather = new ThirdPartyWeatherService();
        return $weather->getWeatherAt($postcode);
    }

}

class stuff {

    public function isItRaining($postcode)
    {
        return (
                    $this->weather->getWeatherFromPostcode($postcode)
                          ->precipitation == 'RAIN'
                        ? TRUE
                        : FALSE
               );
    }

    public function injectWeatherObject(weather $o) {
        $this->weather = $o;
    }
}

class doTest extends PHPUnit_Framework_TestCase {

    public function testIsItRaining() {
        $stuff = new stuff();
        $stuff->injectWeatherObject(new weather());
        $this->assertTrue($stuff->isItRaining('RH2 9SS'));
    }
}

The problem with the above code is that the test may pass or fail randomly depending on the weather. Clearly, this is a nightmare! No one thinks that their code is going to behave differently depending on the weather, that’s CRAZY TALK MAN! Well, this code would, especially if the weather is as strange as it is in Reigate.

So, how to get round this: dependency injection and mock objects. See, we had the foresight to inject the weather object into the stuff object. That means we’re halfway there (if you’d just instantiated the weather object inside the stuff object, this would be much harder). To work out the solution to this, we need to ask a question: what are we testing?

The answer is, we’re testing the stuff class and, more specifically, the isItRaining() method. So, what do we need to do to test that? Well, for a start, we don’t actually need the full weather object (which might connect to a 3rd party service, which, in turn, might be down, or slow, or whatever – meaning the test might fail NOT because the unit is broken, but because a dependency is…), we only need AN object with a ‘precipitation’ property. So, this means we can mock it up. Woo!

Code.php:

<?php
require_once 'PHPUnit/Framework.php';

class weather {
    /**
     * @param string $postcode
     * @return object
     */
    public function getWeatherFromPostcode($postcode) {
        $w = new stdClass;
        $w->precipitation = 'RAIN';
        return $w;
    }

}

class stuff {

    public function isItRaining($postcode)
    {
        return (
                    $this->weather->getWeatherFromPostcode($postcode)
                          ->precipitation == 'RAIN'
                        ? TRUE
                        : FALSE
               );
    }

    public function injectWeatherObject(weather $o) {
        $this->weather = $o;
    }
}

class doTest extends PHPUnit_Framework_TestCase {

    public function testIsItRaining() {
        $stuff = new stuff();
        $stuff->injectWeatherObject(new weather());
        $this->assertTrue($stuff->isItRaining('RH2 9SS'));
    }
}

Now, we know that it’ll ALWAYS be raining in Reigate (which sucks, but well, what can you do? All in the name of great testing, right?). This test will now pass, unless someone changes isItRaining() and it breaks. But that’s what unit testing is for.

Huzzah, let’s party. Except we’ve introduced a problem here: what happens if we now want to test a method in the REAL weather object:

Code.php


<?php
require_once 'PHPUnit/Framework.php';

class weather {

    public function getTypesOfCloud($type)
    {

        $clouds = array
        (
            'high' =>
                array('Cirrus', 'Cirrocumulus', 'Cirrostratus'),
            'medium' =>
                array('Altostratus', 'Altocumulus'),
            'low' =>
                array('Stratocumulus', 'Stratus'),
        );
        if (isset($clouds[$type])) {
            return $clouds[$type];
        }
        else {
            return FALSE;
        }

    }
    /**
     * @param string $postcode
     * @return object
     */
    public function getWeatherFromPostcode($postcode) {
        // ... connect to a web service, as before
    }
}
class weather {
    /**
     * @param string $postcode
     * @return object
     */
    public function getWeatherFromPostcode($postcode) {
        $w = new stdClass;
        $w->precipitation = 'RAIN';
        return $w;
    }

}

class stuff {

    public function isItRaining($postcode){ ... }

    public function injectWeatherObject(weather $o) { ... }
}

class doTest extends PHPUnit_Test_Case {

    public function testIsItRaining() { ... }

    public function testGetClouds()
    {
        $weather = new weather();
        $this->assertEquals(
            array('Cirrus', 'Cirrocumulus', 'Cirrostratus'),
            $weather->getTypesOfCloud('high')
        );
    }
}

%> php code.php
Fatal error: Cannot redeclare class weather

OMFG! Now we’ve got two classes called the same thing, this makes it impossible to TEST THEM OMG NOO!

I should mention here that you’ll probably not have all your tests in one file; they’ll probably be split across multiple files and directories. They’re all in one file here to make things easier. However, this doesn’t mean you won’t get the same problem.

The easiest thing to do would be to rename the mock weather object to something like ‘weatherMock’, but we can’t, because our stuff class expects the object that is injected to be an instance of the ‘weather’ class.

“Well, remove the typehinting dumbass!” I hear you cry, throwing your hands in the air.

I could, but I shan’t. Because I WANT the typehinting there, it’s a contract and I am BOUND by it. I must have a weather object passed in, I can’t have ANY OLD object being passed in now, can I? What would happen if someone passed in a ‘cheese’ object? DOES CHEESE HAVE PRECIPITATION?
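
Just to illustrate what that contract buys you, here’s a throwaway sketch (not part of the real example code): pass in anything that isn’t a weather and PHP will refuse it, with a catchable fatal error in PHP 5 or a TypeError in PHP 7+.

<?php
class weather {}
class cheese {}

class stuff {
    protected $weather;

    // The type hint IS the contract: only a weather (or a subclass) gets in.
    public function injectWeatherObject(weather $o) {
        $this->weather = $o;
    }
}

$stuff = new stuff();
$stuff->injectWeatherObject(new weather()); // fine
$stuff->injectWeatherObject(new cheese());  // refused: argument must be an instance of weather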

Anyway, this is how I chose to solve this problem:

Code.php

<?php
namespace Weather {
    require_once 'PHPUnit/Framework.php';
    abstract class abstractWeather {
        public function getTypesOfCloud() {}
        public function getWeatherFromPostcode() {}
    }

    class weather extends abstractWeather {

        public function getTypesOfCloud($type)
        {

            $clouds = array
            (
                'high' =>
                    array('Cirrus', 'Cirrocumulus', 'Cirrostratus'),
                'medium' =>
                    array('Altostratus', 'Altocumulus'),
                'low' =>
                    array('Stratocumulus', 'Stratus'),
            );
            if (isset($clouds[$type])) {
                return $clouds[$type];
            }
            else {
                return FALSE;
            }

        }
        /**
         * @param string $postcode
         * @return object
         */
        public function getWeatherFromPostcode($postcode) {
            // Commented as this doesn't actually exist.
            //$weather = new ThirdPartyWeatherService();
            //return $weather->getWeatherAt($postcode);
        }
    }
}
namespace WeatherMock {

    class weather extends \Weather\abstractWeather {

        /**
         * @param string $postcode
         * @return object
         */
        public function getWeatherFromPostcode($postcode) {
            $w = new \stdClass;
            $w->precipitation = 'RAIN';
            return $w;
        }

    }

}

namespace Main {
    class stuff {

        public function isItRaining($postcode) {
            return (
                        $this->weather->getWeatherFromPostcode($postcode)
                                      ->precipitation == 'RAIN'
                            ? TRUE
                            : FALSE
            );
        }

        public function injectWeatherObject(\Weather\abstractWeather $o) {
            $this->weather = $o;
        }
    }

    class doTest extends \PHPUnit_Framework_TestCase {

        public function testIsItRaining() {
            $stuff = new stuff();
            $stuff->injectWeatherObject(new \WeatherMock\weather());
            $this->assertTrue($stuff->isItRaining('RH2 9SS'));
        }

        public function testGetClouds()
        {
            $weather = new \Weather\weather();
            $this->assertEquals(
                array('Cirrus', 'Cirrocumulus', 'Cirrostratus'),
                $weather->getTypesOfCloud('high')
            );
        }
    }
}

So, the solution is twofold.

  1. I made both the weather classes (real and mock) an extension of the abstractWeather class. This is good, because it means my contract is still in place (if a little more flexible) and I can force people to use the methods prescribed.
  2. I wrapped the abstract class and the real object in one namespace and the mock weather object in another. This means I can have two classes named ‘weather’, as they reside in different namespaces. I could have named the mock ‘weatherMock’ and skipped the namespaces, but adding them makes things much cleaner. If I know I’m using namespaces, I’m not going to get into trouble with conflicting class names: with a HUGE library of tests all run together, I can be confident I won’t hit the redeclare error (I don’t know whether a colleague created a weatherMock class two days ago for a different part of the application, but for use in the same suite of tests – using namespaces means it doesn’t matter).

Now, you could use interfaces instead of abstract classes, but I prefer the abstract classes as they give you something concrete to work from and, really, if you’re wanting to write modular, encapsulated code, then you SHOULD be using abstract classes.
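
If you did fancy the interface route, the same contract could look something like this sketch (the names mirror the example above; it’s not the code I actually used):

<?php
namespace Weather;

interface WeatherInterface {
    public function getTypesOfCloud($type);
    public function getWeatherFromPostcode($postcode);
}

// The real class and the mock (in its own namespace) would both implement this,
// and stuff::injectWeatherObject() would type hint \Weather\WeatherInterface instead.
class weather implements WeatherInterface {
    public function getTypesOfCloud($type) { /* ... as above ... */ }
    public function getWeatherFromPostcode($postcode) { /* ... real lookup ... */ }
}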

The only other thing to remember (which caught me out) is that stdClass lives in the global namespace, so, whenever you instantiate a new stdClass from inside a namespace, it must be prepended with a backslash: \stdClass (the same goes for anything that ISN’T in a namespace when you’re working in one: \PHPUnit_Framework_TestCase).
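
Here’s the gotcha in two lines (the Demo namespace is just for illustration); the first line is fine, the second falls over:

<?php
namespace Demo;

$ok   = new \stdClass();  // fine: stdClass lives in the global namespace
$boom = new stdClass();   // Fatal error: Class 'Demo\stdClass' not found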

Feel free to copy this code and run it in your own environment, it *should* run!

This is not the only, or definitive method of achieving this. But I think it fits my ideal of having an elegant solution and it means the actual code (and not the tests) isn’t mauled about JUST to make the tests fit. Which is where a lot of unit testing falls down.

If you have another method or other ideas for how to get round dependency and classname conflicts, then I’d love to hear them. Please post in the comments below!

The difference between git pull, git fetch and git clone (and git rebase)

Update: So, over a year later and I’ve had some feedback from a colleague (thanks Ben!). Nothing here is drastically wrong, but some clarifications should help!

When I started out with git …

… who am I kidding, I’m still a git n00b. Today, I tweeted about git. I wanted to know what the difference between pull, fetch and clone is. After discovering that, really, 140 characters isn’t enough to answer the question, I had a play around.

Git Pull

From what I understand, git pull will pull down from a remote whatever you ask for (so, whatever branch you’re asking for) and instantly merge it into the branch you’re in when you make the request. Pull is a high-level request that runs ‘fetch’ then a ‘merge’ by default, or a rebase with ‘--rebase’. You could do without it, it’s just a convenience.

%> git checkout localBranch
%> git pull origin master
%> git branch
master
* localBranch

The above will merge the remote “master” branch into the local “localBranch”.

Git fetch

Fetch is similar to pull, except it won’t do any merging.

 %> git checkout localBranch
 %> git fetch origin remoteBranch
%> git branch -a
master
* localBranch
  remotes/origin/remoteBranch
 

So, the fetch will have pulled down remoteBranch and stored it as a remote-tracking branch (remotes/origin/remoteBranch). This is a local copy of a remote branch which you shouldn’t manipulate directly; instead, create a proper local branch and work on that. ‘git checkout’ has a confusing feature though: if you ‘checkout’ a remote-tracking branch by name, it creates a local copy and sets up a merge to it by default.
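
For example, to turn that remote-tracking branch into a proper local branch you can work on (branch names are just the ones from the snippet above):

%> git checkout -b remoteBranch origin/remoteBranch

That gives you a local ‘remoteBranch’ which starts from, and is set up to track, origin/remoteBranch.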

Git clone

Git clone will clone a repo into a newly created directory. It’s useful for when you’re setting up your local doodah.

%> cd newfolder
%> git clone git@github.com:whatever/something.git
%> cd something
%> git branch -a
* master
  remotes/origin/remoteBranch
 

Git clone additionally creates a remote called ‘origin’ for the repo cloned from, sets up a local branch based on the remote’s active branch (generally master), and creates remote-tracking branches for all the branches in the repo.
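
You can see what clone has set up for you (the URL is just the made-up one from the snippet above):

%> git remote -v
origin  git@github.com:whatever/something.git (fetch)
origin  git@github.com:whatever/something.git (push)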

Git rebase

Finally, git rebase is pretty cool. Anything you’ve committed to your current branch but which isn’t in the upstream is saved to a temporary area, so your branch looks the same as it did before you started your changes, i.e. clean. If you do ‘git pull --rebase’, git will pull down the remote changes, rewind your local branch, then replay all your changes over the top of your current branch one by one, until you’re all up to date. Awesome huh?
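
A minimal sketch, reusing the branch names from the earlier examples:

%> git checkout localBranch
%> git pull --rebase origin master

Your local commits end up replayed on top of the freshly fetched master, rather than being joined to it by a merge commit.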

Finally..

If you get stuck, run ‘git branch -a’ and it will show you exactly what’s going on with your branches. You can see which are remote and which are local. This is a good heads-up before you start to break things! It’s worth remembering that git branches are basically just pointers, so to be able to work with those commits you need a local branch which points to somewhere from which those commits are reachable.

Thanks to Ben for the extra stuff, clarifications and calling me an idiot when I get git wrong. Because I am; it’s really pretty simple, except when it isn’t.


	

macupgrades.co.uk: a whole jar of awesomesauce

A head assembly on a Seagate hard drive
Not my harddrive - Image by Robert Scoble @ Flickr

I recently purchased a new hard drive for my aging Mac from www.macupgrades.co.uk. Storing everything on my NAS so I have more space on my mac is a pain-in-the-ass, so I decided to get a larger capacity drive so I can store everything locally as well. I placed an order on the Tuesday and paid for the courier delivery as I needed the drive before the weekend.

On the Wednesday I received an email stating that the drive had been shipped and would be with me on Friday – awesome. I was very excited; I nearly bought a macchiato to celebrate.

Anyway, the next day, I received an email stating there had been a problem and that the drive hadn’t shipped, so wouldn’t be with me until the following Monday. I was pretty pissed off about that, so I emailed www.macupgrades.co.uk and said that I’d thought the item was in stock and would be delivered by Friday, and that if I’d known there would be a problem, I’d have ordered another item.

I also asked if I could be refunded the difference between the courier and Royal Mail delivery, as paying for the courier delivery wouldn’t have got me the item on time. I was amazed to receive this response:

I’m sorry there has been a delay with your item. This was due a collection issue with our courier company.

We can as a gesture of goodwill offer you the Seagate Momentus 7200 RPM 500GB drive for you at the same price – delivery for Friday.

Or we can progress with the Scorpio for delivery Monday with the difference on the courier refunded.

Please let us know.

Best Regards

Gavin

Awesomesauce! A better hard drive (I only ordered a 5400 RPM one) and delivery the next day! Well, I asked if they could send me a drive, any drive, by Friday and lo, when I arrived at work this morning, it was waiting for me. A pre-9am delivery no less.

So, thank you macupgrades.co.uk, your customer service is second to none and I will be a customer for life now.

Scrummaster? We don’t need no steenking Scrummaster!

A Sad Face by Emmaline

I discovered something strange today. I recently left one development team to go and work for another (that’s for a separate blog post altogether) and I was chatting to some old team members today who had just had their Scrum Sprint Review/Retrospective. They mentioned that one of the things that had come out of the Retrospective was that they aren’t going to have a Scrum Master for this sprint or, it seems, any sprint.

“Huzzah!” I hear you cheer! “They must be self-managing!”

No, no they are not. It’s far, far worse than that. They’ve decided they don’t need a scrum master as, in their experience, the scrum master is useless. With no power to help the team make decisions, and one team member describing the Scrum Master as “a puppet of the business”, it’s quite clear that scrum, at this company, is broken.

It’s a sad day for me. Ever since I heard about scrum, about two and a half years ago, from a colleague (@garethholt), I’ve realised its potential. I’ve tried hard, arguing the point with colleagues, management and everyone who’ll listen. We’ve tried different length sprints, staggered sprints, digital backlogs, analogue backlogs, all sorts. Inspected and adapted, and now it’s all for nowt. At every turn, the business said “No, I’m not confident enough to try scrum properly.” or, more accurately, “I don’t want to surface the organisational dysfunction.” The team realises that the business cannot let go of the reins.

It’s all about trust.

They all seem as though the wind has been taken out of their sails, and it’s sad and disappointing. Especially as they have an ‘Agile Manager’ and a ‘Product Owner’ who is also a director. Even the people employed to support scrum don’t seem to have confidence in it. Now THAT is a problem.

I might go home and cry.

Resourcing, it’s just an estimate OK?

Reign of the Supermen by fengschwing
It is NOT an indicator of how much work the team is doing, right?

Too often, businesses seem to think that the resourcing we do during sprint planning to work out how many ideal hours we have, allowing us to decide what we can fit into a sprint, is an accurate reflection of how much work the team can/will do.

It’s not, OK, chill out. At BEST it’s a worst-case scenario.

The average work day is 7.5 hours. That does not mean that I can dedicate 7.5 hours of work to tasks from the board. When we used to do resourcing, the business would estimate our resourcing for us:

There are seven people on the team doing a two week sprint, therefore, you have 490 man hours in this sprint, fit stories and tasks in that equate to those hours, you can have 15% of that figure for bugs.
-Somebody from the business

We quickly realised (well, as a team, we realised this was wrong immediately, but the business will always wait for empirical evidence first) that this wasn’t going to work. There is no way each team member can dedicate seven hours a day to the sprint. There are numerous things that vie for your time: getting a coffee, taking a piss, talking to colleagues, talking to the business, researching your industry, emailing your wife, checking your bank balance, rebooting your Windoze computer on an AD network (which takes about 15 minutes), getting a coffee, etc, etc.
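
To put rough numbers on it (the five focused hours a day is purely an illustrative guess, not a measured figure):

7 people × 7 hours × 10 days = 490 “available” hours
7 people × 5 hours × 10 days = 350 hours of realistic capacity

Plan to the first number and you’ve over-committed by roughly 40% before the sprint even starts.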

So, sprint after sprint failed (let me be clear: the sprints didn’t “fail”, as the business always decided we’d “drop” something low priority from the sprint before the end, but, as a team, that feels like a failure). This could be down to bad estimating of ideal hours for tasks, true, but it was compounded by bad resourcing.

If you resource incorrectly, your entire sprint is undermined, as resourcing is the basis for deciding what you commit to: you have an inflated idea of how many ideal hours you can deliver, and so you over-commit. Your gut feeling is always “this is too much work, we’re overcommitting!”, but the math is right, so it should fit in, right? RIGHT?

No, wrong. It won’t. Nor will pretending it will fit in and then dropping stories at a later stage. This will undermine the team’s ability to deliver quality working software, as there is always this grey cloud of unachievable stories hanging on like a bad smell at the end of your sprint backlog. The team will inevitably rush through what they’re doing to try and deliver what they’ve (been forced to) commit to. This is bad on many quite obvious levels, so I won’t spell them all out…

… OK, I will. This will result in less-than-polished code; rushed, hurried unit tests; bothered and harassed QA roles (if you have them); a stressed release manager, as he’ll be back and forth with UAT; an annoyed stakeholder as the stories get rushed; other annoyed stakeholders as their stories get dropped; and, finally, a shoddy release of badly written, implemented, tested and released code.

Exaggeration? Maybe. But it makes my point.

If you ask each team member “How many hours a day can you commit to, ideally?” when you’re resourcing, you’ll get a worst-case figure for the number of ideal hours you have. If you then estimate and plan based on that, you’ll ALWAYS deliver what you commit to (unless you get the planning REALLY wrong) and, usually, have time at the end to pick up an extra story, fix an extra bug, research a new technology or whatever. Everyone wins: the team are happy that they’ve delivered what they said they would (and maybe more), and the business is happy as they’ve managed external expectations and delivered the commitments – and perhaps even more happy if another story was squeezed in, or an extra bug or two squashed.

I understand that you need to know where inefficiencies are, I really do; I want to know too. If the team are kicking around spending three hours a day reading XKCD and LOLCats, then I want to know, and why. But they aren’t; they’re spending the time doing actual work. So let them do it, and don’t force them into it.

If you want to measure time spent on things other than the sprint, then use something other than the sprint to measure that time. Resourcing and estimating/planning won’t help you here as they are, after all, just estimates.