It all comes down to two things
- I get to work with people who really love what they do.
- I get to work with people who are insanely open to change.
Recruitment is all about relationships and trust, yet whichever way you look at it, common recruitment practices support neither. While there are countless articles focusing on how hard it is to hire good developers, little is said about how to find good companies. Trust works both ways, and in order to ‘fix’ recruitment both sides of the trust equation must be balanced. Examining each in turn:-
Employers have low trust in external recruiters, low trust in CVs, and low trust that candidates can complete a Fizz Buzz question. This means that it’s not possible to invest sufficient time in individual applications, which in turn makes it less likely they’ll ever attract really good people.
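For anyone unfamiliar with the reference, Fizz Buzz is a deliberately trivial screening exercise; one minimal solution looks something like this sketch:

```python
# Classic Fizz Buzz: for 1..100, print "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, "FizzBuzz" for multiples of both,
# and the number itself otherwise.
def fizz_buzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizz_buzz(i))
```

The point of the exercise is not the code itself but that a surprising proportion of applicants cannot produce it, which is exactly why employer trust in CVs is so low.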
Services like LinkedIn and Stack Overflow have made some gains in solving the employer -> candidate trust problem. In LinkedIn’s case they have scaled the ability to ‘ask around’ for recommendations, and Stack Overflow provides a feel for someone’s knowledge. Neither is perfect, and in truth the best they can do is give me confidence that the candidate is not a total waste of time.
The candidate -> employer problem is more interesting, not least because it’s generally ignored. Unless you happen to be Google or Facebook, candidate -> employer trust is a major stumbling block. How can a candidate be sure that they are dealing with a good company? They can’t trust their agent to have a clue (or care), and they themselves will not be aware of a host of interesting companies. As such, applications tend towards the bland and generic, since candidates cannot afford to spend days tailoring individual introductions. This in turn fuels the employer perception that passionate, interested candidates do not exist.
As an example, I work for a small B2B Telecoms company; our work is in the public eye, but our brand is not. Most developers will not be aware of us. Once hired, developers tend to want to stay with us, with working environment and the freedom to ‘get things done’ playing a big part in that. However, as a company I have no easy way to express this. It’s not even a case of saying ‘Isn’t my company great!’; it’s much more about describing the trade-offs. Not everyone will appreciate the chaos, pace and variety of working at a small company; some will prefer the promise of a well-defined career path, security and the greater opportunity to specialise typically afforded by a larger organisation. It’s down to personal opinion.
Individual companies can solve this problem by publishing an engineering blog, sponsoring community events, getting people speaking at conferences and generally exposing their culture and values. It could be argued that companies willing to go to these lengths clearly value recruitment more highly than others and deserve the rewards. However, if there were some way that candidates could pull that information rather than have it pushed, that would be hugely valuable.
The closest example I can see is the Joel Test. To me the Joel Test is starting to show its age and could benefit from an update; the best it can say is ‘this company is less likely to be a horrific place to work’. Glassdoor also addresses this in part, though practically speaking companies must be of a certain size before it becomes useful.
I’m not sure what the solution might be. Perhaps a curated job board or job fair is the way to go: the curator finds a way to characterise companies and makes sure it only backs good ones. This builds trust with candidates, and should mean that it attracts the top people, especially those for whom money is not the top driver. Companies are happy to pay decent rates because they know how good the candidate pool is; furthermore, there is prestige in being associated with the agency.
So, world, here is my challenge to you. How can I, as a company, express my culture and values in a meaningful and standard way so that candidates can approach me with confidence?
Ask anyone about hiring developers and the advice is always the same: ‘only hire the best’. The principal reasons being that
On the face of it this seems like great advice; who wouldn’t want to hire the best? It turns out pretty much everybody.
For instance, how long are you willing to wait to fill the position? What if you are really, really stretched? What if you’re so stretched that you worry for existing staff? What if hiring a specific individual will mean huge disparities in pay between equally productive staff? What if not making the hire is the difference between keeping a key client or losing them? At some point every company has to draw a line and elect to hire ‘the best we’ve seen so far’.
The difference between the great companies and the rest is how they deal with this problem. Great organisations place recruitment at the centre of what they do. If hiring is genuinely everyone’s number one priority then hiring the best becomes more achievable. For starters, you might even have half a chance of getting ‘the best’ into your interview room in the first place.
“Every time I hear this cliché about people being most valuable resource I wonder: how the heck can you say people are most valuable when you treat them as resource? As commodity. As something which can be replaced with another identical um… resource. If you say that, you basically deny that people in your organization are important.”
I’m in agreement with Pawel on this point, but I’d go further. Not only is a statement like ‘People are our most valuable resource’ degrading and counterproductive; even if you restate it as ‘Nothing is more important than our people’ it’s still incorrect. The real value has nothing to do with people and everything to do with teams.
The key thing that a team provides is a means to align the goals of its members. These goals need not be for the greater good of humanity, in fact they’re generally much more mundane. It really doesn’t matter who wins the world cup* or whether project omega will ship by next Tuesday, all that matters is that the team succeeds in its common goal. A group all pulling in the same direction is orders of magnitude more effective than that same group working as individuals – a business cannot be successful without effective teams.
The trouble is that the word ‘team’ is massively overused; it’s a buzzword that has become so ubiquitous we don’t even notice it. The tendency to assemble a group of disparate people and label them a ‘team’ devalues the concept. One area where this is especially true is ‘The Management Team’: generally composed of middle-management peers from various disciplines, this group often has very little in common in terms of shared goals and identity.
And herein lies the problem: if management is unused to working in a team themselves, then the value of a team is less visible. Furthermore, since it is generally individuals, not the team as a whole, who complete the component tasks, the team effect is not obvious from afar.
I don’t think you’ll find an organisation that is anti-team; it’s simply hard to prioritise the tasks necessary to encourage team formation when the value of teams is poorly understood. It’s easy to measure the cost of co-location but much harder to measure the benefit to the co-located team; hence the true value of the team is passed over.
Not only are people not ‘our most valuable resource’, people aren’t our most valuable anything on their own. It’s all about teams.
[In this post I’ve purposely avoided the subject of how to form a team. It turns out that it’s quite tricky, I’d recommend Peopleware as a good place to start.]
* Except if it’s England of course.
Chris put forward the idea of the ‘Hero’, an unofficial role assumed by an experienced developer critical to the project. There are positive aspects to the role: this is the person turned to in a moment of crisis when something must be fixed ‘right now’; they are the person with the deepest knowledge of the system and an indispensable contributor to the project. As with many things, however, this strength can also be a weakness. The feeling of being indispensable is very powerful, and freely sharing knowledge and collaborating with less experienced or less capable teammates only undermines this feeling. If the Hero is no longer the only person who can fix the problem, surely that makes them less important?
In extreme cases the presence of the Hero becomes toxic; reluctance to collaborate is unpleasant, but active obstruction and obfuscation is something else. At this point the team and project have some serious problems. On the one hand the project is doomed to failure without the Hero; on the other hand, the ability of the group to act as a team evaporates and progress is brought to all but a standstill.
We spoke for about an hour on the subject and while there were one or two examples that partially dealt with the problem, ranging from ‘move them to another project’ through to ‘fire them’ (!), no-one in attendance was able to provide a truly positive example of recovering from such a scenario.
I am fortunate in never having worked with anyone quite as extreme as the examples presented in the session, but where I have seen glimpses of this behaviour my sense is that overwhelmingly, the behaviour is the product of environment rather than necessarily any failing on the part of the individual.
The participants in the session seemed to be made up of managers and coaches rather than out-and-out developers, which may explain why much of the discussion seemed to presuppose that the fault lay solely with the Hero.
Common environmental factors that I have observed include:-
It is the manager not the Hero who has most influence over these points. So I think that before answering the question ‘What to do when your Hero goes bad?’, a better question, as a manager is ‘What have I done wrong to allow my Hero to go bad?’.
Unhelpfully, as with all management problems, the best way to solve the problem is not to have it in the first place. Placing greater emphasis on the performance of the team rather than on the individual can help here. Any action that benefits the whole team is recognised and celebrated so the Hero need not lose prestige by supporting those around him. In fact the Hero’s standing increases since he is now multiplying his own capability through increasing the skills of the team. As a side effect, since the team is now more capable the Hero has more time to spend on the truly difficult problems, which in time, he will pass onto the rest of the team.
Switching focus away from individuals and towards the team is a non-trivial exercise, but if the agile movement has brought us anything, it is methods to engender collaboration, trust and team-level thinking.
It doesn’t matter if you get there, every step along the way is an improvement.
Ever since coming across the idea on Eric Ries’s blog I’ve been a big fan of Continuous Deployment. For those unfamiliar with the term, it means writing your code, testing frameworks and monitoring systems in such a way that it is possible to completely automate the process of going from source-control commit to deployment on a live system without risking a quality meltdown. This means teams can find themselves deploying 50 times a day as a matter of course.
It’s not without its critics, and a lot of people see it as a one-way ticket to putting out poorly tested, buggy code. I think those folk completely miss the point, and that in many scenarios the opposite is in fact true. The thing I really like, though, is that whether or not you ever get to the point of automatically deploying every commit to live, every step that you might take to get there is hugely positive.
So, really, what would have to happen in order to employ a Continuous Deployment regime?
18 months ago my then team started to take this idea more seriously. I thought it would be interesting to give an overview of the steps we took towards Continuous Deployment and, since we’re certainly not there yet, what we plan to do in the future.
We started from a point where we would release to the live environment every few weeks. Deployments, including pre- and post-deploy testing, could take two people half a day, sometimes more. I should also say that we are dealing with machine-to-machine SaaS systems where the expectation is that the service is always available.
Our first efforts aimed to reduce the human load of deployment through automation. Fear meant that we still needed to ssh into every node to restart, but every other step was taken care of. This meant that it eventually became commonplace to deploy multiple times a week across multiple platforms.
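The shape of that first automation pass can be sketched as below. To be clear, the node names, paths and commands here are invented for illustration, not our actual tooling; the point is that everything except the final restart is scripted and repeatable:

```python
import subprocess

# Hypothetical node list; in practice this would come from config.
NODES = ["app1.example.com", "app2.example.com"]

def build_deploy_steps(version, nodes=NODES):
    """Return the shell commands an automated deploy would run.

    Copy the build to every node, then run a smoke test on each.
    The service restart itself is deliberately absent: at this
    stage it was still done by hand over ssh.
    """
    steps = [f"rsync -a build/{version}/ deploy@{n}:/opt/app/" for n in nodes]
    steps += [f"ssh deploy@{n} /opt/app/bin/smoke-test" for n in nodes]
    return steps

def run(steps, dry_run=True):
    # dry_run prints the plan; set False to actually execute.
    for cmd in steps:
        if dry_run:
            print(cmd)
        else:
            subprocess.run(cmd, shell=True, check=True)
```

Even a plan-then-execute script like this removes most of the per-deploy human effort, which is what made weekly-or-better deploys feasible.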
Once a deploy was live we were still spending considerable time on behaviour verification. To address this we worked to improve our system- and load-testing capability. Doing so meant that we had more time to manually verify deploy-specific behaviour, safe in the knowledge that the general behaviour was covered by the tests.
This approach also requires a high level of trust in system monitoring. We have our own in-house monitoring system whose capabilities we expanded during this period. In particular, we improved our expression language to better state what constituted erroneous behaviour, and we also worked on better long-term trend analysis, taking inspiration from this paper. It’s no surprise to me that it came out of IMVU, who have been practising Continuous Deployment for a long time.
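To give a feel for what a trend check involves (this is a toy sketch, not our in-house system or the approach from the paper), one simple form compares a recent window of a metric against its longer-term baseline:

```python
from statistics import mean, stdev

def is_anomalous(history, window=10, threshold=3.0):
    """Flag the metric if the recent window's mean drifts more than
    `threshold` standard deviations from the historical baseline."""
    baseline, recent = history[:-window], history[-window:]
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu  # flat baseline: any change is suspect
    return abs(mean(recent) - mu) / sigma > threshold
```

A check like this is what lets a team trust that a bad deploy will surface quickly, rather than relying on manual inspection after every release.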
Since the act of deployment was now much less expensive, we looked to reduce the number of changes that went out in each deploy. At first this felt false; after all, if the user can’t use the feature in its entirety, what’s the point? We soon realised that smaller chunks were easier to verify and sped us up over time. We took an approach that I’ve since heard referred to as ‘experiments’, so that new functionality could be deployed live but hidden from regular users. It meant that we could demo new functionality in production without disrupting the business-as-usual service.
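The ‘experiments’ idea reduces, at its simplest, to a flag check at the branch point between old and new behaviour. The names and enrolment mechanism below are illustrative only:

```python
# Hypothetical experiment registry: new code paths ship dark and are
# visible only to users explicitly enrolled.
EXPERIMENTS = {
    "new-billing-report": {"alice@example.com", "bob@example.com"},
}

def is_enabled(feature, user):
    """True only for users enrolled in the named experiment."""
    return user in EXPERIMENTS.get(feature, set())

# Stand-in implementations for the two code paths.
def render_old_report(user):
    return "old report"

def render_new_report(user):
    return "new report"

def billing_report(user):
    if is_enabled("new-billing-report", user):
        # Deployed to production, but hidden from regular users.
        return render_new_report(user)
    return render_old_report(user)
```

The new code is live and exercised by real infrastructure, so it can be demoed and verified in production, while every unenrolled user continues to see business as usual.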
Breaking deploys down into a few days’ worth of work also improved our lead time, meaning that we could be more responsive in the event of a change of plan. It was during this period that we switched from time-boxing to Kanban. This is interesting, since Continuous Deployment is often championed by the lean startup movement.
More recently, actively pursuing Continuous Deployment has taken a back seat, but the next logical steps could be to further flesh out the system test coverage and then look to completely automate deployment to the staging environment (modulo database changes).
However, it doesn’t really matter what we do next; if it takes us a step closer to theoretically being able to deploy continuously, it will undoubtedly improve our existing lead time and responsiveness.
This post contains a number of Continuous Deployment resources, but a few further articles I found interesting include:-
The only way to learn to manage is to do it;
and the only way to do it, is to do it in front of people;
and the only way to do it in front of people, is to make a bunch of mistakes in a very public forum;
and the only saving grace is that, as an inexperienced manager, it’s really not clear quite how many mistakes are being made.
Me, ranting, in a pub, in West London
This, of course, is of small consolation to the manager’s team.
So the question is, how do we train people for team management without causing pain and suffering to the team? I don’t think there’s a simple answer, but it definitely helped me to have a chance to learn something outside of my professional life.
Back in the days when I had silly hair and green shoes, I used to play guitar in a band. Much like software teams, the problems a band faces are as much social as they are technical. A band needs someone to draw the group together, drive things forward and turn a bunch of dreamy-eyed losers into a bunch of dreamy-eyed losers who, you know, might get a gig. I’d love to think that I was in the band for my guitar excellence, but in truth my job was to keep things together. Sadly the Lonely Crowd never quite made it beyond the indie dives of London town, but it taught me a huge amount that I would later apply in managing teams of software developers.
Without trust it’s not possible for the group to work effectively. I’m not talking about trusting someone with a winning lottery ticket, more that I know I can rely on that person in the context of the project. Once the trust is gone the band is gone, it’s not coming back. Similarly, as a manager, my effectiveness is directly related to the trust within the team.
Generally, people who form bands are motivated, passionate people; no-one’s getting paid to be there, and even those more interested in impressing girls/boys than music need to make sure the band is as good as it can be. The easier it is for the group to concentrate on turning ideas into songs and turning songs into set lists, the more satisfying the whole thing will be. So don’t worry about motivation; focus on removing obstacles and dealing with cranky promoters.
Often the line between creative spark and creative fleurgh is very thin. Someone has to provide a vision for the group to work towards. In my case this meant coming to the band with rough song ideas, I’d bleed over these things in my bedroom, secretly very proud of my work, only for the rest of the guys to mutate it into something excellent. The point is that without that first step nothing would have happened. Remember that the aim is not to be the best musician, it’s to make the best musicians better.
A band has distinct roles, when people talk about the Beatles they rarely start with George Harrison, but his considered rhythm guitar parts made it possible for Lennon and McCartney to steal the show, similarly Bill and Ted were never going to get anywhere on their own. The point is that everyone needs to understand where they fit and exactly what they bring to the group, if the drummer is thinking like a lead guitarist, the band will sound awful no matter what.
Without good feedback the music will suffer, either through a lack of innovation or through a lack of quality control. The key is finding a way to express your thoughts, good or bad, without it being taken personally. Thinking managerially, the aim should be that the whole group can provide good feedback. Doing so effectively requires a high level of trust within the group as well as a sense of when to intervene if the criticism becomes destructive.
Getting a song to a performable state is a massive step. It brings the group together and feels like progress. It’s better to have three presentable songs than nine nearly finished ‘things’, not least because it then provides a means for feedback from outside of the group.
Where passionate people collaborate there will always be differences in opinion. Impassioned debate is healthy and a sign that band mates care about the project, but sometimes things get out of hand and it’s necessary for a third party to mediate. Generally it comes down to a breakdown in communication and trust, problems are often best fixed away from the rehearsal room and after the event once all concerned have had a chance to calm down.
So what are you saying, Neil: that before a new manager starts out they should spend three years in Spinal Tap? Hardly, but training for people management is a tricky subject. At some point it’s necessary to dive in with a real team, accept that mistakes will be made and aim to learn very quickly indeed. The thing is, there are plenty of opportunities to gain an introduction outside of work; for me it was guitar wrangling, and I’d love to hear what other people have found helpful.
In the previous post, Technical Debt is Different, I talked about the need to treat management of technical debt as a separate class of problem to that of feature requests generated outside of the team.
As with any project above a certain size, team collaboration is key, and that means having a reliable method of prioritising technical debt that the whole team can buy into. This post will describe a method that I have been using over the past year that satisfies this need.
I was new to my current project and wanted to get an idea from the team of the sorts of things that needed attention. I mentioned this just before lunch one day, and by the time I got back from my sandwich I had an etherpad with over 100 items. By the end of the afternoon, I had discovered that etherpad really doesn’t deal well with documents above a certain size.
It was clear that we needed a way to reference and store these ideas; I had two main requirements.
The first step was to go through the list and group items into similar themes; this helped identify duplicate or overlapping items. At this stage some items were rewritten to ensure that they were suitably specific and well-bounded.
Now that we had a grouped list of tasks it was time to attempt to prioritise. As discussed in the previous post, prioritising refactoring tasks can be challenging and passions are likely to run high. I felt that rather than simply stack ranking all items, it was better to categorise them against a set of orthogonal metrics. This led to a much more reasoned (though no less committed) debate of the relative merits of different tasks.
Every item was classified according to:-
The simplest metric, this is a very high level estimate of what sort of size the item was likely to be. Estimating the size helped highlight any differences in perceived scope, and in some cases items were broken down further at this point. Size estimation works best when estimates for tasks are relative to one another, however to seed the process we adopted the following rough convention.
Timeliness speaks of how the team feels about the task in terms of willingness to throw themselves into it. Items were assigned a timeliness value from four options.
How much will the team benefit from the change? Is it an area of the code base that is touched often? Perhaps it will noticeably speed the development of certain types of features. It could be argued that Value is the only metric that matters; however, Value needs to be considered in the context of risk (addressed through Timeliness) and effort (addressed through Size).
All items for a given Timeliness are measured relatively and given a score of ‘High’, ‘Medium’, ‘Low’. Low value items are rarely tackled, and even then, only if they happen to be in the Opportunity category.
Once all items had been classified, it was time to visualise the work. To do this we transferred the items to cards and stuck them to a pin board, with Timeliness on the horizontal axis and Value on the vertical axis (each card carried a reference to the task’s Size). Now it was possible to view all items at once, and from this starting point it was much easier to make decisions over which items to take next.
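The board amounts to grouping items by their (Timeliness, Value) classification. As a sketch (the item names, and all Timeliness labels other than ‘Opportunity’, are invented for illustration):

```python
from collections import defaultdict

# Each item carries the three orthogonal classifications described above.
items = [
    {"name": "Split billing module", "size": "L", "timeliness": "Now", "value": "High"},
    {"name": "Rename FooManager", "size": "S", "timeliness": "Opportunity", "value": "Low"},
    {"name": "Add service tests", "size": "M", "timeliness": "Now", "value": "Medium"},
]

def board(items):
    """Group items into (timeliness, value) cells, mirroring the pin
    board: Timeliness across, Value down, Size noted on each card."""
    cells = defaultdict(list)
    for item in items:
        card = f'{item["name"]} [{item["size"]}]'
        cells[(item["timeliness"], item["value"])].append(card)
    return dict(cells)
```

Laying the data out this way makes the next decision almost mechanical: look at the high-value column for the most timely category first, with Size as the tie-breaker.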
Since the whole team had contributed to the process, it was clear to individuals why, even though their own proposals were important, there was greater value in working on other items first. Crucially, we also had a process to ensure that these mid-priority items were not going to be forgotten, and trust that they would be attended to in due course.
When a task is completed, we place a red tick against it to demonstrate progress; this helps build trust within the team that we really are working off our technical debt. Sometimes a specific piece of work will, as a side effect, lead to the team indirectly making progress against a technical debt item. When this happens we add a half tick, indicating that this card should be prioritised over other similarly important items so that we get it finished off completely.
This system is effective in reducing the stress that comes with managing technical debt, and provides a means for the whole team to have a say in where effort is spent. However, one area where it is weak is in managing very small, relatively low-value tasks that can be completed in an hour or so. Examples might include removing unused code, reducing visibility on public fields, or renaming confusingly named classes – in essence, things that you might expect to happen as part of general refactoring were you already working in the area.
To manage these small easy wins, the team maintains an etherpad of ‘Tiny Tasks’ and reviews new additions to the list on a weekly basis. The rule is that if anyone considers a task to be anything other than trivial it is thrown out and considered as part of the process above. These tasks are then picked up by the developer acting as the maintainer during the week.
Generally it is easier if an individual has final say over the prioritisation of tasks; in the case of technical debt this is harder, since the whole team should be involved. Therefore, a trusted method of highlighting and prioritising technical debt tasks is needed. By breaking the prioritisation process down into separate ‘Size’, ‘Timeliness’ and ‘Value’ metrics, it was possible to have a more reasoned discussion over the relative merits of items. Visualising the items together at the end of the categorisation process enables the team to make better decisions over what to work on next, and builds trust that items will not simply be forgotten. Very small items can still be prioritised if the team agrees that they really are trivial.
Technical debt is a great metaphor to describe what happens to your code base if you don’t continually keep it clean and tidy.
Any software team accrues technical debt, either intentionally to satisfy a short-term win, or unintentionally as the design and requirements of a system drift over time. It’s a subject that has been written about extensively, and I particularly liked NRG’s attempt to track it as part of their weekly metrics (you’ll want slide 9).
The thing I’ve noticed about servicing technical debt is that it is very different from other work a team might undertake and that it requires an alternative approach to manage it. The principal differences that I see are:-
I am fortunate to work for a company with strong engineering leadership that acknowledges and makes provision for the servicing of technical debt. However, even if the argument for technical debt has been won, deciding how best to tackle debt can be highly contentious and in some cases destructive.
The biggest problem is that of prioritisation. In many agile teams you would hope to have a single product owner who can make prioritisation decisions for product features; in practice this can be hard for an organisation to provide, but the key point is that it’s important to minimise the number of final decision makers.
In the case of technical debt it is the dev team that decides, which means thrashing out the priorities across the entire team. Each developer will have a different, often very strongly held, view on what is important, and arriving at a conclusion can be a long and painful journey. Additionally, existing project prioritisation tools such as MoSCoW do not lend themselves to technical debt prioritisation.
A trusted means to prioritise tasks makes it possible to identify a team-wide strategy. Without a clear strategy there is the temptation for individuals to ‘go it alone’, which means that over time the overall impact is reduced. Firstly, larger items that are too big for one person are ignored; secondly, if it is not possible to decide on what is important, then collaboration becomes difficult. This means that the impact of smaller items is also diminished, since they will feed into the individual developer’s strategic vision rather than the team’s overall vision. This in itself can become toxic, as it breaks down trust within the team and further hampers collaboration. Rachel Davies has a great post describing the effects of self-orientation on team trust.
The fact that technical debt is being tackled at all is a good thing, but it would be nice to do it in as efficient a way as possible. My team and I already spend a significant amount of energy on improving our ability to deliver valuable software in a consistent fashion, and our approach to managing technical debt should be no less disciplined. The only difference is that this time around, we are our own customer.
It’s clear that some form of prioritisation method is necessary, but committees are generally not a good way to make decisions. One approach is to assign a final decision maker, perhaps a tech lead or senior member of the team, but I really want a system where the entire team buys into the process. If the process is right then it should be rare for someone to have to say ‘this is how it is’.
Over the past year I’ve been working on a system to better manage my team’s technical debt, in my next post I’ll go on to explain the approach.
In addition to speaking on Kanban at XP Day 2010 I also gave a short lightning talk based on my earlier fragile posts on bug tracking (1, 2). Initially I was apprehensive about standing up in front of a room of agilistas and telling them I’d dispensed with digital bug tracking, but since the previous session had been about throwing everything out (really, really everything: just deploy to live), I felt positively conservative.