Too Much Trust

Trust trust trust trust trust trust trust trust trust trust
Excerpt from the management book I wish someone would write

 

A central theme in agile software development is that of trust. The agile (small a) movement speaks of openness, collaboration and collective responsibility – none of which are possible without trust. As a manager, my team cannot be effective if they do not trust each other, nor can I bring about anything but the most superficial change if they don’t trust me.

I’m not the only one who feels this way; it turns out I’m in good company [1] [2] [3]

So I like trust and consider it to be a ‘good thing’ but the point of this post is not to talk about how great it would be if there was more trust in the world. In fact I want to talk about situations where increasing trust can actually be destructive.

The total level of trust is undoubtedly important, but equally important is the distribution of that trust. The greater the differential between the relationship containing the most trust and that containing the least, the less chance the overall group has of acting as an effective team.

A good high-level example might be an engineering org and a sales org. It doesn’t matter how much internal trust exists within each org – if org-to-org trust is low the company will not perform as well. In fact the lack of inter-org trust will be felt all the more keenly in contrast to the strong internal trust that exists.

Applying this idea to a single engineering team, if a team has high trust for one another and a new member joins, then it will take time for that new member to earn the group’s trust and be accepted as part of the team. This is healthy and only natural. However, if the team is split down the middle into two groups with high internal trust who do not trust one another, then strengthening internal group trust will only entrench the distrust of the other group. In this case increasing trust can actually be harmful.

What I’m saying is that a group’s effectiveness as a team can be characterised by the weakest trust links in the group. If the differential between relationships is high, then increasing trust in already strong relationships can actually hinder rather than help the team.

From a practical perspective, the manager’s job is always to create an environment where trust can grow, but it is important to focus on the low trust relationships since they are the ones that characterise the effectiveness of the team.

Worried about candidates googling during a phone screen? You’re doing it wrong.

Interviewing is time consuming; companies have a finite amount of time to dedicate to recruitment, and inevitably some capable candidates are turned down at CV stage without ever having a chance to shine.

Phone screens are a great way to address this problem: they are typically shorter and often run solo. They allow a company to take more risks and consider candidates from further afield.

My company is still pretty new to phone screening; we’ve been trialling it in cases where it is difficult for the candidate to attend in person – perhaps they are based overseas. As a result I’ve been doing a lot of reading on how best to construct a decent phone screen. By far the best writing I’ve found is Steve Yegge’s take. I’m not sure how practical it is to fit everything Yegge mentions into a 45 minute call, but I consider it an excellent resource.

A common fear I have seen in other discussions seems to be that candidates will use google to somehow game the system. If this is a genuine concern then one of two things has gone wrong. Either:-

  • The questions are purely fact based and will tell the interviewer nothing about how the candidate thinks.
  • Or, the questions are fine but the interviewer is focusing on the wrong part of the answer.

A question like ‘In Java, what is the difference between finally, final and finalize?’ will tell you very little about the candidate. Plenty of terrible programmers could answer it without problem and, what’s worse, a talented but inexperienced developer might stumble. In short, these types of quick-fire questions add little value to the overall process.

Something like ‘How does a Hash Map work? How would you write a naive implementation?’ is more interesting: it’s open ended but forces the candidate to talk about a specific area of knowledge – even if they don’t know, you’ll learn how good they are at thinking things through from first principles. The only way it can be gamed through googling is if the interviewer is simply waiting to hear specific terms and is not asking free-form follow-ups.

I’ve just googled Hash Maps on Wikipedia and could probably quickly extract ‘associative array’, ‘key-value pair’ and ‘collision’, but really, if that’s all the interviewer wants to hear then the question is of limited value.
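
For a sense of scale, here is the sort of naive sketch (in Java, purely illustrative – a fixed bucket count with collisions handled by chaining, no resizing or null handling) that a candidate might reasonably talk through. It is not a model answer, just an indication of the level of detail the question invites.

    import java.util.ArrayList;
    import java.util.List;

    // Deliberately naive hash map: a fixed array of buckets, each a list of entries.
    // Collisions are handled by chaining; there is no resizing or null handling.
    class NaiveHashMap<K, V> {

        private static class Entry<K, V> {
            final K key;
            V value;
            Entry(K key, V value) { this.key = key; this.value = value; }
        }

        private static final int BUCKET_COUNT = 16;
        private final List<List<Entry<K, V>>> buckets = new ArrayList<>();

        NaiveHashMap() {
            for (int i = 0; i < BUCKET_COUNT; i++) {
                buckets.add(new ArrayList<Entry<K, V>>());
            }
        }

        // Map the key's hashCode onto a bucket index (mask off the sign bit first).
        private int indexFor(Object key) {
            return (key.hashCode() & 0x7fffffff) % BUCKET_COUNT;
        }

        public void put(K key, V value) {
            List<Entry<K, V>> bucket = buckets.get(indexFor(key));
            for (Entry<K, V> e : bucket) {
                if (e.key.equals(key)) { e.value = value; return; } // overwrite existing key
            }
            bucket.add(new Entry<K, V>(key, value));
        }

        public V get(K key) {
            for (Entry<K, V> e : buckets.get(indexFor(key))) {
                if (e.key.equals(key)) { return e.value; }
            }
            return null; // key absent
        }
    }

The interesting conversation is in the follow-ups – what happens when two keys collide, why the bucket count matters, when you would resize – none of which can be answered by skimming a search result.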

So what I’m saying is that if you’re concerned about googling, then it’s probably the questions or desired answers that are the problem. Furthermore, if one in a hundred people do manage to game the system, you’ll pick them up in the face-to-face in an instant.

Why Work At Your Company?

Recruitment is all about relationships and trust, yet whichever way you look at it, common recruitment practices support neither. While there are countless articles focusing on how hard it is to hire good developers, little is said about how to find good companies. Trust works both ways, and in order to ‘fix’ recruitment both sides of the trust equation must be balanced. Examining each in turn:-

Employer -> Candidate trust

Employers have low trust in external recruiters, low trust in CVs, and low trust that candidates can complete a Fizz Buzz question. This means that it’s not possible to invest sufficient time in individual applications, which in turn makes it less likely they’ll ever attract really good people.

Services like LinkedIn and Stack Overflow have made some gains in solving the employer -> candidate trust problem. In LinkedIn’s case they have scaled the ability to ‘ask around’ for recommendations and Stack Overflow provides a feel for someone’s knowledge. Neither is perfect and in truth the best they can do is give me confidence that the candidate is not a total waste of time.

Candidate -> Employer trust

The Candidate -> Employer problem is more interesting, not least because it’s generally ignored. Unless you happen to be Google or Facebook, candidate -> employer trust is a major stumbling block. How can a candidate be sure that they are dealing with a good company? They can’t trust their agent to have a clue (or care) and they themselves will not be aware of a host of interesting companies. As such, applications tend towards the bland and generic since candidates cannot afford to spend days tailoring individual introductions; this in turn fuels the employer perception that passionate, interested candidates do not exist.

As an example, I work for a small B2B Telecoms company; our work is in the public eye, but our brand is not. Most developers will not be aware of us. Once hired, developers tend to want to stay with us, with the working environment and the freedom to ‘get things done’ playing a big part in that. However, as a company I have no easy way to express this. It’s not even a case of saying ‘Isn’t my company great!’, it’s much more about describing the trade-offs. Not everyone will appreciate the chaos, pace and variety of working at a small company; some will prefer the promise of a well-defined career path, security and the greater opportunity to specialise typically afforded by a larger organisation. It’s down to personal opinion.

Individual companies can solve this problem by publishing an engineering blog, sponsoring community events, getting people speaking at conferences and generally exposing their culture and values. It could be argued that companies willing to go to these lengths clearly value recruitment more highly than others and deserve the rewards. However, if there were some way for candidates to pull that information rather than have it pushed to them, that would be hugely valuable.

The closest example I can see is the Joel Test. To me the Joel Test is starting to show its age and could benefit from an update; the best it can say is ‘this company is less likely to be a horrific place to work’. Glassdoor also addresses this in part, though practically speaking companies must be of a certain size before it becomes useful.

I’m not sure what the solution might be. Perhaps a curated job board/job fair is the way to go: the curator finds a way to characterise companies and makes sure it only backs good ones. This builds trust with candidates, and should mean that it attracts the top people, especially those for whom money is not the top driver. Companies are happy to pay decent rates because they know how good the candidate pool is; furthermore, there is prestige in being associated with the agency.

The Challenge

So, world, here is my challenge to you. How can I, as a company, express my culture and values in a meaningful and standard way so that candidates can approach me with confidence?

‘We only hire the best’ – I don’t believe you

Ask anyone about hiring developers and the advice is always the same: ‘only hire the best’.

On the face of it this seems like great advice – who wouldn’t want to hire the best? It turns out pretty much everybody.

For instance, how long are you willing to wait to fill the position? What if you are really, really stretched? What if you’re so stretched that you worry for existing staff? What if hiring a specific individual will mean huge disparities in pay between equally productive staff? What if not making the hire is the difference between keeping a key client and losing them? At some point every company has to draw a line and elect to hire ‘the best we’ve seen so far’.

The difference between the great companies and the rest is how they deal with this problem. Great organisations place recruitment at the centre of what they do. If hiring is genuinely everyone’s number one priority then hiring the best becomes more achievable. For starters, you might even have half a chance of getting ‘the best’ into your interview room in the first place.

Of the rhetorical questions posed above, in all cases the impact can be minimised (though not eradicated) so long as management understands and anticipates the challenges in recruitment. For example, “What if hiring them will mean huge disparities in pay between equally productive staff?” A company that intends to hire the best understands the value of keeping the best. So compensation of existing staff, especially longer-serving staff relying on annual raises to ensure market parity, must be at an appropriate level. Doing so can be hugely expensive when multiplied over all employees, and this cost comes directly from the bottom line. Companies that put recruitment at the core are willing to make the investment. Yishan Wong’s writing on this subject is brilliant.

If hiring really is everyone’s number one priority then there is a trade-off to make: something has been deprioritised or sacrificed to make room. As a result hiring is much more than a partitionable activity; it is a statement of corporate identity. Proclamations like “we only hire the best” are meaningless without an understanding of the trade-offs and sacrifices made.

People are not our most valuable resource – a response

Pawel Brodzinski recently wrote a post entitled ‘People are not our most valuable resource’, the point being that people aren’t resources at all, they’re people and should be treated as such.

 

“Every time I hear this cliché about people being most valuable resource I wonder: how the heck can you say people are most valuable when you treat them as resource? As commodity. As something which can be replaced with another identical um… resource. If you say that, you basically deny that people in your organization are important.”

I’m in agreement with Pawel on this point, but I’d go further. Not only is a statement like ‘People are our most valuable resource’ degrading and counter-productive, even if you restate it as ‘Nothing is more important than our people’ it’s still incorrect. The real value has nothing to do with people and everything to do with teams.

The key thing that a team provides is a means to align the goals of its members. These goals need not be for the greater good of humanity, in fact they’re generally much more mundane. It really doesn’t matter who wins the world cup* or whether project omega will ship by next Tuesday, all that matters is that the team succeeds in its common goal. A group all pulling in the same direction is orders of magnitude more effective than that same group working as individuals – a business cannot be successful without effective teams.

The trouble is the word ‘team’ is massively overused; it’s a buzzword that has become so ubiquitous we don’t even notice it. The tendency to assemble a group of disparate people and label them a ‘team’ devalues the concept. One area where this is especially true is that of ‘The Management Team’: generally comprised of middle-management peers from various disciplines, this group often has very little in common in terms of shared goals and identity.

And here lies the problem: if management is unused to working in a team themselves, then the value of a team is less visible. Furthermore, since it is generally individuals, not the team as a whole, who complete the component tasks, the team effect is not obvious from afar.

I don’t think you’ll find an organisation that is anti-team; it’s simply that it’s hard to prioritise the tasks necessary to encourage team formation when the value of teams is poorly understood. It’s easy to measure the cost of co-location but much harder to measure the benefit to the co-located team, hence the true value of the team is passed over.

Not only are people not ‘our most valuable resource’, people aren’t our most valuable anything just on their own – it’s all about teams.

[In this post I’ve purposely avoided the subject of how to form a team. It turns out that it’s quite tricky; I’d recommend Peopleware as a good place to start.]

* Except if it’s England of course.

 

 

Hell is for Heroes

Last December I attended the London XPDay; the session I enjoyed most was run by Chris Matts on the subject of Heroes in the context of software development teams.

What makes a Hero?

Chris put forward the idea of the ‘Hero’, an unofficial role assumed by an experienced developer critical to the project. There are positive aspects to the role: this is the person turned to in the moment of crisis when something must be fixed ‘right now’; they are the person with the deepest knowledge of the system and an indispensable contributor to the project. As with many things, however, this strength can also be a weakness. The feeling of being indispensable is very powerful, and freely sharing knowledge and collaborating with other less experienced/capable team mates only undermines this feeling. If the Hero is no longer the only person who can fix the problem, surely that makes them less important?

In extreme cases the presence of the Hero becomes toxic; reluctance to collaborate is unpleasant, but active obstruction and obfuscation is something else. At this point the team/project has some serious problems. On the one hand it is doomed to failure without the Hero; on the other hand, the ability of the group to act as a team evaporates and progress on the project is brought all but to a standstill.

What to do when a Hero goes bad?

We spoke for about an hour on the subject and, while there were one or two examples that partially dealt with the problem, ranging from ‘move them to another project’ through to ‘fire them’ (!), no-one in attendance was able to provide a truly positive example of recovering from such a scenario.

I am fortunate in never having worked with anyone quite as extreme as the examples presented in the session, but where I have seen glimpses of this behaviour my sense is that overwhelmingly, the behaviour is the product of environment rather than necessarily any failing on the part of the individual.

The participants in the session seemed to be made up of managers/coaches rather than out-and-out developers, which may explain why much of the discussion seemed to presuppose that the fault lay solely with the Hero.

Common environmental factors that I have observed include:-

  • Perceived ambiguity over who has technical responsibility for a system
  • Poor performance feedback and/or poorly communicated career development
  • Lack of trust/respect amongst team mates
  • Seemingly overwhelming operational issues
  • Compensation schemes pitting team mates against one another

It is the manager, not the Hero, who has most influence over these points. So I think that before answering the question ‘What to do when your Hero goes bad?’, a better question, as a manager, is ‘What have I done wrong to allow my Hero to go bad?’.

Focus on Teams

Unhelpfully, as with all management problems, the best way to solve the problem is not to have it in the first place. Placing greater emphasis on the performance of the team rather than on the individual can help here. Any action that benefits the whole team is recognised and celebrated so the Hero need not lose prestige by supporting those around him. In fact the Hero’s standing increases since he is now multiplying his own capability through increasing the skills of the team. As a side effect, since the team is now more capable the Hero has more time to spend on the truly difficult problems, which in time, he will pass onto the rest of the team.

Switching focus away from individuals and towards the team is a non-trivial exercise, but if the agile movement has brought us anything it is methods to engender collaboration, trust and team-level thinking.

In Praise of Continuous Deployment

It doesn’t matter if you get there, every step along the way is an improvement.
Me, praising Continuous Deployment

Ever since coming across the idea on Eric Ries’s blog I’ve been a big fan of Continuous Deployment. For those unfamiliar with the term, it means writing your code, testing frameworks and monitoring systems in such a way that it is possible to completely automate the process of going from a source control commit to deployment on a live system without causing a quality meltdown. This means teams can find themselves deploying 50 times a day as a matter of course.


It’s not without its critics, and a lot of people see this as a one-way ticket to putting out poorly tested, buggy code. I think those folk completely miss the point and that in many scenarios the opposite is in fact true. The thing I really like, though, is that whether or not you ever get to the point of automatically deploying every commit to live, every step you might take to get there is hugely positive.

So, really, what would have to happen in order to employ a Continuous Deployment regime?

18 months ago my then team started to take this idea more seriously. I thought it would be interesting to give an overview of the steps taken towards Continuous Deployment and, since we’re certainly not there yet, what we plan to do in the future.

We started from a point where we would release to the live environment every few weeks. Deployments, including pre- and post-deploy testing, could take two people half a day, sometimes more. I should also say that we are dealing with machine-to-machine SaaS systems where the expectation is that the service is always available.

Reduce manual deployment load

Our first efforts aimed to reduce the human load of deployment through automation. Fear meant that we still needed to ssh into every node to restart, but every other step was taken care of. This meant that it eventually became commonplace to deploy multiple times a week across multiple platforms.

Improve system test coverage

Once a deploy was live we were still spending considerable time on behaviour verification. To address this we worked to improve our system and load testing capability. Doing so meant that we had more time to manually verify deploy-specific behaviour, safe in the knowledge that the general behaviour was covered by the tests.
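
The individual checks needn’t be sophisticated to be valuable. As a purely hypothetical illustration (the endpoint and URL below are invented, not our actual system), even a post-deploy smoke test as simple as this catches a surprising number of problems:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical post-deploy smoke test: hit a health endpoint and fail loudly
    // if the service does not respond cleanly. The URL is illustrative only.
    class SmokeTest {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://staging.example.com/health")).build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 200) {
                throw new AssertionError("Service not healthy after deploy: " + response.statusCode());
            }
            System.out.println("Smoke test passed");
        }
    }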

Improve system monitoring

This approach also requires a high level of trust in system monitoring. We have our own in-house monitoring system whose capabilities we expanded during this period. In particular, we improved our expression language to better state what constituted erroneous behaviour, and we also worked on better long-term trend analysis, taking inspiration from this paper. It’s no surprise to me that it came out of IMVU, who have been practising Continuous Deployment for a long time.
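
I can’t share the expression language itself, but to give a flavour of the kind of rule it expresses, here is a hypothetical sketch (names, windows and thresholds are all invented for the example) that flags behaviour as erroneous when the recent error rate drifts well above a longer-term baseline:

    // Hypothetical check: compare a short-window error rate against a long-window baseline.
    // The windows and the tolerated ratio are illustrative values, not our real configuration.
    class ErrorRateCheck {

        private final double toleratedRatio; // e.g. 3.0 means "three times the baseline is an alert"

        ErrorRateCheck(double toleratedRatio) {
            this.toleratedRatio = toleratedRatio;
        }

        boolean isErroneous(long errorsLastHour, long requestsLastHour,
                            long errorsLastWeek, long requestsLastWeek) {
            if (requestsLastHour == 0 || requestsLastWeek == 0) {
                return false; // not enough data to judge either window
            }
            double recentRate = (double) errorsLastHour / requestsLastHour;
            double baselineRate = (double) errorsLastWeek / requestsLastWeek;
            return recentRate > baselineRate * toleratedRatio;
        }
    }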

Reduce deploy size

Since the act of deployment was now much less expensive, we looked to reduce the number of changes that went out in each deploy. At first this felt false – after all, if the user can’t use the feature in its entirety, what’s the point? We soon realised that smaller chunks were easier to verify and sped us up over time. We took an approach that I’ve since heard referred to as ‘experiments’, so that new functionality could be deployed live but was hidden from regular users. It meant that we could demo new functionality in production without disrupting the business-as-usual service.
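
The mechanics behind an ‘experiment’ can be very modest. A minimal sketch of the idea (the names are hypothetical, not our actual code) is a toggle that only exposes the new code path to a whitelist of users, with everyone else falling back to the existing behaviour:

    import java.util.Set;

    // Minimal 'experiment' toggle: new functionality ships in the same deploy,
    // but only whitelisted users (for example the team giving a demo) ever see it.
    class Experiment {

        private final boolean enabled;
        private final Set<String> whitelistedUsers;

        Experiment(boolean enabled, Set<String> whitelistedUsers) {
            this.enabled = enabled;
            this.whitelistedUsers = whitelistedUsers;
        }

        boolean isVisibleTo(String userId) {
            return enabled && whitelistedUsers.contains(userId);
        }
    }

    // At the call site the old path remains the default:
    //
    //   if (newReportingExperiment.isVisibleTo(user.id())) {
    //       renderNewReportingScreen();      // hypothetical new behaviour
    //   } else {
    //       renderExistingReportingScreen(); // business as usual for everyone else
    //   }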

Embrace lean inspired methodology

Breaking deploys down into a few days’ worth of work also improved our lead time, meaning that we could be more responsive in the event of a change of plan. It was during this period that we switched from time boxing to Kanban. This is interesting since Continuous Deployment is often championed by the lean startup movement.

The future

More recently, actively pursuing Continuous Deployment has taken a back seat, but the next logical steps could be to further flesh out the system test coverage and then look to completely automate deployment to the staging environment (modulo database changes).

However, it doesn’t really matter what we do next; if it takes us a step closer to theoretically being able to deploy continuously it will undoubtedly improve our existing lead time and responsiveness.

This post contains a number of Continuous Deployment resources, but a few further articles I found interesting include:-

What being in a band taught me about management

The only way to learn to manage is to do it;
and the only way to do it, is to do it in front of people;
and the only way to do it in front of people, is to make a bunch of mistakes in a very public forum;
and the only saving grace is that, as an inexperienced manager, it’s really not clear quite how many mistakes are being made.

Me, ranting, in a pub, in West London

This, of course, is of small consolation to the manager’s team.

So the question is, how do we train people for team management without causing pain and suffering to the team? I don’t think there’s a simple answer, but it definitely helped me to have a chance to learn something outside of my professional life.

Back in the days when I had silly hair and green shoes, I used to play guitar in a band. Much like software teams, the problems a band faces are as much social as they are technical. A band needs someone to draw the group together, drive things forward and turn a bunch of dreamy-eyed losers into a bunch of dreamy-eyed losers who, you know, might get a gig. I’d love to think that I was in the band for my guitar excellence, but in truth my job was to keep things together. Sadly the Lonely Crowd never quite made it beyond the indie dives of London town, but it taught me a huge amount that I would later apply in managing teams of software developers.

Trust is key

Without trust it’s not possible for the group to work effectively. I’m not talking about trusting someone with a winning lottery ticket, more that I know I can rely on that person in the context of the project. Once the trust is gone the band is gone, it’s not coming back. Similarly, as a manager, my effectiveness is directly related to the trust within the team.

No need to motivate, just don’t demotivate

Generally, people who form bands are motivated, passionate people; no-one’s getting paid to be there, and even those more interested in impressing girls/boys than music need to make sure the band is as good as it can be. The easier it is for the group to concentrate on turning ideas into songs and turning songs into set lists, the more satisfying the whole thing will be. So don’t worry about motivation; focus on removing obstacles and dealing with cranky promoters.

Provide a vision

Often the line between creative spark and creative fleurgh is very thin. Someone has to provide a vision for the group to work towards. In my case this meant coming to the band with rough song ideas, I’d bleed over these things in my bedroom, secretly very proud of my work, only for the rest of the guys to mutate it into something excellent. The point is that without that first step nothing would have happened. Remember that the aim is not to be the best musician, it’s to make the best musicians better.

Roles and responsibilities

A band has distinct roles. When people talk about the Beatles they rarely start with George Harrison, but his considered rhythm guitar parts made it possible for Lennon and McCartney to steal the show; similarly, Bill and Ted were never going to get anywhere on their own. The point is that everyone needs to understand where they fit and exactly what they bring to the group; if the drummer is thinking like a lead guitarist, the band will sound awful no matter what.

Feedback

Without good feedback the music will suffer, either through a lack of innovation or through a lack of quality control. The key is finding a way to express your thoughts, good or bad, without it being taken personally. Thinking managerially, the aim should be that the whole group can provide good feedback. Doing so effectively requires a high level of trust within the group as well as a sense of when to intervene if the criticism becomes destructive.

Limit work in progress

Getting a song to a performable state is a massive step. It brings the group together and feels like progress. It’s better to have three presentable songs than nine nearly finished ‘things’, not least because it then provides a means for feedback from outside of the group.

Manage internal tensions

Where passionate people collaborate there will always be differences in opinion. Impassioned debate is healthy and a sign that band mates care about the project, but sometimes things get out of hand and it’s necessary for a third party to mediate. Generally it comes down to a breakdown in communication and trust, problems are often best fixed away from the rehearsal room and after the event once all concerned have had a chance to calm down.

So what are you saying, Neil – before a new manager starts out they should spend three years in Spinal Tap? Hardly, but training for people management is a tricky subject. At some point it’s necessary to dive in with a real team, accept that mistakes will be made and aim to learn very quickly indeed. The thing is, there are plenty of opportunities to gain an introduction outside of work; for me it was guitar wrangling, and I’d love to hear what other people have found helpful.

How I Manage Technical Debt

In the previous post, Technical Debt is Different, I talked about the need to treat the management of technical debt as a separate class of problem from that of feature requests generated outside of the team.

As with any project above a certain size, team collaboration is key, and that means having a reliable method of prioritising technical debt that the whole team can buy into. This post will describe a method that I have been using over the past year that satisfies this need.

Identify

I was new to my current project and wanted to get an idea from the team of the sorts of things that needed attention. I mentioned this just before lunch one day and by the time I got back from my sandwich I had an etherpad with over a hundred items. By the end of the afternoon, I discovered that etherpad really doesn’t deal well with documents above a certain size.

It was clear that we needed a way to reference and store these ideas. I had two main requirements:-

  • An easy way to visualise the work items
  • An easy, non-contentious way to assign priority

The first step was to go through the list and group items into similar themes; this helped identify duplicate or overlapping items. At this stage some items were rewritten to ensure that they were suitably specific and well-bounded.

Prioritise

Now that we had a grouped list of tasks it was time to attempt to prioritise. As discussed in the previous post, prioritising refactoring tasks can be challenging and passions are likely to run high. I felt that rather than simply stack-ranking all items, it was better to categorise them against a set of orthogonal metrics. This led to a much more reasoned (though no less committed) debate about the relative merits of different tasks.

Every item was classified according to:-

  • Size
  • Timeliness
  • Value

Size

The simplest metric, this is a very high-level estimate of what sort of size the item is likely to be. Estimating the size helped highlight any differences in perceived scope, and in some cases items were broken down further at this point. Size estimation works best when estimates for tasks are relative to one another; however, to seed the process we adopted the following rough convention:-

  • Small – A week
  • Medium – 2 weeks
  • Large – 3 weeks for 2 people

Timeliness

Timeliness speaks of how the team feels about the task in terms of willingness to throw themselves into it. Items were assigned a timeliness value from four options.

  • ASAP – There is no reason not to do this task right now. Typical examples include obvious items that the team were all highly in favour of, or items that the team had been aware of for some time and felt that enough was enough.
  • Opportunity – An item that lends itself to being worked on while the team is already working in the area.
  • Medium term – An item that is thought of as a ‘wouldn’t it be nice some day’. These items are typically riskier than ASAP or Opportunity items and the team needs to really commit to its execution before embarking on the item.
  • Long term – Similar to medium and generally populated by reviewing the medium section and selecting items that are imposing or risky enough to postpone behind other medium tasks.

Value

How much will the team benefit from the change? Is it an area of the code base that is touched often? Perhaps it will noticeably speed up development of certain types of features. It could be argued that Value is the only metric that matters; however, Value needs to be considered in the context of risk (addressed through Timeliness) and effort (addressed through Size).

All items for a given Timeliness are measured relative to one another and given a score of ‘High’, ‘Medium’ or ‘Low’. Low value items are rarely tackled, and even then only if they happen to be in the Opportunity category.

Visualise

Once all items had been classified it was time to visualise the work. To do this we transferred the items to cards and stuck them to a pin board, with timeliness on the horizontal axis and value on the vertical axis (each card contained a reference to the task size). Now it was possible to view all items at once, and from this starting point it was much easier to make decisions over which items to take next.

Since the whole team had contributed to the process, it was clear to individuals why, even though their own proposals were important, there was greater value in working on other items first. Crucially, we also had a process to ensure that these mid-priority items would not be forgotten, and trust that they would be attended to in due course.

Technical Debt Board

When a task is completed we place a red tick against it to demonstrate progress; this helps build trust within the team that we really are working off our technical debt. Sometimes a specific piece of work will, as a side effect, lead to the team indirectly making progress against a technical debt item. When this happens we add a half tick, indicating that this card should be prioritised over other similarly important items so that we get it finished off completely.
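
Everything on a card boils down to a handful of fields. We use a physical board rather than a tool, but purely as an illustrative sketch of the classification (hypothetical code, not something we actually run), a card could be modelled like this:

    // Hypothetical model of a technical debt card and its three classifications.
    enum Size { SMALL, MEDIUM, LARGE }
    enum Timeliness { ASAP, OPPORTUNITY, MEDIUM_TERM, LONG_TERM }
    enum Value { HIGH, MEDIUM, LOW }

    class DebtItem {
        final String description;
        final Size size;             // rough effort: small, medium, large
        final Timeliness timeliness; // how soon the team wants to tackle it
        final Value value;           // relative benefit within its timeliness column
        boolean halfTick;            // indirect progress made as a side effect of other work
        boolean done;                // the full red tick on the board

        DebtItem(String description, Size size, Timeliness timeliness, Value value) {
            this.description = description;
            this.size = size;
            this.timeliness = timeliness;
            this.value = value;
        }
    }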

Tiny Tasks

This system has been effective in reducing the stress that comes with managing technical debt and has provided a means for the whole team to have a say in where the team spends its effort. However, one area where it is weak is in managing very small, relatively low value tasks that can be completed in an hour or so. Examples might include removing unused code, reducing visibility on public fields, or renaming confusingly named classes – in essence, things that you might expect to happen as part of general refactoring were you already working in the area.

To manage these small easy wins, the team maintains an etherpad of ‘Tiny Tasks’ and reviews new additions to the list on a weekly basis.  The rule is that if anyone considers a task to be anything other than trivial it is thrown out and considered as part of the process above. These tasks are then picked up by the developer acting as the maintainer during the week.

So what does it all mean?

Generally it is easier if an individual has the final say over the prioritisation of tasks; in the case of technical debt this is harder since the whole team should be involved. Therefore, a trusted method of highlighting and prioritising technical debt tasks is needed. By breaking the prioritisation process down into separate ‘Size’, ‘Timeliness’ and ‘Value’ metrics, it was possible to have a more reasoned discussion over the relative merits of items. Visualising the items together at the end of the categorisation process enables the team to make better decisions over what to work on next and builds trust that items will not simply be forgotten. Very small items can still be prioritised if the team agrees that they really are trivial.