I was recently interviewed by London-based job board Hacker Jobs. The interview covers a range of subjects but focuses on software development and technical recruitment. There’s nothing too crazy in there, but I did manage to get in a quote from Tim Berners-Lee.
People hate change, and the reason they hate change is that they really hate change, and that change is hated because they really hate change…
I’d love to know who said this.
All teams are subjected to continuous environmental change, but it tends to be gradual and hard to perceive at a week-by-week level. I want to talk about the sharp, often unexpected step changes, and go into some strategies to guide a team through the worst.
Before diving in, I want to introduce a model for characterising teams. There are two attributes that I consider critical in determining a team’s ability to function.
- Identity – Who the team perceive themselves to be, what they value.
- Narrative – Why the team exists, what value they bring.
I’m unaware of anyone else talking specifically in these terms, but similar thinking appears in Daniel Pink’s ideas of Autonomy and Mastery (both mapping to Identity) and Purpose (Narrative), with echoes in the higher levels of Maslow’s hierarchy of needs.
Ordinarily, definition of identity and narrative is straightforward. The team will arrive at their own identity over time, while the narrative, for the most part, comes from the commercial arm of the company. In times of change there are no such guarantees. I’ll look at each in turn.
As individuals, our identity is in part context-specific and a function of those around us. The same is true for teams. This means that when the environment changes quickly, it can be difficult for a team to define itself. Definition means identifying the skills and attributes that set it apart and, most importantly, what it values when compared to those around it.
A manager can help speed this process. They have a bird’s-eye view: they know how their team have defined themselves in the past and have more opportunities to interact with the broader business. The manager ought to be able to spot and highlight specific points that will form part of the team’s new, long-term identity.
Additionally, during upheaval it falls to the manager to contextualise the actions and focus of other teams and departments. It’s all too easy to enter a spiral where ‘everyone apart from us is an idiot’. A team needs to understand how they are different, but they also need to collaborate and work effectively with those around them.
Narrative is interesting in that it should be easy to identify. The business is willing to invest in the team for some purpose and that purpose ought to be the team’s narrative.
During times of upheaval this is not a given, and it could take months for a clear narrative to emerge, as the dust settles and the business redetermines the best way for the team to add value.
But waiting months for the new vision is not an option. Put bluntly, if the business cannot provide a compelling narrative quickly then the team manager must arrive at one. Once again it is time to make use of the manager’s elevated view of the organisation to sift through the confusion and draw out something tangible that resonates.
All teams need a sense of identity and a sense of narrative in order to be productive. During times of significant change both of these characteristics come into question. It is up to the team’s manager to act as the catalyst, as the team aims to arrive at new definitions.
Mrs Fragile recently bought a hand made lamp shade online and was disappointed with the results. As a keen crafter, she wondered if she could do better, and perhaps even sell some of her own creations.
A key idea in lean startups is that metrics ought to be actionable. On his blog, Ash Maurya explains actionable metrics:
An actionable metric is one that ties specific and repeatable actions to observed results.
The opposite of actionable metrics are vanity metrics (like web hits or number of downloads) which only serve to document the current state of the product but offer no insight into how we got here or what to do next.
Tracking sales is of course an obvious thing to do, but it is a very coarse measure. A more interesting metric is how easy it is to convert a potential customer into a paying customer. Over time we not only expect sales to increase, but also expect to get better at selling, such that our conversion rates increase too.
In an ideal world I would like to perform cohort analysis. This means tracking individual user behaviour and using it to determine key actionable metrics. While more commonly applied in medical research to study the long-term effects of drugs, a common example in the context of lean startups is tracking user sign-up and subsequent engagement over time. If it can be shown that users generally cease to engage with a service two months after sign-up, that provides a pointer to what to work on next, as well as a clean means of determining whether progress is being made.
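As a sketch of what that per-user analysis might look like if the data were available — note that the user records and month figures below are entirely made up for illustration:

```python
from collections import defaultdict

# Hypothetical per-user records: (user_id, signup_month, months_active_after_signup).
# These figures are illustrative only, not real shop data.
users = [
    ("u1", "2012-01", 3),
    ("u2", "2012-01", 1),
    ("u3", "2012-02", 2),
    ("u4", "2012-02", 0),
]

def retention_by_cohort(users, month_offset):
    """Fraction of each signup cohort still engaged `month_offset` months later."""
    totals = defaultdict(int)
    retained = defaultdict(int)
    for _, cohort, active_months in users:
        totals[cohort] += 1
        if active_months >= month_offset:
            retained[cohort] += 1
    return {cohort: retained[cohort] / totals[cohort] for cohort in totals}

print(retention_by_cohort(users, 2))  # → {'2012-01': 0.5, '2012-02': 0.5}
```

If month-two retention stayed flat across cohorts while we changed the product, that would be a strong hint the changes weren’t working.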
The in-house analytics provided by Etsy do not provide the means to track the habits of specific users, but they do allow for aggregation over periods of time. This means that some level of analysis is still possible, though it cannot be described as true cohort analysis.
I’ve modelled my funnel like so:-
Of those that viewed the shop:
- What percentage favourited the shop or a product. There is no reason to assume that someone buying the product will also favourite it, though at this point it is reasonable to assume some level of correlation.
- What percentage bought a product for the first time
- What percentage are returning paying customers buying a subsequent item.
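Computing the funnel from the aggregate counts is simple division. The counts below are illustrative stand-ins, not our real figures:

```python
# Illustrative aggregate counts for one period (made up, not real shop numbers).
views = 200
favourites = 14
first_purchases = 3
repeat_purchases = 1

def pct(part, whole):
    """Percentage of `whole` represented by `part`; 0 if there were no views."""
    return 100.0 * part / whole if whole else 0.0

funnel = {
    "favourited": pct(favourites, views),
    "first purchase": pct(first_purchases, views),
    "repeat purchase": pct(repeat_purchases, views),
}
for stage, rate in funnel.items():
    print(f"{stage}: {rate:.1f}% of viewers")
```

Tracked period over period, it is these percentages — rather than the raw counts — that tell us whether we are getting better at converting viewers.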
As you can see from the graph, there is not a lot of data. Throughout the process our absolute views and favourites have increased, and it is interesting to see that our favourited percentage has also improved. We put this down to improving the pictures and copy, though without more data it’s hard to make any firm statements.
What I’ve not done is break this down on a per-product basis; right now we do not have enough products or traffic to justify it, but we’re certainly noticing that some products are more popular than others.
In a few months’ time I’ll revisit this post and let you know how things are going. With a bit of luck there’ll be some yellow and green on there.
I’ve been an advocate of applying lean thinking to software for some time, and have learnt a lot from Eric Ries’s blog. I’ve just finished Ries’s book ‘The Lean Startup’ and am naturally looking for opportunities to apply its ideas in my own workplace. However, doing so will take time, and more immediately I wondered what would happen if I started on something smaller.
Mrs Fragile recently bought a hand made lamp shade online and was disappointed with the results. As a keen crafter, she wondered if she could do better, and perhaps even sell some of her own creations. While initially suspicious of my gallant offers to help her run things along lean startup lines, so far she’s tolerating my efforts.
I thought it would be interesting to document progress through fragile and perhaps receive some feedback/advice along the way. The nice thing is that since this is not a serious venture, it should be possible to be more open than would otherwise be possible. The project is named Sew Make Do.
We started with the following assumptions to test.
- People would like to buy Mrs Fragile’s lamp shades
- The people who would like to buy the lamp shades are female and in their late 20s to early 40s.
- 30cm and 20cm drums will be most popular.
- People will pay ~£28 for a 30cm shade
- People will pay ~£22 for a 20cm shade
- People will suggest custom fabrics to drive product development.
Of these assumptions, by far the riskiest is No. 1: we have no idea if anyone will actually want to buy them. Therefore it makes sense to prioritise testing this assumption. To this end Mrs Fragile set up a shop on Etsy and presented a choice of 3 lamp shades offering a range of styles and sizes. This is our MVP for assumption 1. There is no reason to assume that long term Etsy will be the main distribution channel, but it does provide a very quick way to put the product in front of potential customers.
Once assumption 1 has been tested sufficiently to give us hope to persevere, it will be easier to address the remaining assumptions, since all are dependent on sales.
Thoughts on metrics
The lamp shades have been up for a few days now; so far there have been no sales, but a good number of people have ‘admired’ them. It will be interesting to see if there is a link between the number of views, the number of admires and the number of sales. Longer term it would be interesting to perform cohort analysis on these indicators.
For now though we’re just hoping for the first sale – or possibly our cue to try something else…
Trust trust trust trust trust trust trust trust trust trust
A central theme in agile software development is that of trust. The agile (small a) movement speaks of openness, collaboration and collective responsibility – none of which are possible without trust. As a manager my team cannot be effective if they do not trust each other nor can I bring about anything but the most superficial change if they don’t trust me.
So I like trust and consider it to be a ‘good thing’ but the point of this post is not to talk about how great it would be if there was more trust in the world. In fact I want to talk about situations where increasing trust can actually be destructive.
The total level of trust is undoubtedly important, but equally important is the distribution of that trust. The greater the differential between the relationship containing the most trust and that containing the least, the less chance the overall group has of acting as an effective team.
A good high-level example might be an engineering org and a sales org. It doesn’t matter how much trust exists within each org – if org-to-org trust is low, the company will not perform as well. In fact the lack of inter-org trust will be felt all the more keenly in contrast to the strong internal trust that exists.
Applying this idea to a single engineering team: if a team has high trust for one another and a new member joins, it will take time for that new member to earn the group’s trust and be accepted as part of the team. This is healthy and only natural. However, if the team is split down the middle into two groups with high internal trust who do not trust one another, then strengthening internal group trust will only entrench the distrust of the other group. In this case increasing trust can actually be harmful.
What I’m saying is that a group’s effectiveness as a team can be characterised by the weakest trust links in the group. If the differential between relationships is high, then increasing trust in already strong relationships can actually hinder rather than help the team.
From a practical perspective, the manager’s job is always to create an environment where trust can grow, but it is important to focus on the low trust relationships since they are the ones that characterise the effectiveness of the team.
Phone screens are a great way to address this problem: they are typically shorter and often run solo. They allow a company to take more risks and consider candidates from further afield.
My company is still pretty new to phone screening; we’ve been trialling it in cases where it is difficult for the candidate to attend in person – perhaps they are based overseas. As a result I’ve been doing a lot of reading on how best to construct a decent phone screen. By far the best writing I’ve found is Steve Yegge’s take. I’m not sure how practical it is to fit everything Yegge mentions into a 45-minute call, but I consider it an excellent resource.
A common fear I have seen in other discussions is that candidates will use Google to somehow game the system. If this is a genuine concern then one of two things has gone wrong. Either:-
- The questions are purely fact based and will tell the interviewer nothing about how the candidate thinks.
- Or, the questions are fine but the interviewer is focusing on the wrong part of the answer.
A question like ‘In Java, what is the difference between finally, final and finalize?’ will tell you very little about the candidate. Plenty of terrible programmers could answer it without problem and, what’s worse, a talented but inexperienced developer might stumble. In short, these types of quick-fire question add little value to the overall process.
Something like ‘How does a hash map work? How would you write a naive implementation?’ is more interesting: it’s open-ended but forces the candidate to talk about a specific area of knowledge – even if they don’t know, you’ll learn how good they are at thinking things through from first principles. The only way it can be gamed through googling is if the interviewer is simply waiting to hear specific terms and is not asking free-form follow-ups.
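For illustration, here is roughly the kind of naive implementation I’d hope a strong candidate could sketch and talk through – a fixed array of buckets with collisions handled by chaining, and no resizing:

```python
class NaiveHashMap:
    """Toy hash map: a fixed array of buckets, collisions handled by chaining."""

    def __init__(self, n_buckets=16):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Hash the key down to a bucket index; different keys may share
        # a bucket – that's a collision, handled by the list (chain).
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite an existing key
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

m = NaiveHashMap()
m.put("a", 1)
m.put("a", 2)
print(m.get("a"))  # → 2
```

The follow-ups then write themselves: what happens as the buckets fill up, when would you resize, what makes a good hash function – none of which a quick web search will answer convincingly in real time.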
I’ve just looked up hash maps on Wikipedia and could probably quickly extract ‘associative array’, ‘key-value pair’ and ‘collision’, but really, if that’s all the interviewer wants to hear then the question is of limited value.
So what I’m saying is that if you’re concerned about googling, then it’s probably the questions or the desired answers that are the problem. Furthermore, if one in a hundred people does manage to game the system, you’ll pick them up in the face-to-face interview in an instant.
It all comes down to two things:
- I get to work with people who really love what they do.
- I get to work with people who are insanely open to change.