Sew Make Do – A Lean Startup Experiment

I’ve been an advocate of applying lean thinking to software for some time, and learnt a lot from Eric Ries’s blog. I’ve just finished Ries’s book ‘The Lean Startup’ and am naturally looking for opportunities to apply its ideas in my own workplace. However, doing so will take time, so more immediately I wondered what would happen if I started with something smaller.

Mrs Fragile recently bought a handmade lamp shade online and was disappointed with the result. As a keen crafter, she wondered if she could do better, and perhaps even sell some of her own creations. While initially suspicious of my gallant offers to help her run things along lean startup lines, so far she’s tolerating my efforts.

I thought it would be interesting to document progress through fragile and perhaps receive some feedback/advice along the way. The nice thing is that, since this is not a serious venture, I can be more open than would otherwise be possible. The project is named Sew Make Do.

Assumptions

We started with the following assumptions to test.

  1. People would like to buy Mrs Fragile’s lamp shades.
  2. The people who would like to buy the lamp shades are female and in their late 20s to early 40s.
  3. 30cm and 20cm drums will be the most popular sizes.
  4. People will pay ~£28 for a 30cm shade.
  5. People will pay ~£22 for a 20cm shade.
  6. People will suggest custom fabrics to drive product development.

Of these assumptions, by far the riskiest is No. 1: we have no idea whether anyone will actually want to buy them, so it makes sense to prioritise testing it. To this end, Mrs Fragile set up a shop on Etsy and presented a choice of three lamp shades offering a range of styles and sizes. This is our MVP for assumption 1. There is no reason to assume that, long term, Etsy will be the main distribution channel, but it does provide a very quick way to put the product in front of potential customers.

Once assumption 1 has been tested sufficiently to give us hope to persevere, it will be easier to address the remaining assumptions, since all of them depend on sales.

Thoughts on metrics

The lamp shades have been up for a few days now; so far there have been no sales, but a good number of people have ‘admired’ them. It will be interesting to see whether there is a link between the number of views, the number of admires and the number of sales. Longer term it would be interesting to perform cohort analysis on these indicators.
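To make the views → admires → sales funnel concrete, here is a minimal sketch of the conversion rates we’d want to track. The function name and all the figures are mine, invented purely for illustration – they are not real shop data:

```python
# Sketch of the views -> admires -> sales funnel.
# All figures here are invented for illustration, not real shop data.

def conversion_rates(views, admires, sales):
    """Return (admires per view, sales per admire, sales per view)."""
    admire_rate = admires / views if views else 0.0
    sale_per_admire = sales / admires if admires else 0.0
    sale_per_view = sales / views if views else 0.0
    return admire_rate, sale_per_admire, sale_per_view

# Hypothetical first week of listings
admire_rate, sale_per_admire, sale_per_view = conversion_rates(
    views=200, admires=15, sales=0)
print(f"admires/view: {admire_rate:.1%}")      # 7.5%
print(f"sales/admire: {sale_per_admire:.1%}")  # 0.0%
print(f"sales/view:   {sale_per_view:.1%}")    # 0.0%
```

Tracked per listing and per weekly cohort, these three ratios would show whether ‘admires’ actually predict sales or are just window shopping.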

For now though we’re just hoping for the first sale – or possibly our cue to try something else…

Too Much Trust

Trust trust trust trust trust trust trust trust trust trust
Excerpt from the management book I wish someone would write


A central theme in agile software development is that of trust. The agile (small a) movement speaks of openness, collaboration and collective responsibility – none of which are possible without trust. As a manager, my team cannot be effective if they do not trust each other, nor can I bring about anything but the most superficial change if they don’t trust me.

I’m not the only one who feels this way; it turns out I’m in good company 1 2 3

So I like trust and consider it a ‘good thing’, but the point of this post is not to talk about how great it would be if there was more trust in the world. In fact I want to talk about situations where increasing trust can actually be destructive.

The total level of trust is undoubtedly important, but equally important is the distribution of that trust. The greater the differential between the relationship containing the most trust and the one containing the least, the less chance the overall group has of acting as an effective team.

A good high-level example might be an engineering org and a sales org. It doesn’t matter how much trust exists within each org – if org-to-org trust is low, the company will not perform as well. In fact the lack of inter-org trust will be felt all the more keenly in contrast to the strong internal trust that exists.

Applying this idea to a single engineering team: if a team has high trust for one another and a new member joins, it will take time for that new member to earn the group’s trust and be accepted as part of the team. This is healthy and only natural. However, if the team is split down the middle into two groups with high internal trust who do not trust one another, then strengthening internal group trust will only entrench the distrust of the other group. In this case increasing trust can actually be harmful.

What I’m saying is that a group’s ability to act as a team is characterised by the weakest trust links in the group. If the differential between relationships is high, then increasing trust in already strong relationships can actually hinder rather than help the team.

From a practical perspective, the manager’s job is always to create an environment where trust can grow, but it is important to focus on the low-trust relationships, since they are the ones that characterise the effectiveness of the team.

Worried about candidates googling during a phone screen? You’re doing it wrong.

Interviewing is time-consuming. Companies have a finite amount of time to dedicate to recruitment, and inevitably some capable candidates are turned down at the CV stage without ever having a chance to shine.

Phone screens are a great way to address this problem; they are typically shorter and often run solo. They allow a company to take more risks and consider candidates from further afield.

My company is still pretty new to phone screening; we’ve been trialling it in cases where it is difficult for the candidate to attend in person – perhaps they are based overseas. As a result I’ve been doing a lot of reading on how best to construct a decent phone screen. By far the best writing I’ve found is Steve Yegge’s take. I’m not sure how practical it is to fit everything Yegge mentions into a 45-minute call, but I consider it an excellent resource.

A common fear I have seen in other discussions is that candidates will use Google to somehow game the system. If this is a genuine concern then one of two things has gone wrong. Either:

  • The questions are purely fact based and will tell the interviewer nothing about how the candidate thinks.
  • Or, the questions are fine but the interviewer is focusing on the wrong part of the answer.

A question like ‘In Java, what is the difference between finally, final and finalize?’ will tell you very little about the candidate. Plenty of terrible programmers could answer it without problem and, what’s worse, a talented but inexperienced developer might stumble. In short, this type of quick-fire question adds little value to the overall process.

Something like ‘How does a Hash Map work? How would you write a naive implementation?’ is more interesting: it’s open-ended but forces the candidate to talk about a specific area of knowledge – even if they don’t know, you’ll learn how good they are at thinking things through from first principles. The only way it can be gamed through googling is if the interviewer is simply waiting to hear specific terms and is not asking free-form follow-ups.

I’ve just looked up Hash Maps on Wikipedia and could probably quickly extract ‘associative array’, ‘key-value pair’ and ‘collision’, but really, if that’s all the interviewer wants to hear then the question is of limited value.
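For reference, here is one naive implementation of the kind a candidate might sketch – separate chaining over a fixed array of buckets. This is just one valid answer among several (open addressing would be another); the class and method names are my own, not something a candidate would be expected to reproduce:

```python
# A deliberately naive hash map: a fixed array of buckets, with
# collisions handled by separate chaining (a list of key-value
# pairs per bucket). No resizing -- a good follow-up question.

class NaiveHashMap:
    def __init__(self, num_buckets=16):
        self._buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        # hash() maps the key to an int; modulo picks the bucket index
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key exists: overwrite in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key: append to the chain

    def get(self, key):
        for k, v in self._bucket(key):   # walk the chain for this bucket
            if k == key:
                return v
        raise KeyError(key)

m = NaiveHashMap()
m.put("shade", 28)
m.put("shade", 30)     # overwrites the previous value
print(m.get("shade"))  # 30
```

Each design decision here – the bucket count, the chaining strategy, what happens as the map fills up – is a natural free-form follow-up that no amount of googling will answer for the candidate.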

So what I’m saying is that if you’re concerned about googling, it’s probably the questions or the desired answers that are the problem. And if one in a hundred people do manage to game the system, you’ll pick them up in the face-to-face in an instant.