Sew Make Do – A Lean Startup Experiment

I’ve been an advocate of applying lean thinking to software for some time, and have learnt a lot from Eric Ries’s blog. I’ve just finished Ries’s book ‘The Lean Startup’ and am naturally looking for opportunities to apply its ideas in my own workplace. However, doing so will take time, and more immediately I wondered what would happen if I started on something smaller.

Mrs Fragile recently bought a hand-made lamp shade online and was disappointed with the result. As a keen crafter she wondered if she could do better, and perhaps even sell some of her own creations. While initially suspicious of my gallant offers to help her run things along lean startup lines, so far she’s tolerating my efforts.

I thought it would be interesting to document progress through fragile and perhaps receive some feedback/advice along the way. The nice thing is that since this is not a serious venture it should be possible to be more open than would otherwise be possible. The project is named Sew Make Do.

Assumptions

We started with the following assumptions to test.

  1. People would like to buy Mrs Fragile’s lamp shades.
  2. The people that would like to buy the lamp shades are female and in their late 20s to early 40s.
  3. 30cm and 20cm drums will be the most popular sizes.
  4. People will pay ~£28 for a 30cm shade.
  5. People will pay ~£22 for a 20cm shade.
  6. People will suggest custom fabrics, which will drive product development.

Of these assumptions, by far the riskiest is No. 1: we have no idea if anyone will actually want to buy them, so it makes sense to prioritise testing this assumption. To this end Mrs Fragile set up a shop on Etsy offering a choice of 3 lamp shades in a range of styles and sizes. This is our MVP for assumption 1. There is no reason to assume that, long term, Etsy will be the main distribution channel, but it does provide a very quick way to put the product in front of potential customers.

Once assumption 1 has been tested sufficiently to give us hope to persevere, it will be easier to address the remaining assumptions, since all are dependent on sales.

Thoughts on metrics

The lamp shades have been up for a few days now; so far there have been no sales, but a good number of people have ‘admired’ them. It will be interesting to see whether there is a link between the number of views, the number of admires and the number of sales. Longer term it would be interesting to perform cohort analysis on these indicators.
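
To make that concrete, here is a minimal Python sketch, using entirely invented numbers, of the kind of funnel ratios we would want to watch between views, admires and sales:

    # Hypothetical funnel figures -- the real numbers would come from the Etsy shop stats.
    views, admires, sales = 250, 12, 0

    def ratio(numerator, denominator):
        """Return a conversion percentage, guarding against division by zero."""
        return 100.0 * numerator / denominator if denominator else 0.0

    print(f"views -> admires: {ratio(admires, views):.1f}%")
    print(f"admires -> sales: {ratio(sales, admires):.1f}%")
    print(f"views -> sales:   {ratio(sales, views):.1f}%")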

For now though we’re just hoping for the first sale – or possibly our cue to try something else…

How I Manage Technical Debt

In the previous post, Technical Debt is Different, I talked about the need to treat the management of technical debt as a separate class of problem from that of feature requests generated outside of the team.

As with any project above a certain size, team collaboration is key, and that means having a reliable method of prioritising technical debt that the whole team can buy into. This post will describe a method that I have been using over the past year that satisfies this need.

Identify

I was new to my current project and wanted to get an idea from the team of the sorts of things that needed attention. I mentioned this just before lunch one day, and by the time I got back from my sandwich I had an etherpad with over 100 items. By the end of the afternoon, I had also discovered that etherpad really doesn’t deal well with documents above a certain size.

It was clear that we needed a way to reference and store these ideas. I had two main requirements:

  • An easy way to visualise the work items
  • An easy, non-contentious way to assign priority

The first step was to go through the list and group items into similar themes; this helped identify duplicate or overlapping items. At this stage some items were rewritten to ensure that they were suitably specific and well-bounded.

Prioritise

Now that we had a grouped list of tasks it was time to attempt to prioritise them. As discussed in the previous post, prioritising refactoring tasks can be challenging and passions are likely to run high. I felt that, rather than simply stack ranking all the items, it was better to categorise them against a set of orthogonal metrics. This led to a much more reasoned (though no less committed) debate about the relative merits of different tasks.

Every item was classified according to:

  • Size
  • Timeliness
  • Value

Size

The simplest metric: a very high-level estimate of how big the item was likely to be. Estimating the size helped highlight any differences in perceived scope, and in some cases items were broken down further at this point. Size estimation works best when estimates for tasks are relative to one another; however, to seed the process we adopted the following rough convention.

  • Small – A week
  • Medium – 2 weeks
  • Large – 3 weeks for 2 people

Timeliness

Timeliness captures how the team feels about a task in terms of their willingness to throw themselves into it. Items were assigned one of four timeliness values.

  • ASAP – There is no reason not to do this task right now. Typical examples include obvious items that the team were all highly in favour of, or items that the team had been aware of for some time and felt that enough was enough.
  • Opportunity – An item that lends itself to being worked on while the team is already working in the area.
  • Medium term – An item that is thought of as a ‘wouldn’t it be nice some day’. These items are typically riskier than ASAP or Opportunity items, and the team needs to really commit to their execution before embarking on them.
  • Long term – Similar to Medium term, and generally populated by reviewing the medium-term section and selecting items that are imposing or risky enough to postpone behind other medium-term tasks.

Value

How much will the team benefit from the change? Is it an area of the code base that is touched often? Perhaps it will noticeably speed up the development of certain types of feature. It could be argued that Value is the only metric that matters; however, Value needs to be considered in the context of risk (addressed through Timeliness) and effort (addressed through Size).

All items for a given Timeliness are measured relative to one another and given a score of ‘High’, ‘Medium’ or ‘Low’. Low value items are rarely tackled, and even then only if they happen to be in the Opportunity category.
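
Purely as an illustration (in practice we used index cards and a pin board, not a tool), here is a minimal Python sketch of how an item and its three classifications might be recorded; the names Size, Timeliness, Value and DebtItem, and the example item, are invented for this sketch:

    from dataclasses import dataclass
    from enum import Enum

    class Size(Enum):
        SMALL = "Small"    # roughly a week
        MEDIUM = "Medium"  # roughly 2 weeks
        LARGE = "Large"    # roughly 3 weeks for 2 people

    class Timeliness(Enum):
        ASAP = "ASAP"
        OPPORTUNITY = "Opportunity"
        MEDIUM_TERM = "Medium term"
        LONG_TERM = "Long term"

    class Value(Enum):
        HIGH = "High"
        MEDIUM = "Medium"
        LOW = "Low"

    @dataclass
    class DebtItem:
        title: str
        size: Size
        timeliness: Timeliness
        value: Value

    # Example classification of a single (invented) item.
    item = DebtItem(
        title="Split the oversized configuration loader",
        size=Size.MEDIUM,
        timeliness=Timeliness.OPPORTUNITY,
        value=Value.HIGH,
    )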

Visualise

Once all items had been classified, it was time to visualise them. To do this we transferred the items to cards and stuck them to a pin board, with timeliness on the horizontal axis and value on the vertical axis (each card also carried the task’s size estimate). Now it was possible to view all items at once, and from this starting point it was much easier to make decisions over which items to take next.
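
Again only as a sketch with invented example items, the board layout amounts to grouping cards by their (Timeliness, Value) pair, with the size still noted on each card:

    from collections import defaultdict

    # Each entry is (title, timeliness, value, size); all items are invented examples.
    items = [
        ("Remove legacy XML config path", "ASAP", "High", "Small"),
        ("Split persistence from domain model", "Medium term", "High", "Large"),
        ("Tidy logging around the import job", "Opportunity", "Medium", "Small"),
    ]

    # Group into board cells keyed by (timeliness, value); columns are Timeliness,
    # rows are Value, and each card keeps its size estimate.
    board = defaultdict(list)
    for title, timeliness, value, size in items:
        board[(timeliness, value)].append(f"{title} [{size}]")

    for (timeliness, value), cards in sorted(board.items()):
        print(f"{timeliness:12} | {value:6} | {'; '.join(cards)}")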

Since the whole team had contributed to the process, it was clear to individuals why, even though their own proposals were important, there was greater value in working on other items first. Crucially, we also had a process to ensure that these mid-priority items were not going to be forgotten, and trust that they would be attended to in due course.

Technical Debt Board

When a task is completed, we place a red tick against it to demonstrate progress; this helps build trust within the team that we really are working off our technical debt. Sometimes a specific piece of work will, as a side effect, lead to the team indirectly making progress against a technical debt item. When this happens we add a half tick, indicating that this card should be prioritised over other similarly important items so that we can finish it off completely.

Tiny Tasks

This system is effective in reducing the stress that comes with managing technical debt and provides a means for the whole team to have a say in where its effort is spent. However, one area where it is weak is in managing very small, relatively low value tasks that can be completed in an hour or so. Examples might include removing unused code, reducing visibility on public fields, or renaming confusingly named classes – in essence, things that you might expect to happen as part of general refactoring were you already working in the area.

To manage these small, easy wins, the team maintains an etherpad of ‘Tiny Tasks’ and reviews new additions to the list on a weekly basis. The rule is that if anyone considers a task to be anything other than trivial, it is thrown out and considered as part of the process above. These tasks are then picked up by the developer acting as the maintainer during the week.

So what does it all mean?

Generally it is easier if an individual has the final say over the prioritisation of tasks; in the case of technical debt this is harder, since the whole team should be involved. Therefore, a trusted method of highlighting and prioritising technical debt tasks is needed. By breaking down the prioritisation process into separate ‘Size’, ‘Timeliness’ and ‘Value’ metrics, it was possible to have a more reasoned discussion about the relative merits of items. Visualising the items together at the end of the categorisation process enables the team to make better decisions over what to work on next and builds trust that items will not simply be forgotten. Very small items can still be prioritised if the team agrees that they really are trivial.