Why I still don’t use bug tracking software

Yesterday I wrote a post entitled ‘Why I don’t use bug tracking software’. I’ve had some good feedback, and I thought it worth writing a follow-up post so that I can respond to common and/or interesting responses without undue repetition.

The first point to make is that the post was not entitled “I don’t use bug tracking software and neither should you”. Its purpose was to ask: given the prevalence of bug tracking, and the fact that my team has found value in not using it, how many other people have had the same experience, or is my approach unusual?

Turns out it’s pretty unusual.

So the question really is: what is it about my current context that is so unusual?

First of all, some background: my immediate team consists of myself, a tech lead and four developers. We work in a technical team of ~sixty-five, of which ~fifty are programmers. There’s a lot of freedom in approach, and other teams do use bug tracking to varying degrees.

The company is privately funded and has never taken venture capital, which means we have grown organically in a calm and conservative manner; in this context, wilfully reckless project management practices simply won’t wash. On the technical side, my team is responsible for a number of web services: some are very new, others have been in a production environment for almost ten years, and all are expected to run continuously.

With that in mind, onto the discussion. I’ve drawn from comments posted on fragile as well as Hacker News. For brevity I’ve taken the liberty of editing some comments down; if you feel misrepresented, just let me know.

First up

pht says:
7 April, 2010 at 6:33 pm
A couple of questions to understand your seemingly “dangerous” move:
How close are you to the people specifying the product?
It seems like dropping the bug tracker could only be manageable if you always have the solution to a bug whenever you face one. From what I’ve seen, bugs are more often than not the result of a misunderstanding. A bug tracker can help you keep track of the *communication* required to fix the bug. And in conjunction with your VCS, find the line of code that implemented the proper fix.

This is an excellent point, and one that I ought to have brought up in the original post. The system is entirely dependent on the assumption that the team has good visibility over the product and, what’s more, good access to those specifying the product, such that ambiguities can be ironed out early. I would see this as something that would occur either as part of broader project planning or in the form of feedback post demo. If everyone is face to face then a bug tracker is only one of a number of ways to do this.

What’s the size of your team?
Again, a bug tracker is really a communication facilitator; if you’re only a couple of coders, and you’re the ones deciding what goes into your product or not, then this is probably very sustainable. I wouldn’t advocate it for any other kind of setup…

We’re a team of six working in a broader tech team of sixty-five. As you say, communication is the limiting factor here. We have mitigated the risk by having a set of well-specified APIs and treating internal consumers of the service in the same way as external consumers. Abstracting complexity is hardly a new idea, but it would have been very easy to group a bunch of loosely related services together, with the long-term consequence of increasing the likelihood of subtle and hard-to-reproduce bugs.

Is your project outside-world facing? You mentioned you have QA, but do you also get bug reports/reviews from actual users of the production system, and if so, how?

The product is outside-world facing; issues are raised through our support and account management teams. As you would expect, they have a query tracker, with which we interact in cases where it is not possible for them to answer the query themselves. Only a very small number of these queries can be accurately described as bugs. More often than not they refer to general technical questions about the product, or specific requests about their configuration. Given the nature of our business it ought to be rare for a client to detect a bug; in these cases we would always treat it as a priority.

Benjamin says:
7 April, 2010 at 5:00 pm
I wonder how you document your solution to a problem and how others in your company can find out about it as soon as you are gone (maybe forever).

When the bug is fixed, the integration tests are updated to prevent recurrence, such that should the system regress it will be clear how the behaviour was reintroduced. I consider documentation of how the system works to be a separate problem to bug tracking.
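To illustrate what ‘updating the tests’ looks like, here is a unit-level sketch with entirely invented names (I’m assuming Python purely for the sake of example; in practice the pinning happens in our integration suite). The test itself becomes the record that the fault ever existed:

    # Sketch only: pinning a fixed bug with a regression test, so the
    # test suite rather than a bug tracker records the fault.
    # parse_price and the bug it once had are hypothetical.

    def parse_price(raw: str) -> float:
        """Parse a price string such as 'GBP 12.50' into a float."""
        parts = raw.split()
        # Fix: inputs without a currency prefix used to raise IndexError.
        if len(parts) == 1:
            return float(parts[0])
        return float(parts[1])

    def test_price_without_currency_prefix_regression():
        # Added when the bug was fixed; if this ever fails again, the
        # behaviour has been reintroduced and we reassess it.
        assert parse_price("12.50") == 12.50
        assert parse_price("GBP 12.50") == 12.50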

Isaac Gouy says:

7 April, 2010 at 5:10 pm
> “If the same bug returns at a later date, well in that case it’s not as small a deal as we thought and we’ll reassess it.”
Given that you aren’t tracking, how would you know that the same bug has returned at a later date?

That’s a good point. Reassessing bugs does require a certain level of continuity; as mentioned above, this approach does require small teams working on largely independent platforms. In practice, however, we have not found this to be a problem, given that the number of bugs we have dropped has been so low. The key point is that even if you had a detailed description of the problem and a justification for not fixing it, that information is now old, perhaps years old. The bug could provide a starting point for new investigation, but ultimately the system has changed and old assumptions must be reevaluated.

> “However, neither case is particularly desirable and the bug tracker merely helps keep things going in the short term…”
Doesn’t the bug tracker also make it blindingly obvious to everyone in the organization that the team is swamped with bugs which are not being addressed?

Only in terms of the quantity of bugs, which I consider to be a poor metric of load. We do track time spent on unscheduled non-project work (including bug fixing), which serves the same purpose.

Chris says:

7 April, 2010 at 7:06 pm
This is a really bad idea. You can fix bugs as they arrive AND keep track of them at the same time. A good bug tracking system isn’t a big hassle to use.
In short, why you would want to throw out a massive amount of potentially useful data is just totally beyond me. There is a lot of good information to be gleaned from bug tracking besides just “which bugs do we need to fix”.
Specifically and in addition to the really good points made in the other comments, here are a few more reasons this is a bad idea:
1. You can’t compare the quality of releases against each other. How many bugs are you shipping with? How many were found?

When you are releasing multiple times a week, I’m not sure how useful this really is. Sure, you want to minimise the amount of time you feel it necessary to spend fixing bugs, but I’d rather spend time understanding how the bug went out in the first place and adapting our process accordingly, rather than tying it back to a specific release.

2. Bug tracking is a useful metric to developers and managers. How many bugs am I creating? Of what severity? Where can I improve?

To me, the number of bugs is less valuable than tracking the total amount of effort spent on bug fixing. Severity is interesting, but less so when almost all bugs fit into ‘fix it in the near future’. Continual process improvement is absolutely essential, and aggregating the effort expended on unscheduled work is something I already do. Bug tracking software could potentially help with the aggregation of this data, but it is certainly not a prerequisite.

3. As the person writing the software you WILL have inconsistencies with your customers in what you consider a must-fix bug. If you ignore a bug and then a support call comes in, there is no way to re-evaluate and learn that maybe you shouldn’t have shipped with that known issue.

Inconsistency between the client’s and the dev team’s understanding is a good point, and covered, in part, above. I would struggle to imagine a situation where we would choose to ignore a bug raised by a client. As part of the resolution we would spend time assessing what we could have done differently to prevent the bug’s introduction in the first place. Understanding where the communication breakdown occurred is key: improve on that understanding and you reduce the likelihood of a similar style of bug being introduced.

nostrademons wrote:

I’m a big fan of this approach, but I find that bugtracking is really useful when you want to look back on the project and data-mine to identify problems with your process that you want to fix.
For example, if bugs get identified quickly and fixes mailed quickly, but the fixes sit in code review for a long time, maybe it’s time to lean on your code reviewers a bit more. If 50% of your bugs are CSS regressions, maybe it’s time to invest in a CSS-testing framework. If most of your bugs are crashes caused by memory corruption, you need to straighten out object ownership in your C++.
These kinds of overall patterns are very hard to discern when you’re considering one bug at a time, yet fixing the root cause of them can lead to big productivity gains.

I think that this is a great point, and I can definitely see value in breaking down the amount of effort spent on unscheduled work by category. I don’t think I necessarily need to maintain a bug tracking database to do this if I aggregate on discovery, but as a means of process improvement it could be really powerful in responding to long term trends.
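As a rough sketch of what I mean by aggregating on discovery (the log format, categories and figures below are all invented for illustration; we don’t literally keep this in Python):

    # Sketch: tallying unscheduled work by category to expose long-term
    # trends without a bug database. All entries here are made up.
    from collections import Counter

    # Each entry recorded at discovery: (category, hours spent).
    unscheduled_work = [
        ("css-regression", 2.0),
        ("config-request", 0.5),
        ("css-regression", 3.5),
        ("memory-error", 1.0),
    ]

    hours_by_category = Counter()
    for category, hours in unscheduled_work:
        hours_by_category[category] += hours

    total = sum(hours_by_category.values())
    for category, hours in hours_by_category.most_common():
        print(f"{category}: {hours:.1f}h ({hours / total:.0%})")

If one category dominates, as in nostrademons’ 50%-CSS-regressions example, that is the signal to invest in tooling for that class of bug.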

Finally

Ravindra says:
7 April, 2010 at 5:33 pm
Replacing bug tracking with a kanban board!!! That’s one way to physically limit the number of bugs you have.
A bug tracker hosts bugs, which with time become ‘inventory’, which is waste.

I agree, there’s definitely a Lean influence in my approach.

So in conclusion, the approach I described works best when the dev team:-

  • has good access to the client
  • has good visibility over incoming bugs
  • has the freedom to prioritise bugs over scheduled project work
  • runs a system that lends itself to system testing

As a result of feedback I think that, in addition to tracking time spent on unscheduled work, attempting to categorise and aggregate bugs by type could provide interesting process improvement insights.

Why I Don’t Use Bug Tracking Software

Either fix it quickly or not at all.

For many, bug tracking software is central to good software practice. It features as number four on the Joel Test and is used extensively in a Lean or Agile context, as well as in more traditional software development approaches.

My company uses bug tracking software, my team used to use bug tracking software, and now we don’t. So what’s going on?

In general, I think we’re doing pretty well: we use a Kanban board to manage our workflow, safely release to production five or six times a week, use tests to drive our designs, practice pair programming, and have an effective Continuous Integration and Black Box testing setup. We definitely still have areas we can improve on, but by most people’s standards we’re on the right track.

So why aren’t we using a bug tracker? Well, on identifying a bug, any team has four choices:-

  1. fix it immediately
  2. fix it in the near future
  3. add it to a list and plan to fix it at some point
  4. ignore it

Some bugs simply have to be fixed immediately; this is a truism wherever you work. If Google’s search functionality stopped searching, or Microsoft’s Excel stopped multiplying, then everything would need to stop in order to get a fix out. Hopefully such occurrences are rare and can be considered a special case.

So we’re left with fixing it in the near future, fixing it at some unspecified point in the future, or ignoring it entirely. What I noticed about option three is that low-priority bugs are simply never looked at; something more important always gets in the way. So we made a decision: when we find a bug, we either fix it promptly, probably within the next week, or simply decide to ignore it.

This took some time to get used to. Nobody is comfortable with the idea of dropping bugs, but as soon as we realised that ignoring a bug and adding it to the bug tracker as a low-priority item were pretty much the same thing, we started to feel more comfortable. A bug feels most important on discovery, and if we decide not to fix a bug when it’s fresh in our minds and we’re ready to go then, realistically, it’s never going to get fixed. If the same bug returns at a later date, well in that case it’s not as small a deal as we thought and we’ll reassess it.

If bugs are to be prioritised over scheduled project work, it’s important to track the amount of effort spent on bug fixing. For my team this isn’t really a problem since, in addition to having responsibility for developing software, we are also responsible for much of our own sysadmin, operations and support. We therefore expect a certain amount of time to be spent on unplanned work, and we track time spent on bug fixing as part of this. It turns out that the amount of time spent on unplanned work is fairly predictable and can be accounted for in long term project planning. The only exception to this rule is where the bug is sufficiently large that fixing it will require significant effort, maybe a week or more’s worth of work. In that case we’ll treat it as a project in its own right and record progress as if it were any other standard planned block of work.
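To make ‘accounted for in long term project planning’ concrete, here is a toy calculation; every figure in it is made up for illustration:

    # Toy illustration: discounting planned capacity by the historical
    # unplanned-work fraction. All numbers are invented.
    team_size = 6
    weekly_hours = team_size * 40          # nominal team capacity

    # Suppose time tracking shows ~20% of hours go on unplanned work
    # (bug fixes, ops, support).
    unplanned_fraction = 0.20

    plannable = weekly_hours * (1 - unplanned_fraction)
    print(f"Schedule project work against {plannable:.0f}h per week, "
          f"not {weekly_hours}h")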

How does this work in practice? The key point is that it encourages us to fix bugs as we find them, and places greater emphasis on not introducing them in the first place. Knowing that a bug will be dropped if we don’t fix it in the short term means that we take bugs more seriously. Over the past six months we’ve ignored only two issues, both of which were highly unlikely to recur and both of which resided in third-party libraries that we intend to upgrade.

The only downside to not recording bug fixes is that there is then no record of previous faults and no bug-driven history of why things are as they are. However, so long as the Black Box tests cover all bugs as they are found, the chance of regression is nil. Therefore I don’t really see this as a problem at all.

As an aside, I can see the value in tracking bugs over longer periods in situations where QA and development are separate entities that do not interact as part of the same team, or alternatively where a team is so swamped with bugs that they cannot keep up. However, neither case is particularly desirable and the bug tracker merely helps keep things going in the short term rather than addressing the underlying problems.

So in conclusion, by opting to fix bugs quickly or not at all, we force ourselves to fix more bugs and place greater emphasis on not introducing them in the first place. We no longer have to manage an unmaintainable bug list, and we ensure that our black box test coverage develops over time.

Edit: This post has generated a good deal of debate; in response I have posted a follow-up here where I try to address some of the points raised.