What does developer productivity mean, really? Is it churning out more code, or less? Is it having fewer bugs in production, or shipping code more often? Is it doing a lot of things, or just one thing? Let’s think about this for a moment.
I believe developer productivity is about getting more things done as a developer that contribute to a better product. So how do we know when we’re doing it right? We measure it, of course, you say. Now, here’s the thing: You can’t measure it. That’s right. There’s no known way to measure “developer productivity” in general [1].
Before you do a table flip, just let me tell you that there are other things that you can measure that may have an impact on developer productivity. One example is the time taken from when a developer commits a change to the source code until that change is out in production. Now, this doesn’t say anything about the quality of the code, or if the developer is actually shipping something that improves the product. It’s not a bad measurement. But you have to remember that it measures only that: time from commit to production. Not productivity.
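That commit-to-production measurement is easy to sketch. Here is a minimal example (the timestamps are invented for illustration; in practice you’d pull commit times from your version control system and deploy times from your deployment pipeline) of computing the average lead time over a set of changes:

```python
from datetime import datetime, timedelta

def lead_time(commit_time: datetime, deploy_time: datetime) -> timedelta:
    """Time from a commit landing to that change reaching production."""
    return deploy_time - commit_time

# Hypothetical data: (commit timestamp, production deploy timestamp) pairs.
changes = [
    (datetime(2024, 5, 1, 9, 30), datetime(2024, 5, 1, 14, 0)),
    (datetime(2024, 5, 2, 11, 0), datetime(2024, 5, 4, 10, 0)),
]

times = [lead_time(commit, deploy) for commit, deploy in changes]
average = sum(times, timedelta()) / len(times)
print(average)  # average time from commit to production
```

Note that this number says nothing about what shipped, only how long it took to ship. That is the whole point: it is a narrow, honest measurement, not a productivity score.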
The problem lies in how productivity is defined, combined with our human tendency to adapt to whatever is held in high regard within an organisation. This can lead to unforeseen consequences, for example:
- High test coverage is considered good, so the team makes sure that every field in a class is tested. In fact, for the fields to be testable they have to write get/set methods for all of them, resulting in an explosion of copy/pasted lines of code with little value.
- Fewer bugs in production is considered good, so the team becomes more and more reluctant to release code to production, knowing that any code has the potential to contain at least one bug. Time between deployments increases, and the product falls behind the competition.
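To make the first point concrete, here is a sketch (the `Order` class and its test are invented for illustration) of the kind of accessor test that drives coverage numbers up while verifying nothing about the product:

```python
# A class padded with trivial accessors, written mainly to be "testable".
class Order:
    def __init__(self):
        self._quantity = 0

    def get_quantity(self):
        return self._quantity

    def set_quantity(self, value):
        self._quantity = value

# The matching test exercises every line of Order, so coverage reports
# look great, yet it checks nothing beyond "assignment works".
def test_quantity_roundtrip():
    order = Order()
    order.set_quantity(3)
    assert order.get_quantity() == 3

test_quantity_roundtrip()
```

Multiply this pattern across every field of every class and you get the copy/paste explosion described above: the coverage metric improves while the product does not.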
In short: any metric can be gamed. You may be tempted to solve this by combining metrics into an indicator that balances out the drawbacks of each one. It’s tricky, but it might work for some. I think a better approach is to reduce the incentives for gaming them and break the problem into manageable pieces. Here are some ways to do that:
- Identify common pain points for your teams and provide metrics for those specific areas, for example time from commit to production. But don’t jump to conclusions about the results: a short time to production could simply mean that test coverage is low, or that the unit tests don’t test anything of value.
- Detach the metrics from any kind of compensation to, or appreciation of, developers. We’re interested in removing obstacles in the pipeline from idea to a better product, not in stalking developers. As I hinted at, people who are closely watched stop taking risks, and innovation plummets. Besides, it’s creepy, and developers will leave.
- Avoid tracking individual developer behaviour. It’s the overall team trends over time that matter. Furthermore, teams shouldn’t compare metrics with each other, since each product may be different and each team may be in a different phase. Each team should set its own goals and use the metrics that apply to it.
Now you’re in a position to provide teams with tools (for example automation) that may help them mitigate the identified pain points. Make the tools easy to use and some teams will pick them up. When they see that the tools make life easier, the good news will spread to other teams and you’ll have a positive spiral of productivity.
So, what about the holistic view? Well, if you were to investigate all the things that affect the work that we as developers do, I bet you’d find some real productivity killers that would require more than tools.
You could interview developers and analyse what exactly is preventing them from delivering stuff. Bear in mind that a developer probably won’t be able to churn out code for eight hours a day. It’s just not how software development is done. A large part of the work is communicating and figuring out how to do stuff.
For example, how are developers communicating? How much of that communication is asynchronous, reducing context switching? Do they spend a lot of time planning, producing plans and estimates that rapidly become outdated and then useless? Do they sit in long meetings where one, or possibly two, developers do most of the talking? Do they live far from the office and commute for hours? Maybe they’d rather work remotely a couple of days a week and spend that time working without distraction? Do they receive a lot of emails that interrupt their flow (even though email could be checked later)? Do they have to write a lot of reports that are usually identical, or that aren’t used for anything?
If we broaden the perspective even more: how clear is the vision of the product, and how free are developers to participate in and drive the product towards that vision? Everything that helps us do the right thing, and do that thing right, is in fact increasing developer productivity.
[1] http://martinfowler.com/bliki/CannotMeasureProductivity.html