One of the things I’m particularly proud of about my team is how we plan, track progress, and learn with each sprint. Our system is based on Scrum, but we are not orthodox about it.
About the team
I work on the Indoor Location team at Estimote. Our main product is the Indoor Location SDK, a library that allows you to position yourself in a given venue with beacons. Our mission: “We provide precise indoor positioning with which developers create magical experiences without any effort.”
Most of the stuff we do can be classified as one of the following:
- algorithm improvements — mostly research
- code maintenance — introducing features that are not related to the core algorithms, debugging & refactoring
- clients — gathering feedback, support, deployments, etc.
In any given sprint we usually focus on tasks from either the first or the second category, with a pinch of the third.
To organize ourselves, we use a system loosely based on Scrum. Even though Scrum is a great methodology, it doesn’t always work well in a research environment. Here I would like to describe how we work and what methods we introduced to solve some of our problems.
We organize our work around 2-week periods called sprints, which look like this:
- Our team leader meets with management and together they agree on our goals for the next two weeks; it’s usually 3-5 goals.
- We have a planning session — a 2-3-hour-long meeting, where we write down everything we have to do in order to achieve these goals. We estimate all the tasks.
- We put the tasks on our board.
- We try to finish all the tasks during the sprint.
Each task starts in the “TODO” list, then goes through “Doing” and “Ready to test”, to finally reach “Done.”
Estimations -> Reality -> Improvement
During each planning session, we look at each task and estimate how long it will take to finish. If an estimate exceeds 8 hours, there are usually two options:
- it can be decomposed into smaller tasks immediately.
- we don’t know enough to decompose it right now.
In the second case, we create a task with a “Spike” tag, which basically means “Gather enough knowledge so you are able to decompose it and plan properly.”
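The planning rule above can be written down as a minimal sketch. The 8-hour threshold and the “Spike” tag come from the process described here; the function and task representation are my own, purely for illustration:

```python
# A sketch of the planning-session triage rule. MAX_TASK_HOURS and the
# "spike" concept come from the post; the function itself is hypothetical.

MAX_TASK_HOURS = 8

def triage_task(name, estimated_hours, can_decompose_now):
    """Classify a task during the planning session."""
    if estimated_hours <= MAX_TASK_HOURS:
        return "keep"        # small enough to go on the board as-is
    if can_decompose_now:
        return "decompose"   # split into smaller tasks immediately
    return "spike"           # gather knowledge first, then re-plan

print(triage_task("Fix beacon scanning bug", 5, can_decompose_now=True))       # keep
print(triage_task("Refactor positioning module", 20, can_decompose_now=True))  # decompose
print(triage_task("New filtering algorithm", 40, can_decompose_now=False))     # spike
```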
After the planning session, we get to work. Whenever someone finishes a task, they fill in how many hours it took. Some of us rely on memory; others use a stopwatch and measure the exact time. At the end of the sprint, we know how long every task took and can summarize it.
Usually, our summary looks like this:
Topic 1: Points finished: 32, Hours taken: 37.5, Pending: 24
Person1 (hours at work: 80): Points finished: 38, Hours taken: 29, Capacity factor: 0.475
Points finished: 133, Hours taken: 122, Pending: 24
- Person1 was at conference XYZ
Points vs Hours
In Scrum, each task is assigned a number of points: the bigger the task, the more points. In our case, the number of points is the estimated number of hours. So “points finished” means how much work we have done (according to the initial estimates), and “hours taken” is how much time it actually took us.
Pending is the number of points that were planned but not finished during the sprint.
The “capacity factor” is “Points finished” divided by “Hours at work”. It gives you an idea of how much work you are able to do during your working hours. Of course it’s not perfect: it doesn’t take into account everything you’ve done outside the team, but if you only want to capture your contribution toward the team’s goals, it’s pretty good.
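As a minimal sketch, the summary arithmetic works out like this. The numbers are taken from the example summary above; the helper name is my own:

```python
# Sprint-summary arithmetic. Numbers come from the example summary in
# the post; the function name is hypothetical.

def capacity_factor(points_finished, hours_at_work):
    """'Points finished' divided by 'Hours at work'."""
    return points_finished / hours_at_work

# Person1 from the example: 38 points finished during 80 hours at work.
print(capacity_factor(38, 80))  # 0.475

# Team totals from the example: estimated points vs. actual hours.
# A ratio above 1 means the work went faster than estimated.
points_finished, hours_taken = 133, 122
print(round(points_finished / hours_taken, 2))  # 1.09
```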
You may wonder what value we get from this. Well, several benefits come to mind:
- We know what the team’s capacity is, so we can plan better: if, during planning, the number of points exceeds some threshold, we know it’s probably impossible for us to achieve the related goal.
- We know our capacity for a given topic and person.
- Our estimations get better.
- We are more aware of what we are spending time on.
- As usual with gathering data: you gather it now at low cost, and in the future you might find some use for the historical record.
I think some of the above may sound vague, so here are some real-life examples:
- Once, we had to modify a part of our stack that had just been released. The tasks seemed pretty easy, but they required close cooperation with people from another team. Our estimates turned out to be two times too low: we discovered new bugs and had to coordinate with the other team. Now, when we have similar tasks (i.e. ones requiring the help of people we don’t usually cooperate with, or touching new parts of the stack), we know we should multiply the initial estimates by 2.
- For very well-defined tasks with no open questions ahead of us, our capacity is much higher than for research tasks. It may sound trivial, but having quantitative knowledge about this helps us plan better.
- In March our team restructured: it shrank in size and in responsibilities, so our team leader had more time for technical tasks. However, we didn’t know how much more time that would be. Since we track it, he can now estimate how much of his time goes into technical work and how much into everything else.
- For me, looking at my capacity is helpful in assessing how productive I am. It’s not a perfect metric, but it’s better than no metric at all.
- Recently, two new developers joined our team. We always knew that someone new needs time to get accustomed to our code and processes. Thanks to the data we’ve gathered, we know that finishing certain tasks took them about 2-3 times longer than we initially estimated. That’s valuable knowledge for when someone new joins our team in the future.
We’ve introduced some additional features to the system described above. One of them is the 2-hour backlog: a list of short tasks that are nice to have but low priority. It was created for situations when you have some spare time but it makes no sense to start a big new topic, e.g. on a Friday afternoon, the day before a vacation, or while blocked waiting for someone else to finish their work.
The second one is tracking “distractions.” We started doing this recently and I’m not sure yet if it will work out. The idea is to write down all the tasks you get from other teams: adding some functionality to our product, supporting a customer, or helping them debug something. Right now we don’t use the data much, but at one point there were so many such “distractions” that we decided to track them.
When I joined Estimote, we used a physical board as our scrum board. We really liked it and preferred it to a digital one; however, it had one disadvantage: it was hard to track progress. There was not enough space in the “Done” column, and we usually had to put cards with finished tasks into piles, so it was hard to tell how much we had done and how much was still to be done. So we came up with a simple idea: we added small bars for each topic we were working on. Each bar had space for 14 squares. At the beginning of the sprint, we used a black marker to mark how many tasks (squares) there were in each topic. Then, whenever we moved a task from “TODO” to “Doing”, we marked one square with blue stripes. When it was finished, we marked it with green stripes. This way we could easily visualize how many tasks we had done and how many were still ahead of us.
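The board bars can be mimicked with a toy text rendering. The 14-square width comes from our board; the character encoding is my own representation, not what we actually drew:

```python
# A toy text version of the per-topic progress bars. The 14-square
# width is from the post; the symbols are made up for illustration:
# '#' = done (green stripes), '~' = doing (blue stripes), '.' = TODO.

def topic_bar(done, doing, total, width=14):
    """Render one topic's bar as a fixed-width string."""
    assert done + doing <= total <= width
    todo = total - done - doing
    return ("#" * done + "~" * doing + "." * todo).ljust(width)

print("Topic 1 |" + topic_bar(done=5, doing=2, total=10) + "|")
# Topic 1 |#####~~...    |
```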
You can argue that this approach was very naive, since a 2-hour task counted the same as an 8-hour task, and you’d be right. However, the system we developed was just right for our purposes. It would have been easy to add many improvements right at the beginning, but we usually take a different approach: we start with the simplest process that improves something, and if we need more, we add it.
When we are doing research, it’s often impossible to plan everything ahead of time. That’s why we do the following:
- We define what we want to achieve in a given sprint.
- We specify what we need to do in order to achieve it (very high level).
- We identify the most important questions we can’t answer yet.
- We specify what we need to do in order to answer these questions.
- We plan for the next 2-3 days — that’s usually the timescale that allows us to gather enough knowledge to get some answers, specify what the problem is and possibly propose a solution.
- After these 2-3 days we meet, exchange knowledge, brainstorm solutions together and plan for the next 2-3 days.
We call these mini-sprints. The biggest advantages of this approach: we keep the big picture of where we are heading, what we know, and what we don’t; frequent knowledge sharing and brainstorming really speed up the research process; and we fail quickly, so we can reassess our strategy quickly.
In this post I’ve described our general workflow, how we’ve improved it with some simple techniques, and how we try to be data-driven about how we work. I hope you’ve enjoyed it. If I had to choose the single most important thing in this post, it would be the “plan -> execute & track -> learn” loop: it allows you to improve your workflow with each iteration.
A few final notes:
- If you have any feedback (positive or negative), please e-mail me.
- If you are interested in getting e-mails when I publish something new, please subscribe to the newsletter (on the bottom of the page).
- If this was valuable for you, let me know.
- If you have any comments/thoughts, write to me.
My mail: firstname.lastname@example.org.
Have a nice day!