
How to Really Measure Software Teams


Lately there has been a lot of ink spilled over how to measure technology teams. Small startup teams are reaching the point where they want some kind of metrics, and big-company teams are using so many metrics that they desperately need to cut back to something that makes more sense.

Managers would also like to know what to do once they read metrics. Is more training required? Tougher management? Longer hours? Shorter hours? More people? Fewer people? We're nerds. We're really good at creating lots of charts, tables, and reports. What we're not so good at is using them for something useful.

I've been living in all of these worlds for a long time, and here are my rules for metrics:

Only measure something you can directly do something about. In the past, we've measured indirect things: errors, lines of code written, or time spent. What matters more is identifying the obstacles teams face, so that you can do something about getting them out of the way.

Measurements are inherently subjective. We confuse facticity with usefulness in the metrics game. That is, we think that because something has a hard measurement, like check-ins, or KLOCs written per week, or feature-points-per-dollar, it has some special meaning. In fact, the reverse is true: the more factual or objective a measurement seems, the less useful it is in general. That's because the more factual something is, the more related to output it is, and software is all about obstacles, not outputs. Clear the obstacles and watch the team perform.

You win at tennis by getting the highest score. But you don't play tennis by watching the scoreboard.

In addition, your selection of measurements is the most important part of the measurement process. Whatever you put out there to build graphs on is what's going to drive the conversation later, and as we've seen, the conversation should be about meaningful things to do to help teams, not abstract graphs and lots of data points. Picking the things to measure should be the most dynamic and flexible part of metrics, but instead it's the most static and rigid.

We have it exactly backwards: instead of quickly and adaptively measuring teams to determine how we can help them, we're statically measuring teams over a period of time to then sit down and infer what might be going on.

Not good.

An optimum system would describe the perfect world for the team -- what the team is striving for -- in any one area. Then the metric would directly explain why the team can't make it there. As a team member (or outsider), I could read the metric and know immediately what I need to do to help. If I had a dozen teams, I could see patterns across the teams, perhaps allocating more money and time to those problems that multiple teams were having. This puts the onus on the metrics creator to measure something useful instead of on the team to play funny-numbers games.

So how would this work?

1. Describe the goal. Let's take something like requirements. An agile team might state the requirements goal as "Through conversations with the Product Owner and preexisting written information, the team is able to easily determine what needs to be built during each sprint."

2. Ask the team (or other, outside observer) if they are meeting the goal.

3. List the obstacles to the goal and ask the team to pick the one closest to what they're experiencing. I suggest that there are only a few ways goals cannot be met:


  • Awareness - Is anybody aware of other teams that are reaching this goal? No matter how much you talk about TDD, unless the team is intimately aware of another team that's doing TDD effectively -- unless they have awareness that something is possible -- they're not going to get there.
  • Desire - The flip side of awareness is desire. That is, once you know the team next door is doing this thing, and it's helping them, do you really want to change? Lots of times folks are just happy with the way things are, even if they know things can be better, which can be an obstacle in itself.
  • Knowledge to Perform - Wanting to do something isn't going to mean much if you don't know how to do it.
  • Clear User Responsibility - Likewise, you should know who is doing what. If it's TDD, everybody's doing it, and that's fine. But say you have a job like buildmaster that is rotated around the group, and nobody wants to fill that seat for a sprint. Sometimes the problem is simply that nobody is stepping up.
  • Ability to Perform - Do you have the time, tools, and money to do this? You can love Continuous Integration, you can know how to do it, and you can have people ready and willing to set it up, but if you don't have a build server it ain't going to happen.
  • Commitment to Perform - Does the organization have a commitment to do this? Or is it something that you'd like to do but nobody in management is really encouraging?
  • Directing Implementation - Are you being directed that this is what you should be doing? Or does the team feel more left alone to figure it out? Likewise, the team can be overly directed -- not given freedom to perform.
  • Verifying Implementation - Is anybody from outside the team checking up to see how things are going here? Is there follow-up from the folks at the top on the status? Or does somebody simply write a memo?

Creating this ontology -- this hierarchical dictionary of goals and obstacles -- isn't easy, but it's not rocket science either. I've found you can do it in an hour or two. Large companies can have them pre-canned, which cuts the time down to just a few minutes.
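
If it helps to see it laid out, here's a minimal sketch of what one area of such an ontology might look like as plain data. The goal text is the requirements example from step 1 and the obstacle categories are the eight above; the layout and names are just one way to hold it, not a prescribed format.

```python
# A sketch of one entry in a goal/obstacle ontology. The goal text is the
# requirements example above; the eight obstacle categories are the ones
# listed; the dict layout and names are illustrative only.

OBSTACLE_CATEGORIES = [
    "Awareness",
    "Desire",
    "Knowledge to Perform",
    "Clear User Responsibility",
    "Ability to Perform",
    "Commitment to Perform",
    "Directing Implementation",
    "Verifying Implementation",
]

ONTOLOGY = {
    "Requirements": {
        "goal": ("Through conversations with the Product Owner and "
                 "preexisting written information, the team is able to "
                 "easily determine what needs to be built during each sprint."),
        "obstacles": OBSTACLE_CATEGORIES,
    },
    # More areas (testing, build, deployment, ...) follow the same shape.
}
```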

Taking the measurement isn't hard either. If you use a two-part question process where you ask a follow-up question to drive down to a lower level of detail, and you use a computer to do the asking, it takes a team about 15-20 minutes to go through this exercise. When's the best time to take this measurement? As part of the retrospective, when people should be ready to unload about what's wrong and what's holding them up anyway.
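
As a rough sketch of that two-part flow -- with a console prompt standing in for whatever tool actually does the asking, and the function and prompts purely illustrative -- it might look like this:

```python
def survey(ontology):
    """Walk a team through the two-part question process.

    First question: are you meeting the goal for this area?
    Follow-up (only if not): which obstacle is closest to what you're hitting?
    A console prompt stands in here for whatever tool does the asking.
    """
    findings = {}
    for area, entry in ontology.items():
        print(f"\n[{area}] Goal: {entry['goal']}")
        answer = input("Are you meeting this goal? (y/n) ").strip().lower()
        if answer == "y":
            continue
        for number, obstacle in enumerate(entry["obstacles"], start=1):
            print(f"  {number}. {obstacle}")
        pick = int(input("Which obstacle is closest to what you're hitting? "))
        findings[area] = entry["obstacles"][pick - 1]
    return findings
```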

I've done this with many teams, and what I've found is that you turn something like "Team X is having this problem with delivering code, but nobody can agree on what it is" into "Team X needs training in Continuous Integration, and their Project Manager is interfering with the team adapting effectively by micromanaging source control procedures."

For twenty minutes of work, this ain't bad. It gives everybody somewhere to get started on immediately making things better. And when you roll up the numbers from several teams, you start seeing patterns that you never would have spotted before. I had one client that dropped several million dollars in tools, only to have teams consistently say that the tools were getting in the way of doing their work. Ouch. That's not a face-to-face conversation that any team wants to have with management, but by using a little indirection in terms of the metric it was easy enough to pull off.
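
The roll-up itself is nothing fancy -- just counting which area/obstacle pairs keep showing up across teams. Here's a sketch, assuming each team's answers come back as a dict like the survey above returns:

```python
from collections import Counter

def roll_up(all_findings):
    """Tally (area, obstacle) pairs across many teams' findings.

    all_findings is a list of per-team dicts mapping area -> obstacle.
    Pairs reported by several teams are the place to put money and
    time first.
    """
    counts = Counter()
    for team_findings in all_findings:
        for area, obstacle in team_findings.items():
            counts[(area, obstacle)] += 1
    return counts.most_common()

# Hypothetical usage: roll_up([team_a_findings, team_b_findings, team_c_findings])
```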

You're able to have developers virtually "sit down" with upper management and tell them what's getting in their way. A VP once told me it was like having a beer with the guys on the teams without worrying about political correctness.

Metrics are important, but they're misunderstood and misapplied. Small, quick, adaptive, immediately-useful metrics can really make a difference.

That's not the way they're usually done.

Next time I'll show how a few teams quickly changed to improve productivity by using adaptive metrics.

8 Comments

To me, the only important measures are:
1. to spec?
2. on-time?
3. within budget?
4. with good quality?

Whether it's someone building software, a house, or a rocket ship, this is what needs to get measured. All this other stuff is noise. How do you set proper goals for software teams and report against the big 4 above?

Dan,

You're absolutely right, but you're wrong too.

From an outside perspective, we really only want to know fit, form, cost, and schedule. This is kind of a no-brainer. It's the "score" for software development, so to speak.

From the inside, however, teams identify and eliminate obstacles that prevent them from doing well -- from being on-spec, on-time, on-budget, and high-quality. This inside discussion happens over stand-ups, retrospectives, morning coffee, etc.

What outsiders want to see is the things you indicate. But immediately after getting answers to those questions, they want to know what the obstacles are to achieving those things. So they can help.

Software development is all about risk management: identifying potential issues by figuring out what's in your way, then effectively overcoming them. Good teams are great at risk identification and obstacle elimination. Poor teams are not.

If you only look at the "real" numbers, you'll always wonder why things aren't working so well.

You all miss the big point. You need to quantify all top level project objectives. And manage them. See my top level objectives slides gilb dot com.

Tom,

I think you're having a different discussion than we are.

The basis for the discussion is that you have a project with top level project objectives, and that the project will live or die depending on how well they perform.

If you've got a specific link please post it. I went to the site you mentioned and couldn't find something that directly addressed metrics.

For everybody else, when we talk about metrics we're going to get a LOT of input! From all over! But remember that the purpose of measuring something is to manage it. Also remember that objectives -- sorry Tom -- are not what you do. Objectives are whether you get there or not.

I had a consultant friend tell me one day, "Well, this agile project work is fairly simple. Make sure your sprints go well, and then the project will go well too."

To which I thought, "no duh!" This is like the basketball coach telling the team, "you guys, this entire basketball process is easy. Simply get a bigger number up there on the scoreboard than the other guys, and we can call it a night."

(grin)

http://www.gilb.com/tiki-download_file.php?fileId=180

top level quantified management objectives slides

and
http://www.gilb.com/tiki-download_file.php?fileId=32
paper on Confirmit Case


http://www.gilb.com/tiki-download_file.php?fileId=278
Conference Slides

If you've got a specific link please post it. I went to the site you mentioned and couldn't find something that directly addressed metrics.
I FIND THAT AMAZING SINCE THERE ARE ABOUT 50 SETS OF SLIDES AND 50 PAPERS IN THE DOWNLOAD SECTION ALL OF WHICH ADDRESS METRICS. I INVENTED THE TERM SOFTWARE METRICS (1976 BOOK TITLE)
BUT, SOME OF THEM ARE NOTED ABOVE. TG

For everybody else, when we talk about metrics we're going to get a LOT of input! From all over! But remember that the purpose of measuring something is to manage it. Also remember that objectives -- sorry Tom -- are not what you do. Objectives are whether you get there or not.
OF COURSE YOU 'DO' OBJECTIVES. YOU SET THEM, YOU NEGOTIATE THEM, AND THEN YOU APPLY THEM TO MANAGING YOUR PROJECT. THE OBJECTIVES (END STATES YOU WISH TO REACH) ARE THE MEASURE OF WHETHER YOU ARE GETTING THERE AND HAVE FINALLY GOT THERE.

FOR A MORE DISCIPLINED TERMINOLOGY SEE MY GLOSSARY
http://www.gilb.com/tiki-download_file.php?fileId=25

AND MY BOOK COMPETITIVE ENGINEERING
(I WILL SEND LINK TO FREE COPY BY EMAIL TO INDIVIDUALS WHO EMAIL ME, BUT 2 CHAPTERS -- SCALES OF MEASURE AND EVO -- ARE ON MY WEBSITE AT:
http://www.gilb.com/tiki-list_file_gallery.php?galleryId=16

Tom,

Thanks for the links. I spent about an hour on the first one this morning. Sorry I was unable to find them earlier.

I'm not trying to antagonize you, but I don't think we're communicating very well. It seems you are coming at this outside-in and on a quantitative basis. I'm coming at this problem from the inside-out on a qualitative basis. There's nothing wrong with quantitative -- in fact, that's the way people naturally come at technology management: management by objective (MBO). The theory is that if something is wrong in a project, it's a matter of the structure and quality of the objectives the team has been given.

That's good and wonderful and you're right on the money. I especially like your emphasis on short cycle times. And you're completely correct that a lack of specific, measurable definition (in addition to working at the wrong conceptual level) dooms many projects.

But a funny thing happens _inside_ the team once all the conversation about definitions and measurements ends. They have obstacles. And the obstacles have nothing to do with either their expertise or their objectives.

These obstacles can be characterized various ways: organizational, internal, obstacles based on knowledge, obstacles based on people, etc. But these obstacles are what keeps the team from succeeding.

You can have the best-structured and defined objectives in the world, but if you don't have the proper equipment, or if you don't know how to complete a task, or if you can't meet with important people, or if management is only paying lip-service to an important aspect of the project? Objectives will not be met.

All I'm saying is that a common language for these obstacles gives the team a mechanism for communicating them to the larger organization. I'm not implicating or dismissing the quantitative material you've presented.

Now -- I think it's important to know your philosophy's limitations, and I think that the trick with quantitative-only metrics and frameworks is understanding when too much detail is a bad thing. But that's a conversation for another day and has nothing to do with what I'm talking about here.

Looks like you've had quite a bit of good experience, and that we've had some very similar experiences as well. For instance, I note that we were both key players at the beginning of large DoD programs.

The core of technology development is not going to change -- are you making something of value for me today? We can have a very specific conversation around that. And we should. But that's not the entire universe of technology development: there is always room around the edges for little improvements here and there.

I've talked about metrics this week too. One serious problem I have with metrics is that the relation between what you have direct influence over and the actual effects is often not understood. Measuring time and deadlines does not answer the question of whether you're doing a good project, just as measuring the distance a runner has travelled does not measure his success (you need his time as well).

I believe in measuring up, and you make a good start with a goal that goes beyond the team's direct span of control. Whether it actually contributes to the business goals is still another matter.

"One serious problem I have with metrics is that the relation between what you have direct influence over and the actual effects are often not understood."

You're absolutely right, Machiel.

I've noticed that of all the teams I have observed, I have never seen a team that was performing poorly in which the team members didn't know exactly what was broken. They might not talk about it, and it might be outside of their control -- hell they might even have the wrong idea at first -- but teams become experts on what's holding them up.

We have all of that home-grown expertise on what's preventing the team from performing, yet we never harness that information for anything.

This seems foolish.
