
How to Really Measure Software Teams 3


Ever do the retrospective dance? You know the one, where at the end of the sprint everybody plays all the retrospective games: start-stop-continue, timeline, word-pong, or sprint-painting -- and then nothing in your team actually changes? Maybe somebody takes notes, there's an "action list", you create new stories, or whatever, but the next sprint there you are with the same items all over again?

That's a fun game, right?

Teams do this all the time. They're really good at going through the motions of doing a retrospective -- after all, everybody knows retrospectives are the most important part of agile -- but they suck wind when it comes to actually improving themselves.

Makes you wonder: are these teams really agile?

There are all sorts of tests out there to tell if a team is "really" agile -- the Nokia Test comes to mind as a popular example. Based on my experience, I have one simple rule for whether you're an agile team or not: you have to be constantly improving through the use of an efficient feedback cycle.

If you are constantly improving, you can start with nothing and end up with a hyperperforming team. It might take a while, but it'll happen. If you are not constantly improving, no matter how many of the rituals and behaviors you do, you're never going to amount to anything.

Which gets us back to the retrospective dance.

Aside from apathy, retrospectives have two main problems:

People are not communicating. Nobody wants to make waves; the Project Manager is very risk averse and doesn't want to hear about problems in front of the Product Owner; we've tried before and never could fix this; Joe always dominates the retrospective and he has his pet issues; there's not enough time to speak; the dog ate my homework.

When people do speak, the language they use is imprecise. Vague problems lead to ineffective solutions. We agile folk are really happy with just any old thing anybody can throw on the whiteboard. It's really funny: when we do user stories we realize that there's no way the user story title can cover everything we want, yet when we do retrospectives we're all too happy to have something vague and unformed splat on the wall and stick there without further clarification, conversation, or definition.

"Testing sucks", "Everybody keeps breaking the build", "we need better time management", "we need to do something about the framework" -- these are all retrospective items I've seen teams come up with. And guess what? Next sprint they have the same ones.

Poorly expressed problems lead to poorly executed remedies.

Eventually people get tired of putting stuff up there and the problems "go away". Not exactly the way retrospectives are supposed to work!

What if teams had a high-level, pre-canned list that covered most of the problems they might face? Is such a thing possible? Unlike the actual things we create in projects, the goals we have when creating stuff are fairly static: we list stuff to do, we plan ahead, we develop stuff, we check it, we put it somewhere the user can see, we watch for their feedback. It's not like any of that is going to change anytime soon -- the way we do it might change, but the general goals aren't going anywhere. And the obstacles are fairly static at a high level too -- you don't know stuff, you don't have time, nobody cares about this, your manager is discouraging it.
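To make that concrete, here's a minimal sketch of what such a pre-canned list could look like as data. The goal and obstacle names are just the ones from the paragraph above; everything else is illustrative, not an actual tool or schema.

```python
# A rough sketch of a "pre-canned" retrospective checklist: the static goals of
# building software crossed with the static obstacles that block them.
# All of this is illustrative data, not a real tool.
from itertools import product

GOALS = [
    "list the stuff to do",               # backlog / stories
    "plan ahead",                         # sprint and release planning
    "develop stuff",                      # writing the code
    "check it",                           # testing and review
    "put it somewhere the user can see",  # deployment
    "watch for their feedback",           # usage, support, follow-up
]

OBSTACLES = [
    "we don't know how",
    "we don't have time",
    "nobody cares about this",
    "management is discouraging it",
]

# The checklist is simply every goal crossed with every obstacle that might
# block it -- a fixed grid a team can walk through in a few minutes.
for goal, obstacle in product(GOALS, OBSTACLES):
    print(f"{goal:40s} | blocked by: {obstacle}")
```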

Would that work?

Case Study #1:

This first team's job was changing parts of a web application that did configuration management for products to sell. They had been running over a year using 4-week sprints (a little on the long side for my taste). Talking to the PM, everybody seemed happy and liked agile, but management was concerned that there was a lot of rework, and the release dates were iffy. While nobody was complaining, the team wasn't getting the performance it should, either.

So we took 20 minutes and gave them this adaptive metrics thing.

[Chart: adaptive metrics for a software team. To read it, find a green box (good) or a red box (bad), then read the associated column and row.]
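If it helps to picture how such a chart gets used, here's a toy sketch of reading one. The row and column names are borrowed from areas and factors mentioned elsewhere in this post (Project Management, Business Modeling, commitment to perform, and so on); the real chart's exact layout isn't reproduced here, and the ratings are made up.

```python
# A toy version of reading the assessment chart: each cell pairs an area (row)
# with a factor (column) and holds a rating. Area and factor names come from
# this post; the ratings below are invented for the example.
GREEN, RED = "green", "red"

assessment = {
    # (area, factor): rating
    ("Project Management",  "commitment to perform"):    GREEN,
    ("Business Modeling",   "commitment to perform"):    RED,
    ("Analysis and Design", "directing implementation"): RED,
    ("Requirements",        "verifying implementation"): GREEN,
}

def retrospective_items(grid):
    """Turn each red cell into a concrete, discussable retrospective item."""
    for (area, factor), rating in grid.items():
        if rating == RED:
            yield f"{area}: we're stuck on '{factor}' -- who can help clear this?"

for item in retrospective_items(assessment):
    print(item)
```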


Looking at the results, this team was really good at project management -- getting organizational obstacles out of the way. There wasn't much desire to change the way they were doing things in this area, although there wasn't a lot of oversight either. Probably because things were going so well. For the agile rituals, like team owning the sprints and cross-functional collaboration, there was a vague need to do better but nothing definite -- yet.

More interesting was Business Modeling: knowing how the software you are developing fits into the larger business and having the opportunity to streamline processes while you deliver solutions. The team felt a real need to do this better, but looking at the rest of the row in the chart, the odds were stacked against it. There was no commitment to perform from management. Looks like the directors and VPs were happy to stick the team in a room with a fake product owner and wait on code to come out the other end. There was no awareness on the team's part of anybody else in the organization being able to solve this problem either -- not a good sign. On top of that, they didn't know who to go to for help or how to go about doing it in an agile project.

Most telling are the results for Analysis and Design -- for some reason the bugaboo of agile teams. When asked: do you talk about approaches to developing software before you start developing? Do you talk about the general structure of the problem before you start hacking away at it? The team said no -- it was a huge problem and something must be done about it. If you asked this same team whether they should do "analysis and design," they'd all say "Heck no!" But if you described what analysis and design looks like on an agile team, they'd tell you they need to do more of it.

Very interesting.

The reason why A&D sucks with them is very simple -- the organization isn't supporting it. Nobody knows who's supposed to take the lead on this (anybody on the team), how other teams are doing it (failure of the organization to socialize best practices), or how to do it (simple training).

Twenty minutes of work for the measurement, another thirty to talk it over with the team, and we had things on our retrospective list like "talk to the director about whether we should be helping the business organize its processes" and "find other teams who are doing agile modeling and have them give us some tips" instead of "reduce rework" and "manage our time better."

Case Study #2:

This team was tasked with creating automated test environments and documenting them for a large enterprise-wide software rollout.

The problem was that the rollout was going to affect dozens of other major projects, all in unpredictable ways. For the other projects to continue running, somebody had to provide a working, automated test environment. In addition, this environment would keep changing as the major product changed. Also, the needs of the consumer projects would change as they evolved.

So they had about 40 moving targets. Their job was to create software (and docs) each sprint to connect up all the targets and keep the lights on. Testers and developers had to keep testing.

The team had been running for two months on week-long sprints. (They ended up doing Kanban Scrum, but that's a story for another day.) Somebody in senior management approached me and said, "I've got a bad feeling about this team, Daniel. They seem to be spinning their wheels. Could you take a look?"

I met and chatted with the team, and as part of that discussion we performed a quick measurement.

[Chart: assessment for the team that creates test environments.]


This is a very interesting graph, because it doesn't show a lot of reds. That's unusual. What it does show is some issues with Project Management (reading the third row from bottom). Looks like the last three columns, commitment to perform, directing implementation, and verifying implementation, are slightly red. Wonder why?

Remember that when we say "Project Management" we're not talking about the Project Manager. We're talking about the ability to clear obstacles outside the team. That could be their PM, their PO, or somebody else. We're talking about goals and obstacles here, not people. Clearly there was no support for the goals of project management. We asked the team: why is that?

"Because everybody we deal with has a different idea of what the project is supposed to accomplish, including our sponsor group. They all disagree with each other," they said.

Ouch. I know I'd love being a Product Owner when the organization doesn't know what it wants. It's such a fun job. Not!

The next generally red row is Requirements. No surprise there: things were changing so fast for this team that week-long sprints and Kanban were the only thing keeping it in one piece. Not a lot we can do about that. The results did indicate a desire for more training, so that's worth a shot.

Finally, Environment was an issue. The team did not have the computers, software, and process set up to get their work done. According to the chart, this was due to not having anybody step up to handle it. Talking to the team some more, however, it turned out that providing the build environment for the team was a rotating job and everybody wanted to do it a different way. This caused confusion, and people lost time trying to figure out the new rules each sprint. Rotating build master -> good idea. Changing things so the rules are slightly different each sprint -> not-so-good idea.

So we had an all-hands meeting with all of the sponsors, who were generally outside the loop. Just what the heck is our scope? Our acceptance criteria? We had a product owner, but she (and the team) were getting so many messages nobody knew what the general strategy was.

We drew up some guidelines for the software environment the team was working in that all the buildmasters agreed to. From now on, Sharepoint would be the repository for docs-in-process. Builds would be run twice daily. Taking work offline wasn't okay, except in a few situations.

The sponsor group made some tough decisions, and on some issues they just couldn't agree. So the team made their own decisions and emailed them out to the distribution list. The new build process and environment rules cut down the thrashing time during each sprint. Requirements were still going to suck, but with training we made things move a little faster.

--

Let's get real. There are no magic bullets, and no special tools or games are going to make your sucky project wonderful. But I can guarantee you that adapting to your problems and overcoming them will make your sucky project wonderful. Having a massively adaptive team means getting really good at spotting problems early on and doing something about them.

It's interesting to note that both of these teams were happy with agile and thought they were great at it. From the inside, things were cruising along. That's because the big problems had "dropped off the whiteboard" and become background noise. (One wonders what the sponsors and customers of each project would have thought had we asked them to assess the teams, but I digress.)

Real problems take a while to sort out -- and much of that time is simply stating the problem in a way that leads to a solution. This adaptive metrics system cannot solve your problems. Hell, as we saw in the last example, at times it can't even accurately state what they are. But what it can do is jump-start a conversation about what you're dealing with. And perhaps it can do it in better terms than you could come up with by yourself.

As a numbers guy, I love that there's all kinds of cool reporting potential around these metrics. Do web teams experience environment issues more than desktop app teams? Is training continuing to be a factor in poor performance? Are we over-managing or under-managing? What's the problem with deploying code that we have? Lots of folks have opinions about these at the enterprise level. With this tool, a standardized language, and a database, you could finally start separating opinion from fact. Not only would it help if you had a few projects, but the more projects you had, the more useful it would be. Lord knows what we could learn at an industry level with a two- or three-level hierarchy and a thousand measured teams.
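As a hypothetical sketch of that roll-up, here's one way assessments from many teams could be pooled and queried. The data model and field names are assumptions invented for the example, not a real schema.

```python
# A hypothetical roll-up of assessments across many teams, to answer questions
# like "do web teams hit Environment issues more often than desktop teams?"
# The rows and field names below are invented for illustration.
def red_rate(rows, team_type, area):
    """Fraction of cells in a given area that came up red for a team type."""
    relevant = [rating for t, a, rating in rows if t == team_type and a == area]
    return sum(r == "red" for r in relevant) / len(relevant) if relevant else 0.0

assessments = [
    # (team_type, area, rating) -- one row per chart cell, per team, per sprint
    ("web",     "Environment",  "red"),
    ("web",     "Environment",  "green"),
    ("web",     "Requirements", "red"),
    ("desktop", "Environment",  "green"),
    ("desktop", "Environment",  "red"),
]

print("web/Environment red rate:    ", red_rate(assessments, "web", "Environment"))
print("desktop/Environment red rate:", red_rate(assessments, "desktop", "Environment"))
```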

But the important thing to me about this is what it means at the team level -- how it makes a difference tomorrow in your work. If you're suffering through sprint after sprint of the same problems without anything changing, you should give it a shot! You might learn something.

Or remember it.




2 Comments

Ilja Preuss: Yes, I've seen the "retrospective dance" -- and I can imagine how this exercise can give a team a useful new perspective on its process.

On the other hand, I'd think that a better long-term solution to the problem is to coach the team in better problem analysis -- teach them not just to look for a quick fix to the apparent problem at hand, but to have a meaningful conversation about what's really happening, to get everyone's input, to think outside the box, to ask five whys, to use fishbone diagrams, and so on.

DanielBMarkham: I agree that there are a lot of problem analysis tools that teams do not use.

For some reason, however, it seems teams in general are really bad at analysis. Beats me why.

And while it might be better for teams to understand problem analysis, I feel that the roll-up potential with what I'm talking about is too great to ignore. And I find teams unwilling to spend a lot of effort on traditional problem analysis.

Anything is better than the current situation.


About this Entry

This page contains a single entry by DanielBMarkham published on September 13, 2009 12:20 PM.


