
How to Really Measure Software Teams 2


I teach a lot of project and program managers in my business. And I've been there, done that -- I've run a lot of projects and programs. One of the things that fascinates me most is the difference between theory and practice.

In theory, you have this value structure from the organization -- what's important to it, what the plans are for next year, what fires need to be put out. In theory you simply define and prioritize these things, create and allocate a strategy, and end up with a list, matrix, Gantt chart, or something similar that gives you the next things to do. In theory, by having a value tree and using SWOT and a bunch of other stuff, like QFD, you end up with the next chunk of work for the next time frame. In theory it's even better than that, because by comparing your structure with your results, you can create metrics that show when you're not doing what you want to do.

In practice I have yet to see this work from top-to-bottom in anything but the minds of the creators. This doesn't mean I consider these things valueless -- far from it. A lightweight, cyclical system of work prioritization, allocation, and measurement is absolutely necessary for large organizations to survive.

But in practice, things get messy quickly.

In practice, you have multiple competing priorities that block one another yet each must be done first. In practice, you don't have enough time to put out the last fire before the next one pops up. In practice, department X wants you to do things one way while department Y requires an exactly opposite way. In practice, you have no support and you're on your own. In practice, the business can't make the decisions required of it in a timely fashion. In practice, Congresspeople ring up any time they like and it's a fire drill until they are made happy. In practice, you have no idea what the market will like and you're lucky to have some idea of how the next week is going to play out.

Practice is much different than theory.

It used to be the answer was simple -- blame the practitioners! And I still see this from people in all corners of the development world. I had a TDD proponent tell me last month that the reason developers weren't adopting TDD was that they were lazy. A friend who creates process models for organizations confided that things would be a lot better "if management would just crack down on those knuckleheads."

I'm not saying that everybody is perfect, and sometimes "cracking down" or "bucking up" or whatever is exactly what's called for. Sometimes, of course, it's not. But what I AM saying is that one good set of measurements to take is the set of perceived problems the teams are having with getting their jobs done. Perhaps by looking at the things that are preventing the teams from getting what they want, you can get what you want too.

It's good for me. It's good for you. It's not the dawning of the Age of Aquarius, and we all don't have to start singing Kumbaya and wearing tie-dyed shirts, but it's at least a few steps forward in having a productive conversation about what to fix.

Sounds great but very nebulous, Daniel. What's it actually look like?

Case study 1: Large government program kick-off: A large DoD agency was planning to replace a number of its internal systems with a single enterprise-wide system. As part of that, the agency's StratPlan was matched to its objectives, and the objectives were mapped to business use cases, creating a process value tree and a traditional value tree. This gave the organization a prioritized, valued list of business processes that needed to be supported by the new system, as well as a list of qualities the future system had to support.
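(If you like seeing structure as data, here's a minimal sketch in Python of what that kind of value tree boils down to. The goals, weights, and use cases are entirely made up for illustration -- the agency's actual tree was far bigger.)

    # Hypothetical value tree: strategic goals -> objectives -> business use cases.
    # All names and weights are invented for illustration.
    strat_plan = {
        "Modernize logistics": {
            "weight": 0.6,
            "objectives": {
                "Single view of inventory": {
                    "weight": 0.7,
                    "use_cases": ["Track asset location", "Reconcile stock levels"],
                },
                "Faster procurement": {
                    "weight": 0.3,
                    "use_cases": ["Approve purchase request", "Receive shipment"],
                },
            },
        },
        "Reduce sustainment cost": {
            "weight": 0.4,
            "objectives": {
                "Retire legacy systems": {
                    "weight": 1.0,
                    "use_cases": ["Migrate maintenance records"],
                },
            },
        },
    }

    def prioritized_use_cases(plan):
        """Flatten the tree into (use case, value) pairs, highest value first."""
        scored = []
        for goal in plan.values():
            for objective in goal["objectives"].values():
                value = goal["weight"] * objective["weight"]
                scored.extend((uc, value) for uc in objective["use_cases"])
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    for use_case, value in prioritized_use_cases(strat_plan):
        print(f"{value:.2f}  {use_case}")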

The initial work went well. Large pieces were scoped and departments reached an understanding regarding control and deployment. Key issues were resolved. At that point the team switched from business strategy to detailing out the business use cases and associated documents. A permanent program manager was assigned, contractors who specialized in DoD funding requirements were brought in, and the program went from a straight line to a dozen things all heading in seemingly different directions. At some point, things seemed to bog down. When I checked in on the requirements/scoping team, they were unhappy.

Oddly enough, nobody wanted to actually identify what the problems were. So we used this adaptive metrics process I've been describing. We simply took the things the team was trying to accomplish and bounced them off some common obstacles. Note that when I say "the things the team was trying to accomplish" I mean the generic things that requirements teams do as part of their job: the general reason they're there. I don't mean the goals of the specific program or project.
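If it helps to see it concretely, here's a minimal sketch in Python of what one of these grids amounts to as data. The process and obstacle names are my own stand-ins for illustration, not the actual survey we used.

    # Hypothetical process-by-obstacle grid. Rows are the generic things a
    # requirements team does; columns are common obstacles. Each cell holds the
    # team's perception: "green" (working), "amber" (some pain), "red" (blocked).
    processes = [
        "Understand stakeholder needs",
        "Define the system",
        "Manage scope with requirements",
        "Integrate work into the program",
    ]
    obstacles = [
        "Commitment to perform",
        "Ability to perform",
        "Activities performed",
        "Measurement and verification",
    ]

    # Filled in by simply asking the team, one cell at a time.
    grid = {
        ("Understand stakeholder needs", "Commitment to perform"): "green",
        ("Define the system", "Commitment to perform"): "green",
        ("Manage scope with requirements", "Activities performed"): "red",
        ("Integrate work into the program", "Activities performed"): "red",
        # ...remaining cells omitted for brevity
    }

    def cells_with(rating):
        """Return the (process, obstacle) pairs the team rated a given color."""
        return [cell for cell, value in grid.items() if value == rating]

    print("Bright spots:", cells_with("green"))
    print("Pain points:", cells_with("red"))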

[Figure: Metrics of a team organizing a large program using use cases. To read: find a red (bad) or green (good) process area, then read the associated process and obstacle from the row and column headings.]


Now things look interesting. Reading the dark greens, we can see that there is clear user responsibility to refine the system definition and understand stakeholder needs. That's awesome -- we got folks who are supposed to be finding out what we need. We also have a clear commitment to perform defining the system. That's good too -- somebody up there thinks our work is important.

But a funny thing happened on the way to defining an enterprise system. Reading the reds, we have a large perceived problem with integrating our work into the rest of the program -- the requirements were simply viewed as an additional amount of "paperwork" that needed to be completed for funding to occur. In addition, we have a large perceived problem with using requirements to manage the scope of the system. Ouch! One of the critical parts of determining business needs and value -- using the work to drive the prioritization and scoping of the system -- wasn't happening.

No wonder they were unhappy.

An interesting pattern emerged here as well. We found that rows with more than three red indicators mark blocks the team is completely unable to overcome on its own. They're organizational dead-stops.
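Spotting those is mechanical. Continuing the hypothetical grid from the earlier sketch, something like this would flag them (the more-than-three-reds threshold is just the rule of thumb above, not a law):

    def dead_stop_rows(grid, processes, obstacles, threshold=3):
        """Flag processes whose row has more than `threshold` red cells --
        the organizational dead-stops a team cannot clear on its own."""
        flagged = []
        for process in processes:
            reds = sum(
                1
                for obstacle in obstacles
                if grid.get((process, obstacle)) == "red"
            )
            if reds > threshold:
                flagged.append((process, reds))
        return flagged

    # With the toy grid from the earlier sketch this returns [], since no single
    # row there has more than three reds filled in.
    print(dead_stop_rows(grid, processes, obstacles))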

And no wonder nobody wanted to talk about it. The organization clearly was not ready to use a detailed requirements/business valuation process as part of its governance and program management methodology. That's fine and dandy -- orgs have a right to do things in any fashion they want -- but the team had a responsibility to communicate to the organization that this was an obstacle. And they were able to do that in a way that didn't ruffle any feathers.

Case Study #2: Six Sigma Voice of the Customer on a Distributed Workforce: Another large, highly distributed government agency had a need to streamline its equipment procurement process. They were big into Six Sigma, so that's what they decided to use.

The Six Sigma team came to us for assistance in Voice of the Customer -- finding out what's broken. We took the pieces of procurement and asked the org -- spread out all over the world -- what was hurting. This took a couple of hours to set up, a day to administer (due to time differences), and an afternoon to report back. We're not talking a 90-day strategic consulting engagement. It was quick and easy.

[Figure: Metrics of a team beginning a Six Sigma process. Qualitative and quantitative metrics both have their uses; this VOC metric kicked off a Six Sigma project.]


So what do we have? Starting with the green boxes again, looks like we know how to do the annual inventory update and management does a great job of checking in to make sure the advanced equipment shipment notification is done correctly.

But geesh, (reading the second-to-last line) our training and knowledge of the software we're supposed to use sucks. We don't have the time, tools, and money to train, management doesn't seem to care if we do, and nobody ever checks our training status. In addition (reading the last line) even though we know who's supposed to be in charge of equipment disposal, nobody in upper management ever checks to make sure it's being done correctly.

Now that paints quite a detailed picture of a multi-billion dollar procurement process for just a couple of days' worth of work, doesn't it? It was almost like taking a few of the guys out for beers and finding out "what's really going on". And it gave the Six Sigma team a place to start their DMAIC work: focus on training and equipment disposal.

A strict metrics guru would say that the organization in both of these cases failed to define values that were necessary for teams to succeed and then to measure those values. For instance, in our first example the use of requirements to scope programs is well known, and a simple check now and then would resolve that problem. In the second example, it is obvious that measurements can be taken in regards to training penetration.

But there are problems with this approach. First, as measurement wonks might have noticed by now, we are not counting beans here. Instead we are asking people for their opinion. This tool is a way to start a conversation about defining measurements, not a hard measurement in itself. Secondly, it's totally obvious after the fact to see what's broken, but at the time, these things weren't obvious at all. Thirdly, you can't measure everything. Even the most measurement-happy folks will admit that there is a limit to what an organization can tolerate in regards to statistics accumulation.

You have to have some way of rapidly surveying and then adapting your metrics to individual team situations, working from the bottom-up to harvest lessons that the organization can use. This is adaptive metrics. If you use a standardized grid, or better yet a hierarchy of processes and obstacles, then each piece of measurement rolls up into an organizational bottom-up picture of obstacles.
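Here's a minimal sketch, in Python again and with invented team names, of what that roll-up amounts to once every team scores the same standardized cells:

    from collections import Counter

    # Hypothetical: each team fills in the same standardized process-by-obstacle
    # grid; the organizational picture is simply how often each cell comes up red.
    team_grids = {
        "requirements team": {
            ("Manage scope", "Activities performed"): "red",
            ("Integrate with program", "Activities performed"): "red",
        },
        "six sigma team": {
            ("Train on tooling", "Ability to perform"): "red",
            ("Dispose of equipment", "Measurement and verification"): "red",
        },
        "dev team": {
            ("Train on tooling", "Ability to perform"): "red",
        },
    }

    def org_obstacle_picture(team_grids):
        """Count how many teams report each (process, obstacle) cell as red."""
        counts = Counter()
        for grid in team_grids.values():
            for cell, rating in grid.items():
                if rating == "red":
                    counts[cell] += 1
        return counts.most_common()

    for (process, obstacle), teams_blocked in org_obstacle_picture(team_grids):
        print(f"{teams_blocked} team(s) blocked: {process} / {obstacle}")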

[soap box]Good metrics are about learning stuff. If you're reading a graph or a report and you don't have an "ah-ha" moment, you're wasting your time. You should know something significantly more after a measurement than before it or it wasn't worth doing. [/soap box]

As you can see, I've focused strictly on "hard-core" metrics guys here.

I know my agile friends are probably scratching their heads (or reaching for the analgesic) so I'll do one more part where we just do technology development with agile teams.

