Saturday, September 1, 2007

Driving Requirements Down the Pipe

by Jim Johnson

An agile development process improves the odds of success; try executing a baseline project that takes no more than six months and costs no more than $750,000

“A requirement saved is a requirement earned,” to paraphrase Ben Franklin. A saying that might be just as germane is, “If you watch your pennies, the dollars will take care of themselves,” but you’ll have to change it to, “If you watch your requirements, the projects will take care of themselves.” Let’s face it: a project is a collection of requirements, just like a dollar is a collection of pennies. There is one big difference, though—all pennies have the same value and risk. Requirements are not created equal, and they should not be considered as such.

If clairvoyance were a true science, then you could rely on traditional methods to estimate the cost of a project or a group of projects. However, we all know that such estimates are difficult to make and usually inaccurate; we get them wrong most of the time. (See Fig. 1.)

Instead of doing poor and inaccurate estimates, why not change the whole way you look at the problem?

The Project Portfolio

Your proposed projects can be viewed as a list of investment choices. You will allocate resources to the ones that will add the best value to your portfolio.

You may not know it, but you have a resource pool. It’s called your development staff. The cost of the pool is the budget for this staff. Your goal is to produce output that has high value and is delivered in a timely manner. The strategy that delivers the best result with the shortest delay is to quickly move output from the pool down a pipeline and into the hands of the stakeholders. (See Fig. 2.)

How an Optimized Pipeline Works

Suppose you have a pool of 20 developers, all using the Extreme Programming model. This means you have 10 developer pairs. Now, pretend you have five “stories,” or projects, each broken into 30 index cards, with each card representing roughly a week of work for one pair. That adds up to 150 pair-weeks of work. Divide this work among your 10 pairs and you have 15 weeks of work.
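The arithmetic is simple enough to check in a few lines. The figures below are just the ones from the example above, and the one-card-per-pair-week assumption is mine.

```python
# Back-of-the-envelope math for the example above (all figures illustrative).
developers = 20
pairs = developers // 2                          # Extreme Programming pairs: 10
projects = 5
cards_per_project = 30                           # assume one card is about one pair-week
total_pair_weeks = projects * cards_per_project  # 150
calendar_weeks = total_pair_weeks / pairs        # 15 weeks for the whole pool

print(pairs, total_pair_weeks, calendar_weeks)   # 10 150 15.0
```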

If you mark each index card with its risks and gains, you can then sort the cards by their value. Of course, government mandates and political items may rise to the top of the stack, but that’s life in the big city.

Each week you assess progress, review your risk and gain assumptions, look at new requirements, and schedule the following week’s work prioritized by value. When the week’s work is completed, it is then moved to the testing area, checked by users and, if appropriate, put into production. The pipeline channels resources to the most essential current activities and puts functional, high-value tools into the hands of users.
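Here is a minimal sketch of that weekly cycle. The card fields and the risk-discounted value formula are my own illustrative assumptions, not a prescribed Standish calculation.

```python
# Weekly re-prioritization sketch: rank open cards by risk-discounted gain
# and hand the top ones to the available developer pairs.
cards = [
    {"name": "export report", "gain": 8, "risk": 0.2, "done": False},
    {"name": "audit trail",   "gain": 9, "risk": 0.5, "done": False},
    {"name": "color themes",  "gain": 2, "risk": 0.1, "done": False},
]

def weekly_plan(cards, pairs):
    """Pick the highest-value open cards for the coming week."""
    open_cards = [c for c in cards if not c["done"]]
    # Mandated or political items could be forced to the front of this ordering.
    ranked = sorted(open_cards, key=lambda c: c["gain"] * (1 - c["risk"]), reverse=True)
    return ranked[:pairs]                        # one card per developer pair this week

for card in weekly_plan(cards, pairs=2):
    print(card["name"])                          # export report, audit trail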

In recent years, project managers have been learning how to manage the process of their projects, so it’s not too far-fetched to believe they could learn how to manage a project pipeline. The financial management of the pipeline is smooth, constant, and consistent. Admittedly, changing from a project-based management structure to a requirements-driven and pipeline-based management structure might temporarily be a wrenching experience, especially for project managers.

At Standish, we believe pipelining can be used by most organizations, and we are looking at a number of firms that have implemented all or parts of a pipelining solution. Many of the tenets described in the Standish book My Life is Failure are basic tools for a pipelining strategy.

The pipelining strategy consists of three major parts:

1. Maintaining the Right Resource Pool

First, all of your resources (internal personnel and outside consultants) must be put into a pool rather than assigned to specific projects or programs. Each person is then identified by his or her skills and capabilities. You will need a real-time database that is continually updated if you are to execute this strategy.
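One simple way to model that continually updated skills database is sketched below; the record fields and skill names are illustrative assumptions.

```python
# A minimal resource-pool record: who someone is, what they can do, and
# whether they are currently free. All names and skills are made up.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    skills: set = field(default_factory=set)    # e.g. {"database design", "testing"}
    available: bool = True                       # updated in real time as work is assigned

pool = [
    Resource("internal developer A", {"screen design", "testing"}),
    Resource("outside consultant B", {"database design"}),
]

def can_do(resource, required_skills):
    """A resource qualifies when it is free and covers every skill the task needs."""
    return resource.available and required_skills <= resource.skills
```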

As people gain experience, they will be able to perform a greater variety of tasks, thereby broadening not only their usefulness, but also the flexibility of your resource pool. That pool should be considered a high-value corporate asset and treated as such.

2. Strict Baselines and Gating

Second, you need to define a lean baseline for any new project. The baseline consists of the minimum list of requirements needed to provide essential business value.

In addition, you need a gating strategy. Any new project needs to go through a formal gating process (as outlined in My Life is Failure). A project cannot make it through a gate unless the minimum requirements that have been set for that gate have been completed.

A key function of the gating process is that it weeds out many low-value or unfeasible projects before they are started, and discards in-progress projects that are hopelessly failing before a large amount of money has been spent.
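As a rough illustration of that gate rule, the sketch below lets a project through a gate only when every minimum requirement set for the gate is complete. The gate names and requirement lists are assumptions, not the formal process from My Life is Failure.

```python
# A gate check: the project moves on only if every minimum requirement
# attached to the gate is done. Gate names and items are illustrative.
gate_requirements = {
    "inception":   ["business case approved", "baseline requirements listed"],
    "elaboration": ["design reviewed"],
}

completed = {"business case approved", "baseline requirements listed"}

def passes_gate(gate, completed_items):
    return all(req in completed_items for req in gate_requirements[gate])

print(passes_gate("inception", completed))    # True: may proceed to the next gate
print(passes_gate("elaboration", completed))  # False: stays at the gate or is canceled
```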

The procedure should be the same whether you outsource a project or temporarily draw on part of your resource pool. If you use the resource pool, then once the baseline is done, the people return to the pool. The project itself goes into the application portfolio, and any new work for it will have to compete with other applications in the portfolio.

3. Microproject Stacking

Third, you need to adopt a “net value of work” strategy. This is the heart of pipelining execution. Tasks for the application portfolio—what we term “microprojects”—are broken down into finite timeframes, such as a week for the Extreme Programming model or a month for the Scrum model. You might decide to use two weeks, but the top limit should be no more than a month.

Each piece of work or microproject will be ranked by gain and risk, with the highest expected value on the top of your work stack.

Microprojects within the application portfolio will be assigned from the top of the microproject stack. Each microproject must be a stepping-stone, meaning that there must be solid criteria and tests for completion.
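A minimal sketch of such a stack appears below. The fields, the expected-value formula (risk-discounted gain less cost), and the sample microprojects are all my own illustrative assumptions.

```python
# A microproject stack sorted so the highest expected value sits on top.
from dataclasses import dataclass

@dataclass
class Microproject:
    name: str
    gain: float          # estimated business value
    risk: float          # probability of failure, 0 to 1
    cost: float          # estimated labor cost
    done_criteria: str   # the "stepping-stone" test for completion

    @property
    def expected_value(self):
        return self.gain * (1 - self.risk) - self.cost

stack = sorted(
    [
        Microproject("customer lookup", 50, 0.2, 10, "lookup passes acceptance tests"),
        Microproject("nightly archive", 20, 0.1, 5, "archive restores cleanly"),
    ],
    key=lambda m: m.expected_value,
    reverse=True,
)
next_up = stack[0]       # assigned first: "customer lookup" (expected value 30 vs. 13)
```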

You may consider a slightly different strategy as an alternative. For example, as each worker accomplishes his or her goal, that worker goes back into the resource pool or gets to choose from a limited number of items—say from the top 10 microprojects within the microproject pool—that can be accomplished by the person’s skill set.

So, in summary, there are three major parts to pipelining: pooling resources, building foundation baselines, and stacking microprojects.

This strategy is similar to using an iterative process, but one difference is that it impels the organization to iteratively maximize its ROI or gain. Now let’s look at these three major parts in greater detail.

Resource Pooling

As we stated in My Life is Failure, the human resources component of “Management 101” emphasizes that the staff is your most valuable asset. Not surprisingly, one of the key project success factors identified in Standish Group’s CHAOS research is a competent staff.

There are six key practices for ensuring staff competency within a resource pooling environment. First, identify the required competencies and alternative skills. Second, provide a good, continuous training program to enhance staff skills. Third, recruit both internally and externally to provide a balance of experience. Fourth, provide incentives to motivate the staff. Fifth, ensure the staff is microproject-focused. Sixth and finally, provide a technology environment that reduces complexity by using a limited number of standard infrastructure components and tools.

To ensure a competent staff, you must understand your microproject environment. You should know the range of activities that need to be undertaken and be able to match skills with those activities. Certainly you will need a variety of resources, such as database designers, screen designers, and testers. The challenge consists of properly identifying the required competencies, the required level of experience, and the expertise needed for each identifiable task.

You also will need to calculate the number of resources with a given skill that are required, when they will be needed, when they will be released, and whether you have ample resources internally or must call on external personnel. This is a balancing act in itself.

Resource pooling resembles the old typing pools, but with important differences. You will have a database of people, and their skills must be described in sufficient detail for you to be able to match skills to requirements. You will also need visibility into the activities and availability of each member of your resource pool.

When a resource becomes available, that person will work on the most highly rated available microproject that matches his or her skills. If the top microproject does not match the available resource’s skills, that person works down the stack to the first microproject that does. Here is an interesting problem: you might have shortages of certain skills that cause bottlenecks. You may end up with a collection of high-priority microprojects that sit on top of the stack for too long a period. This phenomenon is known as “topping the stack.”
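The sketch below shows that top-down assignment rule along with a simple check for items that have been topping the stack; the skill tags and the two-week threshold are illustrative assumptions.

```python
# Top-down assignment: take the first stack item the person can actually do,
# and flag items that have waited near the top for too long.
def assign(resource_skills, stack):
    for item in stack:
        if item["skills"] <= resource_skills:
            stack.remove(item)
            return item
    return None                                  # nothing matches; stay in the pool

def topped_items(stack, max_weeks_on_top=2):
    return [item for item in stack if item["weeks_waiting"] > max_weeks_on_top]

stack = [
    {"name": "risk dashboard", "skills": {"database design"}, "weeks_waiting": 3},
    {"name": "login screen",   "skills": {"screen design"},   "weeks_waiting": 0},
]
print(assign({"screen design", "testing"}, stack)["name"])   # login screen
print([i["name"] for i in topped_items(stack)])              # ['risk dashboard']: hire or train
```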

The right response is to periodically check the highest-level microprojects and assess where you need to acquire people to add to your resource pool, or to train existing members in new skills.

The more standard your infrastructure is, the more likely you will be able to develop a resource pool that covers a greater range of microprojects, thus relieving many of the stack-topping incidents.

Another phenomenon you may see is known as “bottom churn.” This occurs when low-priority items go through the pipe at a higher velocity than the top and middle. Bottom churn happens when too many of your resources only have skills that address the lower-priority microprojects.

As above, you can correct this issue by adjusting your resource pool. Bottom churn is bad because it means you are losing much of your financial advantage; if anything, you want to starve the bottom of the stack. The ultimate goal is to obtain value and to create velocity by moving the right microprojects off the stack as fast as possible.

“Geeks are easy. Give them more toys, don’t yell at them, and they will be happy,” says Kent Beck, father of Extreme Programming. Everyone has preferences for what they like working on and what they do not like working on. The more enjoyable the task, the higher the chances the task will be done well and quickly. So you could set up a bidding system where workers select from a small number of microprojects from the top of the stack that meet their skill level.

If a microproject sits on top of the stack too long, a trigger would go off and it would then be assigned to the next available qualified resource, thus again preventing topping. The hope would be that everyone would get to work on things they like as much as possible.
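Here is one way such a bidding-plus-trigger scheme could look; the window of ten items and the one-week trigger are assumptions drawn from the description above, not a defined mechanism.

```python
# Bidding with an aging trigger: a worker normally picks from the matching
# items near the top of the stack, but an item that has waited past the
# trigger is simply assigned, which prevents topping.
def offer_or_assign(resource_skills, stack, window=10, trigger_weeks=1):
    doable = [item for item in stack[:window] if item["skills"] <= resource_skills]
    overdue = [item for item in doable if item["weeks_waiting"] > trigger_weeks]
    if overdue:
        return [overdue[0]], "assigned"          # trigger fired: no choice
    return doable, "worker chooses"              # otherwise the worker picks a favorite

stack = [
    {"name": "tax rule update", "skills": {"testing"}, "weeks_waiting": 2},
    {"name": "help screens",    "skills": {"testing"}, "weeks_waiting": 0},
]
choices, mode = offer_or_assign({"testing"}, stack)
print(mode, [c["name"] for c in choices])        # assigned ['tax rule update']
```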

A “Recipe for Success”

Projects with a US-normalized labor cost of less than $750,000 have a 73 percent chance of coming in on time and on budget. Your chances go down dramatically with each added dollar. Using an agile process will improve your odds. (See Fig. 3.)

For a long time, IT executives have followed the waterfall method, in which the software lifecycle flows through phases: requirements analysis, design, implementation, testing, and maintenance. The waterfall method is attributed to Winston W. Royce, but Royce actually stated that this method “is risky and invites failure.” In fact, he proposed an iterative process that relies on feedback from each phase to decide what should be done next.

Our basic “Recipe for Success,” written in 1999, calls for ingredients of minimization, communications, and standard infrastructure. You then mix in good project management, an iterative development process, project management tools, and adherence to key roles. Bake for no longer than six months, with no more than six people, and at no more than $750,000 in US labor costs. It isn’t really the cost that is the big issue; it’s the time.

The Recipe for Success is just as relevant today. However, we now believe that it should only be applied to baseline projects, and should marry a gating system with an agile, iterative process.

Strict adherence to the Recipe for Success calls for a baseline project to end after no more than six months. Once baseline requirements have been defined for the foundation, the project gets a free ride until the end of the six months. But when that period ends, the resources must be put back into the pool. This should happen no matter where the project is in the process. (This strict adherence to the six-month timeframe is also known as time boxing.)
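A timebox like that is easy to enforce mechanically; the sketch below uses a rough 182-day box and made-up dates purely for illustration.

```python
# Time boxing the baseline: when the box closes, the team goes back to the
# resource pool no matter where the project stands.
from datetime import date, timedelta

BASELINE_TIMEBOX = timedelta(days=182)           # roughly six months

def timebox_expired(start, today):
    return (today - start) > BASELINE_TIMEBOX

if timebox_expired(date(2007, 3, 1), date(2007, 9, 1)):
    # Release the team into the resource pool and move the project into the
    # application portfolio; remaining work becomes microprojects on the stack.
    pass
```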

At the end of the six months, the project also should go into the portfolio. If the stakeholders say that the result is not usable, a decision must be made either to junk the project, or to specify the remaining work that has to be done, break the work into microprojects, and place them in appropriate positions on the microproject stack.

You should not consider six months as an optimal goal. A shorter timeframe—say, three months—produces even greater improvements in the flow of usable function to users. The key is that a short timeframe causes great concentration on the requirements that are truly needed.

On top of the Recipe for Success, as I have said, we layer a gating system and an agile development process.

The three most popular agile processes are RUP, Extreme Programming, and Scrum. All of them are built around an iterative process. The iterative process is the real silver bullet for developing software. Each of the three processes has some unique advantages, but they all get the job done.

The Rational Unified Process (RUP) software development methodology has already married the agile process to a gating system. RUP breaks down the development of software into four gates, and each gate consists of iterations of the software at that stage of development. A project stays in a gate until the stakeholders are satisfied, and then it either moves to the next gate or is canceled.

Gate one, called inception, deals with business requirements. Gate two, called elaboration, focuses on design. Gate three is called construction, and it is where you build and test the product. Gate four is called transition and is the phase that deals with deployment and change management. In this gate you fine-tune the performance, make any final adjustments, educate the users, and install the product. Of course, the user could reject the product and send you back to gate one.

There are basically four parts to Extreme Programming, but it does not include gates. The parts are planning, designing, coding, and testing. Unlike other methodologies, these are not phases; rather, they work in tandem with each other. Planning includes user stories, stand-up meetings, and small releases. The rules of the design segment stress that functionality should not be added until it is needed. In the coding part, developers work in pairs, the code is written to an agreed standard, and the user is always available for feedback. In testing, the tests are written before the code.
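To make the test-first rule concrete, here is a minimal illustration: the test is agreed on and written before the code it exercises. The function and the figures are hypothetical.

```python
# Test-first in miniature: the test below existed before order_total was written.
import unittest

def order_total(prices):
    """Written only after the test was agreed on with the user."""
    return round(sum(prices), 2)

class OrderTotalTest(unittest.TestCase):
    def test_total_sums_prices(self):
        self.assertEqual(order_total([9.99, 0.01]), 10.00)

if __name__ == "__main__":
    unittest.main()
```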

Scrum is similar to Extreme Programming. Both of these processes work really well after foundation building is complete, when you get into microproject stacking. It takes some effort to layer a gating system on these methodologies, but it can be done.

No matter what methodology you use, all the foundation building should be done within the six-month timeframe, using a labor force of no more than six full-time or equivalent persons at each gate.

Microproject Stacking

Remember that a microproject is a single or a limited number of requirements. Each of the requirements or sets of requirements is assigned a cost, risk, and gain. The microproject must be completed within the micro timebox limit—a week, two weeks, or a month’s work for a single resource or resource pair.

On the surface, this might seem like the simplest part of pipelining—assign cost, risk and gain and put them in line or on top of the stack by priority. But assigning a priority may not always be so easy. Just getting the whole team to agree on the cost, risk, and gain for each microproject is a challenging task in itself. You also have to consider non-monetary priorities, such as customer satisfaction, or regulatory compliance and mandated timeframes.

Second, once you go to a microproject concept, the number of projects that you must prioritize explodes. That makes stacking them more complicated.

Third, the relationships and dependencies between microprojects get extremely complicated. For example, “If I do microproject A, I don’t need to do this other microproject B.” Another case might be, “If I do this microproject, I’ll need to do these other three microprojects first.”
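Those two dependency cases can at least be captured in data, as the sketch below shows; the microproject labels and the structure of the tables are illustrative assumptions.

```python
# Dependencies between microprojects: finishing A makes B unnecessary, and
# X cannot be assigned until Y, Z, and W are done. Labels are illustrative.
supersedes = {"A": {"B"}}
prerequisites = {"X": {"Y", "Z", "W"}}

def ready(item, completed):
    """An item can be pulled off the stack only when its prerequisites are done."""
    return prerequisites.get(item, set()) <= completed

def complete(item, stack, completed):
    stack.discard(item)
    completed.add(item)
    for obsolete in supersedes.get(item, set()):
        stack.discard(obsolete)                  # A did the job, so B is dropped

stack, completed = {"A", "B", "X", "Y", "Z", "W"}, set()
complete("A", stack, completed)
print("B" in stack)                              # False: superseded by A
print(ready("X", completed))                     # False: Y, Z, and W still pending
```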
