Transformation

Successful Change Programs Begin with Results

Harvard Business Review

Most corporate change programs mistake means for ends, process for outcome. The solution: focus on results, not activities.

Many companies make huge investments in performance improvement efforts that fail to have any significant impact on operational and financial results. While some companies manage constantly to improve measurable results, others work and work and invest and invest with little to show for it.

The reason for these failures is the ardent pursuit of activities that sound good, look good, and allow managers to feel good but in fact contribute little or nothing to bottom-line performance. These activities, many of which parade under the banner of “total quality” or “continuous improvement,” typically advance a managerial philosophy or style such as interfunctional collaboration, middle management empowerment, or employee involvement. Some focus on measurement of performance such as competitive benchmarking, assessment of customer satisfaction, or statistical process control. Still other activities aim at training employees in problem solving or other techniques.

Companies introduce these programs under the false assumption that if they carry out enough of the “right” improvement activities, actual performance improvements will inevitably materialize. At the heart of these programs, which we call “activity-centered,” is a fundamentally flawed logic that confuses ends with means, processes with outcomes. This logic is based on the belief that once managers benchmark their company’s performance against competition, assess their customers’ expectations, and train their employees in seven-step problem solving, sales will increase, inventory will shrink, and quality will improve. Staff experts and consultants tell management that it need not, and in fact should not, focus directly on improving results because eventually results will take care of themselves.

The momentum for activity-centered programs continues to accelerate even though there is virtually no evidence to justify the flood of investment. Just the opposite: there is plenty of evidence that the rewards from these activities are illusory. In 1988, for example, one of the largest U.S. financial institutions committed itself to a “total quality” program to improve operational performance and win customer loyalty. The company trained hundreds of people and communicated the program’s intent to thousands more. At the end of two years of costly effort, the program’s consultants summarized progress: “Forty-eight teams up and running. Two completed Quality Improvement Stories. Morale of employees regarding the process is very positive to date.” They did not report any bottom-line performance improvements because there were none.

The executive vice president of a large mineral-extracting corporation described the results of his company’s three-year-old total quality program by stating, “We have accomplished about 50% of our training goals and about 50% of our employee participation goals but only about 5% of our results goals.” And he considered those results meritorious. These are not isolated examples. In a 1991 survey of more than 300 electronics companies, sponsored by the American Electronics Association, 73% of the companies reported having a total quality program under way, but of these, 63% had failed to reduce quality defects by even as much as 10%. We believe this survey understates the magnitude of the failure of activity-centered programs not only in the quality-conscious electronics industry but across all businesses.

These signs suggest a tragedy in the making: pursuing the present course, companies will not achieve significant progress in their overall competitiveness. They will continue to spend vast resources on a variety of activities, only to watch cynicism grow in the ranks. And eventually, management will discard many potentially useful improvement processes because it expected the impossible of them and came up empty-handed.

If activity-centered programs have yielded such paltry returns on the investment, why are so many companies continuing to pour money and energy into them? For the same reason that previous generations of management invested in zero-based budgeting, Theory Z, and quality circles. Years of frustrating attempts to keep pace with fast-moving competitors make managers prey to almost any plausible approach. And the fact that hundreds of membership associations, professional societies, and consulting firms all promote activity-centered processes lends them an aura of popularity and legitimacy. As a consequence, many senior managers have become convinced that all of these preparatory activities really will pay off some day and that there isn’t a viable alternative.

They are wrong on both counts. Any payoffs from the infusion of activities will be meager at best. And there is in fact an alternative: results-driven improvement processes that focus on achieving specific, measurable operational improvements within a few months. This means increased yields, reduced delivery time, increased inventory turns, improved customer satisfaction, and reduced product development time. With results-driven improvements, a company introduces only those innovations in management methods and business processes that can help achieve specific goals. (See the insert, “Comparing Improvement Efforts.”)

An automotive-parts plant, whose customers were turning away from it because of poor quality and late deliveries, illustrates the difference between the two approaches. To solve the company’s problems, management launched weekly employee-involvement team meetings focused on improving quality. By the end of six months, the teams had generated hundreds of suggestions and abundant goodwill in the plant but virtually no improvement in quality or delivery. In a switch to a results-driven approach, management concentrated on one production line. The plant superintendent asked the manager of that line to work with his employees and with plant engineering to reduce by 30% the frequency of their most prevalent defect within two months. This sharply focused goal was reached on time. The manager and his team next agreed to cut the occurrence of that same defect by an additional 50%. They also broadened the effort to encompass other kinds of defects on the line. Plant management later extended the process to other production lines, and within about four months the plant’s scrap rate was within budgeted limits.

Both activity-centered and results-driven strategies aim to strengthen fundamental corporate competitiveness. But as the automotive-parts plant illustrates, the approaches differ dramatically. The activities path is littered with the remains of endless preparatory investments that failed to yield the desired outcomes. The results-driven path stakes out specific targets and matches resources, tools, and action plans to the requirements of reaching those targets. As a consequence, managers know what they are trying to achieve, how and when it should be done, and how it can be evaluated.



The Activity-Centered Fallacy

There are six reasons why the cards are stacked against activity-centered improvement programs:

Not Keyed to Specific Results. In activity-centered programs, managers reform the way they work with each other and with employees; they train people; they develop new measurement schemes; they increase employee awareness of customer attitudes, quality, and more. The expectation is that these steps will lead to better business performance. But managers rarely make explicit how the activity is supposed to lead to the result.

Seeking to improve quality, senior management at a large telecommunications equipment corporation sent a number of unit managers to quality training workshops. When they returned, the unit heads ordered orientation sessions for middle management. They also selected and trained facilitators who, in turn, trained hundreds of supervisors and operators in statistical process control. But senior management never specified which performance parameters it wanted to improve: costs, reject rates, or delivery timeliness. During the following year, some units improved performance along some dimensions, other units improved along others, and still other units saw no improvement at all. There was no way for management to assess whether there was any connection between the investment in training and specific, tangible results.

Too Large Scale and Diffused. The difficulty of connecting activities to the bottom line is complicated by the fact that most companies choose to launch a vast array of activities simultaneously across the entire organization. This is like researching a cure for a disease by giving a group of patients ten different new drugs at the same time.

In one case, a large international manufacturer identified almost 50 different activities that it wanted built into its total quality effort. The company’s list involved so many programs introduced in so many places that just to describe them all required a complex chart. Once top managers had made the investment and the public commitment, however, they “proved” their wisdom by crediting the programs for virtually any competitive gain the company made. But in fact, no one knew for sure which, if any, of the 50 activities were actually working.

Results Is a Four-Letter Word. When activity-centered programs fail to produce improvement in financial and operational performance, managers seldom complain lest they be accused of preoccupation with the short term at the expense of the long term—the very sin that has supposedly caused companies to defer investment in capital and human resources and thus to lose their competitive edge. It is a brave manager who will insist on seeing a demonstrable link between the proposed investment and tangible payoffs in the short term.

When one company had little to show for the millions of dollars it invested in improvement activities, the chief operations officer rationalized, “You can’t expect to overturn 50 years of culture in just a couple of years.” And he urged his management team to persevere in its pursuit of the activities. He is not alone in his faith that, given enough time, activity-centered efforts will pay off. The company cited above, with almost 50 improvement activities going at once, published with pride its program’s timetable calling for three years of preparations and reformations, with major results expected only in the fourth year. And at a large electronics company, the manual explaining its management-empowerment process warned that implementation could be “painful” and that management should not expect to see results for a “long time.”

Delusional Measurements. Having conveyed the false message that activities will inevitably produce results, the activities promoters compound the crime by equating measures of activities with actual improvements in performance. Companies proclaim their quality programs with the same pride with which they would proclaim real performance improvements—ignoring or perhaps even unaware of the significance of the difference.

In a leading U.S. corporation, we found that a group of quality facilitators could not enumerate the critical business goals of their units. Surprised, we asked how they could possibly assess whether or not they were successful. Their answer: success consisted of getting 100% of each unit’s managers and employees to attend the prescribed quality training—a centerpiece of the corporation’s total quality program. The Malcolm Baldrige National Quality Award encourages such practices by devoting only 180 points out of a possible 1,000 points to quality results. The award gives high marks to companies that demonstrate outstanding quality processes without always demanding that the current products and services be equally outstanding.

Staff- and Consultant-Driven. So when staff experts and improvement gurus show up with their evangelistic enthusiasm and bright promises of total quality and continuous improvement, asking only for faith and funds, managers greet them with open arms.

But the capability of most of these improvement experts is limited to installing discrete, often generic packages of activities that are rarely aimed directly at specific results. They design training courses; they launch self-directed teams; they create new quality-measurement systems; they organize campaigns to win the Baldrige Award. Senior managers plunge wholeheartedly into these activities, relieving themselves, momentarily at least, of the burden of actually having to improve performance.

The automotive-parts plant described earlier illustrates the pattern. Senior managers had become very frustrated after a number of technical solutions failed to cure the plant’s ills. When a staff group then asserted that employee involvement could produce results, management quickly accepted the staff group’s suggestion to initiate employee involvement team meetings—meetings that failed to deliver results.

The futility of expecting staff-driven programs to yield performance improvement was highlighted in a study conducted by a Harvard Business School team headed by Michael Beer. It analyzed a number of large-scale corporate change programs, some of which had succeeded, others of which had failed. The study found that companywide change programs installed by staff groups did not lead to successful transformation. As the authors colorfully put it, “Wave after wave of programs rolled across the landscape with little positive impact.”1

Bias to Orthodoxy, Not Empiricism. Because of the absence of clear-cut beginnings and ends and an inability to link cause and effect, there is virtually no opportunity in activity-centered improvement programs to learn useful lessons and apply them to future programs. Instead, as in any approach based on faith rather than evidence, the advocates—convinced they already know all the answers—merely urge more dedication to the “right” steps.

One manufacturing company, for example, launched almost 100 quality improvement teams as a way to “get people involved.” These teams produced scores of recommendations for process changes. The result was stacks of work orders piling up in maintenance, production engineering, and systems departments—more than any of these groups were capable of responding to. Senior managers, however, believed the outpouring of suggestions reinforced their original conviction that participation would succeed. Ignoring mounting evidence that the process was actually counterproductive, they determined to get even more teams established.

Results-Driven Transformation

In stark contrast to activity-centered programs, results-driven improvements bypass lengthy preparation rituals and aim at accomplishing measurable gains rapidly. Consider the case of the Morgan Bank. When told that his units would have to compete on an equal footing with outside vendors, the senior vice president of the bank’s administrative services (responsible for 20 service functions including printing, food services, and purchasing) realized that the keys to survival were better service and lower costs. To launch a response, he asked the head of each of the service functions to select one or two service-improvement goals that were important to internal “customers” and could be achieved quickly. Unit heads participated in several workshops and worked with consultants but always maintained a clear focus on launching the improvement processes that would enable them to achieve their goals.

In the bank’s microfilm department, for example, the first goal was to meet consistently a 24-hour turnaround deadline for the work of a stock-transfer department. The microfilm department had frequently missed this deadline, sometimes by several days. The three shift supervisors and their manager laid out a five-week plan to accomplish the goal. They introduced a number of work-process innovations, each selected on the basis of its capacity to help achieve the 24-hour turnaround goal, and tracked performance improvements daily.

This project, together with similar results-driven projects simultaneously carried out in the other 19 units, yielded significant service improvements and several million dollars of cost savings within the first year of the initiative – just about the time it usually takes to design the training programs and get all employees trained in a typical activity-centered effort. The experience of the Morgan Bank illustrates four key benefits of a results-driven approach that activity-centered programs generally miss:

Companies introduce managerial and process innovations only as they are needed. Results-driven projects require managers to prioritize carefully the innovations they want to employ to achieve targeted goals. Managers introduce modifications in management style, work methods, goal setting, information systems, and customer relationships in a just-in-time mode when the change appears capable of speeding progress toward measurable goals. Contrast this with activity-centered programs, where all employees may be ritualistically sent off for training because it is the “right” thing to do.

In the Morgan Bank’s microfilm department project, the three shift supervisors worked together as a unified team—not to enhance teamwork but to figure out how to reduce customer delivery time. For the first time ever, they jointly created a detailed improvement work plan and week-by-week subgoals. They posted this work plan next to a chart showing daily performance. Employees on all three shifts actively participated in the project, offering suggestions for process changes, receiving essential training that was immediately applied, and taking responsibility for implementation.

Thus instead of making massive investments to infuse the organization with a hodgepodge of improvement activities, the microfilm department and each of the other administrative services introduced innovations incrementally, in support of specific performance goals.

Empirical testing reveals what works. Because management introduces each managerial and process innovation sequentially and links them to short-term goals, it can discover fairly quickly the extent to which each approach yields results. In the Morgan Bank’s microfilm department, for example, the detailed improvement work plan and week-by-week subgoals, introduced during the first two weeks of the program, enabled management to assess accurately and quickly the impact of its actions in meeting the 24-hour turnaround goal.

New procedures for communicating between shifts allowed management to anticipate workload peaks and to reassign personnel from one shift to another. That innovation contributed to meeting deadlines. A new numbering system to identify the containers of work from different departments did not contribute, and management quickly abandoned the innovation. By constantly assessing how each improvement step contributed to meeting deadlines, management made performance improvement less an act of faith and more an act of rational decision making based on evidence.

Frequent reinforcement energizes the improvement process. There is no motivator more powerful than frequent successes. By replacing large-scale, amorphous improvement objectives with short-term, incremental projects that quickly yield tangible results, managers and employees can enjoy the psychological fruits of success. Demonstrating to themselves their capacity to succeed not only provides necessary reinforcement but also builds management’s confidence and skill for continued incremental improvements.

The manager of the bank’s microfilm department, for example, had never had the experience of leading a significant upgrading of performance. It was not easy for her to launch the process in the face of employee skepticism. Within a few weeks, however, when the chart on the wall showed the number of missed deadlines going down, everyone took pleasure in seeing it, and work went forward with renewed vigor. The manager’s confidence grew, and so did employee support for the subsequent changes she implemented.

In another example, a division of Motorola wanted to accelerate new product development. To get started, a management team selected two much-delayed mobile two-way radios and focused on bringing these products to the market within 90 days. For each product, the team created a unified, multifunction work plan; appointed a single manager to oversee the entire development process as the product moved from department to department; and designated an inter-functional team to monitor progress. With these and other innovations, both radios were launched on time. This success encouraged management to extend the innovations to other new product projects and eventually to the entire product development process.

Management creates a continuous learning process by building on the lessons of previous phases in designing the next phase of the program. Both activity-centered and results-driven programs are ultimately aimed at producing fundamental shifts in the performance of the organization. But unlike activity-centered programs that focus on sweeping cultural changes, large-scale training programs, and massive process innovation, results-driven programs begin by identifying the most urgently needed performance improvements and carving off incremental goals to achieve quickly.

By using each incremental project as a testing ground for new ways of managing, measuring, and organizing for results, management gradually creates a foundation of experience on which to build an organization-wide performance improvement. Once the manager of Morgan’s microfilm department succeeded in meeting the 24-hour turnaround goal for one internal customer department, she extended the process to other customer departments. In each of the other 19 service units, the same expansion was taking place. Unit managers shared their experiences in formal review conferences so that everyone could benefit from the best practices. Within six months, every manager and supervisor in administrative services was actively leading one or more improvement projects. From a base of real results, managers were able to encourage a continuous improvement process to spread, and they introduced dozens of managerial innovations in the course of achieving sizable performance gains.

Putting the Ideas into Practice

Taking advantage of the power of results-driven improvements calls for a subtle but profound shift in mind-set: management begins by identifying the performance improvements that are most urgently needed and then, instead of studying and preparing and gearing up and delaying, sets about at once to achieve some measurable progress in a short time.

The Eddystone Generating Station of Philadelphia Electric, once the world’s most efficient fossil-fuel plant, illustrates the successful shift from activity-centered to results-driven improvement. As Eddystone approached its thirtieth anniversary, its thermal efficiency (the amount of electricity produced from each ton of coal burned) had declined significantly. The problem was serious enough that top management was beginning to question the plant’s continued operation.

The station’s engineers had initiated many corrective actions, including installing a state-of-the-art computerized system to monitor furnace efficiency, upgrading plant equipment and materials, and developing written procedures for helping operating staff run the plant more efficiently. But because the innovations were not built into the day-to-day operating routine of the plant, thermal efficiency tended to deteriorate when the engineers turned their attention elsewhere.

In September 1990, the superintendent of operations decided to take a results-driven approach to improving thermal efficiency. He and his management team committed to achieve a specific incremental thermal-efficiency improvement worth about $500,000 annually without any additional plant investment. To get started, they identified a few improvements that they could accomplish within three months and established teams to tackle each one.

A five-person team of operators and maintenance employees and one supervisor took responsibility for reducing steam loss from hundreds of steam valves throughout the plant. The team members started by eliminating all the leaks in one area of the plant. Then they moved on to other areas. In the process, they invented improvements in valve-packing practices and devised new methods for reporting leaks.

Another employee team was assigned the task of reducing heat that escaped through openings in the huge furnaces. For its first sub-project, the group ensured that all 96 inspection doors on the furnace walls were operable and were closed when not in use. Still another team, this one committed to reducing the amount of unburned carbon that passed through the furnace, began by improving the operating effectiveness of the station’s coal-pulverizer mills in order to improve the carbon burn rate.

Management charged each of these cross-functional teams not merely with studying and recommending but also with producing measurable results in a methodical, step-by-step fashion. A steering committee of station managers met every two weeks to review progress and help overcome obstacles. A variety of communication mechanisms built awareness of the project and its progress. For example, to launch the process, the steering committee piled two tons of coal in the station manager’s parking space to dramatize the hourly cost of poor thermal efficiency. In a series of “town meetings” with all employees, managers explained the reason for the effort and how it would work. Newsletters reviewed progress on the projects—including the savings realized—and credited employees who had contributed to the effort.

As each team reached its goal, the steering committee, in consultation with supervisors and employees, identified the next series of performance improvement goals, such as the reduction of the plant’s own energy consumption, and commissioned a number of teams and individuals to implement a new round of projects. By the end of the first year, efficiency improvements were saving the company over $1 million a year, double the original goal.

Beyond the monetary gains—gains achieved with negligible investment—Eddystone’s organizational structure began to change in profound ways. What had been a hierarchical, tradition-bound organization became more flexible and open to change. Setting and achieving ambitious short-term goals became part of the plant’s regular routine as managers pushed decisions further and further down into the organization. Eventually, the station manager disbanded the steering committee, and now everyone who manages improvement projects reports directly to the senior management team.

Eddystone managers and workers at all levels continue to experiment and have invented a number of highly creative efficiency-improving processes. A change so profound could never have happened by sending all employees to team training classes and then telling them, “Now you are empowered; go to it.”

In the course of accomplishing its results, Eddystone management introduced many of the techniques that promoters of activity-centered programs insist must be drilled into the organization for months or years before gains can be expected: employees received training in various analytical techniques; team-building exercises helped teams achieve their goals more quickly; teams introduced new performance measurements as they were needed; and managers analyzed and redesigned work processes. But unlike activity-centered programs, the results-driven work teams introduced innovations only if they could contribute to the realization of short-term goals. They did not inject innovations wholesale in the hope that they would somehow generate better results. There was never any doubt that responsibility for results was in the hands of accountable managers.

Philadelphia Electric—and many other companies as well—launched its results-driven improvement process with a few modest pilot projects. Companies that want to launch large-scale change, however, can employ a results-driven approach across a broad front. In 1988, chairman John F. Welch, Jr. launched General Electric’s “Work-Out” process across the entire corporation. The purpose was to overcome bureaucracy and eliminate business procedures that interfered with customer responsiveness. The response of GE’s $3 billion Lighting Business illustrates how such a large-scale improvement process can follow a results-driven pathway.

Working sessions attended by a large cross-section of Lighting employees, a key feature of Work-Out, identified a number of “quick wins” in target areas. These were initiatives that employees could take right away to generate measurable improvement in a short time. To speed new product development, for example, Work-Out participants recommended that five separate functional review sessions be combined into one, a suggestion that was eagerly adopted. To get products to customers more quickly, a team tested the idea of working with customers and a trucking company to schedule, in advance, regular delivery days for certain customers. The results of the initial pilot were so successful that GE Lighting has extended the scheduling system to hundreds of customers.

Another team worked to reduce the breakage of fragile products during shipment—costly both in direct dollars and in customer dissatisfaction. Sub-teams, created to investigate package design and shipping-pallet construction, followed sample shipments from beginning to end and asked customers for their ideas. Within weeks, the team members had enough information to shift to remedial action. They tried many innovations in packaging design; they modified work processes in high-risk areas; they reduced the number of times each product is handled; they collaborated with their shippers, suppliers, and customers. The payoff was a significant reduction in breakage within a few months.

The Lighting Business has launched dozens of such results-oriented projects quickly—and as each project achieves results, management has launched additional projects and has even extended the process to its European operations.

Opportunities for Change

There is no reason for senior-level managers to acquiesce when their people plead that they are already accomplishing just about all that can be accomplished or that factors beyond their control—company policy, missing technology, or lack of resources—are blocking accelerated performance improvement. Such self-limiting ideas are universal. Instead, management needs to recognize that there is an abundance of both underexploited capability and dissipated resources in the organization.

This orientation frees managers to set about translating potential into results and to avoid the cul-de-sac of fixing up and reforming the organization in preparation for future progress. Here is how management can get started in results-driven programs:

Ask each unit to set and achieve a few ambitious short-term performance goals. There is no organization where management could not start to improve performance quickly with the resources at hand—even in the face of attitudinal and skill deficiencies, personnel and other resource limitations, unstable market conditions, and every other conceivable obstacle. To begin with, managers can ask unit heads to commit to achieve in a short time some improvement targets, such as faster turnaround time in responding to customers, lower costs, increased sales, or improved cash flow. They should also be asked to test some managerial, process, or technical innovations that can help them reach their goals.

Periodically review progress, capture the essential learning, and reformulate strategy. Results-driven improvement is an empirical process in which managers use the experience of each phase as data for shaping the next phase. In scheduled work sessions, senior management should review and evaluate progress on the current array of results-focused projects and learn what is and what isn’t working.

Fresh insights flood in from these early experiments: how rapidly project teams can make gains; what kind of support they need; what changes in work methods they can implement quickly; what kinds of obstacles need to be addressed at higher levels in the organization. Managers and employees develop confidence in their capacity to get things done and to challenge and overturn obsolete practices.

Armed with this learning, senior management can refine strategies and timetables and, in consultation with their people, can carve out the next round of business goals. The cycle repeats and expands as confidence and momentum grow.

Institutionalize the changes that work—and discard the rest. As management gains experience, it can take steps to institutionalize the practices and technologies that contribute most to performance improvement and build them into the infrastructure of the company. In Motorola’s Mobile Division’s new product development project, for example, a single manager was assigned responsibility for moving each new product from engineering to production and on to delivery, rather than having that responsibility handed off from function to function. This worked so well that it became standard practice.

Such change can also take place at the policy level. A petroleum company, for example, experimented with incentive compensation in two sales districts. When the trials produced higher sales growth, senior management decided to install throughout the marketing function a performance-based compensation plan that reflected what it had learned in the experiments. In this way, a company can gradually build successful innovations into its operations and discard unsuccessful ones before they do much harm.

Create the context and identify the crucial business challenges. Senior management must establish the broader framework to guide continuing performance improvement in the form of strategic directions for the business and a “vision” of how it will operate in the future. A creative vision can be a source of inspiration and motivation for managers and employees who are being asked to help bring about change. But no matter how imaginative the vision might be, for it to contribute to accelerated progress, managers must translate it into sharp and compelling expectations for short-term performance achievements. At Philadelphia Electric, for example, the Eddystone improvement work responded to top management’s insistent call for performance improvement and cost reduction.

A results-driven improvement process does not relieve senior management of the responsibility to make the difficult strategic decisions necessary for the company’s survival and prosperity. General Electric’s Work-Out process augmented but could never substitute for Jack Welch’s dramatic restructuring and downsizing moves. By marrying long-term strategic objectives with short-term improvement projects, however, management can translate strategic direction into reality and resist the temptation to inculcate the rain dance of activity-centered programs.

1. See Michael Beer, Russell A. Eisenstat, and Bert Spector, “Why Change Programs Don’t Produce Change,” HBR November–December 1990, p. 158.