Companies can reduce risk and enable organizational learning by breaking major process improvements into a series of small, reversible experiments. This approach allows people to learn from each experiment and make adjustments as they go. But when the change involves a new information technology, it's harder to make incremental updates: computer systems often don't come in a multitude of small parts; they generally come in huge packages.
The conventional wisdom when implementing such software packages is to identify all the stakeholders and include representatives of each group on the implementation team. Executives are supposed to communicate the expected benefits of the new software, and front-line workers are shown how it will allow them to execute their work. Typically, large software packages are rolled out without modification, to take advantage of embedded "best practices," in either a "big bang" or a "rolling implementation" across multiple geographies. The software is driven from the top and often imposed on the organization with little upfront process analysis. The focus is on hitting the delivery date, with little front-line engagement.
So is there a way to implement big process and system changes while engaging workers and enabling organizational learning? There are certainly ways to make it easier.
1. Jointly map the work flows before you implement the technology.
Consider the approach to front line engagement taken at Martin Health System, a community healthcare system in Stuart, Florida, which implemented a new enterprise-wide system for managing patient records and core operations. As Roger Chen, director of performance excellence, and Lisa Cannata, director of learning and organizational development, told me, it's vital to iron out the people and process issues before introducing the technology. "You want to standardize your practices first." Chen and Cannata assigned three process improvement facilitators to work with staff to create 60 current-state value stream maps — that is, a representation of how work flowed through departments — and 50 future-state value stream maps before going live. "We were able to identify workarounds that were supported by the old system but could be withdrawn, and we uncovered the medical staff relationships and how they would change. For example, the new system replaced a manual process for order entry. The physicians needed lots of handholding to use the automated process." By surfacing and addressing training and support needs related to the new system, they gave people a sense of control over their destiny.
2. Break the new system into chunks for implementation wherever possible.
Zack Lemelle, a former divisional CIO at Johnson & Johnson, told me that he got around IT big bangs by using "joint application design / rapid application development" sessions. These would engage front line workers, data modelers, programmers, analysts, and technical designers in designing a mockup. "We would take a new process design and translate it into mockups of computer screens. Together the users and the IT team defined user requirements and managed expectations for both custom development and package configuration. We were able to quickly define what the finished product would look like before it was finished. This approach helped us reduce time to delivery and increase the benefits. Rather than follow the traditional 'waterfall' approach of defining big chunks of requirements upfront — which always proved problematic — we would do it in smaller, bite-sized chunks. And we replaced quarterly releases with biweekly or monthly ones."
In addition to chunking for gradual rollout of functional capabilities, some choose to chunk geographically and run in small test markets — a kind of experiment for learning and "proof of concept" before rolling out more broadly. It's important to schedule sufficient time between rollouts for reflection and redesign of the next implementation.
3. Create boards to share plans and progress.
Solar Group, a €1.4 billion wholesaler of technical products in northern Europe, has created boards that display performance on the implementation of an enterprise-wide system (SAP) and new processes. They show key measures, plans and dependencies, and change management in each of the seven countries involved. According to Klaus Petersen, global process manager, each of four major process teams has a room with a map on the wall showing the process, performance indicators, and plans. These cross-functional process teams — order-to-cash, for instance — meet every morning to review progress and align on actions. They have a "board walk" every week where they visit other teams' rooms to make sure the work is aligned. Every other week there is also a management meeting that starts at one of the boards. And there is a program management office at headquarters that has daily and weekly meetings at their boards, which show overall performance, priorities and interdependencies between teams.
Solar has turned over control of the rollout to the business (rather than driving it from IT as an IT project). Before going live with each implementation, business managers monitor performance to establish a baseline. Then they measure performance again after the system is implemented. When problems arise, these same business managers tell IT the order in which problems need to be solved.
There should be plenty of time to build business and front-line ownership when implementing large software packages: these are big decisions that take a while to make, and the rollouts span a number of years. Are there ways you can enable front-line workers to influence the design of their work? Are there ways to break up the "big bang" into reversible experiments? If there are, people will develop a sense of ownership, and many will become ambassadors of change.
Question: Have you seen ways to implement big process and system changes that reduce risk, engage workers, and enable organizational learning?