Determining the right set of indicators can be tough in the public sector, where outcomes may be less clear than the bottom-line metrics of profit-making enterprises. Two examples of RPM tools that have predictive power are what we call DICE methodology and rigor testing.
DICE derives its name from the four elements that determine the outcome of almost all change projects:
The duration of the project or time between major review milestones
The performance integrity of the project team
The organization’s commitment to change, specifically that of senior managers and local-area staff
The additional effort required for implementation beyond the usual work requirements
For a particular change program, scoring along each of these dimensions generates an overall score that can be calibrated to a database of other change projects that have been executed around the world. That, in turn, generates a distribution of likely outcomes, allowing senior management to assess objectively whether the project falls into one of the three categories that we call “win, worry, or woe.” (See the exhibit “DICE Predicts Whether the Team Is Set Up for Success.”) Taken together, these four elements offer a litmus test for assessing the probability of success of any given transformation effort. They shine a light on specific actions that can improve the probability of success early enough for course corrections.
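To make the mechanics concrete, here is a minimal sketch of a DICE-style scorer. The 1-to-4 scales, the double weighting of some factors, and the cutoff bands for "win, worry, or woe" are illustrative assumptions for demonstration, not the calibrated values from the database of past projects the text describes.

```python
# Illustrative DICE-style scorer. The 1-4 scales, the weights, and the
# win/worry/woe cutoffs below are assumptions for demonstration only;
# in practice they would be calibrated against a database of past
# change projects.

def dice_score(duration, integrity, senior_commitment,
               local_commitment, effort):
    """Each factor is scored from 1 (favorable) to 4 (unfavorable)."""
    factors = (duration, integrity, senior_commitment,
               local_commitment, effort)
    if not all(1 <= f <= 4 for f in factors):
        raise ValueError("each factor must be scored from 1 to 4")
    # Hypothetical weighting: team performance integrity and senior
    # management commitment count double.
    return (duration + 2 * integrity + 2 * senior_commitment
            + local_commitment + effort)

def category(score):
    """Map an overall score onto hypothetical outcome bands."""
    if score <= 14:
        return "win"
    if score <= 17:
        return "worry"
    return "woe"
```

A project scored favorably on every dimension, for example, lands well inside the "win" band: `category(dice_score(2, 1, 1, 2, 2))` returns `"win"`. The value of such a tool is less the arithmetic than the conversation it forces about which factor is dragging the score up.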
Rigor testing poses roughly a dozen simple questions, in three groups, about dependencies and milestones; when scored, the answers flag structural or behavioral issues and surface potential problems early. The first group tests whether the risks and issues have been explicitly defined and addressed, with questions like these: Would someone with no experience on the project be able to read and understand the road map? Are key issues and risks sufficiently exposed and addressed? Do milestones adequately reflect the necessary engagement of key stakeholders at appropriate points?
The second group of questions, which probes whether the road map is clear enough to be readily implemented, includes the following: Are milestones defined to a level that is sufficient to describe how the road map will be achieved? Are the timing and sequencing of milestones logical?
The third group, examining whether the impact and timing have been correctly identified, includes these questions: Do financial impacts (revenue and costs) reconcile to the overall target for a given area? Do operational key performance indicators clearly serve as lead indicators of subsequent delivery of financial or other impact? Is the timing of overall benefits consistent with the timing of the milestones with which they are associated?
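The three groups of questions above can be sketched as a simple checklist scorer. The question texts paraphrase the groups described in the text; the yes/no scoring and the flagging rule are illustrative assumptions, not the methodology's actual scoring scheme.

```python
# Minimal rigor-test checklist scorer. The questions paraphrase the
# three groups described above; the yes/no scoring and the flag
# threshold are illustrative assumptions.

RIGOR_QUESTIONS = {
    "risks_defined": [
        "Could someone with no project experience understand the road map?",
        "Are key issues and risks sufficiently exposed and addressed?",
        "Do milestones reflect engagement of key stakeholders?",
    ],
    "road_map_clear": [
        "Are milestones defined in enough detail to describe delivery?",
        "Are the timing and sequencing of milestones logical?",
    ],
    "impact_and_timing": [
        "Do financial impacts reconcile to the area's overall target?",
        "Do operational KPIs serve as lead indicators of impact?",
        "Is benefit timing consistent with milestone timing?",
    ],
}

def rigor_flags(answers, threshold=1.0):
    """Return the groups that need attention.

    answers maps each question text to True (yes) or False (no).
    A group is flagged when its share of 'yes' answers falls below
    the threshold (by default, any 'no' flags the group)."""
    flagged = []
    for group, questions in RIGOR_QUESTIONS.items():
        yes = sum(answers.get(q, False) for q in questions)
        if yes / len(questions) < threshold:
            flagged.append(group)
    return flagged
```

With the default threshold, a single "no" in a group flags it, which matches the tool's purpose of raising potential problems early rather than averaging them away.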
By using rigor testing, a railroad that was investing heavily in improving its on-time record developed a clearer view of the risks to its on-time initiative, which its leaders could then actively manage. The rigor test highlighted, for example, that the leading indicator of on-time arrival was the speed with which passengers moved on and off the trains and across the platform at a few key stations. Managers quickly diverted resources so that employees who coordinated traffic on the platforms could take on new, more public roles, interacting with commuters and managing the flow of crowds at those stations. As a result, on-time performance rose substantially and stayed at the higher level.