Information Strategy for Value-Based Health Care


  • In This Article
    • In an effort to curtail escalating costs, growing numbers of health care policymakers, payers, and providers around the world are embracing value-based health care.

    • To enable value-based health care, decision makers need relevant, high-quality data—but identifying and sourcing such data can be problematic for many organizations.

    • Many companies attempt to tackle the problem by pursuing a “big data” approach, but that rarely works, given the nature of the challenges these organizations face and the inherent complexity of health care data.

    • The answer instead lies in taking a disciplined, strategic approach to data selection and collection, one that begins with an organization asking itself the right questions.


    In an effort to curtail escalating costs, growing numbers of health care policymakers, payers, and providers around the world are embracing value-based health care, an approach that focuses on optimizing the relationship between treatment costs and outcomes. The trend is already bearing fruit, with a number of health care systems and organizations using value-based approaches to raise the quality of care while significantly driving down costs.

    Translating concept into practice, however, can be challenging. Generally, the biggest hurdle is obtaining the necessary data, which must be in the right format and of sufficient quality for decision makers to discern critical relationships between investment and results. (The desire for high-quality data extends to consumers as well; according to a recent BCG survey of 9,000 consumers in nine countries, a majority feel that they lack the data on health outcomes necessary to make informed decisions when choosing health care providers.) But securing such data can be problematic. Many companies struggle to determine not only precisely what data they need to meet their objectives but also how best to get those data. Far too often, data are fragmented, insufficiently validated, inaccessible, difficult to work with, or otherwise inadequate.

    Faced with this quandary, health care organizations frequently resort to the “big data” approach—gathering and crunching enormous volumes of information—in the hope that useful findings will eventually come to light. But because of the nature of the challenges these organizations face and the inherent complexity of health care data, such overly broad, scattershot efforts rarely work.

    We believe that the answer lies instead in taking a disciplined, strategic approach to data selection and collection.

    See “Consumers Seek Hard Data on Health Care Outcomes,” BCG article, August 2012.

    Asking the Right Questions

    Essentially, an information strategy to support value-based health care must do two things: determine the specific business problem to be solved and identify the analysis and data needed to solve it. An organization can best achieve these goals by answering a series of strategic and tactical questions. (See Exhibit 1.)


    What is our strategic focus? Is the organization focused on a specific location, patient population, disease, or diagnostic or therapeutic area? For example, Kaiser Permanente, an integrated U.S. managed-care provider, has devoted considerable effort to optimizing outcomes associated with medical implantation, since that type of surgery is an important component of the company’s portfolio. (See “Kaiser Permanente’s Implant Registries,” below.) A global pharmaceutical company, by contrast, might focus on improving patient outcomes for users of a particular drug within a specific population.

    Kaiser Permanente's Implant Registries

    Implantation of medical devices, such as artificial joints and pacemakers, has become increasingly prevalent in the U.S. and is expected to surge in the years ahead. While these devices and procedures can deliver tangible benefits, they can also bring risks, outsized costs, and uncertainty. Many of the new, increasingly technical (and expensive) devices coming onto the market, for example, are introduced with little or no evidence of improved clinical effectiveness. New surgical techniques can also fail to deliver. And for any given procedure, some patient cohorts will be at greater risk of suboptimal outcomes than others.

    In an effort to manage the impact of these challenges on its business and members, U.S. managed-care provider Kaiser Permanente (KP) has developed a series of orthopedic and cardiac registries. (Perhaps the best known is the Kaiser Permanente National Total Joint Replacement Registry, which was launched in 2001. The largest registry of its kind in the U.S., it houses data on more than 75,000 knee replacements and 43,000 hip replacements through 2010.) These registries track the incidence, outcomes, and comparative effectiveness of devices and procedures utilized within KP’s system.

    A key feature of the registries is the depth and breadth of the data employed. Surgical data are supplemented by information on patients (including demographics and medical histories) drawn from the company’s administrative databases and extensive electronic medical records system. This wealth of data helps KP discern patterns and cause-and-effect relationships that might otherwise remain undetected.

    The registries have delivered on multiple levels. KP has gained insights that have allowed it to produce better results for patients, including less postoperative pain, fewer infections, and a reduced need for follow-up procedures. The company is also better able to quickly identify patients at risk of poor clinical outcomes. In addition, the registries have allowed KP to notify patients about product advisories or recalls 19 times since 2008. Finally, they have helped KP target its quality-improvement and research efforts and materially lower its related costs for implant surgeries.

    KP has also advanced the industry’s general body of knowledge by sharing its registry data externally. The data have been used in a range of research studies and have enabled international comparisons with countries such as Sweden, Norway, and Australia, which have established similar registries.

    See “The Kaiser Permanente Joint Registries: Effect on Patient Safety, Quality Improvement, Cost Effectiveness, and Research Opportunities,” The Permanente Journal, Spring 2012, Vol. 16, No. 2.

    What do we need to understand? Once a strategic focus has been determined, the organization must identify the patient outcomes it most needs to understand. Key questions include the following:

    • Which clinical measures provide hard evidence of patient outcomes? What measures are critical for particular diseases, diagnostics, therapeutics, and patient populations?

    • How do these clinical outcomes vary? Do they differ by location, patient population, or provider? Which variations are most important to understand?

    • What causes the variations? Are the variations driven, for example, by differences in diagnostic or therapeutic approaches, clinical processes, or execution?

    Which analyses do we need to perform? Gaining a thorough understanding of the most relevant outcomes requires several types of analysis:

    • Performance assessment identifies variations in outcomes among clinical centers. This benchmarking can serve as the starting point for performance improvement initiatives. Identifying outliers, in particular, can help managers reduce or eliminate poor clinical practices in favor of best practices. Performance assessment can also be used to enable outcomes-based contracting.

    • Patient segmentation analysis determines whether a given variation in outcome can be explained by differences in patient populations (caused, say, by different environmental factors or different demographic or genetic characteristics). For example, such an analysis might reveal that a particular drug is more effective in specific subpopulations, thus helping the drug’s manufacturer improve patient outcomes through better targeting. Patient segmentation analysis is also critical in making the proper risk adjustments to ensure that variation in the patient mix is accounted for in complementary analyses.

    • Comparative-effectiveness analyses compare the effects of different diagnostic or therapeutic approaches. Cost-effectiveness analysis, for instance, assesses the clinical outcomes of different approaches relative to their costs. Comparative-effectiveness analyses provide information that can be used for a range of purposes, such as designing clinical decision-making tools or developing formularies and protocols.

    • Clinical-input evaluation explains how different clinical processes, applied to the same protocol in similar patient populations, can produce different outcomes. Such analysis could, for example, identify a specific provider that is particularly good at encouraging patient adherence to drug regimens, thereby increasing the regimens’ effectiveness.

    • Signal detection analysis reveals potentially significant correlations between inputs and outcomes. It can identify, early on, both adverse and desirable outcomes that may stem from different clinical inputs or previously unknown variations in patient populations. Working to understand these outcomes and relationships can improve clinical results.
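    A signal-detection scan of this kind can be sketched in a few lines. The example below screens hypothetical per-patient clinical inputs for correlations with an outcome measure and flags those that cross a threshold; all data, input names, and the threshold are invented for illustration and are not drawn from the article.

```python
# Illustrative signal-detection scan: flag clinical inputs whose correlation
# with an outcome measure crosses a screening threshold. All values invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def detect_signals(inputs, outcome, threshold=0.5):
    """Return {input name: r} for inputs whose |r| meets the threshold."""
    flagged = {}
    for name, values in inputs.items():
        r = pearson(values, outcome)
        if abs(r) >= threshold:
            flagged[name] = round(r, 2)
    return flagged

# Hypothetical per-patient data: two clinical inputs and one outcome index.
inputs = {
    "adherence_score": [0.9, 0.8, 0.4, 0.3, 0.7, 0.6],
    "clinic_visits":   [2, 3, 2, 3, 2, 3],
}
outcome = [8.5, 8.0, 5.0, 4.5, 7.0, 6.5]  # e.g., a recovery index

signals = detect_signals(inputs, outcome)
# Only adherence_score correlates strongly with the outcome here.
```

    In practice such a screen would also control for multiple comparisons and patient-mix differences; flagged correlations are hypotheses to investigate, not conclusions.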

    What data are required? Since each of the analyses described above requires different types of data, determining the specific data characteristics necessary for addressing the problem at hand is essential. Health care organizations should focus on three dimensions of the data that they are considering: the definition of variables, the number and nature of observations, and the data’s quality and integrity. (See Exhibit 2.) The relative importance of each dimension will vary depending on the organization’s strategic focus and the type of analysis it needs to conduct.


    An organization focused on a particular therapeutic area, for example, would be guided by the unique data requirements of the relevant disease. Chief among the organization’s considerations would be the following:

    • The Disease’s Duration. The data set for a chronic disease would necessarily include more longitudinal information than would the data set for an acute disease.

    • The Characteristics of the Therapies Used to Treat the Disease. Greater therapy diversity or smaller differences among outcomes will increase the data and analytical requirements. In addition, the rate of therapy evolution will affect the value of, and necessity for, historical data.

    • The Setting in Which Care Is Provided. This will affect the quality and uniformity of the data. Is the disease treated in a specialty facility or a multispecialty environment? Is it treated primarily in academic medical centers or in a range of clinical-care settings?

    • The Size and Homogeneity of the Patient Population at Risk. The difficulty in obtaining an adequate sample size is amplified for rare diseases whose outcomes vary by patient population.

    The type of analysis chosen will also inform the data requirements. A performance assessment to identify relevant variations across providers, for example, requires an adequate sample size for each provider and specific, well-structured outcomes measures. It may also require risk adjustment data to correct for differences in factors such as population age. A clinical-input evaluation conducted to encourage continuous performance improvement requires a different set of characteristics, including input or process measures, historical data for establishing a baseline, and data that can be collected on a timely basis to provide regular feedback and identify inputs or processes associated with better outcomes.
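    The risk adjustment step can be illustrated with a minimal indirect-standardization sketch, in which each provider’s observed event count is compared with the count expected from reference rates applied to its own age mix. The reference rates, age bands, and provider figures below are invented assumptions, not data from the article.

```python
# Illustrative risk adjustment via indirect standardization. A provider's
# observed complication count is divided by the count expected given its
# patient age mix and a set of assumed reference rates.

REFERENCE_RATES = {"<65": 0.02, "65-79": 0.05, "80+": 0.10}  # assumed baseline

def expected_events(case_mix):
    """Expected event count for a provider's counts of patients per age band."""
    return sum(n * REFERENCE_RATES[band] for band, n in case_mix.items())

def o_to_e_ratio(observed, case_mix):
    """Observed-to-expected ratio; values above 1 suggest worse-than-expected outcomes."""
    return observed / expected_events(case_mix)

# Provider A treats a much older population than provider B.
provider_a = {"case_mix": {"<65": 100, "65-79": 200, "80+": 100}, "observed": 21}
provider_b = {"case_mix": {"<65": 300, "65-79": 80, "80+": 20}, "observed": 14}

ratio_a = o_to_e_ratio(provider_a["observed"], provider_a["case_mix"])  # ~0.95
ratio_b = o_to_e_ratio(provider_b["observed"], provider_b["case_mix"])  # ~1.17
```

    Note that provider B’s raw complication rate (3.5 percent) is lower than provider A’s (5.25 percent), yet after adjusting for age mix, B performs worse than expected and A slightly better, which is exactly the distortion that risk adjustment exists to correct.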

    The bottom line is that there is no single, perfect data set suitable for all types of analysis. And no amount of computational power can make up for incomplete or incorrect data.

    When building any data set designed to implement value-based health care, organizations should make sure to include the following:

    • Outcomes measures, selected and supported by the relevant clinical-specialist groups, that will spur clinical improvement and innovation

    • Input or process measures that can be linked to outcomes to help identify new outcomes-improvement levers

    • An adequate sample size with sufficient penetration (defined as the percentage of patients in a given patient population represented in the data set) to ensure that the sample is representative and will reveal patient subpopulations with different response patterns

    • Longitudinal data across care settings for tracking outcomes over time

    • Standardization, including both standardized definitions and standardized structure and coding
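    The sample-size and penetration criteria above can be expressed as a simple adequacy check. The thresholds used here (80 percent penetration, at least 30 patients per subgroup) and the counts are illustrative assumptions, not figures from the article.

```python
# Illustrative adequacy check for a registry sample: overall penetration and a
# minimum patient count per subgroup. Thresholds and counts are invented.

def penetration(captured, population):
    """Share of the target patient population represented in the data set."""
    return captured / population

def sample_is_adequate(subgroup_counts, population,
                       min_penetration=0.8, min_subgroup=30):
    """True if overall penetration and every subgroup size meet the thresholds."""
    captured = sum(subgroup_counts.values())
    return (penetration(captured, population) >= min_penetration
            and all(n >= min_subgroup for n in subgroup_counts.values()))

# Hypothetical registry: 885 of 1,000 eligible patients captured.
subgroups = {"<65": 410, "65-79": 380, "80+": 95}
ok = sample_is_adequate(subgroups, population=1000)
```

    A check like this helps flag subpopulations too small to reveal differing response patterns before any comparative analysis is run.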

    Data can come from a variety of sources, such as clinical trials, disease registries, electronic medical records (EMRs), and insurance-claims data sets. Each source has its strengths and weaknesses. Clinical trials, for example, represent the gold standard for quality. But they are often limited in sample size, duration, and the number of variables tracked, and their findings may not be generalizable to more diverse populations. Disease registries track data over extended periods but are few in number, and there are often lags in data entry. EMRs collect data in real time, but often the data captured are not standardized. Insurance-claims data sets have a large sample size but do not track outcomes. When choosing among the various sources, let the characteristics of the data guide the way.

    A Critical Factor: Clinician Buy-In

    Addressing these four questions can help lay the foundation for an effective information strategy. But to succeed, health care organizations must also gain buy-in from clinicians, who have a unique vantage point and role in the value delivery process. Winning organizations will engage clinicians on multiple levels, from defining metrics and analytical methodology to collecting data and interpreting findings.

    Failure to gain sufficient clinician buy-in can severely compromise an effort. Consider, for example, the U.S. Health Care Financing Administration’s initial attempt to disseminate data on hospital mortality rates. From the clinicians’ perspective, the effort was problematic because it relied on claims data rather than clinical data and had numerous methodological flaws. Feeling that they lacked a voice in key decisions, clinicians viewed the initiative as something imposed upon them rather than created by them. The project was ultimately terminated.

    The results were dramatically different when the U.S. Society of Thoracic Surgeons launched a database to track cardiac-surgery outcomes. The database was conceived by a small group of surgeons who determined the outcomes to track, the mechanisms for data collection, and the methodology for analysis and reporting. Over time, the effort gained increasingly broad buy-in from the clinical community as well as from other key players. (For example, Blue Cross and Blue Shield Association, a U.S. federation of insurers, gave the database a significant boost when it required participation of cardiac surgeons seeking inclusion on its list of preferred providers.) The database has become a well-established, highly credible entity. It has given rise to more than 100 journal articles that have contributed to the body of knowledge about thoracic surgery and helped improve outcomes.

    Value-based health care could transform the health care industry and the fortunes of many stakeholders. But its success is critically dependent on decision makers having the right information. Health care organizations won’t find this information spontaneously or by mindlessly crunching every piece of data they can find. Rather, they will need to develop an optimized information strategy that is based on business needs, proper analysis, and a careful vetting of data and data sources.


    The authors thank Benjamin Berk, MD, and Josh Kellar for their contributions to this article.


    David M. Shahian et al., “Public Reporting of Cardiac Surgery Performance: Part 1—History, Rationale, Consequences,” Annals of Thoracic Surgery, September 2011, 92, pp. S2–S11.
    Manuel Caceres, Rebecca L. Braud, and Harvey Edward Garrett, Jr., “A Short History of the Society of Thoracic Surgeons’ National Cardiac Database: Perceptions of a Practicing Surgeon,” Annals of Thoracic Surgery, January 2010, 89, pp. 332–339.