Many health plans pride themselves on making data-driven decisions. But as anyone who has pored over utilization and other metrics can attest, following the wrong data points or faulty interpretations of that information can leave organizations chasing down problems that don't exist. How can health plans be driven by data without spinning their wheels?
Dr. Bruce Vanderver, chief medical officer at Maryland Physicians Care, an Evolent Health Services partner, seeks to avoid this common pitfall by using run control, a statistical method that can help determine where to focus utilization management and other improvement efforts. He recently spoke with Evolent Health Services Regional President Katie McKillen about the tool's benefits and how it has helped him make better sense of data.
What do health plans too often get wrong when trying to spot utilization trends?
Managers are constantly looking at key metrics, like length of stay, to figure out whether processes are moving in a desirable direction. The problem is that it can be very difficult to know when something is truly changing and when you're just seeing natural variation. We call this a signal-to-noise problem. When you're looking at a pattern, how do you know what is a true signal of change and what is just noise—the normal ups and downs around an average?
Humans have evolved to recognize patterns, even where there aren't any. We might see a data point that we think is off, but when we investigate it, it turns out that it's nothing.
Have you experienced this firsthand?
Certainly. At one of my previous employers, we used to sit down every month and look at hospital utilization graphs. They were nicely laid out, breaking down length of stay, admissions and other metrics by hospital and clinical department, while showing us the variation in utilization within these different categories. Some of this variation was totally normal. But at the end of every meeting, we would have between 12 and 20 queries—different things that we wanted to look at based on what we saw in the graphs.
Digging into these potential issues took a lot of time. And very often, it turned out that we were merely chasing noise—things that weren't true problems, statistically speaking.
It takes a lot of time to chase down a root cause when there's nothing there to find! It took so much time that, often, just as we were finishing one analysis, we’d have the next month's data to look at. As a result, we never had time to review what we'd found or implement changes, because we were so busy trying to nail down what was really a problem.
How did you find the true problems?
We implemented run control charts to help us home in on the real issues.
Fundamentally, run control helps us look at processes to understand how they move and change. It uses statistics built around the standard deviation to help you quickly determine whether a data point that looks different from the others is truly out of the norm or just noise. The reverse is also true: sometimes a process has changed and no single data point looks alarming. Run control charts can pick up those subtle shifts in a process that the human eye may miss.
How do these charts work?
We can use these charts to compare data against three sets of "guardrails," at one, two and three standard deviations both above and below the mean.
One standard deviation above and below the mean covers about two-thirds of the population. While we expect to see most data points falling in that range, there will still be about one-third of the data points outside that first guardrail. In those cases, we generally don't need to investigate. With two standard deviations, 95% of the population falls between the top guardrail and the bottom guardrail, so even more of the data points will fall in that range, but it's not unusual for a data point to fall outside of two standard deviations. Three standard deviations cover 99.7% of the population, so if a data point falls outside that third guardrail, it's extremely unlikely to have happened through chance or normal variation alone.
This sample run control chart shows four areas where LOS has trended out of range, indicating a potential problem.
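To make the guardrail idea concrete, here is a minimal sketch in Python of how such a check might look. The monthly length-of-stay values, the baseline period and the simple threshold rule are illustrative assumptions, not Maryland Physicians Care data; a production control chart would typically apply more formal statistical process control rules.

```python
# Minimal sketch of a run control check on average length of stay (LOS).
# All numbers below are illustrative assumptions, not real plan data.
from statistics import mean, stdev

baseline_los = [4.1, 4.3, 3.9, 4.0, 4.2, 4.1, 4.4, 4.0, 3.8, 4.2, 4.1, 4.0]  # prior 12 months
new_months = {"Jan": 4.2, "Feb": 4.5, "Mar": 5.1}  # months to evaluate

center = mean(baseline_los)   # the chart's center line
sigma = stdev(baseline_los)   # width of one "guardrail"

for month, los in new_months.items():
    distance = abs(los - center) / sigma  # how many standard deviations from the mean
    if distance > 3:
        status = "outside 3 SD: very unlikely to be noise, investigate"
    elif distance > 2:
        status = "outside 2 SD: unusual, keep watching"
    else:
        status = "within normal variation"
    print(f"{month}: LOS {los:.1f} ({distance:.1f} SD from mean) -> {status}")
```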
How did this tool affect the focus of your improvement efforts?
We ended up with only two or three queries a month, and those could be completed in less than a week. By limiting the number of things you investigate, you have time to identify which are real and which are actionable. From there, you can put changes in place before the next month's review. That's another advantage: when you make changes based on recent data, you can see much more quickly if a process or measure goes back to its previous performance.
When we did this, we didn't have to wait months and months to collect data. We could see within about two months whether the process had gone back to the way it had been, so we could very quickly evaluate whether the interventions had yielded results.
How can health care organizations leverage run control for utilization management and other improvement efforts?
Run control can be really beneficial for utilization management. There are two ways we can apply it. One is on typical utilization metrics. For example, we can use run control charts to understand whether we have the expected number of admissions per 1,000, or whether length of stay is trending as expected.
We can also apply run control to operational metrics like turnaround times to identify if the process is changing, so we can address it before we start having a problem.
There's a third way to use run control that some organizations have begun to explore. Computer models can monitor actual utilization of services and apply run control to determine whether anything has changed. For example, we might not require prior authorization on adult diapers, because we don't think we need it. But if we run an analysis in the background on adult diaper utilization and the model reveals a sudden increase, we can review whether we need to put that service on prior authorization.
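As a rough illustration of that background-scan idea, the sketch below applies the same guardrail check across several service categories and flags any category whose latest month sits far above its own history. The category names, counts and the 3 SD threshold are hypothetical choices, not a description of any particular plan's system.

```python
# Sketch of a background scan over service categories that are not on prior
# authorization. All categories and counts are hypothetical examples.
from statistics import mean, stdev

monthly_units = {
    "adult_diapers":      [210, 205, 198, 220, 215, 208, 212, 206, 310],  # last value = latest month
    "wheelchair_rentals": [45, 48, 44, 46, 47, 45, 46, 44, 47],
}

for category, history in monthly_units.items():
    baseline, latest = history[:-1], history[-1]
    center, sigma = mean(baseline), stdev(baseline)
    if sigma > 0 and (latest - center) / sigma > 3:
        print(f"{category}: {latest} units is more than 3 SD above its baseline mean "
              f"of {center:.0f} -- flag for possible prior authorization review")
```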
What's the most important thing you've learned about leveraging the right data in the right way?
That it's really hard. When people see one of these run charts, it's human nature to say, "The numbers are starting to go up, so there must be a problem." You need to help people understand the underlying data and the statistical analytics used, so they can accept that they need to ignore what their eyes are telling them. When people do that, they are more efficient in stepping through the reports to home in on the real issues. Getting people to that point—walking them through and reinforcing over and over again how to look at the data—is probably the most challenging part.