Generative AI can likely help devise and update high-value pathways, but it’s not going to replace expert consensus anytime soon.
Like many of my colleagues, I’m bullish on the use of artificial intelligence to transform specialty care management for the better. Thanks to this technology, we’re embracing a world where more high-quality authorization requests are rapidly approved and clinical reviewers spend half as much time conducting manual case reviews. Evolent’s latest acquisition will accelerate this transition.
Occasionally we’re asked what the generative AI revolution might mean for high-value clinical pathways and guidelines for complex conditions. Could AI help sift through the evidence, evaluate it, and craft a set of recommended interventions for a given clinical scenario?
I’m skeptical that this is the future. However, I do believe AI could be a valuable asset to help us create and update pathways more efficiently. Let’s review the steps of the pathway creation process and see what role AI might play.
Step 1: Curating the medical literature. Say you’re seeking evidence to help identify the highest-value regimen for a certain cancer indication. A few decades ago, this involved using the library’s card catalog and the Dewey Decimal System, navigating the stacks, and finding the articles. Search engines and the digitization of medical literature have made the task much easier, but it still takes many hours.
AI could dramatically shorten this process. While early tools have received mixed reviews, we can expect them to improve rapidly. In the meantime, researchers will need to double-check the work of their AI co-pilots to guard against hallucinations and make sure they are sweeping up all relevant evidence.
Step 2: Evaluating the strength of the evidence. You have all relevant research in hand. Now you need to weigh it. Are the patient populations in clinical trials representative of your real-world populations? Was the trial randomized? Was the new drug compared against the current standard of care? What endpoint was measured, was the statistical analysis done correctly, and was the outcome clinically meaningful?
Separating good evidence from bad could be a challenge for AI. It may have difficulty, for example, identifying conflicts of interest or bias. But with appropriate rules and quality audits, I believe the technology could at least help synthesize the research and flag potential flaws, reducing the time human experts must spend weighing the findings.
Step 3: Identifying preferred regimens. There’s no formula for deciding which regimens should be on pathway (though we have tried). That’s because no two pathway decisions are identical. For example, the bar changes for clinical scenarios that have many regimens to choose from versus those with unmet needs. Also, head-to-head comparisons of different regimens typically don’t exist. One drug might appear to offer a small survival advantage over others, but the populations across clinical trials may differ, as may the treatments received by the control groups.
Real-world evidence may exist for some regimens but not others. Ultimately, while we seek out all reliable, high-quality evidence to inform decisions, pathway determinations often result from consensus among a group of clinicians, researchers and others. We have to take the evidence, along with the lived experience of providers, and weigh it all together. It’s a distinctly human endeavor, and it’s hard to imagine that fundamentally changing for complex decision-making, though I believe there are ways AI may help us make these decisions better and faster.
About the Author
Andrew Hertler, MD, FACP