2895: Elsevier - AI in Healthcare: The Role of ClinicalKey AI
Tech Talks Daily · May 15, 2024
25:39 · 14.48 MB


How can AI transform the future of clinical decision-making? Today on Tech Talks Daily, we explore this pivotal question with Rhett Alden, CTO at Elsevier Health. Rhett brings deep insights into the integration of AI with clinical knowledge to enhance healthcare delivery.

Rhett discusses Elsevier's latest innovation, ClinicalKey AI, poised to revolutionize how clinicians access and utilize medical information. Developed through a sophisticated blend of Elsevier's vast clinical knowledge assets and advanced AI technologies, ClinicalKey AI is designed to deliver trusted, transparent, and timely support to medical professionals. Rhett highlights how this tool leverages a retrieval-augmented generative architecture to provide accurate responses, greatly reducing the time clinicians spend researching complex cases.

Delving deeper, Rhett explains the rigorous validation frameworks that underpin ClinicalKey AI, ensuring the accuracy and reliability of the AI responses while addressing potential biases in AI applications. He emphasizes the importance of transparency, with the system designed to allow clinicians to easily trace and verify the sources of information it provides.

Localization of content is another key aspect Rhett touches upon. ClinicalKey AI is customized for diverse global healthcare systems, adhering to local protocols and regulations, thereby enhancing its applicability and effectiveness worldwide. Looking ahead, Rhett shares Elsevier's vision for the future—transitioning from on-demand tools to anticipatory, embedded clinical advisors, with a particular focus on expanding into the fast-evolving field of oncology.

Join us as Rhett Alden from Elsevier Health unveils how ClinicalKey AI and generative AI are setting new standards in healthcare technology. After the episode, we invite you to share your thoughts: How do you see AI shaping the future of clinical decision support systems in your environment?

[00:00:00] Is the future of healthcare already here? Today we're going to dive into the world of ClinicalKey

[00:00:07] AI. My guest today is Rhett Alden, CTO for Health Markets at Elsevier. And the talk promises

[00:00:16] to revolutionise the speed and the accuracy with which clinicians can access critical medical

[00:00:22] information. And we'll discuss the integration of generative AI with Elsevier's vast medical content

[00:00:29] and how it's creating new possibilities for clinical decision support. And also hopefully

[00:00:34] answer questions like, how does this technology manage to enhance clinical decision making

[00:00:40] while ensuring reliability? And of course, most importantly, transparency. It's a huge topic right

[00:00:47] now. So let's find out more from Rhett himself. So buckle up and hold on tight as I beam your ears

[00:00:54] all the way to St. Louis, where Rhett is waiting for us today calmly, even though it's 95 degrees

[00:01:00] Fahrenheit there. But enough rambling for me. Let's get Rhett onto the podcast now.

[00:01:08] So a massive warm welcome to the show. Can you tell everyone listening a little about who you are

[00:01:14] and what you do? My name is Rhett Alden. I am the Chief Technology Officer for

[00:01:20] the Elsevier Division of Health Markets. I've been there a number of years now, but I've spent

[00:01:25] my entire career in health care, leading technology and developing solutions for patients and clinicians,

[00:01:32] which is what I do now. Well, it's a pleasure to have you on the podcast. And one of the things

[00:01:37] I always try and do very early on in the podcast episode is find out a little bit more about the

[00:01:43] story or the origin story. So can you tell me a little bit more about what inspired your work

[00:01:49] there to develop this advanced clinical decision support tool and what makes it stand out from

[00:01:54] existing solutions in the health care industry? Because there's got to be a story there, right?

[00:01:58] Yeah, maybe part of that story is why I'm at Elsevier, right? Because I spent my career

[00:02:06] in pretty big companies like GE Healthcare and some pharma companies developing solutions. And

[00:02:11] one thing we always struggled with was clinical content knowledge, right? You could build great

[00:02:18] solutions, but if you didn't have the backing of really robust clinical knowledge assets,

[00:02:25] clinical pathways, a robust decision-making tool around drug interactions, et cetera,

[00:02:31] it became the challenge, right, to really get something that's truly impactful.

[00:02:37] So Elsevier is, in its heritage, a publisher and has immense repositories of clinical knowledge

[00:02:47] from peer-reviewed journals, books, articles, all sorts of things that we could tap into.

[00:02:53] And that's what attracted me to Elsevier because in my career in healthcare technology,

[00:02:59] it's this marriage of clinical content, robust clinical content, and technology that really

[00:03:06] excites me because I think we can solve some really hard problems and decision supports one

[00:03:11] of those. I mean, it's really challenging when you think about the terminologies,

[00:03:16] the complex interactions, and particularly now with the acceleration of medicine,

[00:03:23] it's just increasingly complex for clinicians. So having those assets available is a real game changer.

[00:03:30] And one of the things that put you on my radar was ClinicalKey AI. And we are in a world at

[00:03:35] the moment where everyone's going crazy for AI. So what I always try and do is demystify some of

[00:03:40] that world, talk about it in a language everyone can understand and solve some of those real world

[00:03:44] problems that we're talking about here. So how does ClinicalKey AI specifically enhance day-to-day

[00:03:50] operations and decision-making processes for clinicians? And maybe if you've got a real world

[00:03:57] example or case story just to bring that to life as well.

[00:04:02] Yeah, I think a simple way to think about this is speed to answer the question. So in a typical

[00:04:12] environment, a clinician might be dealing with a patient that has cardiovascular issues, diabetes,

[00:04:20] maybe also may be pregnant. And that confluence of comorbidities creates a really complex

[00:04:29] environment. When you think about drug-drug interactions, contraindications, as well as

[00:04:37] how do you achieve all of that while keeping that upcoming birth healthy? So that's a complex

[00:04:46] problem. And in the studies we've done, we found that for complex questions like that, it can take

[00:04:53] a clinician anywhere from 10 to 15 minutes going through literature, looking for appropriate

[00:05:01] diagnoses and treatment options for a patient like that. With a conversational approach,

[00:05:07] that can be on the order of seconds. And that's because they can, in a conversational way,

[00:05:13] construct the question. What is the appropriate treatment for a patient that has hypertension,

[00:05:21] type 2 diabetes, is currently on insulin and blah, blah, blah. And that kind of question

[00:05:28] can be answered through our system, ClinicalKey AI, seamlessly. Now, the challenge, of course,

[00:05:36] is can you do that in a trusted way? Can you have an answer that is going to garner the right

[00:05:44] level of transparency to a clinician? So we've worked hard to use generative AI in a way that

[00:05:52] provides that level of transparency and trust. And the way we've done that is by leveraging

[00:05:58] the content that I alluded to earlier through search technology. The jargon in the business

[00:06:05] would be a retrieval augmented generation architecture. But what that really is, is

[00:06:11] being able to search indexed content at its source and have the LLM summarize it, rather than relying on

[00:06:21] an LLM alone, which would be prone to errors from its probabilistic, neural-net matching.

[00:06:28] So the model that we use kind of maintains the integrity of the source content.

[00:06:35] And we've built that intentionally to maintain the trust and clinical accuracy of the system.
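The retrieval-augmented generation (RAG) pattern Rhett describes can be sketched in a few lines. This is a toy illustration, not Elsevier's implementation: the index, sources, and ranking are all invented, and a real system would use a proper retrieval engine plus an actual LLM call rather than a prompt string.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # hypothetical source names, for illustration only
    text: str

# A toy in-memory "index"; a production system would search curated,
# peer-reviewed clinical content with a real retrieval engine.
INDEX = [
    Passage("Hypertension guideline (example)",
            "ACE inhibitors are contraindicated in pregnancy."),
    Passage("Diabetes review (example)",
            "Metformin is a first-line therapy for type 2 diabetes."),
]

def retrieve(query: str, k: int = 2) -> list[Passage]:
    # Naive keyword-overlap scoring stands in for vector search.
    terms = set(query.lower().split())
    ranked = sorted(INDEX, key=lambda p: -len(terms & set(p.text.lower().split())))
    return ranked[:k]

def build_prompt(query: str, passages: list[Passage]) -> str:
    # Ground the model in retrieved passages and demand citations,
    # instead of letting it answer from parametric memory alone.
    context = "\n".join(f"[{i+1}] ({p.source}) {p.text}"
                        for i, p in enumerate(passages))
    return ("Answer ONLY from the sources below and cite them by number.\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

prompt = build_prompt("first-line therapy for type 2 diabetes",
                      retrieve("type 2 diabetes therapy"))
print(prompt)
```

The design point is that the LLM only summarizes what retrieval returns from the indexed source content, which is what keeps the answer traceable back to its references.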

[00:06:42] And one of the things I wanted to highlight is everyone's going crazy for AI at the moment. They

[00:06:46] have been for the last 18 months, we've seen a lot of mainstream adoption. But of course,

[00:06:51] it's nothing new. And you've had over two decades of experience in leveraging AI and machine learning

[00:06:58] at Elsevier. So how has your approach to technology evolved, particularly in the context

[00:07:03] of creating ClinicalKey AI? Because I would imagine there have been quite a few big technological

[00:07:09] challenges you've faced along the way. Are there any you could share, and maybe how the

[00:07:13] company overcame them? Yeah, I mean, it's a good point. I mean, Elsevier, as we said,

[00:07:19] has historically been a publisher. We've also been deeply involved in natural language processing,

[00:07:26] machine learning, data extraction models heavily for decades, as you point out.

[00:07:32] I think what's been new and a bit frustrating with generative AI is that LLMs are a bit mysterious,

[00:07:43] in the sense that they're not well understood theoretically. And they're also not necessarily

[00:07:50] well understood in terms of when and if hallucinations occur. So what we've tried to do

[00:07:56] is really understand the pros and cons and build a system that first and foremost would control and

[00:08:04] manage hallucinations. Because we thought that was one of the biggest challenges in generative AI,

[00:08:10] at least in healthcare. And then also work on a system to really validate our offering.

[00:08:18] So we have a really robust validation framework that we've built specifically to manage the

[00:08:26] accuracy and precision of the system that we've developed. And we think that's been somewhat

[00:08:32] lacking in the field, really understanding the robustness and the integrity of solutions

[00:08:38] in the wild. As we all know, something like ChatGPT will hallucinate, but we don't know

[00:08:46] when or how or why necessarily. We just know that it can be suspect. So what we've tried to do is

[00:08:54] put better guardrails on that within our system so that clinicians can actually have a baseline

[00:09:01] of integrity in the model. And I'm glad you mentioned hallucinations there because

[00:09:08] you're working in an area where information accuracy is paramount in healthcare and there

[00:09:13] is no margin for error there. So how do you, with ClinicalKey AI, ensure that reliability,

[00:09:20] ensure that trustworthiness of AI-generated medical content it provides to clinicians? Because as I

[00:09:25] said, there is no margin for error, people need to trust this for it to be effective.

[00:09:30] That's right. Now a couple of things that I want to be transparent about here is it's never

[00:09:37] zero. There will be hallucinations just by the very nature of the system, but what we try to do

[00:09:44] is provide really a paramount level of transparency for the user. So when you use the system, it

[00:09:54] actually exposes the references that were used to generate the response. And not only that,

[00:10:01] it shows the individual paragraphs that were used to construct the response. So remember,

[00:10:06] I talked about a retrieval architecture. We're not relying on an LLM's probabilistic next-word

[00:10:13] matching, as in ChatGPT. We're relying on summarizing data from peer-reviewed journals

[00:10:21] and premier publications. So what that means is that for the clinician, they can still refer back

[00:10:29] to the source material very easily. They can check those references, they can validate it if they need

[00:10:36] to and have that level of transparency. And what we found is that our responses are very accurate

[00:10:45] within certain constraints. For example, it isn't necessarily good at calculations. Most LLMs aren't.

[00:10:53] So we inform the user that if you're trying to calculate a complex dosage, this isn't the right

[00:10:59] tool. But if you're trying to really look at differential diagnoses and options for treatment,

[00:11:05] our tool can be very valuable in terms of how it's able to summarize the information.
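The transparency mechanism Rhett describes, exposing the exact paragraphs behind each response, can be modeled as a simple data structure. Everything here is illustrative: the field names, rendering, and source titles are assumptions, not the product's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source: str     # hypothetical source name, for illustration
    paragraph: str  # the exact passage the answer drew on

@dataclass
class GroundedAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)

    def render(self) -> str:
        # Show the answer alongside every paragraph it was built from,
        # so a clinician can trace each claim back to its source.
        refs = "\n".join(f'  [{i+1}] {c.source}: "{c.paragraph}"'
                         for i, c in enumerate(self.citations))
        return f"{self.text}\n\nSources:\n{refs}"

ans = GroundedAnswer(
    text="Metformin is a first-line option for type 2 diabetes [1].",
    citations=[Citation("Diabetes review (example)",
                        "Metformin is a first-line therapy for type 2 diabetes.")],
)
print(ans.render())
```

Carrying the source paragraphs alongside the generated text, rather than just a reference list, is what lets a clinician verify each claim without re-running the search.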

[00:11:11] And I also wanted to highlight at Elsevier Health, there's a global footprint in content

[00:11:16] distribution and healthcare solutions around the world. So how does ClinicalKey AI adapt to those

[00:11:23] diverse healthcare delivery systems and not to mention, of course, the cultural nuances across

[00:11:28] so many different countries and continents? And can it customize information to fit regional healthcare

[00:11:33] practices and patient demographics? Because there's no such thing as a one size fits all here.

[00:11:39] Yeah, yeah. I mean, you're sort of opening up a real interesting conundrum, right? Because I think

[00:11:47] people that aren't in healthcare think that, well, everybody's a patient, we're all human.

[00:11:52] How different is it? It's actually quite variable from country to country in terms of how patients

[00:11:58] are treated, what the protocols are, what drugs are actually legally allowed for sale in that

[00:12:05] country. So there's a lot of variability. And so our content teams manage that. And we have

[00:12:14] specialized content for different regions of the world, different countries, that we can

[00:12:18] inject into the solution to allow that level of customization. Because as I mentioned earlier,

[00:12:25] the system relies on a retrieval structure. So locale can be used as a differentiator

[00:12:31] to pull up appropriate content based on the locale that an individual was in.

[00:12:36] And that customization is super critical, as you point out, because you don't want to be

[00:12:44] making recommendations that aren't quote unquote best practice for a given locale.

[00:12:50] Because A, you lose credibility and also B, that's just not a way medicine is practiced in

[00:12:56] that space. So we're working on that and we have solutions there based on that retrieval system

[00:13:03] and leveraging, again, the content we have that is localized for that region.
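Using locale as a retrieval filter, as Rhett describes, might look roughly like this. The content entries, locales, and drug names below are entirely invented for illustration.

```python
# Locale-aware retrieval: the same query pulls different guidance
# depending on the clinician's region. Entries are invented examples.
CONTENT = [
    {"locale": "US", "text": "Drug X is approved; follow protocol A."},
    {"locale": "UK", "text": "Drug X is not licensed; follow protocol B."},
]

def retrieve_for_locale(query: str, locale: str) -> list[str]:
    # Filter the candidate pool by locale BEFORE ranking, so answers
    # never surface guidance that doesn't apply in that health system.
    pool = [c["text"] for c in CONTENT if c["locale"] == locale]
    terms = set(query.lower().split())
    return sorted(pool, key=lambda t: -len(terms & set(t.lower().split())))

print(retrieve_for_locale("drug x protocol", "UK"))
```

Filtering before ranking, rather than after, is the simplest way to guarantee that out-of-region guidance can never be surfaced, no matter how well it matches the query.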

[00:13:09] Now as someone that is an eternal optimist, I've often thought, maybe I'm a bit naive here,

[00:13:15] but I've often thought that if the whole world and all the healthcare systems could begin

[00:13:19] sharing all that information, we could be making so much more progress at solving real-world

[00:13:25] medical issues around the world if we just pooled all our resources and used AI to solve some of

[00:13:29] those problems. But of course, there is a flip side of that. So what ethical considerations

[00:13:35] come into play, particularly concerning things like patient data, their privacy and the potential

[00:13:40] for AI bias, which is another can of worms as well. And how do you navigate these challenges?

[00:13:45] Yeah. I mean,

[00:13:48] these challenges are not trivial to solve. Let's talk first about bias. The advantage that we have

[00:13:58] is that we rely on peer-reviewed journals and articles that have been reviewed by medical

[00:14:04] experts only. We're not pulling information off Reddit or some other untrusted source.

[00:14:15] But that content can still contain biased information, either racial bias, gender bias,

[00:14:21] et cetera. So we have a system to manage that. We've actually adopted a quality management system

[00:14:30] similar to a medical device that actually provides feedback and error handling so that we can

[00:14:36] mitigate those biases if and when they occur. The bias in our system is extremely low or close to

[00:14:44] zero. In fact, we have very few instances where we've seen real bias occurring. But that's

[00:14:51] not to say it doesn't exist. We have to actively monitor that. So that's an area I think where

[00:14:57] we've really leaned in to make sure that we're addressing those societal needs as we go forward.

[00:15:04] Now, the question around global content or how do we better distribute this, I think that's been

[00:15:10] an ongoing mission of Elsevier for the past 140 years. We want to be able to make knowledge as

[00:15:19] transparent and, I would say, boundary-less. What I mean by that is

[00:15:29] national boundaries shouldn't matter. We should be able to use scientific information and medical

[00:15:35] information as broadly as possible. So we're also trying to work with premier healthcare institutions

[00:15:43] in the US and elsewhere to look at how we can move some of their best practices into

[00:15:49] a framework like this so that underrepresented communities or challenged communities can have

[00:15:55] access to the best healthcare in the world, not just the Mayo clinics or the Cleveland clinics

[00:16:02] of the world. Because I think the goal here is around leveling the field, not trying to make

[00:16:10] it even more hierarchical. And I think generative AI can do that. We're providing some of the

[00:16:17] best practice information to any clinician using this, whether it's a small clinic or

[00:16:23] community hospital or a large academic center, which is really exciting.

[00:16:30] It really is. And if the pace of technological change is moving at breakneck speed right now,

[00:16:35] there's also a reality it might never move this slow again. So if we look beyond ClinicalKey AI,

[00:16:41] how do you envision the future of AI and machine learning in healthcare? And are there any other

[00:16:46] areas within healthcare that you might be planning to introduce AI-driven innovations in the future?

[00:16:51] I appreciate you're probably locked down as to what you can share, but are there any other areas

[00:16:54] that excite you around that? Right now we have a tool where clinicians can come to it when they need

[00:17:02] to understand complex questions and gain insights, which is great.

[00:17:09] The next level is to really anticipate that for the clinician. So being embedded in a clinical

[00:17:17] workflow and being able to kind of prompt and almost be a background advisor to a clinician

[00:17:29] as they go through their clinical workflow and its steps is the next level. Because we can do that.

[00:17:37] We understand the patient data, we understand the clinical needs, and it's a question of how do we

[00:17:44] work more seamlessly as a partner with the clinician? And so being able to push information

[00:17:51] is the next step. And one area of high interest is oncology.

[00:17:56] You know, oncology is an area that moves incredibly quickly,

[00:18:01] partially because it is so life-threatening. So the velocity of new drugs in that space is very high.

[00:18:07] The velocity of change in protocols and management is very high. It's a perfect area for

[00:18:15] really driving something like generative AI in the clinical space and helping improve patient

[00:18:22] outcomes through that area. And if we do have any healthcare professionals listening, or maybe

[00:18:28] they've just discovered this podcast searching for something around this area, maybe they're keen on

[00:18:33] integrating AI tools like ClinicalKey AI into their practice, but they do have some concerns.

[00:18:40] They might be hesitant due to concerns around technology adoption challenges or impact on that

[00:18:45] clinician-patient relationship and all those separate concerns out there. Is there any

[00:18:51] particular advice to that person listening that you would offer?

[00:18:56] So we've been really clear about people trying it. You know, the best way to use this is to

[00:19:05] try it for a few weeks and see how it operates. And we're more than happy to offer

[00:19:11] free trials to anyone that's interested. And the feedback has been phenomenal. You know, we'll

[00:19:17] have people coming back to us saying, this thing actually improves so much the way I work in my

[00:19:25] rotation in the emergency department, or how it's been so astounding in terms of complex cases.

[00:19:34] So the best way for people to overcome their anxiety and fear is first to try it,

[00:19:41] but then understand and ask critical questions about quality, quality, quality, quality.

[00:19:48] Because I think that within the healthcare space, the most important aspect is can you actually

[00:19:55] demonstrate clinical efficacy? Do you actually have safety protocols in place? And big tech

[00:20:03] hasn't really done that because it will slow them down. And I think this is where

[00:20:09] healthcare companies like Elsevier can make a big difference by really leaning in on the quality

[00:20:17] and clinical efficacy of the solution. So that would be my answer to your clinicians out there.

[00:20:24] Try before you buy and ask some tough questions, you know, around how it actually operates. And

[00:20:31] don't be afraid to ask those questions. It's really critical.

[00:20:34] Fantastic advice. And for everyone listening,

[00:20:38] myself and indeed you, there is a real pressure on us all to be in a state of continuous learning.

[00:20:44] Now, as someone who's leading the way in this industry, I've got to ask,

[00:20:48] where or how do you self-educate? Anything you could share there?

[00:20:51] Well, you know, it's interesting when we started this journey

[00:20:58] realistically with generative AI maybe 20 months ago, really early, you know, because we saw

[00:21:05] the opportunity, you know, with the emergence of OpenAI and Anthropic and others. The question was,

[00:21:12] how do you use it, you know, for a harder use case? So what we set up was rapid prototyping

[00:21:19] teams to really A, understand the technology better. And then B, we pulled in clinicians

[00:21:26] early into the process during the research and the envisioning, because you need to understand

[00:21:32] your customer, right? And have empathy for what the customer actually needs.

[00:21:38] So we have clinicians on staff. We have two partner hospital systems that work with us

[00:21:45] and we've embedded with them to really build a solution that can meet the needs.

[00:21:51] So I would say that, you know, like a lot of things, be ready to pivot, be ready to change

[00:21:59] and be okay with some failures. Like we made some mistakes, everybody does. But the key point was

[00:22:06] having that target of what does the customer really need? Because that's what we focused on.


[00:22:13] Well, I think that's the perfect moment to end on today. But for anyone listening that would

[00:22:18] love to dig a little bit deeper on everything we talked about today, find out more information,

[00:22:23] try before they buy and all that good stuff. Where would you like to point everyone listening

[00:22:27] if they want to find out more information or contact your team?


[00:22:30] Yeah, I would say go to elsevier.com and check out the Clinical Key AI site where you can request

[00:22:40] to download and try the system. And that's the best way to try it. And feel free to ask questions.

[00:22:47] That's where I would start. And also, I'm really excited to see how we can work with you.

[00:22:53] So this is probably one of the most important moments, really since the 90s that I can think of.

[00:23:03] Now I'm showing my age a bit. I was around in the dot-com era back in the early 2000s.

[00:23:08] That was a huge moment of change in technology. I think this thing is as impactful as mobile devices.

[00:23:16] So it has the opportunity to really enhance our lives. It also has the opportunity to screw

[00:23:23] things up. So this is where we really need to be cautious and you need to look for partners that

[00:23:31] really are concerned about doing it the right way. And that's my biggest takeaway here. And I think

[00:23:37] Elsevier is one of those.

[00:23:40] Yeah, I completely agree with you. And a lot of people have likened it to the arrival of smartphones

[00:23:46] and mobiles, etc. But I think it also reminds me a lot of when the app store came the year after.

[00:23:52] When the first app store came out, everything was gimmicky. Developers were just playing.

[00:23:56] It turned your smartphone into a can of beer or a chainsaw or a musical instrument. But then

[00:24:01] after that period, it became right. What are the real world problems that we can solve here? And I

[00:24:06] think with Generative AI, we now have a unique opportunity to create a tool that can act as an

[00:24:11] expert in our world, national and regional health care delivery like what you're doing.

[00:24:15] And what I especially love is you're encouraging people to think bigger of what we can do with

[00:24:20] this technology and solve real world problems. So thank you for sharing that with me today.

[00:24:25] Thanks, Neil. Thanks for having me. Really appreciate it.

[00:24:28] We covered a lot of ground today around the powerful capabilities of ClinicalKey AI

[00:24:34] from its impact on clinical efficiency to the ways it ensures trustworthy and localized

[00:24:40] medical advice. I think it's clear that this tool is setting a new standard in health care

[00:24:46] technology. But what are your thoughts on the future of AI in health care? The good, the bad,

[00:24:53] the in between, whatever it might be, how do you see advanced decision support tools shaping

[00:24:59] clinical practices? Share your views with me. Join the conversation on how technology continues to

[00:25:06] transform health care. You can do that by emailing me at techblogwriter@outlook.com, or on

[00:25:11] Twitter, LinkedIn, Instagram, just @NeilCHughes. But more than anything, just thank you

[00:25:16] for listening as always. Join me again tomorrow. We'll keep exploring the new frontiers of

[00:25:20] technology. But until next time, don't be a stranger.