Speaker 1: Today on This Week Health. Sometimes owners of the models become tiger moms and tiger dads. They really want to promote their little models. Sometimes we put things in too fast without the right stakeholders at the table to think about, okay, the algorithm might have a great AUC curve, but how do you make sure that it's put into the right place at the right time for the right intended audience?
Speaker 2: Welcome to Town Hall, a show hosted by leaders on the front lines with interviews of people making things happen in healthcare with technology. My name is Bill Russell, the creator of This Week Health, a set of channels dedicated to keeping health IT staff current and engaged. For five years, we've been making podcasts that amplify great thinking to propel healthcare forward. We want to thank our show partners, Meditech and Transparent, for investing in our mission to develop the next generation of health leaders. Now on to our show.
Speaker 3: We'd like to welcome Eric Poon, who's currently the chief health information officer at Duke Health. Welcome to the show, Eric.
Speaker 1: Thank you, Matt. Love to be here.
Speaker 3: It's really great talking to you. I heard you speak at Scottsdale, and you were on a panel talking about AI, and it seems like we've beaten AI to death over the last few months of conversation in the informatics world. But you have a specific niche, which I think is great for people who are listening, around academics, AI governance, and your theories behind that. Let's talk through just a couple of those things here in the next few minutes, specifically about AI governance and how you formulate your thoughts around how a system does that.
Speaker 1: Yeah, well, happy to. I think this is a really exciting area. As much as we think that we're done talking about AI, it keeps being a really exciting area that will command our attention, and for all the right reasons, because the possibilities are quite endless. I think we spent the last few years thinking about how predictive modeling would start helping us make better decisions as clinicians, so that we can focus our attention and resources on the patients who are most at risk and be smart about how we take care of them. And then over the last six months, generative AI has become a household name, at least in informatics circles, and everybody's talking about ChatGPT and how it's going to change the world. So that's the world we're living in; maybe this is stating the obvious. But when it comes to AI governance, first and foremost, I feel more strongly than ever that it's about doing right by the patient. As excited as we all should be about the possibilities of AI, we need to make sure that AI used in clinical care and clinical operations is safe, effective, and equitable. What I mean by that is: how do we know that the AI tools are doing what they say they're supposed to be doing? How do we make sure they're not going to lead to unintended negative consequences, such as clinicians being bombarded with pop-ups, suggestions, and alerts? Alert fatigue is not a new concept, and AI could add to it. How do we make sure that an AI algorithm that works really well on day one doesn't all of a sudden decide to call everybody low risk just because somebody changed how sodium is reported in the lab systems? And more and more, we know that computer algorithms are just algorithms. How do we make sure that we don't build bias into these algorithms and exacerbate the inequities that we already have in healthcare and in society in general?
So those are some of the key things that I've been thinking quite a lot about with others over the last few years.
Speaker 3: But do you think these really important conversations that we have to have, around the specific things you've just called out, are going to change the speed of AI deployments in healthcare? And what are your thoughts on whether that's the right thing to do? How do we make it faster? Should we necessarily slow it down?
Speaker 1: Yeah. So one way I hear your question is: are we going too fast or too slow with deploying AI in our current environment? You may not like my answer, but as with anything new, we are doing things both too slowly and too fast. Let me explain what I mean by that. We are in some ways going too slow because everybody is coming at it from different angles. We have vendors knocking on our doors. We have internal data scientists who want to build their own models with our faculty members. They're all coming from a great place. But because there is so much enthusiasm and this is so new, there's no easy way to create what I like to call an AI factory: the ability to take the raw inputs of ideas, turn them into models, get them into the hands of clinicians or the other recipients of these tools in predictable ways, measure whether they're effective, and retire those that are not. One other challenge is that because everybody has developed their own tools technologically, even things that look and smell and actually work similarly underneath need different pieces of infrastructure to plug in. It's as if you were an electricity company that had to support several voltages and several plugs across the ecosystem. And that makes us too slow. But I think we are also at times too fast, because there are times when people believe so strongly in their model. At Scottsdale, I talked about how sometimes owners of the models become tiger moms and tiger dads. They really want to promote their little models and want them to go to an Ivy League college 18 years down the road. And I think sometimes we put things in too fast, without the right stakeholders at the table, to think about: okay, the algorithm might have a great AUC curve, wonderful, lots of promise.
But how do you make sure that it's put into the right place at the right time for the right intended audience, so that it will actually influence decision making for the better? That's the piece that sometimes worries me when you have something new and exciting that's trying to ride the crest of the hype cycle and get put in. We don't want to slow that down; we want to create a safe way for folks to experiment and then fail fast.
Speaker 2: We'll get back to our show in just a moment. I'm going to read this just as it is. My team is doing more and more to help me be more efficient and effective, and they wrote this ad for me, so I'm just going to go ahead and read it the way it is. If you're keen on the intersection of health care and technology, you won't want to miss our upcoming webinar, our AI journey in health care. See, "keen" is not a word that is in my vocabulary, so you know it's written by somebody else. Maybe ChatGPT. Who knows? We're diving deep into the revolution that AI is bringing to health care. We're going to explore its benefits and tackle the challenges head on. We're going to go all in, from genomics to radiology, operational efficiency to patient care. And we're doing it live on September 7th at 1 p.m. Eastern time, 10 a.m. Pacific time. So if you are interested in this webinar, we would love to have you sign up. You can put your question in there ahead of time; we take that group of questions, give it to our panelists, and discuss it. It's going to be a great panel. I don't have them confirmed yet, but I really am excited about the people I've been talking to about this. So join us as we navigate the future of health care. Trust me, you don't want to be left behind. Register now at ThisWeekHealth.com. Now, back to our show.
Speaker 3: And what you're talking about is really the necessary breakdown and evaluation so that we can accelerate the work and do great things. I think all of us want to do the great things, and your developers who turn into the tiger moms really have a passion. You talked about sometimes having to tell them their baby is ugly, which I thought was a great analogy. Along those lines, when you get into the model and start looking at these things closely, what are your thoughts on what we've termed, in the hype cycle, black box AI, where most people don't really know what the heck's going on? Certainly frontline clinicians probably don't care to learn all the subtleties of the science. How do you evaluate that in light of governance: the scientists who come to you and may really have a passion, and also the end product, which is first our clinicians using it and then the effect it has on patients? How do you evaluate that black box?
Speaker 1: Yeah. So I'm not sure I'm a fan of the term black box. It has interesting connotations. But I do think that algorithms that are really complex, with billions of parameters, are here to stay, at least in the medium term. And as you're alluding to, when you and I are taking care of patients, we are not going to think about exactly where this algorithm is coming from. We just want to know: yeah, I think it works, and maybe I should pay attention to its advice. So bridging that gap is in some ways our responsibility in building AI governance, building trust. I do think that having a fit-for-purpose and thoughtful evaluation process is going to increase that trust. I also think we need the right monitoring mechanisms, so that we know things that work on day one will continue to work, no different than our blood pressure machines in our clinics or in the ICU. And I do think there are approaches to making these, quote unquote, black box algorithms a little bit more transparent. But at the end of the day, the frontline clinician works in the currency of trust and efficiency. So how do we make sure that institutionally we provide tools that are trustworthy, safe, and equitable? Those are some of our ongoing challenges.
Speaker 3: It's basically continuing this concept of governance, understanding where things are, and then giving a stamp of approval: hey, it's safe. We've evaluated it, and we're going to commit to our patients to go back and re-evaluate it, to make sure we're not introducing bias, we're not delivering bad information, and nothing in the training went awry. All the things that are showing up in the news cycles. And I think that's important.
Speaker 1: And I do think it's no accident that government regulators are beginning to look into this. We are all watching with interest how it evolves, both in Europe, where they have taken a more proactive stance, and in the current administration here. I think folks are appropriately concerned about how generative AI might influence society as a whole, within healthcare and otherwise, and we need to be thoughtful. We had no idea what the internet was going to do to society 20 years ago. I can't live without a search engine and all my favorite shopping sites now, but I'm not sure social media has had the right impact on our society. So I think it's really interesting. This yin-and-yang push and pull of a revolutionary technology, and how we harness it, is in some ways the responsibility not just of a single organization but of all of us in society. It's a fascinating time to be thinking about this.
Speaker 3: Well, wonderful. Hey, your insights are fabulous, Dr. Poon. Thanks so much for joining me today. And I look forward to catching up with you very soon. Thank you.
Speaker 1: Great. Well, thanks for having me.
Speaker 2: Gosh, I really love this show. I love hearing what workers and leaders on the front lines are doing, and we want to thank our hosts who continue to support the community by developing this great content. If you want to support This Week Health, the best way to do that is to let someone else know about our channels. Let them know you're listening and getting value. We have two channels, This Week Health Conference and This Week Health Newsroom. You can check them out today wherever you listen to podcasts, or on our website, thisweekhealth.com, where you can subscribe as well. We also want to thank our show partners, Meditech and Transparent, for investing in our mission to develop the next generation of health leaders. Thanks for listening. That's all for now.