Exploring AI in Healthcare: Insights from the StoryPlus Project Team
Discover how AI is transforming healthcare through the StoryPlus project, highlighting ethical concerns, care management, and the future of AI in medicine.
Story 2020 Critical Decisions: Perceptions of AI in Healthcare Management

Speaker 1: Artificial intelligence comes in many forms, from natural language processing to problem-solving algorithms. In the healthcare field specifically, machine learning is used to identify patients at high risk of being hospitalized, and these algorithms are used by care managers to provide impactful assistance to patients. The goal of our StoryPlus project was to capture the nuances of AI in healthcare and to present multiple perspectives on the algorithm. AI will continue to become an integral part of the healthcare system, and to ensure that it moves in an ethical direction, we must bring awareness to potential problems and push for transparency and accountability. Let me introduce you to the team. My name is Dana Huang, and I am a rising sophomore studying neuroscience and economics at Duke University. Megan Liu is a rising sophomore interested in linguistics, anthropology, and global health. Emily Moyer is a rising junior studying psychology and on the pre-med track. Rifa Nanjaba is a rising junior majoring in neuroscience and on the pre-med track. Dana Younish is a rising junior studying public policy, and she is on the pre-med track as well. Mingyong Chen is our team project manager and a master's student in the MFA program. Finally, Matthew Kenney is our team's faculty advisor and an assistant research professor in Computational Media, Arts and Cultures. Over the course of 44 days, we conducted 600 minutes of interviews, cut 104 video clips, worked on 40 documents, talked to 10 healthcare professionals, and compiled research into seven themes, all as a team of six students and one faculty member. We are so excited to present to you our final product.

Speaker 2: So at the beginning of our project, we started off by conducting some background research, because not everybody on the team was super familiar with AI and how it's used in the healthcare industry. We went online, collected a bunch of articles, read them, summarized them, and discussed them with each other, and we learned a lot about biases in AI, its potential, and things like that. That was super helpful because it gave us some foundational knowledge about AI, and it also helped us organize our thoughts and ideas about what we would want to ask our interviewees. Our interviews consisted of physicians, care managers, and data analysts, so we had a pretty wide variety of people to talk to and experiences to learn about. We split up the interviews and made sure to learn more about each of our interviewees before our time with them so that we could make the most of our one-hour Zoom calls. There was someone taking notes at all times, and the interviews were recorded on Zoom. After all of our interviews, we went through the notes and recorded videos and extracted the common themes that appeared, which we'll be talking about later.

Speaker 3: So one of the things that we originally picked up on as a main theme for our final project, as well as the documentary and the website, was this idea of how we could capture care managers' stories and anecdotes through humanities research. One of our biggest themes was how care management can help patient outcomes and patient health. Beyond that, we were looking for themes that would explain the use of artificial intelligence and algorithms in care management to our audience and also to us, since we did not have a lot of experience in that field prior to the project. Some of the other things we picked up on were, beyond the specific algorithm we studied, how AI could possibly be used in healthcare and how it's used currently. Looking at our preliminary research, we also saw a lot of instances in which AI picked up biases and exhibited unethical behavior in the settings it was trained to work in. So another of our themes was how we can make AI more ethical and what the current problems are with the algorithm used in care management. One of the things we ended up looking into was what these biases are, as well as how AI can be reformed in the future to be a better tool for care managers.

Speaker 4: So after we did our preliminary research and interviewed all of our interviewees, we began the process of deciding how we were going to disseminate and organize all the research that we had done, and we decided that a web-based documentary would be the most creative and engaging way to share all of our information. But we also wanted to create a more traditional linear-style documentary that was video-based to present to our stakeholders and also serve as an introduction to our website. Technology is an integral part of our world today, and its

Speaker 1: influence is growing exponentially. Pop culture has many references to AI taking over the world and destroying humankind. While technology might not be at this level yet, there is a pressing question regarding job security. Artificial intelligence is making leaps in all industries, especially the healthcare field. From robot-assisted surgery to data-sorting algorithms, AI is becoming more and more prevalent in medicine, and the algorithms are far from perfect.

Speaker 3: Bias is a real problem in all of AI. By the time we received the claims data from CMS

Speaker 5: and the data that they had cut in that data set, there's approximately a two- to three-month lag in that data. Machine learning hasn't quite reached that stage in terms of our trust.

Speaker 2: Is AI taking over the healthcare field for the worse?

Speaker 1: To better understand the role of AI in healthcare, we must first explore the field of care management. I have a list of patients who have an A1c

Speaker 6: between 9 and 11, and I'm calling those patients and trying to get them reconnected with their providers, specifically at Lincoln Community Health Center, because that's the clinic that I'm working through. I do an initial recruitment call, see what barriers they have, see what SDOH (social determinants of health) barriers and clinical needs they may have, and I connect them. So right now, I would say that I start off my morning by running a report within our electronic medical records, which are reports that say, oh, you need to follow up with this patient, you need to send this mail, you need to contact this provider. Then I take a look at my Excel sheet that specifies what specifically I have to do with patients, and then I do various follow-ups. As far as what I do, I do a lot of research. I do a lot of research on clinical

Speaker 7: support calls. We are given a list of random patients who are in that 70 to 100 range who maybe have not had a hospitalization or a referral, but need to be called so that we can demonstrate to Medicare that we're trying to reach as many of their patients as possible to engage them in these services. So it's almost like a solicitation call, and, you know, we have to be very quick about saying we're associated with Duke, it's a Medicare-covered service, we're not trying to get money out of them, and it's a telephone call, not a visit where you have to go out and expose yourself to COVID. The way that we know we want to engage these patients in complex care management is that they have a risk score. It's an algorithm used by the hospital in collaboration with Medicare, and the patient gets a risk score, and the risk is their risk of being re-hospitalized, or hospitalized if they've not just recently been. Those risk scores are based off of

Speaker 8: their conditions and, I think, ED admissions. The risk score is really used for the Medicare population, and Medicare does not have a role. In general, I would say they're mostly used internally, those risk scores. Our team leads are essentially the managers of the care managers; the care managers have them, and they use those to figure out which patients we should be working

Speaker 9: with. So basically what we do is our data team will stratify that list of patients for us, and it's numbers that go anywhere from 100 on down, and then what we do with that information is we basically, you know, cut the list and then distribute it, and then folks know who

Speaker 10: they're supposed to outreach to. So what we are doing is aggregating information from two data sources. We're using claims, which are essentially receipts, and EHR data, and from those two sources we are aggregating three different levels of information. We are looking at diagnoses, procedures, and medications. The model is not too complicated: it has two layers and produces 31 outputs. One of those outputs is the likelihood of overall admission, and the other 30 are likelihoods of specific admissions of interest in the MSS-PICR model, and those cover things like sepsis, heart failure, respiratory infections, mental health issues, and so on and so forth.

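To make that description concrete, here is a minimal sketch, under our own assumptions, of the pipeline the interviewees describe: a two-layer model that maps aggregated claims and EHR features (diagnoses, procedures, medications) to 31 likelihoods, one for overall admission and 30 for specific admission types, followed by the kind of 0 to 100 scoring and work-list stratification the care managers mention. Only the two-layer, 31-output shape comes from the interviews; the feature width, hidden size, activation, scoring scale, and every name in the code are hypothetical, not the team's actual implementation.

```python
# Illustrative sketch only: a two-layer risk model with 31 outputs, fed by an
# aggregated claims/EHR feature vector, plus a simple work-list stratification.
import torch
import torch.nn as nn

N_FEATURES = 2000   # hypothetical width of the aggregated diagnosis/procedure/medication vector
HIDDEN = 128        # hypothetical hidden width; not specified in the interviews
N_OUTPUTS = 31      # 1 overall admission likelihood + 30 specific admission types


class TwoLayerRiskModelSketch(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, HIDDEN),  # layer 1: aggregated features -> hidden representation
            nn.ReLU(),
            nn.Linear(HIDDEN, N_OUTPUTS),   # layer 2: hidden representation -> 31 admission logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Independent sigmoids: each of the 31 outputs is its own likelihood (multi-label setup).
        return torch.sigmoid(self.net(x))


def stratify_work_list(scores: torch.Tensor, n_care_managers: int) -> list[list[int]]:
    """Sort patient indices by descending risk score and deal them out to care managers."""
    order = torch.argsort(scores, descending=True).tolist()
    return [order[i::n_care_managers] for i in range(n_care_managers)]


if __name__ == "__main__":
    model = TwoLayerRiskModelSketch()
    patients = torch.rand(10, N_FEATURES)    # 10 hypothetical patients with random stand-in features
    probs = model(patients)                  # shape: (10, 31)
    overall = probs[:, 0]                    # likelihood of any unplanned admission
    risk_scores = (overall * 100).round()    # a 0-100 score like the one the care managers describe
    work_lists = stratify_work_list(risk_scores, n_care_managers=3)
    print(risk_scores)
    print(work_lists)
```

In a real deployment the inputs would be engineered counts or embeddings of diagnosis, procedure, and medication codes drawn from claims and EHR records rather than random vectors, and the model would be trained on observed admissions.
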
Speaker 4: We really wanted to focus on the video aspect and that way of presenting our research, mostly because all of the interviewees we talked to were really at the forefront of, and embedded in, the work we were researching. The data scientists we talked to, for example, were the ones who created the model we were researching, and the care managers we talked to shared a lot of really personal stories and anecdotes about patient interactions they had that were especially meaningful.

Speaker 11: In addition to our traditional documentary, we created a web-based documentary that allows viewers to interact more with the material. The lead-in pages include our 20-minute traditional documentary, which is intended to pique viewers' curiosity about the risk of unplanned admission model and the potential of AI in healthcare. In the next stage of the documentary, the interface is a care manager's desk that allows viewers to click on sticky notes, a tablet, and other objects to explore more about their new role as a care manager and how they should interact with the risk model. These lead to more in-depth and detailed video clips from our interviews and text about the different topics. The topics include more about care manager interactions with patients through anecdotes from our interviews, the future of AI, more about the model, and some ethical concerns. These are all similar to the topics covered in our traditional documentary. Our intent was to allow viewers to learn more about things that they found interesting but that were too technical or too lengthy for the traditional documentary. This includes more information about the architecture of the risk of unplanned admission model. The web documentary will be a long-term project and will continue to be updated by next

Speaker 4: year's StoryPlus team. Having transitioned into an online format, there was that additional layer of all of us conducting our research over the web and, you know, doing the documentary production and the website setup over Zoom as well. It was interesting to see because there was so much collaboration and high engagement and interaction. And so we definitely learned a lot from every

Speaker 2: single person, and our understanding of AI has changed a lot. Talking to different people in

Speaker 3: interviews, like physicians, nurses, and care managers, they all had these different visions of what AI could do in their fields in the future to help patients. Our interactions with care managers

Speaker 11: and data scientists really showed me how machine learning has the potential to make medicine more personal and give providers more time to spend with their patients. I've come to see how AI is such a

Speaker 1: great tool in the healthcare field and how it complements all of the care managers and healthcare professionals. This project has also helped to dispel any fears about AI taking over the healthcare field and doctors becoming automated, because I've realized that AI is just a tool and will always be a tool, because it cannot replace human interaction.
