Enhancing Patient Experience: Strategies and Implications for Healthcare Providers
Explore the importance of patient satisfaction, measurement methods, and initiatives at ChristianaCare and Cleveland Clinic to improve hospital care experiences.
Using Data to Measure and Improve Patient Satisfaction with Hospital Care
Added on 09/28/2024
Speaker 1: Good morning, my name is Samir Thaker, I'm one of the internal medicine residents here at ChristianaCare. And I'd like to talk to you today about patient experience and what we can do to provide our patients with a better experience in the hospital. Now patient experience metrics have traditionally played second fiddle to process and outcome measures like door-to-balloon time and 30-day survival. In part, this has been due to the difficulty of measuring patient satisfaction in a useful way. Given the amount of work it takes to track even straightforward events like pre-op antibiotic administration, it's not hard to understand why clinicians have been reluctant to attempt measuring a much slipperier concept like patient experience. I'd like to make the case today, however, that patient experience deserves our serious attention. I'll review some basics of how patient satisfaction is commonly measured. I'll also describe two initiatives I've been involved with to improve patient satisfaction, one here at ChristianaCare and another at Cleveland Clinic. And finally, I'll touch on the implications the evolving science of patient satisfaction has for our clinical practices. Now as we know from decades of health services research, and for many in this room from personal experience, substantial numbers of patients report low satisfaction with hospital care, even at institutions where quality is otherwise excellent. Here you see one example of this divergence at our own hospital. Now I think we'd all agree that improving patient satisfaction is a laudable goal in the abstract. But there are some specific reasons why I think it's imperative that we as a profession begin paying serious attention to patients' experience and working to improve it. On the clinical side, studies have shown that satisfied patients are more likely to adhere to treatment recommendations.
And for some conditions, we have evidence that greater patient satisfaction correlates with better health outcomes. Financially, we know that patients who report greater satisfaction are more likely to stick with their physician and the health system in which that physician works. Satisfied patients are dramatically less likely to bring malpractice suits against their caregivers. And as we move into a world of value-based purchasing, where reimbursement is tied to outcomes, physicians' incomes are going to be directly affected by what their patients say about satisfaction. Finally, certifying bodies like the American Board of Medical Specialties and the ABIM are increasingly going to include patient satisfaction among the skills needed for board certification. Now for years, there have been patient satisfaction surveys developed by academic researchers and private vendors. These surveys varied widely in quality, and the results they produced were often limited by small sample sizes. To better track and improve patient experience, the Centers for Medicare and Medicaid Services developed the Hospital Consumer Assessment of Healthcare Providers and Systems Survey, a mouthful more commonly known as HCAHPS. The design of the survey followed a rigorous multi-year process guided by survey design experts at the RAND Corporation, RTI International, and Harvard Medical School. And while measuring satisfaction is far from an exact science, these groups made every effort to use best practices in the field. The result of their work is a 27-item survey, again known as HCAHPS, that's mailed to a random sample of patients after discharge. It asks patients to rate things like communication with staff, pain control, and overall willingness to recommend their hospital to others. The way the survey is fielded is standardized across all U.S. hospitals, and CMS regularly audits groups that collect and manage HCAHPS data to encourage compliance.
Response rates on the survey range from about 25 to 40 percent nationally. The response scale for these two questions is typical of the HCAHPS survey, asking patients to either rate the frequency with which a certain event happened, or asking patients to rate their level of agreement with a certain statement. The survey also asks patients to rate hospital staff on things like courtesy and their ability to explain things in an understandable fashion. Finally, HCAHPS asks patients to rate their overall hospital experience, once on a zero to 10 scale, and then in terms of how likely they'd be to recommend the hospital to friends and family. One important thing to note about these questions is that CMS sets a high bar for performance, counting only what are called top box scores. What that means is that for these questions, the only responses that count are nines and tens for the question on the left, and a definitely yes for the likelihood to recommend question on the right. Now, HCAHPS began as a demonstration project and then moved to a voluntary tool that hospitals were encouraged to use. For the past several years, hospitals have been required to participate in the HCAHPS program and report their scores publicly to receive full Medicare reimbursement. Finally, in 2013, payment for inpatient care will be tied to performance on the survey. Here you see the Hospital Compare website run by CMS, where visitors can plug in their zip code and compare HCAHPS results alongside things like central line infection rates and readmission statistics for hospitals across the country. In addition to the nudge of public reporting, CMS is also pushing hospitals to take patient satisfaction seriously by attaching financial consequences to poor performance. As part of the Affordable Care Act, a portion of each hospital's Medicare reimbursement is tied to meeting certain process and outcome benchmarks. 
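The top-box arithmetic described here is simple enough to sketch in a few lines. The survey items and top-box thresholds are as described in the talk; the response data below are invented purely for illustration, not real survey results.

```python
# Hypothetical illustration of HCAHPS "top box" scoring: for the 0-10
# overall-rating item only 9s and 10s count, and for the recommend item
# only "definitely yes" counts.

def top_box_rate(responses, top_values):
    """Fraction of responses falling in the top-box categories."""
    hits = sum(1 for r in responses if r in top_values)
    return hits / len(responses)

# Simulated discharge-survey responses (made up, not hospital data).
overall_ratings = [10, 9, 7, 8, 10, 6, 9, 10, 5, 9]
recommend = ["definitely yes", "probably yes", "definitely yes",
             "probably no", "definitely yes"]

overall_top_box = top_box_rate(overall_ratings, {9, 10})
recommend_top_box = top_box_rate(recommend, {"definitely yes"})

print(f"Overall rating top box: {overall_top_box:.0%}")
print(f"Recommend top box:      {recommend_top_box:.0%}")
```

Note how strict the top-box standard is: a patient who rates the hospital an 8 out of 10 counts the same as one who rates it a 2.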
Seventy percent of the performance score that guides reimbursement is based on clinical measures like the number of heart failure patients that receive a beta blocker on discharge. The remaining thirty percent is based on HCAHPS patient satisfaction scores. Now, currently, the portion of Medicare revenue at stake is relatively small, but over time this fraction is scheduled to grow, and it's expected that private insurers will start basing their payments on HCAHPS performance as well. Not surprisingly, these incentives have made even those who remain skeptical about tracking patient satisfaction interested in learning how to improve their scores on the HCAHPS survey. Now, I've described for you a few reasons why patient experience is worth paying attention to, the tools currently being used to measure satisfaction outcomes, and the impact these measurements will likely have on hospitals' reputations and finances. Let's next turn to what hospitals can do to improve their patient satisfaction, and I'll start by discussing a project I've been working on here at ChristianaCare for the past year. Now, our broad goal was to identify an aspect of the patient experience at ChristianaCare that needed improvement, to design a targeted intervention, and track our progress toward improving satisfaction scores, all using HCAHPS data as part of the kind of plan-do-study-act cycle commonly used for other types of quality improvement work. We decided early on that we would focus our efforts on one or a small handful of nursing units. Our hope was that if we could demonstrate improvement in satisfaction on one unit, our work could serve as a template for other units throughout the hospital. And so, we began by looking through hospital-wide data to find nursing units that would be a good fit for this project. Specifically, we looked for units where overall satisfaction, the measure tied to value-based purchasing incentives, was relatively low.
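As a back-of-the-envelope illustration of that 70/30 weighting, assuming hypothetical domain scores (the real program's scoring rules are more involved):

```python
# Toy value-based purchasing composite: 70% clinical process measures,
# 30% HCAHPS patient experience. Both domain scores below are made up.

clinical_score = 0.82  # composite of clinical process measures (hypothetical)
hcahps_score = 0.64    # patient-experience domain score (hypothetical)

total = 0.70 * clinical_score + 0.30 * hcahps_score
print(f"Total performance score: {total:.3f}")
```

Even with the smaller weight, a weak HCAHPS domain drags the composite down noticeably, which is why the incentive gets hospitals' attention.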
We focused on units where there were enough beds and patient turnover to give us a meaningfully large sample, and we sought out units where staff had expressed an interest in improving patient satisfaction. One of the units where we found all this was on the 3B, 3C postpartum ward. As you see here, the unit had mediocre overall satisfaction scores compared to other parts of the hospital. The number of patients the unit sees is relatively large, and perhaps most importantly, the clinical and administrative staff on these units were eager to improve their satisfaction scores. Now, I mentioned that our goal was to improve the unit's overall satisfaction rating. That meant that we needed to increase the proportion of patients rating us a 9 or 10 on the HCAHPS question shown here. But overall satisfaction is a pretty nebulous concept. If patients tell us their hospital stay was a 4 or a 7 overall, it's not clear from that number alone what we ought to do differently to improve the experience of future patients. By looking at how patients rate specific components of their experience on the HCAHPS survey and assessing how those subscores correlate with the overall rating, we could get a better sense of where to focus our efforts. On this chart are our target units' performance on various HCAHPS questions laid out along two dimensions. Along the x-axis, questions are arranged based on room for improvement, with questions where the unit scored highest on the left and questions where we scored lowest, where we had the most room to improve on the right. Along the y-axis, questions are arranged according to how closely a response on that question correlates with the patient's overall rating of the hospital. 
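The prioritization chart described above boils down to two numbers per survey item: room for improvement, and correlation with the overall rating. Here's a minimal sketch of that calculation with made-up per-patient scores, not the unit's actual data.

```python
# For each HCAHPS item, compute "room for improvement" (distance from a
# perfect score) and the Pearson correlation with the overall rating.
# Items in the high-room, high-correlation corner are natural targets.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up per-patient 0-10 scores (one row per survey item).
overall         = [9, 6, 10, 7, 8, 9, 5, 10]
nurse_listening = [8, 5, 9, 6, 7, 9, 4, 10]
quiet_at_night  = [6, 7, 5, 8, 6, 7, 8, 5]

for name, scores in [("nurse listening", nurse_listening),
                     ("quiet at night", quiet_at_night)]:
    room = 100 - 10 * (sum(scores) / len(scores))  # crude "room to improve"
    corr = pearson(scores, overall)
    print(f"{name:16s} room={room:5.1f}  corr with overall={corr:+.2f}")
```

In this toy data, nurse listening correlates strongly with the overall rating while quiet-at-night does not, so listening would land in the actionable corner of the chart.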
By laying out our responses like this, we can limit our attention to areas where the unit has significant room for improvement, those points on the right of the chart, and to questions that positively correlate with overall satisfaction, those questions on the top of the chart. For the 3B3C nursing unit, it appeared that nurse listening and nurse courtesy ratings fit these criteria. Now we took a closer look at our unit scores on these two questions by comparing 3B3C's performance with some peer institutions. These assessments left us feeling pretty confident that we should focus on the nurse listening and courtesy ratings. Patients were rating listening and courtesy lower than other aspects of their hospital stay as shown in the previous chart. These were areas where we were lagging behind our peers, and performance on these questions appeared linked to our primary outcome of overall satisfaction. Our next step was to assemble a team and design our intervention. The satisfaction guidance group we gathered included a range of stakeholders, including nurses, administrators, and representatives from marketing and the hospital-wide patient experience office. We decided early on to develop an intervention that would be quick and straightforward to implement. This worked well for me since I didn't have any funds earmarked for the project, and I was hoping to get some data in time for today's presentation. But in all seriousness, I think it also reflected a sense that improvement efforts can be exhausting. And while we were committed to improving patient satisfaction, we wanted to try to do it without the meetings, lapel pins, and other fanfare that often go along with big quality improvement initiatives. We ultimately decided on a bundle of interventions that are listed here. These included brief education sessions during regular staff meetings where nurse managers introduced the project. 
We distributed HCAHPS surveys and asked nurses to respond as if they were a patient who'd recently been discharged from the unit. We also put together some suggestions for how nurses could demonstrate to patients that they were actively listening during bedside rounds. And we included a message on the bedside television system with examples of changes Christiana had implemented based on patient feedback. You'll note that we didn't directly ask nurses to do more listening. In fact, our sense was that nurses were constantly listening to patients and doing so with a lot of attention. The problem was that while they were listening, the nurses might also be changing IV bags or checking vitals. And as a result, patients got the impression that they weren't always being heard. So the bulk of our efforts here really focused on demonstrating to patients that nurses were paying close attention to their needs. Now, we implemented these changes toward the end of January 2012, and the data we've gathered so far show some promising results. You'll see on the left that nurse listening and nurse courtesy ratings rose from around 70% in January to 82% and 92% respectively in March. Overall satisfaction on the unit increased from 58% to 78%, and that's a statistically significant improvement at the 0.10 level. Now, I don't think we can take credit for this entire improvement. We used a simple study design that doesn't account for the many other changes that were going on in the hospital. And we'll have to wait and see if these gains hold up over time. I do, however, think it's fair to say that we accomplished what we set out to do, which was to show that by using existing HCAHPS data and the same kinds of data-driven quality improvement techniques we applied to problems like central line infections, we could meaningfully influence patient satisfaction scores. 
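For those curious how a pre/post top-box comparison like this can be tested for significance, a common approach is a two-proportion z-test. The sketch below uses invented survey counts; the unit's actual sample sizes and test may have differed.

```python
# Two-proportion z-test for a change in top-box rate between two periods.
# Counts below are hypothetical, chosen to mirror the reported 58% -> 78%.

from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """z statistic and two-sided p-value for a difference in proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Normal CDF via the error function: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: top-box responses out of returned surveys.
z, p = two_proportion_z(x1=29, n1=50, x2=39, n2=50)  # 58% -> 78%
print(f"z = {z:.2f}, p = {p:.3f}")
```

With small monthly survey returns, wide confidence intervals are the norm, which is one reason a relatively permissive threshold like 0.10 sometimes gets used in QI work.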
Beyond that, I think we also showed that this kind of intervention can be done in a way that places minimal additional burden on frontline nursing staff, and that by bringing together a team of interested stakeholders, we can lay the foundation for future satisfaction improvement work. Now, I'd like to shift gears a bit and talk about a second patient experience project I worked on, this one at Cleveland Clinic. This project was motivated by the same concerns as the work I just described. A substantial number of the hospital's patients were dissatisfied with their experience. The institution knew this dissatisfaction was going to affect its reputation and reimbursement, yet it wasn't clear what needed to change in order to improve satisfaction scores. We knew that certain HCAHPS questions were correlated with overall satisfaction. What we didn't know, and what I don't think anyone really knows yet, is what drives variation in how patients respond to the individual HCAHPS questions. For example, if a patient needed to move rooms five times during a hospital stay, does that influence satisfaction? Many patients say their pain isn't adequately controlled in the hospital. Are there certain procedures where we do pain control particularly well or poorly? Do patients from urban or rural areas have different expectations about noise at night? What about older patients or those here for elective surgeries? We realized that by combining HCAHPS survey results with databases that detailed other aspects of a patient's hospital stay, we might be able to put together a more complete picture of what drives patient satisfaction. Our project matched patient-level HCAHPS survey responses with clinical and administrative records for over 18,000 hospital discharges across a two-year period. We hypothesized that four broad sets of factors would influence satisfaction ratings.
Patient characteristics such as race and insurance coverage, physician characteristics such as years in practice and percent time devoted to clinical activity, nursing unit characteristics such as average daily census and skill mix, and healthcare processes that occur during the hospital stay such as use of restraints and time spent in the ICU. Our primary outcome for the study was overall satisfaction. We also, however, looked at the effect of our clinical and administrative variables on different HCAHPS subdomain scores. And it's one of these subdomains, the physician communication rating, that I want to review in more detail today. Now, the physician communication score is a composite of three questions that ask how patients feel about their interactions with doctors during their hospital stay. You'll notice that the question stem mirrors the nursing communication questions I described earlier. Physician communication was an area that the hospital had struggled with over several years and where we'd seen little improvement despite various initiatives. Our analysis pointed to several interesting correlations and possible targets for intervention. First, let's look at physician characteristics and how they relate to patients' ratings of their doctor's communication. Let me explain how to read this chart. In green are factors correlated with higher physician communication ratings. In red are factors correlated with lower physician communication ratings. And in gray are factors with no significant correlation to the physician communication score. One interesting finding is that patients tend to rate their doctor's communication skills more favorably when their attending physician is older. The relationship isn't strictly linear, though: communication scores peak for physicians aged 40 to 55 and appear lower for both younger and older attending physicians. Importantly, we find that physician characteristics explain only a part of the variation in patients' ratings of their doctor's communication skills.
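The data-matching step behind this analysis amounts to joining patient-level survey responses to stay-level clinical and administrative records on a discharge identifier. A simplified sketch, with hypothetical field names and invented records:

```python
# Join HCAHPS survey responses to administrative stay records so that
# satisfaction can be modeled against stay characteristics. All IDs,
# fields, and values below are hypothetical.

surveys = [
    {"discharge_id": "D001", "overall": 9, "md_communication": 10},
    {"discharge_id": "D002", "overall": 6, "md_communication": 5},
    {"discharge_id": "D003", "overall": 10, "md_communication": 9},
]
admin = {
    "D001": {"icu_hours": 0, "room_moves": 1, "attending_age": 48},
    "D002": {"icu_hours": 36, "room_moves": 5, "attending_age": 33},
    "D003": {"icu_hours": 0, "room_moves": 0, "attending_age": 51},
}

# Inner join on discharge ID; surveys without a matching record are dropped.
linked = [{**s, **admin[s["discharge_id"]]}
          for s in surveys if s["discharge_id"] in admin]

for row in linked:
    print(row["discharge_id"], row["md_communication"], row["room_moves"])
```

Once linked, each row carries both the satisfaction outcomes and the candidate predictors, which is the shape needed for the kind of multivariable analysis described here.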
Here we see a number of patient characteristics that, as in the earlier slide, show positive, negative, or no correlation with the physician communication rating. A handful of these variables have been previously characterized in the satisfaction literature. For example, patients with higher education levels are more likely to give their physicians a negative rating on communication skills than patients with less education. Other factors, however, such as the relationship between insurance, marital status, and religious preference are new in this study and point to subpopulations who might benefit from increased time and attention, either from physicians or other healthcare providers during their stay. Different patients, these data suggest, have substantially different expectations about physician communication. In this chart, for example, we see that a 65-year-old male is nearly 20 percent more likely to give their attending physician high marks for communication than a 20-year-old female. And this is after controlling for the many potentially confounding factors I've shown on previous slides. If we look at some common procedures and events that patients experience during hospitalization, we can see other interesting correlations. Some factors associated with lower physician communication ratings stick out as potential low-hanging fruit. Patients who undergo abdominal paracentesis, or GI imaging such as a colonoscopy, for example, seem to rank their doctor's communication skills significantly lower than others. Might an additional focus on communication skills by the teams involved in these procedures improve satisfaction? That's not a question these data can answer by themselves, but it is the type of question that could be tested experimentally and evaluated using HCAHPS as an outcome measure.
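Subgroup comparisons like these start from stratified top-box rates within patient groups, before any regression adjustment. A toy sketch of that bookkeeping, with invented groups and responses:

```python
# Stratified top-box rates for the physician-communication item by patient
# subgroup. Groups and responses are fabricated for illustration only.

from collections import defaultdict

# (subgroup, gave_top_box) pairs for the physician-communication item.
responses = [
    ("65+ male", True), ("65+ male", True), ("65+ male", False),
    ("20s female", True), ("20s female", False), ("20s female", False),
]

counts = defaultdict(lambda: [0, 0])  # subgroup -> [top_box, total]
for group, top in responses:
    counts[group][0] += int(top)
    counts[group][1] += 1

for group, (top, total) in counts.items():
    print(f"{group:12s} top-box rate = {top / total:.0%}")
```

The raw stratified rates only suggest a pattern; confirming a difference of the kind described in the talk requires adjusting for the confounders mentioned on the earlier slides.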
Looking ahead, I think it's almost certain that interest in patient satisfaction will increase, as will the need for hospitals to improve on measures like HCAHPS. In the coming years, we can also expect to begin seeing surveys like HCAHPS being given to patients in different healthcare settings. In fact, pilot testing has already begun on a tool called CG-CAHPS that measures patient satisfaction with outpatient care. Of course, it's easy to be cynical about these kinds of measures, yet surely we can all think back on someone we've cared for who, even though they received the appropriate clinical interventions, was disappointed with the way their care was delivered. Whether that was a child who was stuck four times over the course of a day because we didn't think to consolidate lab draws, or an elderly patient rushed out of the hospital before we fully explained how their medication list had changed. My point isn't that we should feel bad about ourselves for these failures. It's that we have tools and techniques to begin doing better. Medicine once assumed that central line infections were going to affect some portion of patients, regardless of what we did. We now know, though, that with thoughtfulness and vigilance, we can come close to eliminating these adverse events. I encourage you to start thinking about patient satisfaction the same way. It's difficult to understand what drives patient satisfaction, and harder still to consistently deliver an outstanding patient experience. But it's our duty to strive for that goal, and with tools like HCAHPS, we can begin to make progress. And whether you buy that or not, Medicare is going to take a chunk of our income if we don't do better on these measures, so it's worth paying attention. I'd like to thank the Women's and Children's Satisfaction Guidance Team for their work on this project, especially Sherry Monson and Michelle Schiavone, without whose support this never would have gotten off the ground.
And I should mention that the HCAHPS project I described earlier at ChristianaCare is just one of many initiatives this group is working on. In the coming months, I think you'll see some great things come out of their efforts, and hopefully their work and ideas can serve as a model for satisfaction improvement initiatives throughout our healthcare system.
