Understanding Meta-Analyses and Systematic Reviews in Evidence-Based Medicine
Learn about the importance, construction, and evaluation of meta-analyses and systematic reviews, the strongest forms of evidence in medical research.

Speaker 1: Hi. In this video, we're going to talk about meta-analyses and systematic reviews. Systematic reviews sit atop the evidence-based medicine pyramid you can see here because they represent the strongest form of evidence we have. They're constructed by taking the individual studies we looked at before and combining them all into one review.

Why would you want to do a systematic review? Well, perhaps you have a complex issue with many different papers written about it, and you want to look at all of them in one place and come up with conclusions. Or perhaps you want to increase precision: each individual paper may have a small sample size and therefore poor precision, meaning its confidence intervals are huge. By pooling a bunch of studies together, you get a bigger combined sample size and better precision. Or perhaps there are discrepancies in the literature: some studies favor a particular therapy and others argue against it, so you want to take a definitive look at all of them and come to a conclusion.

Now, there are different ways to produce these overviews. The first is called a narrative review. In a narrative review, an author simply picks a set of papers, reads and reviews them, and then presents their conclusions. These are pretty good. They cover multiple studies, which is better than reading one paper, but there's no defined process by which the papers are selected, so bias can creep in. Why did the author pick these particular papers? Perhaps they picked only the ones in line with their own thinking, the ones that supported their conclusions. Narrative reviews are often good for answering background questions and for a brief overview, but they're definitely prone to bias.

In a systematic review, you don't just pick any articles; you have a system by which you pick them. That's why it's called systematic. You search the literature in a systematic way, so anyone who followed the same process would retrieve the same set of articles. Then there's a systematic way to evaluate the individual articles, and a systematic way to combine them into one set of conclusions. A meta-analysis is a subset of systematic reviews in which you take the data from each individual study, pool them all together, and analyze the pooled data.

Before we look at how to evaluate a systematic review, let's look at how you would conduct one; it will make the evaluation make more sense. Say an author has a particular question about the literature. That question needs to be formulated in a nice, tight, focused way, and the best way to define it is the PICO method: the particular Patients, the Intervention, the Comparison, and the Outcome. We need to be sure we're asking a focused question. You can't ask, "Do all treatments for myocardial infarction (MI) help?" You need to ask whether outcomes differ between, say, cardiac catheterization versus tPA versus medical treatment. But you also don't want it too focused; you can't say, "I'm only looking at 47-year-old left-handed people who have a heart attack." The question has to be generalizable, so you want one that is neither too broad nor too narrow: a Goldilocks question.
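To make the PICO structure concrete, here is a minimal sketch in Python. The `PICOQuestion` class and the example values are purely illustrative inventions for this transcript, not part of any real appraisal tool.

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    patients: str      # P: the population being studied
    intervention: str  # I: the treatment under review
    comparison: str    # C: what it is measured against
    outcome: str       # O: the endpoint that matters

    def as_sentence(self) -> str:
        return (f"In {self.patients}, does {self.intervention} "
                f"compared with {self.comparison} improve {self.outcome}?")

# A "Goldilocks" question: focused, but still generalizable.
question = PICOQuestion(
    patients="adults with acute myocardial infarction",
    intervention="cardiac catheterization",
    comparison="thrombolysis with tPA",
    outcome="30-day mortality",
)
print(question.as_sentence())
```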
The other thing you need to do is identify your inclusion and exclusion criteria from the start. If you're going to search the literature, you need to know which articles you'll consider. Are you going to include older people? Maybe you don't want patients who are 75 or older, or patients who are 18 and younger, or maybe you don't want studies from outside the United States, or maybe you do. But you need to decide before you go searching the literature, because deciding up front is one way to avoid bias.

The next step is to conduct the literature search. I've depicted the literature here as a big cloud full of papers, and we want to look at as many of those papers as possible. Where will most of us go first? PubMed, to search Medline. So Medline is the first place to look, but other databases exist. Embase, for example, is the European counterpart to Medline. There's also the Cochrane Library, the database of systematic reviews maintained by the Cochrane Collaboration, and that's another great place to look.

But what about studies that aren't in any of those databases? How are you going to find those? Perhaps there is work in smaller journals that aren't indexed in these catalogs, or even unpublished research. Maybe a study showed no effect; negative studies tend not to get published, so perhaps it's just sitting in a researcher's desk drawer somewhere, and we'd like to include it. Maybe some trials are going on right now and haven't been published yet. Or maybe there are abstracts that have only been presented at conferences. There's a lot more information we want to look at, so a full literature search has to cover all of these places. How do you find studies the databases don't list? The best thing we can do is ask an expert. When you search Medline and find the one author whose name seems to appear on all the papers, ask them: do you know of any other studies, perhaps in a smaller journal, or others that were never published? They can help you find more. You can also look through the references of the articles you pulled, which broadens the list you're going to examine.

Now that you've defined your databases, you also have to define the search terms you'll use, run your literature search, and then go ahead and pull the articles.
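As a concrete illustration of the database-search step, here is a minimal sketch that queries PubMed through NCBI's public E-utilities service. The `esearch.fcgi` endpoint and its `db`, `term`, `retmax`, and `retmode` parameters are real, documented E-utilities parameters, but the query string below is just an invented example; a real review would use a prespecified search strategy and record it verbatim.

```python
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term: str, retmax: int = 20) -> list[str]:
    """Return a list of PubMed IDs (PMIDs) matching a search term."""
    params = urllib.parse.urlencode({
        "db": "pubmed",    # search the Medline/PubMed database
        "term": term,
        "retmax": retmax,  # cap the number of IDs returned
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# Illustrative query only; field tags like [MeSH Terms] are standard PubMed syntax.
ids = search_pubmed('"myocardial infarction"[MeSH Terms] AND thrombolysis')
print(len(ids), "records:", ids)
```

The returned PMIDs would then be fed to the screening step described next, and the same search would be rerun against other databases such as Embase and the Cochrane Library.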
So after you supply the search terms, you get a batch of articles. You start by reading the abstracts and applying the inclusion and exclusion criteria. Say you decided to include only articles conducted in the United States and published in the year 2000 or later. One article is from 1996, so you get rid of it. Another is from 2003, but it was done in Canada, so you get rid of that one too. And so you work through them: keep that one, drop that one, keep this one.

But how do we know that a single screener isn't introducing bias? The trick is to have multiple reviewers. Two people each screen the same article. If both say keep it, you keep it; if both say get rid of it, you get rid of it. But what if there's disagreement, one says keep and one says drop? Then a third person acts as a tiebreaker and decides whether the article stays.

So we pull the articles, read the abstracts, and apply the inclusion and exclusion criteria. The next step is to actually read the articles. You go back to the ones that made it past the first pass, read them in full, and decide which ones continue to meet the inclusion and exclusion criteria, again using the multiple-reviewers method: if two people agree, keep it; if not, the tiebreaker decides.

Once everyone has agreed on the final set of articles, it's time to extract the data, usually into a spreadsheet. The columns might be the type of study, the sample size, the outcome, the patient population, the demographics, and so on, and this information is usually presented as a table in the review as well. Again, you want multiple people doing the extraction so that it's consistent, so we know no single person is introducing bias, and so there is demonstrable inter-rater agreement.
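The transcript mentions inter-rater agreement without naming a statistic; a common choice is Cohen's kappa, which corrects raw agreement between two screeners for the agreement they would reach by chance. Here is a minimal sketch under that assumption; the screening decisions are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of articles the two screeners agree on.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(freq_a[k] * freq_b[k] for k in freq_a) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical keep/drop decisions for eight abstracts.
reviewer_1 = ["keep", "keep", "drop", "keep", "drop", "drop", "keep", "drop"]
reviewer_2 = ["keep", "drop", "drop", "keep", "drop", "keep", "keep", "drop"]
print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")
# The two disagreements (abstracts 2 and 6) would go to the third, tiebreaking reviewer.
```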
Then the final step is to actually conduct the analysis. What I've drawn here is called a forest plot: each row shows the point estimate and confidence interval for one of the articles we pulled. You look at these, make your judgments, and draw your conclusions. And that is how you complete a systematic review.

Now let's look at how to evaluate a systematic review. The process we'll use comes from this book, the JAMAevidence Users' Guides to the Medical Literature. It's a commonly used framework for appraising all kinds of articles: observational studies, clinical trials, and so on. Three questions are always asked: Are the results valid? What are the results? And how can I apply the results to patient care? Let's take each one quickly as it applies to systematic reviews.

The first question asks: are the results valid? We can answer that with several sub-questions, which I've listed here. Did they ask a sensible question? Remember, we want to know whether the question is too broad or too narrow; look at the biology of the process at hand. If the review is about antiplatelet agents for stroke, some might say you should look at all antiplatelet agents. But is aspirin the same as Plavix (clopidogrel)? Others might say they're close enough. Make sure the physiology holds together and the question actually makes sense.

Then you want to know that an exhaustive search was done, that they looked in all the possible places we talked about: not just the published literature but the unpublished literature. If you look only at published work, you'll see mostly positive studies, your results will be overly optimistic, and you'll overestimate the effectiveness of the intervention. Next, were the included studies of good quality? If bad-quality studies go into the systematic review, a bad-quality systematic review comes out: garbage in, garbage out. There are all kinds of scoring systems for this; search for a scoring checklist for randomized controlled trials or for observational studies and you'll find them. The review should state the method it used to score the studies; it should have a systematic way. Remember, it's a systematic review. Finally, were the assessments reproducible? Again, that means multiple reviewers: one reviewer rates a study's quality, a second does the same, and if they disagree, a third breaks the tie, all of them using that same checklist. That gives a reproducible, systematic process.

The second question asks: what are the results? Again, there are about three things to look at. We can look at heterogeneity, which I'll explain in a second; at the overall results of the review and how they pooled them; and at precision. First, heterogeneity. Heterogeneity means difference: homogeneous means everything is the same, heterogeneous means the studies differ. This is where we look at forest plots. Say we're looking at a risk ratio. For ratios, the point of no difference is 1, because a one-to-one ratio means no difference between the two groups. So we plot each study's result: the point estimate here, with its confidence interval extending around it. Bigger studies get a bigger dot, and bigger studies tend to have narrower confidence intervals, while a small study will usually have a much wider one. That's what a forest plot looks like. I've heard it said that if you turn the plot on its side it looks like a forest; others attribute the name to a researcher named Forrest, and the true origin is debated. Now, a heterogeneous systematic review means the results are not the same; the data are all over the place, and that's exactly what we see in this example: some estimates sit over here, some over there, and nothing really lines up. A less heterogeneous review might look something like this: everything falls in roughly the same area, and almost all of the point estimates are on the same side of the point of no difference. The confidence intervals might differ in length depending on sample size, but they all cluster together. That would be homogeneous.
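To show what that plot construction looks like in practice, here is a minimal forest-plot sketch using matplotlib. The five risk ratios, confidence intervals, and sample sizes are invented purely to illustrate the layout; real reviews typically use dedicated meta-analysis packages rather than hand-rolled plots.

```python
import matplotlib.pyplot as plt

# Hypothetical studies: (label, risk ratio, CI low, CI high, sample size).
studies = [
    ("Study A", 0.80, 0.65, 0.98, 1200),
    ("Study B", 0.72, 0.50, 1.05,  400),
    ("Study C", 0.90, 0.70, 1.15,  800),
    ("Study D", 0.60, 0.35, 1.02,  150),
    ("Study E", 0.85, 0.74, 0.97, 2500),
]

fig, ax = plt.subplots()
for row, (label, rr, lo, hi, n) in enumerate(studies):
    ax.plot([lo, hi], [row, row], color="black")                  # confidence interval
    ax.plot(rr, row, "s", color="black", markersize=n**0.5 / 5)   # bigger study, bigger marker
ax.axvline(1.0, linestyle="--", color="gray")  # point of no difference for a ratio
ax.set_xscale("log")                           # ratios are conventionally plotted on a log axis
ax.set_yticks(range(len(studies)))
ax.set_yticklabels([s[0] for s in studies])
ax.invert_yaxis()                              # first study at the top, as in published plots
ax.set_xlabel("Risk ratio (log scale)")
plt.tight_layout()
plt.show()
```

In this made-up data, every point estimate falls below 1 and the intervals overlap heavily, so eyeballing it would suggest a fairly homogeneous set of results.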
Now, there are ways of calculating a number for how homogeneous or heterogeneous the results are, but you can also roughly eyeball it from the forest plot. And if there is heterogeneity, the review needs to explain why. What accounts for the differences? Were the patients different, say, old patients in these studies and young patients in those? Were the interventions different, maybe tPA here and streptokinase there? Was the methodology different, randomized controlled trials here and observational studies there? There had better be some explanation.

The last thing to look at under this question is precision, and one way to gauge that is how much the confidence intervals overlap. And in meta-analyses especially, more than in systematic reviews, because that's where the data are actually pooled, you want to look at how the overall result was computed. Did they count a study with many patients, the one with the big dot, the same as the one with the tiny little dot? Or did they weight the results, meaning that bigger studies had more weight than smaller studies?
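The weighting described here is usually done by inverse-variance weighting, and the "number for heterogeneity" the talk alludes to is commonly Cochran's Q together with the I² statistic. Here is a minimal fixed-effect sketch under those assumptions, working on log risk ratios with invented illustrative values.

```python
import math

# Hypothetical per-study log risk ratios and their standard errors.
log_rr = [math.log(0.80), math.log(0.72), math.log(0.90), math.log(0.85)]
se     = [0.10, 0.19, 0.13, 0.07]

# Fixed-effect inverse-variance weights: more precise (bigger) studies count more.
weights = [1 / s**2 for s in se]
pooled = sum(w * y for w, y in zip(weights, log_rr)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Cochran's Q: weighted squared deviations of each study from the pooled estimate.
q = sum(w * (y - pooled)**2 for w, y in zip(weights, log_rr))
df = len(log_rr) - 1
# I^2: the share of variability beyond what chance alone would explain.
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

lo = math.exp(pooled - 1.96 * pooled_se)
hi = math.exp(pooled + 1.96 * pooled_se)
print(f"Pooled RR = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"Q = {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

Note how the study with the smallest standard error dominates the pooled estimate, which is exactly the weighting behavior the talk says to check for.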
Okay, let's move to the third question: how can I apply these results to patient care? Again, there are several sub-questions: do the patients in the systematic review match my patients, did the review look at all possible outcomes, and did it weigh the costs and benefits? The patients included in the systematic review should be approximately like yours. A systematic review done in, say, Singapore may not apply in San Francisco or San Antonio. Maybe it will, but you need to make that judgment: are these patients similar enough? You also want to see whether all the important outcomes were examined. Systematic reviews frequently don't report the adverse events that occurred, because each study measures them differently. The review may describe how well a drug performed, but perhaps the drug caused harm, and one study tracked death, another tracked pain, another tracked cost, so the negative outcomes weren't reported consistently and were hard to compare. In the end, you really want to know whether the benefits are worth the costs and the potential risks, and the review should comment on this.

So those are the three questions, each made up of many sub-questions, that you'll use to appraise a systematic review. If you do an internet search for the Centre for Evidence-Based Medicine, which I believe is at the University of Oxford, you'll find a set of critical appraisal tools, essentially checklists that step you through this whole process, including one specifically for systematic reviews. Be sure to check them out; there are a lot of great resources at cebm.net.

That ends this talk on meta-analyses and systematic reviews. Remember, they sit atop the evidence-based medicine pyramid, so they're very powerful tools. If you can find them, go for them, and just be sure you know how to evaluate them. Alright, see you in the next video. Bye.
