[00:00:00] Speaker 1: Systematic reviews are the best first paper to do as an early stage researcher, and in this week's Fast Track Live, I'm going to make the case to you for why that is. You'll remember if you joined us last week that we delved into whether the PhD system is broken, and you'll remember the core idea I kept coming back to: that yes, the PhD system is broadly failing at the moment, because the conditions required to support its inherent apprenticeship model have slowly eroded over time. And that's left too many researchers without the support they need, feeling lost and forced to figure it out for themselves. So today, I want to go over why a systematic review is a specific decision you can take early in your career that dramatically reduces that risk. I recommend it as a first serious paper, especially for PhD students or clinicians coming into research. And today, I'm not going to go through how to do these. That's for another day: I've got a full playlist on my channel that you can check out, and it takes you step by step through a systematic review challenge, start to finish, in 90 days or less. So we're not going to do any PICO walkthroughs or PRISMA boxes, although at the end of the session, as ever, I'm going to take the questions that you sent this week. And if you would like to participate in these sessions and get my personal feedback on your research in these workshops, click the link to submit your video question. Oops, that's the wrong link. Yeah, just click this QR code and submit your video question, and I'll cover it. And of course, if you want to interact more closely, we've got our private members' workshops, where we can go into much more detail, supported by our step-by-step courses. So yes, coming back to this, I want to explain why strategically doing a systematic review first is unusually safe and sound as a starting point. 
And it's one that I've used personally with my PhD students over the years in my time as a professor at Harvard, Cambridge and Oxford, and many of those researchers who started on that very trajectory now find themselves as senior, tenured, permanent faculty at many top-tier institutions in North America and Europe. So let's dive in. Let me start by saying that systematic reviews, even though they're safe and sound, are by no means easy. Some people think that you can just get a bunch of papers and create a boring list of them. That doesn't work. They require real thinking, real work, real discipline, and importantly, they require a higher level of analysis, of synthesis. So just to clarify what a systematic review is: many of you are familiar with the literature review, where you go and find papers on your topic and summarize them. Except that's not really what a literature review is supposed to do; it's supposed to be a strategic argument for your next paper, for what you want to do in the paper that you slotted the literature review into. Systematic reviews are like these literature reviews on steroids, because what you're going to do is treat these articles like pieces of data. And you're going to be able to answer questions that others can't even ask, by looking across the entire field. You'll spot patterns and gaps, you'll see where the field is moving. And that's where the real magic and power of systematic reviews comes in. They can be done in a qualitative way, they can be done in a quantitative way. When it's done quantitatively, that's sometimes called meta-analysis. But what's important is that they're lower variance than other types of research papers. With interviews, you might not recruit enough people; with quantitative data, your analysis might just not work out; with experiments, your experiments might fail. The systematic review largely sidesteps those failure modes. 
And as I'll go through in the reasons why I recommend these so strongly, they dramatically reduce the number of ways things can go wrong early on. They're also often benchmarks or reference pieces in your field, relatively highly cited for what they are. And those are just a couple of the ingredients of what makes them so powerful as a first paper. So let me go through the first reason that I'm such a big advocate of systematic reviews on this channel. The first reason is that they're step-by-step. I think one of the hardest parts of doing research for new researchers is not knowing what to do next. There isn't the same built-in structure as when you did coursework before. You're told, well, come up with a topic, find a research question, read the literature, and you really are pushed off the deep end into the pool. What you've done before is aim for grades and high marks, and in many of those cases there was a right, or at least a more right, answer. But at this level of doing research, you're stepping into the unknown. And so what's great about the step-by-step nature of the systematic review is that there is a structure, a built-in roadmap. It's one of the few research projects where you can see the path very clearly before you even start. It follows a very predictable arc of steps: you define a research question using a PICO model; you search actual databases, like Web of Science, PubMed and others, rather than getting lost in Google Scholar; you set boundaries, so you know what's in, what's out, and what to extract; and there's a formula for doing the analysis, for actually learning how to synthesize and say more with data. 
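To make that predictable arc concrete, here's a minimal sketch of the screening funnel a systematic review follows, with entirely hypothetical record counts. Each stage only ever narrows the set of papers, which is part of why the path is so visible in advance; the function name and numbers are illustrative, not a prescribed tool.

```python
def prisma_funnel(identified, duplicates, excluded_title_abstract, excluded_full_text):
    """Return the record count remaining after each screening stage
    of a PRISMA-style systematic review flow."""
    after_dedup = identified - duplicates
    after_screening = after_dedup - excluded_title_abstract
    included = after_screening - excluded_full_text
    return {
        "identified": identified,
        "after_deduplication": after_dedup,
        "after_title_abstract_screening": after_screening,
        "included_in_synthesis": included,
    }

# Hypothetical example: 1,200 records found across databases like
# Web of Science and PubMed, narrowed down to the papers you synthesize.
counts = prisma_funnel(identified=1200, duplicates=250,
                       excluded_title_abstract=820, excluded_full_text=90)
print(counts["included_in_synthesis"])  # → 40
```

The point is simply that every stage has a defined input, a defined rule, and a defined output, which is exactly what makes the project navigable before you start.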
And then the writing is also quite formulaic, in that across fields, the introduction, the methods, the results, and the discussion all have the same core ingredients. So where the structure that should be provided by a true apprenticeship model in the PhD is missing, the systematic review offers a softer entry into doing research, because the structure is built in for you. That dramatically reduces anxiety, and dramatically reduces the potential to drift as a researcher. I'm not saying it's easy, but it's a whole lot more navigable than other types of projects, especially if you're in a situation where you're on your own and feel left to figure it out. Let me say hi to some people joining us again. Hey, Amy Chris. Good to see you here again. And we have Abubakar: can it be suitable for PhD and master's? Absolutely. If you go to my Google Scholar profile and look back at the years when I was a professor at Oxford, it's very easy to find, you'll see several of my PhD students who went on to get permanent positions at King's College London, at Oxford, even London School of Economics, York, and I could go on, several in North America. That's exactly what they did. And we've had many master's students; I've even had undergraduates, even high school students, publish these reviews. So if you're a complete beginner with no research experience at all, or you've tried to publish and failed multiple times, gotten rejected, had false starts, this is a great way to build your confidence back. The second reason, so the step-by-step nature is there, but the second reason I make the case for these is that it's foundational. It sets the foundation for everything that comes next. And that's because the systematic review will force you to read everything on your topic. When you do that, you get confidence. But there's something more that happens that's built into this process: it prevents some of the rookie errors that can happen. 
It will prevent you from duplicating research that's already been done, or answering a question the field's already moved past. It's going to help you identify where the field is moving and where the sweet spots are for what I love, which is low-hanging fruit. It'll help you find those high-impact papers that the field's pointing to that haven't been done yet. So it's going to help you see where the evidence is thin and where it's saturated, what's contested, what's still live, and you're not going to feel like you're swimming against the current, because you'll be able to zoom out of your field and see where that current is moving. Somebody put it to me that a systematic review is kind of like intel. It's intellectual reconnaissance. You're really mapping your field to figure out where to deploy your resources first. So think about the systematic review as your intellectual recon on your journey. Number three is, well, you're going to have to do a lit review anyway. That's often the very first thing researchers are asked to do, right? The prof or supervisor will say, well, go do a lit review. They want to know that you know your field, that you're not coming up with something out of thin air, that it has some basis or logic to it. What they're really asking for when they say go do a lit review is for you to find a good research question. They want you to justify a gap. It's just unfortunate they don't always say it in those terms. But what makes a systematic review better is that it's unusually publishable, and so highly cited. So if you're going to do a lit review anyway, do one that's more publishable. And the reason why it's publishable, people sometimes get confused about. They often see narrative-based literature reviews published by top people in the field and think, oh, I can do that. 
But what they miss is that a lot of those narrative reviews are invited and don't go through the same peer review process, or they might be in a special issue, or it might be an editor of the journal who publishes it because, well, they're the editor of the journal and they have a bully pulpit. They can do that. What's different about a systematic review is that it's reproducible, in that you're going to do everything in that step-by-step fashion. Remember point number one of why to do them. So that others, you know, we've got Amy Chris here and Abubakar: Amy does her review and sets out her methods in a cookbook-like way, such that Abubakar could take those same steps, come up with the same papers to analyze, follow the same methods, and come to the same conclusions. That reproducibility is at the heart of science. I've got another training on how that reproducibility is under threat and why systematic reviews are one of the best remedies we have against that. But that reproducibility is what makes them so much more publishable. And just to emphasize their importance: in clinical medicine, they form the basis of most guidelines. So when you go get treated by a doctor for something, odds are over 90% of the time the evidence they are deploying to treat you has been crystallized and codified into guidelines from systematic reviews. And so when life is on the line, when it's life or death and the evidence has to be of the highest quality, systematic reviews are what's used. So in producing a systematic review, you are literally producing the highest possible quality piece of science that's out there. It is at the top of the evidence hierarchy, the evidence pyramid. Some people may not agree with me about that, but, I mean, you see the impact of the evidence in how it's taken up and how it's used. And it's so strong because you are drawing on the entire field to make your conclusions. 
That's why they're getting picked up more and more in psychology, in economics, in the social sciences, having sprouted out from clinical medicine, and also in engineering, computer science, management, education, across the board. Because in a world with too much information, too many studies, and too much noise, systematic reviews have a critical function: to distill, orient, and set an agenda. Okay, let me get to the last point, which I think is deeper and more transformational as a researcher. Every researcher goes through a journey from where they start, maybe feeling confident because they've done well in their classes and grades, thinking, well, I did great in my master's, I did great in my undergrad, I'm going to do great in my PhD. And by the end of this journey, on the other side, they're confident as a researcher. They don't just read research like a native, they actually do it and produce knowledge. I'm going to do another training later on that PhD transformation arc, the very predictable stages it has, where you get stuck, and what kind of mental, cognitive, and emotional shifts you go through along the way. But one of the things the systematic review does is ingrain very good habits. It forces you into a process of good research: detailing your steps, dotting your i's, crossing your t's, asking precise, defined questions, separating signal from noise, learning how to synthesize instead of just regurgitate. And so the way to think about it is that this is the best kind of training. Because what I find with researchers I work with is that when they go on to do quantitative work, the level of rigor and detail they applied in defining how they chose studies for their review and analyzed them is the same rigor and detail they need in their quant work. It's the same with qualitative work. 
Whatever field you're in, this gives you a kind of safer space to develop, hone, and refine those skills without being overwhelmed by the additional challenges that can come from implementing technically challenging analyses down the road. So you master the really core skills of writing a full paper start to finish. And the first paper is always the hardest paper to do. Almost everyone I work with, if they get past that first paper, they've made that transformation journey. They've become a researcher. And with the second paper, they always say, that was so much easier, so much faster. I battled with this first paper, but then somehow they're doing two, three, four papers a year, which was unthinkable just months ago. Once they got that first paper, it's like everything clicked and suddenly the floodgates opened. So the fourth reason here is that it builds confidence. It trains the right core skills of research and prepares you better for anything you do next. You'll find little things, like you read faster. You know how to forensically go through papers instead of just reading them start to finish. You quickly spot weak arguments; it shifts you into a mode of critique rather than just absorbing information. You get to understand what a contribution looks like. All of these are skills that, again, we should be training through our apprenticeship model in the PhD. But with that model breaking down, this is a built-in, practical system that will force you to learn those skills. And that's why I love it. I think it's a great foundation, and I am kind of an evangelist for these systematic reviews. It doesn't mean they're the right choice for everybody. It doesn't mean that everybody should do one. I would just argue that they reduce the risk of your confidence collapsing, of your timeline slipping, of a paper not getting published, of you going down a dead end or blind alley. So that's why I recommend them. 
So guys, what's been your experience doing these systematic reviews? I'd love to hear from you before I turn to the questions that you submitted this week. And we have some good ones, including a couple on systematic reviews. If you have anything pertaining to your systematic review, do let me know. And I love this comment from Ahmed. Ahmed comes in here and says: I chose a topic, did a huge systematic analysis, and my supervisor says you can write it, but it won't work for your PhD. But yes, I found an amazing new topic just because of this systematic review. That's fantastic. Exactly. It's one of the best parts. People get stuck trying to find winning topics, and if you haven't found some winning topics through a systematic review, something's gone wrong somewhere along the way in your review. So that's great to hear. I don't know what happened, Ahmed, with choosing the topic and doing the systematic analysis that won't work for the PhD. I'm not sure why that would happen; it's worth checking. I mean, you've got to do a lit review anyway. So it depends. Some departments, if you have to do three research papers, sometimes want the lit review and then three research papers on top. So you can do a systematic review, but it's not going to substitute for those three research papers. In other departments, especially management and others, the systematic review does count as one of those research papers, and there are even people who do their whole dissertations just on the back of systematic reviews. Dr. Jim Reddy says: as someone who advised doctoral students in business, I use a systematic approach to help them identify a reasonable gap in knowledge they fill via their doctoral study. A mini SR. Yeah. I mean, look, we've estimated it takes 90 days or less, and people laugh at this, but it's really true. Go check out my systematic review playlist; you'll see how step by step this is. 
That's 100% free. Check it out on my channel: just go to the live streams and scroll back. Actually, there's even a playlist that makes it easy for you. Is it as good as our full course? No. In our full course, you get observational learning, you see people doing the steps, you have worksheets, and the course is designed with feedback built into it. But anyway, I'm not here to sell you anything. Check us out and see if it's a good fit for you. But the systematic review approach to identifying a reasonable gap is really, really ideal. It's going to, again, force you to avoid a common trap of doing low-value research, duplicating something, or doing a very minor, derivative, value-add project. So yeah, Dr. Jim Reddy, I completely agree with you, and I'm pleased that you found this approach helpful for your students. And you'll see top-tier journals in business are increasingly publishing these. We've got F1T9G4J. Where do you guys come up with these names? What can one do if the systematic review isn't giving us the expected answers? Well, that's good. I mean, counterintuitive findings are some of the best findings in science. That's exactly why you want to do a systematic review: you want to actively prove yourself wrong. It's the whole idea of falsification in science. We have an idea, we test it, we were wrong, we refine our theory. So yeah, that's a valuable signal. I would listen to it and think about what it means for you going forward. And when you say it's not giving the expected answers: again, the whole purpose of science is that you don't know the answer before you start. You might have a pretty good hunch, but yeah, if you can give me some more detail, I can maybe help you navigate that. I remember my very first research experience: I worked in a biochemistry lab and I was pipetting stuff. And let me tell you, I had, I don't know, maybe a wandering mind. I was terrible at pipetting. I just couldn't sit still, and I thought it was monkey work. 
But I kept getting results that weren't what we expected. And at first, I blamed myself. And finally, when we were really able to prove that, yeah, it was real, it led to a really nice paper that then became quite an important paper. This was in some cancer research down the road; I don't want to bore you with that. But yeah, those counterintuitive results, I mean, it's sometimes harder to prove a finding that's unexpected, that doesn't go with field norms, but those can be some of the most powerful findings. So I'd encourage you to just probe that a little more deeply. And oh, hey, Jeff. Good to see you, Jeff. Glad to have you with us, as always. I find few SLRs on AI-related topics. I disagree with you on that, there's tons, but it depends what you're connecting it to. It's kind of like COVID: anything you could intersect with COVID was getting lots of citations, just blowing up. AI, unlike COVID, thankfully perhaps, is here to stay. And so we're just seeing a flood of AI-related SLRs on everything you can imagine: on educational performance, on business strategy, on every health domain you can think of, because it's transforming every aspect of life, much like the internet did, perhaps to an even greater degree. But yes, Jeff, I love what you're saying here. Jeff goes on to say: the superior learning from an SLR gives me ideas to improve a later, more publishable empirical study. 100%. 100%. And this is kind of the hidden, unseen value of the systematic review. The worst is when I see people do a kind of half-baked lit review. They develop a study, they get deeper into it, they get to peer review or somewhere along the way, and the reviewer says, well, you didn't cite my study that already did what you wanted to do, or you did it in a way that was inadequate. And it makes me sad, because that could have been completely avoided had they done the search in a more systematic way and left no stone unturned. 
It's also why I don't really love it when people try to use some of the AI search engines, because they're only capturing a fragment of the literature right now. That may improve in the future, but broadly they only have access to open-access material. So I'm pretty much old school in this; I still prefer Google Scholar. So, Amy Chris adds: please, what tools can make my work easier, and how can I get a journal to publish health-related research? Well, what tools? The tech triad that we recommend right now is Zotero, the reference manager, which is indispensable; Grammarly, to tweak and touch up your grammar; and some very specific use cases of LLMs. And I've shifted recently from ChatGPT to Gemini for that, because of the ability of NotebookLM to keep projects together in a way that's slightly better than ChatGPT's project capacity. But yes, that's it. When I see people building huge tech stacks, to me, it's a sign of avoiding the hard work. It gives them a feeling of making progress, but they're often not going anywhere, just dodging the hard things they have to do. Your mind, when things get hard, is really adept at avoiding the thing that's hard. We're kind of designed as humans to avoid discomfort and suffering and run away from it. So when you're doing research, you're stretching yourself, you're forming new mental connections and synapses. That is anything but a pleasant process. It's like a deep mental stretch. So, okay, F1T9G4J: I'm currently doing a systematic review on the causes and consequences of late marriage and can't seem to find enough appropriate literature on it. Yeah, I'm glad you asked this. We always go through two tests with our systematic reviews as we take them forward. One is a duplication test: we want to make sure you have some real value you can add over and above existing systematic reviews. The second test you need to do is a feasibility test. And we do this very early on. 
You do need to do a quick and dirty search and make sure there's enough literature on your topic before diving into it, because you can't synthesize if there are only two or three papers in your field. So what does that mean? Okay, "Got it, Professor Suckler, but how do I go one level up?" Well, in a systematic review, you're going to use a model called PICO to define your topic. And those PICO elements become like control knobs you can dial out. PICO stands for: P is population, I is intervention or exposure, C is comparison group, O is outcome. And you can dial those knobs out. So say your outcome is late marriage: if you've defined late marriage as over 60, you can dial that out a little bit to say over 50, right? You'll capture more studies at over 40, and you keep dialing, zooming in and out with that knob. When you're looking at the causes and consequences, another issue may be the way you've operationalized them. I would be surprised, though, if there aren't enough studies on this. So my intuition here, again, not knowing the details, pure intuition, is that there's a second problem. You would probably have passed that feasibility gate if you were going through our course. But I suspect you've got some type two errors, that you're probably missing some studies. It depends how you've operationalized your keywords. If you're just searching for "causes" and "consequences", you're probably not going to find the literature that looks at things like divorce, dissolution, stress, depression, and other consequences of marriage. So we use an approach of going and looking at other systematic reviews on marriage to see what terms they searched for when looking for causes and consequences. 
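The "control knob" idea above can be sketched as code: each PICO element is a block of synonyms, the terms within a block are OR-ed together, and the blocks are AND-ed into one database search string. Widening a knob just means adding synonyms to a block. All of the terms below are hypothetical illustrations for the late-marriage example, not a recommended search strategy.

```python
# Hypothetical PICO blocks for a review on consequences of late marriage.
# Broadening a "knob" = appending more synonyms to its list.
PICO = {
    "population": ["adults", "men", "women"],
    "exposure": ["late marriage", "delayed marriage", "marriage timing"],
    "outcome": ["life satisfaction", "depression", "divorce", "dissolution"],
}

def build_search_string(pico):
    """OR the terms inside each PICO block, then AND the blocks together."""
    blocks = ['(' + ' OR '.join(f'"{t}"' for t in terms) + ')'
              for terms in pico.values()]
    return ' AND '.join(blocks)

print(build_search_string(PICO))
```

Notice how searching only for "causes" and "consequences" would be a single narrow block, whereas listing the concrete outcome terms (divorce, dissolution, depression, and so on) is what actually surfaces the literature.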
But yeah, F1T9G4J, I think it would help to already have a sense of what you have in mind, what you're really interested in and would like to show here, because sometimes when people look at late marriage, they're interested in life satisfaction, or quality of life, or relationship breakdown, or financial stress, or something else. So it would be helpful to know, in the background, what you're most interested in, even just a broad, rough sense of where you're trying to go with your research, to make sure this review is aligned with that. But I hope that helps. I would need more detail to navigate that with you more fully and pinpoint precisely where the bottleneck is, but my suspicion is it's going to be how you've operationalized your keywords. Mayer Abdullah says: is it a good idea to add an SLR to an empirical study in a finance paper? That would make it too broad. I wouldn't try to combine them. I see this happen; sometimes people say, oh, I'll do two or three studies in one. And I don't recommend that when you're just starting out. You could do it later. I mean, sometimes you see big Nature or Science papers that look like they've got 10 studies packed into one. Do that later, not now. Are there some people who swing for the fences and hit big with that? Sure, but they often have close supervisor input and supervision; they might be part of a bigger machine making that happen. Right now, go for low-hanging fruit and get some quick early wins. There's a phenomenon called the Matthew effect in science, where the rich get richer and early wins lead to greater long-term success. So you've got to make a quick name for yourself out there to succeed on the job market. And don't hate me for saying it; it's just the nature of the game. Publish or perish is still really true, and I don't see that going away anytime soon. 
So coming back to it, Abdullah: yes, I recommend that you keep it one paper with one big defining method, and adding the SLR is going to weaken that. For your empirical study, just do a traditional literature review, kept short. It will derive your research question, derive your gap, maybe your hypotheses, and glide into your methods. These are really great questions, everyone. And when you ask these questions, there are other people out there who might have the same questions but be afraid to ask them. So you're helping the entire community when you bring these up. With that, I'm going to go through some of the questions that we had this week. So let me turn to the first. We have Ahmed, who writes, and I'll try to pop this in the chat so you guys can see it, it's a little bit big, so let's see how much will come up here. He says he's a PhD student in accounting. He's published several papers in Arabic and some in English, but is less proficient in the latter, graduated with honors in the master's, and is currently, okay, cool, currently in the third year of the PhD. And he says, let me pop this in the chat again: I've completed the first chapters of my dissertation, and I'm considering publishing a narrative review of my dissertation research as a paper in a reputable journal. This would be for my own academic advancement and to fulfill my PhD requirements, as most of my PhD publications have been in local journals or in journals that are not considered reputable enough to generate sufficient citations. Okay, yeah, so you want to aim for some higher-quartile journals, totally get it. I'm not entirely sure what your question is, but in terms of publishing a narrative review from your dissertation, like I said, that actually tends to be a lot harder to publish than a systematic review, because it doesn't have that same reproducibility. So it's not impossible, but I do get a lot of researchers who come to me and say, oh, my master's thesis. 
Can I just turn it into a publication? Sometimes yes, sometimes no. If you've done the right calibration steps, there's a gap, there's a real value add, then yes. But in your case, I would say the secret to getting into higher-impact journals is ultimately going to be the strength of your research question, the importance of that question, the debate in the field, and how valuable that gap in the literature is. So you really need to differentiate and make the case for why we need a narrative review on your topic. If you can do that, even if the execution of the review is not that good, you'll still have a good shot. I was working with one researcher who got a revise and resubmit, and I'll tell you, I mean, the paper, she's watching, and I told her the same thing in a fierce but loving way: the paper was terrible. But the question was fantastic, and just on the strength of the question alone, of the gap, she got a revise and resubmit. And we've got a very good success rate with revise and resubmits, because once you're in there, you've constrained the space of what you've got to do, and the reviewers are almost telling you, hey, here's what you've got to do to clear the bar, and you just have to go do that, and do it effectively. It's not always easy. It takes some hard work. But yeah, a roundabout way of saying, Ahmed, I'd encourage you to do some of those diagnostic tests to establish the value of your paper before burning time and energy trying to publish a narrative review that could just get rejected again and again. So yeah, if you want to share with us how you've defined the gap and the value add of your paper by calibrating it to a paper that's very close to yours, send that to our next workshop and I'd be happy to help you with it. 
And if you're confused about identifying the gap or clarifying that value add, I've got a really good live session on the gap, and also some videos on how to find the gap, that will give you clarity on that. I won't have time to go into all of it here. Okay. So we've got a next submission here, and, oh, okay, I like this one. Some of you might be able to relate to it: I'm relatively quick at writing papers. If I have the results, I write and submit within one month, but I procrastinate on revisions and prioritize new writing projects instead. How can I overcome my dislike of revisions? This is good. I'm glad you asked this; there's a lot to unpack here. So one thing you said that I really want to call out: once you have the results clear, it's easy to write. When people tell me they get stuck writing, it's often because they don't have clarity about the results. So I love that. You're naturally following a good, healthy writing flow, and intuitively avoiding something that's a big failure mode for a lot of researchers: writing to figure things out, rather than writing after they've figured things out. Second, you procrastinate on revisions. What's going on here, implicit in the revision process, is that you're getting feedback. You're getting critique. Now, this is part of the PhD transformation arc I was referring to before. In the past, that critique would come in the form of a grade, and you might take that personally. You might have gone and talked to your friends: oh, I got an A, I got a B, I got a C, I got this or that, and it feels quite personal. That can happen with revisions or rejection. It feels like a knife in the gut; you're taking it personally. And the challenge, and part of this PhD transformation, is to be able to absorb, integrate, and grow from that feedback. 
It's almost like those video games you played as a kid, where you shoot the bad guy and the bad guy just gobbles it up and gets bigger? You kind of need to be able to do that and not let it harm you. Feedback is a gift, and it will sting when you get it, but it's going to make your work stronger. This was a big shift in my own journey: in the beginning I almost rejected critique and revision because I was taking it personally, and now I go as far as I possibly can to integrate it, because critiques are a signal that something's not clear and needs to be improved.

I don't know if these revisions are coming from peer review or from a supervisor. But again, what did I say earlier in the session today? When things are uncomfortable, our mind tricks us and finds ways to avoid doing that hard thing. And look, it's bad enough: we're creative people, often, going into research. We've got some of the worst shiny object syndrome, and you often get a burst of energy, the excitement of a new idea. You almost get this gut pull to do it, and some of you can really go hours with wind in your sails from a new idea. But finishing that last 10%? That's tough. That is tough to do. It can be boring, it can be tedious, and it's indispensable. Think of it like a skill: you've got to practice it, you'll get better at it, and eventually it will become mechanical.

So for all of you, whenever you have these feelings as a researcher: there's a part of this game that's technical and there's a part that's mental, and when you notice these feelings, I would treat it like an invitation to go deeper. And just one more thing on procrastination, if I were going to go deeper. Procrastination is often perfectionism in disguise, and with revisions, you are being exposed.
Your last round got exposed as having imperfections, and trying to clear that bar again, subjecting yourself to having those imperfections called out, can be very confronting and manifest as procrastination. I don't know your situation personally, but if I were going to go deeper, I would really encourage you to probe that. That's also why, on the inside of our program, we have an entire mindset track, and we have mindset coaches who are experts, better than me, at working with you and helping you cultivate these skills. It's just like professional sports: there's the physical side, but a lot of top athletes have mental coaches as well, and that's how you get to peak performance. And that's not just academics; it's anything you want to do in life. So thanks for sharing that. Really fantastic question, and it gave me the opportunity to chat with you about something I'm really passionate about.

Okay, Dadaraji too. Yeah, Dadaraji in our group: feedback is a gift. You know, Dadaraji, I think you were the first person who put that so nicely, "feedback is a gift," and now I say it all the time. Thanks to you, Dadaraji, for that. Maybe share in the chat with us if that resonated with you and how you responded to feedback at first, because I think a lot of people recognize that, and that shift to seeing feedback as a gift is part of the journey I was talking about before, where you start out with early confidence, then you get disoriented, then you go through some overcompensation, and then, well, I don't want to steal too much thunder. That's coming up for another day. But it's part of the arc.

Okay, we have Fatima. Fatima asks: "I watched your playlist on systematic reviews, but I'm a little confused on how to get the proper search strings if I can't find any pre-existing SLRs." So: you don't need a pre-existing SLR on exactly your topic.
You're going to do this for, like, imagine your keywords. Imagine you have three keywords. I'll just make something up: let's say you have poverty, HIV, and treatment. Well, you don't need to find a systematic review on poverty and HIV together. Go find a systematic review on poverty and figure out how everybody else searched for poverty. Go look at systematic reviews on HIV. So split apart your keywords and search for search strategies that were already established and validated for each of those keywords. And this might actually help our buddy whose name I can't stop saying now, F1T9G4J; this might help you with what you're trying to do as well.

So Fatima, I hope that helps. My suspicion, not knowing what you've done, is that you might have tried to find a pre-existing review on exactly your topic, rather than treating this as: I actually just want the most robust search string, so I want to figure out how other people have operationalized these different terms. And guys, if you're just starting out, what's going on here is that when you're searching the literature, if somebody says, you know, ethnic minority, other people might use different terms like African-American, or you might need to include Hispanic or something else, to capture all the different permutations of the words. You can try to intuit them, but why not just stand on the shoulders of giants and build on what's already been done? Then, if anyone ever critiques you in the future, you can say: well, if you have a problem with the way I did my search, you'd have to have a problem with those great published papers over there too. And you cite them. So I hope that makes sense, guys. I see people try to use LLMs for this purpose and they really don't work. I just don't recommend using LLMs at all to find keyword variants. I find it's a mess; it's not the right algorithm. Okay. And Ahmed says feedback is lifesaving. Yeah.
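To make the split-the-keywords-apart idea concrete, here is a minimal sketch in Python. The synonym lists and helper names (`or_block`, `build_query`) are purely illustrative assumptions of mine; in practice, each synonym block would be lifted from a validated search strategy in a published review on that single concept.

```python
# Sketch: combining per-concept term lists into one Boolean search string.
# The synonyms below are made up for illustration -- in practice you would
# take each block from a published, validated review on that one concept.

def or_block(terms):
    """Join the synonyms for ONE concept with OR, quoting multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

def build_query(concepts):
    """AND the per-concept OR-blocks together into the full query."""
    return " AND ".join(or_block(terms) for terms in concepts)

# Hypothetical blocks for the poverty / HIV / treatment example:
poverty = ["poverty", "low income", "socioeconomic status"]
hiv = ["HIV", "human immunodeficiency virus", "AIDS"]
treatment = ["treatment", "therapy", "antiretroviral"]

query = build_query([poverty, hiv, treatment])
print(query)
```

The point of the structure is that each OR-block captures one concept's permutations, borrowed from prior reviews, and only the AND combination is specific to your topic.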
A hundred percent. I mean, I benefited from a lot of mentors, a lot of input along the way. There's no way I could have gotten to where I am without it. It's also why I'm so passionate about this channel and our mission at Fast Track: creating research literacy and democratizing these implicit parts of research that were passed down from mentor to mentee, creating a system that makes this accessible to everybody. Because I genuinely believe that research is not about being a genius. It's not about how smart you are. It's a craft; it's about being shown how to do it. And if I had my way, I would infect everybody in the world with the joy of discovery, the ability to sift truth from noise. Because for me, in this world of cacophony, of ideology, where hate can run rampant, I believe truth and research literacy are among our best defenses and safeguards of democracy. You're getting me ranting. Somehow I got from feedback to safeguarding democracy. Well, that's a Friday for you.

Guys, I've got a couple more to go through here. Beata says... okay, Beata is in the middle of a systematic review, so I'm going to pop this into the chat so everybody can follow along. She's interested in "identification of studies via other methods." What Beata is referring to here is that in a systematic review, unlike a literature review, you've got to explain step by step how you chose the articles you did. There are two branches. One branch is where you identify studies from databases like Web of Science and PubMed: you put in keywords, you search, and the records come out. The other is where you go into things like Google Scholar, which are not as reproducible, to find articles, and maybe into gray literature, unpublished literature. For that other branch, Beata is asking: can your nearest-neighbor SLR be included in the citation search approach?
So one method for finding other articles, which goes in that other branch because it wasn't in your initial database search, is to search citations, something called snowball searching or citation searching. You take the articles you've included from, say, Web of Science and PubMed, look through all of their citations, and see if there's anything that looks interesting. Then you pull that in, because you want to detail what you've done in a transparent way, and you apply your criteria to see if they make the cut, whether they meet the inclusion or exclusion criteria to be inside your study. So yes, the answer is yes; that works well, a hundred percent. And I do encourage you, if you are going to do citation searching and there are systematic reviews on your topic, to go ahead and do that.

Now, if you set up your search strategy correctly, you should have already captured all the studies that are in that nearest-neighbor review. So if you find yourself picking up a lot of studies from it that your own search missed, it's a sign there's probably something off with your search strategy in the first place. This gets into something more technical that I don't want to go into here, about type 1 and type 2 errors, but Beata, I know you've taken our course, so you know what I'm talking about. If you're including a lot of studies from this nearest-neighbor SLR, you probably have type 2 errors and something's gone off. But yeah, thanks for asking that.

And Beata has a follow-up question here; let me pull that up. She asks: for new studies found this way, should you pull duplicates out? Well, if you're citation searching and you pull the same study, there's not really a duplicate process, because you're cherry-picking at this stage, and you're not going to cherry-pick duplicates. So you don't have a deduplication stage in that part of the flow, right? You only get duplicates because you're searching multiple databases that overlap.
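That database-branch deduplication can be sketched in a few lines. Everything here is hypothetical: the `dedupe` helper, the record fields, and the DOIs stand in for real exports from overlapping databases like Web of Science and PubMed.

```python
# Sketch: why deduplication lives in the database branch of PRISMA.
# The records are made up; real ones would be exported from overlapping
# databases, so the same paper can arrive twice under different casing.

def dedupe(records):
    """Keep the first record per normalized DOI (falling back to title)."""
    seen, unique = set(), []
    for rec in records:
        key = (rec.get("doi") or rec["title"]).strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"doi": "10.1000/abc", "title": "Study A"},   # e.g. from Web of Science
    {"doi": "10.1000/ABC", "title": "Study A"},   # same paper, e.g. from PubMed
    {"doi": "10.1000/xyz", "title": "Study B"},
]
print(len(dedupe(records)))  # 2 unique records survive
```

In the citation-searching branch there is nothing to feed through a step like this, because you only add a study once, by hand, after checking it against your criteria.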
But here, the only reason you're even pulling it into the right-hand branch is because it wasn't already in your left-hand branch of Prisma. Okay, guys, if you have any questions about that, and if you're just beginning, don't worry if that sounds confusing. That's getting a little into the weeds of the mechanics of how to do a systematic review.

Usama Salim asks a very good question: how many articles do we have to read for a good systematic review? Well, there are two aspects to this question. One is how many you have to pick up and read cover to cover, and the answer, and you're not going to like this answer, is zero, because you're not doing a deep read of these articles. You are going to forensically pull things out of them. If you're reading an article cover to cover, you've missed the right way to read articles at this stage. One of the good things about a systematic review is that it forces you into a certain reading model, to forensically pull data out of articles. I can't tell you the last time I read an article cover to cover; probably when I was doing a peer review. It's very rare that you need to do this. I've got a great training on my channel about the triple-pass reading method, a method we've adapted from other reading methods out there. Basically, there's a top-level pass where you ask: do I need to read this? Does this fit my purpose for reading right now? Then there's another pass where you're reading with a purpose, to forensically pull things out. And the most detailed pass is where you read to try to reconstruct what the paper did and found for yourself. That's very rare; certainly less than 5% of the time should you be in that third, very deep, getting-your-hands-dirty pass. Even in a systematic review, you're just pulling out key information as it pertains to answering the main question or scope of your review.
Now there's another layer. I said there were two aspects: one is about how you actually read, how many articles you have to read in full. The second, I think, is how many articles might need to be included in your review. A way to think about this: if you include hundreds of articles in your final set, you're not going to have a whole lot of space to summarize them; you'll end up having to use more quantitative approaches to summarize and map out that literature. Whereas if you have, say, only six, which would be the very bottom end of what you could get away with, you'll have space to really go deep with those articles. So there's no hard and fast rule. It depends a bit on your field and on what you're trying to do. But for a first systematic review, I encourage people to follow something called SWiM, synthesis without meta-analysis. You can have a systematic review that's qualitative and one that's quantitative; I recommend the qualitative one, and to do that in the most effective way, with more of a narrative synthesis, I find 20 to 40 articles is a really good sweet spot. Again, it can be more, it can be less; these are not hard and fast rules. And I think this is sometimes frustrating for people: there's judgment involved. There's no right answer. You have assumptions and you have to defend those assumptions. So Usama, let me know; I hope that answers your question. But again, rule of thumb: if you're below five, something's probably gone wrong and you have too few articles.

And I've got one more here. I can see Beata just came in. Let me pull this up first before I answer it so you guys can see it. She says: if SciLit.com is used, should it be reported in the "identification of studies via other methods" branch in Prisma?
Should studies be found using the same search string that was used, for example, in Web of Science, or not necessarily? Well, there's a problem here. SciLit.com is, again, going to be a partial repository; I mean, they're all partial. But I think what's behind this question, Beata, is: are you missing some articles? Are you not finding enough articles for your review? You don't need to do this. My sense here is: just don't do this. You've got to give me some compelling reason to do something unorthodox, some payoff to doing it, whereas if you've searched Web of Science and a few of the other major databases, you're fine. You don't need to use this right-hand branch of Prisma at all. So this tells me we might need to revisit something in your search strategy if it's somehow not capturing papers that answer your research question. I really want to get at what's deeper, what's motivating you to try these alternative methods. I will say very, very rarely do we, at least the researchers I work with, go through that branch of Prisma to start cherry-picking articles; it's usually not necessary. There are exceptions, but I would say we do that in less than 5 percent of the reviews we publish.

Jeff, good question: for top computer science journals, if SLRs include papers whose code and results are replicable, can you recalculate a paper's outcome statistic to match the outcome metric used by the other papers? People have a harder time replicating studies than you might think. It's one of the things that was assigned to me when I was a graduate student: try to replicate this paper. And you'd be amazed how hard it is to do.
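As one concrete, entirely hypothetical illustration of putting results on a common metric, here is a sketch that recomputes a result as a standardized mean difference (Cohen's d) from group summary statistics. The `cohens_d` helper and all the numbers are assumptions for illustration; with a paper's replicable code and data, you would plug in the actual group statistics.

```python
import math

# Sketch: recomputing a study's outcome on a common effect-size metric.
# All numbers are made up; with a replicated study you would substitute
# the real group means, standard deviations, and sample sizes.

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical treatment vs. control results from a replicated study:
d = cohens_d(mean1=5.2, sd1=1.1, n1=40, mean2=4.6, sd2=1.0, n2=40)
print(round(d, 2))
```

Once every included study reports (or can be converted to) the same metric like this, their results become directly comparable across the review.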
So yes, you should be able to, but standards of reporting are not always transparent enough for reproducibility, which is, again, another reason why I love SLRs: you will be at the avant-garde of reproducibility, because you will hold your SLR to a high standard of replicability that I hope you will later carry over to every other type of study you do. So thanks for that question, Jeff.

Guys, we have had a good session. Have I convinced you that a systematic review might be a good thing to do? Do you have any doubts or questions? If you're on team replay, I always come back afterwards and answer all the questions and comments that you leave here. I know I try to pick a time that's going to work for the majority of you, but we do leave out some of our friends in Australia, New Zealand, and East Asia, as well as on the West Coast and in California. So do let us know if you're on team replay, and if you have topics you'd like to see coming up, let me know; I respond to you as the Fast Track community. Submit a video question here. And if you're interested in working more closely together, I'd encourage you to check out how we think about research, how we approach it, and what our systems are like; I've got a QR code for you up here. See if that approach resonates with you, and if so, reach out, book a call, and let's have a chat and see if we're a good fit to work together and help you in your journey. With that, I bid you all a very good weekend, and I look forward to seeing you next week, same time, same place, Friday, our Fast Track Live.