[00:00:00] Speaker 1: Hey everyone, Professor Stuckler here. We have a fantastic session on Systematic Reviews Made Simple. In this session, I'm going to show you the seven-step workflow that we actually use inside of our mentorship communities that have helped hundreds of students go from zero background experience to publishing in as little as three months, and publishing not just anywhere, but in respected Q1 journals. These are just like a vitamin shot for your career. They help you get on that proverbial fast track, which we aspire to reach with all of our community members, unlocking fellowships, promotions, scholarships, standing out among your peers, getting more attention from your professors, and so many benefits that you won't really fully appreciate them until you do this yourself. If you've never published before, being able to publish is like having a superpower. Being able to not just consume information, but produce information that advances the forefront of your field is really a life-changing experience. What I wanted to do today, by the end of the session, you're going to see our seven-step workflow. I'm going to share with you some of our worksheets that our community members have used, along with the templates that have saved them a ton of time. We're also going to have time to look at some submissions of questions from our community members here on YouTube, our Facebook group, LinkedIn, and our private mentorship communities that we'll cover together so you see some real-world examples of how these systems can be put into practice to save time. In the spirit of open access, I just want to remind you what Fast Track is all about because I see we're truly international, and I see some faces that we're welcoming back here. Do drop a line in the chat. 
I don't know if any of you are watching on the replay, because as much as we try to accommodate international time zones, some people aren't able to join live; that's especially the case for our Australia, New Zealand, and Papua New Guinea students at the moment. But if you're on Team Replay, comment "replay" below. I do read and reply to every comment that's left in the chat afterwards. So if you have a question, let me know, and myself or a member of our team is going to get back to you. I created this community because I have found a major gap in a lot of the support that researchers are getting. Much of our research training model just isn't fit for purpose. Many researchers are just expected to figure things out on their own, and this can leave people feeling, and this is what I hear a lot, frustrated, feeling like they have a lot of energy but maybe aren't getting the feedback they need to truly thrive. They're expected to figure things out, cobbling together a patchwork of material on YouTube that doesn't add up to a coherent whole, or even worse, dealing with sycophantic AIs whose positive feedback takes them down a deep rabbit hole; it might leave them feeling a little more confident at first, but later completely lost and hopeless. So that's why we created our systems: to fill that gap, to help those researchers who are really motivated and really want to thrive, but are just missing that crucial support. I experienced that myself as a grad student, even when I was at top institutions, first Yale and then later Cambridge. I too was in that model, just expected to figure everything out on my own, and I made about every mistake along the way. I definitely did things the hard way, and took a lot longer than I needed to. So that's why we created this community, so that you don't have to go through the hardship that I did. We also have our fantastic Marina Sloopy, who's our admin in the group.
Many of you may have met Marina already. We also have Susan, who's a friend of our group; like I said, we're a truly international community. Susan, I've got a submission from you as well. We do have a principle in our communities that if you have learned and benefited, there's also an important role to give back, and Susan is definitely a shining model of that. Really excited to cover your topic for a systematic review today. For those of you watching, you'll get a real insight into how Susan has worked with our systems to progress through our systematic review training. So that's awesome. We also have Rubel joining us for the first time. Rubel, welcome. Excited to have you with us. So yes, several of you have submitted: I've got Mariam, we've got an abstract from Corinne, and a few others. You're welcome to do so. By the way, if you do want to submit a video question yourself and participate going forward, where we may answer and do a deep dive into what you're struggling with in your research, or just give you some feedback, check out the QR code here. If you want to check out some of our mentorship communities, wherever you are in your journey, whether that's at the very beginning with no experience, or like some of the advanced researchers we work with, professors, doctors, members of the medical community, we've got a support structure available for you, and one that can meet every budget. So we always try to match you with the right support for you. I encourage you to check that out as well. Okay, enough background, guys. Again, I encourage you to introduce yourself and get to know members of the community. Let's dive in. Systematic reviews made simple. I find a lot of people I work with are quite visual, so I'm going to pull up a whiteboard. Let me share my screen and hopefully I'll be able to get this right. If you guys can't see the screen, let me know.
I do tend to be a little bit of a technological Luddite sometimes. But here we are. So okay, systematic reviews made simple. Well, firstly, let me give you some background on what a systematic review is and isn't. It's really different from what you may have done before in a traditional narrative-style review, where you kind of have some background familiarity. Perhaps in a lit review, you're trying to get a quick summary, an overview of what's been done in the field in an area of research. You might be trying to find gaps, spot directions for future research, or build your own knowledge so you can feel confident in the field, or you may be trying to publish a paper. We've got a dedicated video on my channel comparing different types of lit reviews, so if you want more detail, I'll post a link to that below. But the narrative lit review is typically going to happen through Googling around or going to Google Scholar: you take some articles, you synthesize them together, you might create a narrative or story out of them to keep things tidy, and you write that up. It can vary in length from just a few thousand words to 10,000 words or even more. The challenge is that these narrative lit reviews are not reproducible. It can be very hard for somebody to replicate what you did, because you've just kind of hunted around on Google, found stuff that you thought was interesting, and cobbled it together into a text. That lack of reproducibility really misses one of the core features of science, which is that, say, and I'm picking on Marina and Susan here, Marina can take what Susan did, check her assumptions, follow the steps she took, and see if she comes up with the same conclusions. This reproducibility is missing.
That's why when you see narrative lit reviews published, they're often by very senior people in the field who have been invited, and they've kind of earned the right to do that. We actually want to hear their original spin on the literature. But it's not something I recommend for beginners. So narrative reviews are harder to publish, because they're not reproducible, and they're often invited, and I think they're actually harder to execute. Okay, that was a detour. Here's what a systematic review will do for you: it's going to take all these processes and, just like the name says, systematize them. So what are you going to do? You're going to have a reproducible way of doing a review. It's easier to execute, because it's step-by-step in nature. The steps are cleanly laid out, and they need to be cleanly laid out so somebody could actually follow them. So that's built in. You're not going to be searching Google Scholar; you're going to be searching databases. What's the difference between Google Scholar and databases? Google Scholar is an algorithm. So again, you can see how that's not reproducible, because an algorithm is going to optimize for what it thinks you want, for relevance, and it's also going to take into account your past search history. This applies to most of the AI search tools as well. Some people are using Perplexity, or even ChatGPT. These can be good for scoping things out, and I'll show you where they can fit in the mix, but they're not really good for conducting a systematic review, again because of that reproducibility criterion. So because of this, I find that systematic reviews on the whole are just easier to implement, and they're what I typically recommend, and what I've done in my time as a professor at Harvard, Oxford, and Cambridge with my graduate students. If you guys have any questions about what it is and what it isn't, let me know.
A close neighbor to a systematic review is sometimes called a scoping review, and by the end of this video, you'll understand some of those subtleties. Just a tactical point: I always recommend setting yours up as a systematic review. I've had this happen before. If you go look, we've got a testimonial from Nehal Noor. She was a medical trainee in Ireland. She set hers up originally as a systematic review, and the reviewers asked for it to be a scoping review. That was fine; it was very easy to change from systematic to scoping. But you can't really go from scoping to systematic once you've started down this path, for reasons I'll get into. So systematic reviews are just a little more respected, and I think they're even easier to implement. So with the vast majority of researchers who I work with, and there are always exceptions, I do recommend doing a systematic review as your first paper, and then pivoting back if need be. Okay. Let's get into how you actually go about doing this. I've even encountered researchers who go propose to their professors or their colleagues, "I'm going to do a systematic review," and they say, no, no, do a narrative review, because they might think you're not able to implement this. It could be your professors haven't even done this themselves, and it may seem daunting if you've never gone through the process. But hopefully by the end of this session, I'll have demystified the steps that you've got to take. And I can see we've got some more new faces just getting introduced. I can see Hisham's with us. Hey, Hisham, good to have you join us. Isabel, glad you found that four giants video was a game changer. We're going to cover some of those giants today. And we've got Heartlove saying that with your guidance, I can be a researcher. 100%, 100%. I think this is a more general problem in our society. Don't get me on a tangent. But there's sometimes a chasm between those who do science and those who don't.
And I think there can be a general suspicion from those who don't do science of those who do, almost like there's this world they're not a part of, where they can't participate in the debate or the discussion. I think this leads to a lot of misunderstandings and definitely does not help create the kind of evidence-based world that I'd like us to live in. Anyway, I digress. I don't want to get into politics on this channel, but I do believe as scientists, we have a duty to do reproducible science, but also to make our methods and tools accessible. And again, that's the spirit of the open access community that we're aspiring to here. And Leha, hi from Pakistan, welcome. Okay, the first thing you've got to do, and it might seem deceptively simple on the surface, but it's very hard, is step one: you have to find a winning topic. This is hard. This is hard. If you're just starting out, you often get your topic from your supervisor, and you may be inheriting a good topic or you may be inheriting a bad topic. Sometimes it's very hard to know. I mean, Ernest Hemingway said that the difference between somebody who's great at writing and somebody who's not is what he called, sorry for my language, a good shit detector. It's very hard to have that detector when you're just starting out, because you don't have a lot of experience. So you can't tell what's good and what's not so good. You don't really have your feet yet, and that's normal. So I want to give you a method here for finding a winning topic that you can use to establish and validate your topic with where you're at right now. One other thing to say on topic is I would say about 90% of your success comes from finding a winning topic. I've seen poorly executed papers that are on a fantastic topic get published in top journals.
What I've also seen, on the flip side, are technically perfect papers that are on such a small, narrow topic of little interest, or that has been done before, that they're dead on arrival. They can't get published. What a shame it is to invest so much time and energy into a topic that isn't going to deliver the goods for where you want to go. Some of you may just be wanting to tick a box for a course or program. That's okay; you can optimize your topic for that. But many of you who I talk to want to go down a research path. If so, you really need to be optimizing for publishing from the outset. We do two things to help us find a topic. One is we use a convergence method. We try to figure out what you're passionate about. That's key: you're just not going to survive in this area if you're not passionate about what you're doing, and only you can truly answer that. We want to make sure there's a debate, and we want to check that the paper's feasible. I do want to take a pause here for a second and show you one of the worksheets that helps you find a topic. If you guys are interested, I can make these resources available to you afterwards. I'm going to stop and share a different screen for a moment. Bear with me a second while I pull this up. I just want to, again, share with you what I'm looking at. This might take a second. Let me share a window here. I'm going to go into our course library. Let's see if this opens up. Okay. We'll go into the systematic review course. We'll go into modules. I'm broadly walking you through the steps of this course. Let's go into week one, how to find a good and fast topic. We have these nice step-by-step instructions on how to do this. This is broadly mimicking what I'm going to share for members of the program.
We actually have worksheets, which break the whole process down into small steps where you do it yourself and get feedback from us each step of the way, so you never feel lost. Basically, this shows each step and the estimated time it's going to take for each component of the course. What we're going to do is define research interests and feasibility. We're going to check for low-hanging fruit and identify the debate. This is the flow that I know Susan's gone through, which I'm going to share with you later. Let me get out of this and come back to the whiteboard so I can walk you through how this all works. For passion, only you can answer that. For debate, the easiest way to check the debate, and you're going to think I'm eating my words, is actually to head over to Google Scholar. You can use AI search tools for this step as well. Here's why: we're going to have a little bit of a scoping phase. What we want to scope for is two things. Firstly, we want to make sure that there are citations and there's activity. You might want just some broad keywords, some broad terms. For example, I've been working with a psychiatrist who's interested in insulin sensitivity and bipolar. We might just put in those two terms. We just want to see if there's a lot of activity in the area. One challenge I sometimes see with researchers just starting out, especially those who might have a lot of experience in industry, is that they misunderstand how a lit review can be like a funnel. Sometimes, if they have a lot of experience, they're kind of down here. They have these interesting ideas about gaps in their field, things that need to be done. Maybe they have a better way of how practices should be implemented. They want to focus their lit review right around that.
What they don't realize is that a lit review is like a funnel, in which you may have to take a broader set of papers than you originally imagined. Hopefully, at the end of your lit review, you're going to have reviewed lots of different papers. This is supposed to be a paper I'm drawing here. You're going to review lots of these, and you're going to say, hang on a minute, it's like nobody's done this innovative idea that I've got here. Nobody's looked at it in quite this way. Nobody's looked at this country or this population or used this method, or some other kind of gap comes out. To bring that into focus: you can't take your starting point as the gap, because if you go search for that, you're not going to find any papers. To implement a literature review, you've got to have enough papers to review. That's one of the big problems I see people having. If it's too narrow, you might not find any citations or activity. You're just a little bit too narrow, and you might need to broaden. That's really key to scope for. That also links to our second point here, called feasibility. You need to check. Your first feasibility test, test one, is: are there enough papers? If not, we're going to need to broaden out the scope to make that possible. Typically, for enough papers, you need to make sure there are at least about eight to ten. Otherwise, you're not doing a review. If you've only got one or two papers, it's just not enough. I'm going to show you some tools for broadening out that review using some different control knobs to dial in and out and get your topic right. We want to see that there are citations and activity. Again, this is really important. Like I said, this is going to be your best objective indicator that the field is hot. It's much easier to publish in a space where there are lots of citations and lots of activity going on than in an area where you look in Google Scholar and you hear crickets. I'm in Texas, so crickets is kind of the jargon for when there's silence.
There's just nothing there. Look at the citation counts around your area. Again, if you're only seeing one or two or sometimes zero citations on papers, you might have a problem. That's one objective indicator, and that's going to tell you if there's a debate. For feasibility, you need to see whether there are enough papers. You also need to do a second test, which is to make sure you're not duplicating what's already been done. This is awful. I've seen this happen a whole bunch. Researchers come to me excited, already racing halfway through their systematic review. Then they find, as they've done their search for papers, oh no, somebody's already done what I wanted to do, and it's back to the drawing board. You want to get in front of this and test it up front. What you want to do is go into Google Scholar and look for what we call a conceptual nearest neighbor paper. This applies to anything you want to do, but if you're doing a systematic review, you want to find the paper that's closest to yours: go look for those keywords you might have set up, plus "systematic review". You want to find out: has a systematic review already been done on my topic? Find what the closest review paper is. Maybe there hasn't been a systematic review; maybe there's only been a narrative review or something else. You need to figure that out. Then you need to justify the gap. The gap is: what was weak about that paper? What scope does that leave for you to make a contribution? You need this because you need to understand your value add, and I know I'm drawing arrows all over the place, but this is really important, because one of the biggest reasons for rejection is that reviewers say there's a lack of novelty or substantive contribution. They ask, well, what's new here? This has already been done before. The contribution is not clear.
Every introduction of your paper, every abstract, is going to have to calibrate this idea of the gap: what's missing and what value you're going to add. That's why, conceptually, what we're doing here is saying, okay, the field has brought us to here; I'm going to go over and above that and take us to here. That difference there is you plugging some gap and adding some value to fill it. Not all gaps can be plugged, but we've got a whole training on how you can spot gaps quickly in your field and use that to set up your research. Check that out on my channel. Again, I'll add these links below so you can go to these additional trainings. We've got a lot of valuable trainings. You can find many of them on my YouTube channel, but you've got to hunt around a bit. That's why our courses, which have the additional material broken into step-by-step pieces and neatly laid out where they need to be, can be really valuable. Any questions so far, do let me know. Some researchers will come to me months into their topic and say, oh, it's not working, I don't know why it's not working. Maybe they've only got one or two papers in their review and they've been going in circles trying to do the review, but they don't have any papers to review. Or they find a duplicate later on and they're like, oh no, what do I do now, trying to find some funny way to make it work, when sadly sometimes you've just got to start over; if you want to publish, you're not going to be able to make that work. Finding a winning topic, we've got whole sessions dedicated to it, but this is the first part. So once you've got your topic, the second thing you need to do is develop it into a PICO model. This works for any field. A lot of people ask, does this only work for science? No, this works for political science.
We've published several systematic reviews ourselves on, say, populism and its economic drivers, among other things. This works across the board; you see it in management, in entrepreneurship. So what is PICO? It's going to help you define boundaries, and it's also going to help you, if you're running into any of these problems like not having enough papers, figure out how to dial in and dial out to get more concrete. So P is the population: the group of individuals, countries, firms, whatever it is that you want to study; in medicine, it could be a group of patients. I is the intervention, which is sometimes an exposure; it's often what you're wanting to look at directly. C is the comparison: sometimes you make direct comparisons. So for example, I have students in education who say they want to compare AI training to conventional human teachers. Your AI training would be your intervention, and your comparison would be the normal teaching by real humans. For the outcome, with that AI example, maybe you want to look at test scores, or maybe student engagement and satisfaction. You need to define what the outcome is. This outcome sometimes confuses people. It's easier to understand for quantitative people, because they understand there's an effect of something on something else. A lot of research, especially in social science, is looking at the effect of some X on some Y: the effect of taking a drug on blood pressure, say. The thing being affected is the outcome variable. So going back to the doctor earlier, maybe you want to look at the effect of an insulin-sensitizing agent like metformin on bipolar. Now you can refine that. So bipolar: is it the incidence of bipolar? Is it episodes of bipolar? Is it the manic part of bipolar? Is it the depressive part? Keep going. Where I'm getting to is that you can refine each of these. So for example, take bipolar. Maybe you want to go to a broad category like all mental illness.
So suppose you start too narrow. Suppose we start with something like, okay, I just want to look at bipolar, and then you find there are too few papers. Well, what can you do? You can actually dial this out and say, okay, maybe I want to look at all broader mood disorders or something. That would be a bit wider; you'd be casting a wider net to get more papers. Maybe there's still not enough. Maybe you want to look at all mental illness. So when you define each of these knobs, you are essentially helping yourself dial into your topic. Same thing with the intervention. Suppose you want to look at an insulin-sensitizing agent. Well, you could look at a specific one, maybe metformin. Again, don't worry about the details if this is not your field; it's the approach I want you to understand. You could dial out to all insulin-sensitizing agents and broaden that way. And for the outcome, you have the same kinds of choices. So sometimes just getting this out and getting the PICO down is going to help you make smarter choices, and you'll be having a conversation back and forth as you look at your conceptual nearest neighbor, possibly even plural neighbors, to dial in the gap and the value you can add from setting up your topic. Okay, guys, I know we're covering a lot of ground fast. Let me know if there are any questions. I'm going to keep going to step two. But this is going to help you. In terms of defining your topic, you want to have this. If you do happen to be working in a health-related field, you're actually going to need this, because in a health-related field, you need to pre-register your protocol for your review, and this is an integral component. If you're in social science, I recommend this because it's just good scientific practice. It's also going to make sure you've got well-defined boundaries. As a rule of thumb, I like you to have at least two components of your PICO defined.
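That dial-in, dial-out idea can be sketched in code. This is just a minimal illustration of the logic, not a tool from the course: the knob levels, paper counts, and the eight-paper threshold below are hypothetical placeholders, and in practice the counts would come from a real scoping search in a database.

```python
# Sketch of "dial in / dial out": each PICO knob has terms ordered from
# narrowest to broadest; if a scoping search returns too few papers,
# broaden one knob at a time until the feasibility test passes.
# All knob levels here are hypothetical examples.

PICO_KNOBS = {
    "population": ["adults with bipolar I", "adults with bipolar disorder"],
    "intervention": ["metformin", "insulin-sensitizing agents"],
    "outcome": ["manic episodes", "mood episodes", "all mental-illness outcomes"],
}

MIN_PAPERS = 8  # rule of thumb from the session: roughly 8-10 papers minimum


def broaden(knob_levels, current_index):
    """Move one step broader on a knob, if a broader level exists."""
    return min(current_index + 1, len(knob_levels) - 1)


def scope_topic(paper_counts):
    """paper_counts maps a (population, intervention, outcome) index triple
    to the (hypothetical) number of papers a scoping search found."""
    idx = {"population": 0, "intervention": 0, "outcome": 0}
    # Broaden the knobs one at a time, stopping once there are enough papers.
    for knob in ["outcome", "intervention", "population"]:
        key = (idx["population"], idx["intervention"], idx["outcome"])
        if paper_counts.get(key, 0) >= MIN_PAPERS:
            break
        idx[knob] = broaden(PICO_KNOBS[knob], idx[knob])
    return {k: PICO_KNOBS[k][v] for k, v in idx.items()}
```

So with, say, only 2 papers on metformin and manic episodes but 12 once the outcome is widened to mood episodes, the sketch would broaden just the outcome knob and leave the population and intervention tight.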
You don't have to define them all. If you define them all, you're often going to be too narrow. All right, let's go into step two. Step two in our simple system: you need to develop a search strategy. I do want to take a breath, though, to see if anybody has any questions, and make sure you can all hear me clearly and you're still with me. Cool. Kubra asked a good question: do you have any real research articles to understand the PICO? Yeah, definitely. Let me share my screen. We can look at real concrete papers. Some will actually report the PICO in their model; in others, it's more implicit. Let me show you some. Let's go into Google Scholar, and let me stop sharing for a second. I'm really glad you asked that, Kubra. If you want some more concrete examples, I'm just going to do an ego Google here: I'm going to search "systematic review Stuckler", and you'll see a few of our papers come up, and let's pick one. Okay. You can see these papers can get quite a few citations. Let's look at several. So we're here; oh, here are some from Mohammed Rehab. We've got quite a few. I'm trying to think of a good one. This is a nice one: the impact of air pollution on COVID-19 incidence, severity, and mortality, a systematic review of studies in Europe and North America. And I can show you how that's going to look. I'm going to copy this title to the whiteboard, and then, guys, maybe you can help me break this down and let's make a PICO model together out of it. So okay, I'm going to go back to the whiteboard. Okay, guys. If we were going to make a PICO out of this, how would you do it? Can you spot what the population is here? I know this is going to be easy for Marina and Susan. What's the P? What's the I? What is the C and what is the O in our PICO model? One other thing, a side note on titles.
I like starting with a very boring title that just says what your paper does. You can make it sexy later, but at the beginning, I prefer to go this way. So I'm going to give you guys a second to think about that. And Kubra, let me know if you have any ideas as well as we look at this. I'll give you guys two seconds to have a think about it. Don't worry, there are no wrong answers. In fact, wrong answers are sometimes more helpful than right answers, because they give us the opportunity to upgrade your understanding. I know it sounds counterintuitive, but I actually prefer wrong answers, because that's an opportunity to improve. Meanwhile, Barrow asks about prevalence: yeah, you'd just put the outcome as disease prevalence and omit the comparison component. All right, Susan is spot on. Europe and North America, this is going to be our population. I remember this study specifically. In our initial scoping, and this is exactly the phase I was talking about before, we had the opposite problem as we went through. We noticed later, and we'll get into it in later steps, that we had way too many papers. So we had to narrow down. And we did that by, instead of looking at the whole world, limiting it to Europe and North America. That kept things tight. And Susan is just crushing this. Yeah, I know, Susan, you've done this a million times. The intervention is that they're looking at the impact of air pollution. This is kind of the exposure, and it's the effect of that air pollution on the outcome. And you can see we had a question here in the chat about prevalence; you would just define the outcome as disease prevalence. So that was Barrow's question here. Yeah, Susan, you've nailed this. And Kubra, just coming back to you, I hope this answers your question. The impact of air pollution on COVID-19 incidence, severity, and mortality: a systematic review of studies in Europe and North America.
We've got our P here, which is Europe and North America. We've got our intervention, or exposure, which is air pollution. We've got our outcome, which is COVID-19 incidence, severity, and mortality. It doesn't have to be one outcome; it can be a set of outcomes. And we omitted the comparison group in this case. You don't have a clear comparison. Your comparison could have been pre-COVID periods if you wanted to define it, but the whole world was exposed to COVID, albeit to varying degrees. So in this case, the C didn't need to be defined. Again, like I said, don't worry about that. And Kubra says, yes, I get it. So thanks, guys. Again, in this community, when you ask questions, it really helps others who may be too scared to ask but are sitting there just bursting with that same question. All right, let's keep going. I'm going to go a little more quickly through the next steps. I just spent a ton of time on topic, but if I say 90% of your success is linked to your topic, you almost want to think, well, maybe I should spend a lot of time getting that topic right. You can use AI to help you craft topics. We have developed a FastTrack AI mentor, and maybe Susan can even share the link with you guys. That is helpful. But I really find that for defining the topic, you need real human feedback. That AI mentor has been trained on all of our internal data, so it is a fantastic tool. But even that, even as good as it is, is quite imperfect. You need real human feedback on your topic. These tools will help you. Okay, search strategy. Basically, there are two components that you need. First, you need to figure out which databases you have access to and can use. That's an important step. You can see that from existing systematic reviews in your field. You might also check with an underutilized resource: if you're at a university, you might have a research librarian who's a great person to ask about this.
They often don't get enough love and attention, and they're really happy to help. It's like having a free research assistant, so I encourage you to chat with your research librarian. So you need to define your databases. What are some common ones? There's Web of Science, EBSCO, PubMed, and some people use Scopus. There's a whole bunch of different ones. For anybody in a health-related field, I personally love the combination of Web of Science and PubMed; those have been tried, tested, and validated in paper after paper. Web of Science I almost always recommend if you have access to it: really good coverage. In social science fields, Web of Science can be combined with EBSCO or with Scopus, and sometimes you can even use JSTOR. These are all great. The next thing you need in your search strategy is your keywords, and these keywords are going to be linked to your PICO. All these steps build into each other. So here you've got your PICO, and if you want to turn that into keywords, well, your keywords are typically going to be the elements of the PICO. So we've got a keyword that will search for air pollution, and we're definitely going to have to search for something on COVID-19. I won't go fully into how to set this up, but typically the keywords you want to search for are going to be the nouns. You won't search for "the impact of," and you won't search for "systematic review" or "studies." You've got a third potential keyword, which is your population, in this case Europe and North America. Here, I don't recommend explicitly searching for the population as a keyword; it's so easy to spot those studies and just throw them out later on. If I find a study from China, I can just throw it out. I'll explain why in a minute. But in this case, the two keywords I would set up are these two.
Now, you can't just drop in, say, "air pollution" and "COVID" and search. You need to find keyword variants for these, because maybe people are talking about ozone, or using other specific terms to capture air pollution without using "air pollution" directly. So in your search strategy, you need to go look at existing studies. Here we use a nearest neighbor setup again: we find keyword nearest neighbors. We go find existing systematic reviews that already use these keywords in their search, and we just take the ways they operationalize those terms, so you're not reinventing the wheel. The tip is to get keyword variants from existing systematic reviews that have already been tested and validated. This is that approach you sometimes hear about in science of standing on the shoulders of giants. Don't reinvent the wheel; build on what's already been working. Then if a reviewer ever comes back and says, "I don't like the way you captured COVID-19," you can say, well, then you're going to have a problem with this paper, that paper, and that paper, which all did it this way, and it was perfectly fine for publishing in a top journal. I hope that makes sense. I do see a rookie mistake where people just scratch their heads, or use an AI tool to find these keyword variants, or even get a thesaurus out. No, the better way is to tap into what somebody has already spent lots of time and energy figuring out, and that has already passed peer review. Oh, yeah, Kubra mentions ERIC for education. Thanks, Kubra, that's a really good one. We actually have, in our research collective community, an education research working group where people meet separately and work together on all things education-related in research, and often that's where we get that detailed, field-specific knowledge sharing. ERIC I would only recommend for education. But yeah, thanks for sharing that, Kubra.
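To make this concrete, here is a small hypothetical sketch of the mechanics being described: OR together the variants within each PICO concept, then AND the concepts together, in the style of a Web of Science or PubMed advanced query. The variant lists are illustrative placeholders, not a validated search filter; in practice you would copy variants from published systematic reviews as discussed above.

```python
# Sketch: composing a boolean search string from PICO keyword variants.
# The variant lists below are illustrative, NOT a validated filter.

def build_query(variant_groups):
    """OR the variants within each concept, AND the concepts together.
    Multi-word phrases are quoted so they search as exact phrases."""
    blocks = []
    for variants in variant_groups:
        joined = " OR ".join(f'"{v}"' if " " in v else v for v in variants)
        blocks.append(f"({joined})")
    return " AND ".join(blocks)

# Two concepts, per the example in the session: exposure and outcome.
exposure = ["air pollution", "ozone", "particulate matter"]
outcome = ["COVID-19", "SARS-CoV-2", "coronavirus"]

print(build_query([exposure, outcome]))
```

The point of structuring it this way is that each PICO concept stays a separate block, so during piloting you can add or remove a variant in one list and rerun without touching the rest of the query.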
That's great. Okay. I won't go through an actual live demonstration of this here. We've got a whole systematic review playlist, and I'm about to drop, really excited for this, a very long, full systematic review walkthrough. Very long video, so be on the lookout for that. I've never released something like that before, so it's going to be fun. Okay. Next, once you've got your search strategy, you need to pilot it. I think some students get frustrated here. They're like, okay, I got my keywords, I got my topic, and then they go search these databases and say, oh, I'm getting too many studies, I'm getting too few studies, I've messed up. And they lose confidence. Well, this is normal. This is why you need a pilot: you can still be getting too few or too many, and there are bigger problems, because you might be missing key studies that you need. That is a big problem. The way we think about this is that you have two kinds of errors. When you do your search, you can get false positives: things in your search you don't want. That's going to happen. These are also called type one errors in your search. The other type of problem you've got to deal with, the bigger problem, is a false negative: things you do want that don't appear in your search. These are called type two errors, and they're a real problem. So you need to calibrate your search for both. In short, a false positive you can see, and you can just throw it out, like the study example here: we got something from China, and we can just throw it out because we don't want it. A false negative is a bigger problem. How do you know if something's missing? That's tricky. So you're going to have to go back, and this is where Google Scholar comes into play again. We can use Google Scholar or something else to test the robustness of your search, something called a type two error test.
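The idea behind such a type two error test can be sketched in a few lines: take a small gold-standard list of papers you already know should be found (for example, spotted on Google Scholar) and check whether your database search actually retrieved them. This is a hypothetical illustration of the principle, not a tool from the course; titles are normalized so punctuation and capitalization differences don't cause spurious misses.

```python
# Sketch of a false-negative (type two error) check for a pilot search.
# gold_titles: papers you know should appear; retrieved_titles: what the
# database search returned. Any gold paper not retrieved is a miss.
import re

def normalize(title):
    """Lowercase and strip punctuation so near-identical titles match."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def find_false_negatives(gold_titles, retrieved_titles):
    retrieved = {normalize(t) for t in retrieved_titles}
    return [t for t in gold_titles if normalize(t) not in retrieved]

gold = ["Air pollution and COVID-19 mortality in Europe"]  # illustrative
retrieved = ["Air Pollution and COVID-19 Mortality in Europe.",
             "Ozone exposure and respiratory outcomes"]
missing = find_false_negatives(gold, retrieved)
# missing == []  -> the search captured the known paper
```

If `missing` comes back non-empty, that is the signal to go back and add keyword variants, exactly the back-and-forth piloting loop described here.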
Again, I won't have time to get into all of this, but basically the principle is that you look at Google Scholar to find papers that should be in your search, then go back to your search and see: are they actually there, or are they missing? And again, in this piloting step, if you're getting too few or too many, you might have to add some keyword variants, or remove some that are generating lots of false positives and aren't necessary. So you'll go back and forth, tweaking and updating your search strategy and re-piloting, and so on and so forth. All right, on to your next steps. Each of these steps you've got to get right, and this is where people get thrown: there are so many steps to get right. But it's a skill like riding a bike. Once you've got it, you've got it, and you can do it over and over and over. You'll need to figure out a set of clear inclusion and exclusion criteria. This, I think, is where people doing traditional reviews, going way back up to traditional reviews, have trouble. They start reading everything, they have mountains of papers, and they get lost; they start losing the proverbial forest for the trees. Inclusion and exclusion criteria are great because they set boundaries. They create this box, the sandbox you're going to play in, and say: hey, if this paper is out here, I don't want it. I'm only reading the stuff in my box. And that'll be linked to your topic. So, for example, for COVID and air pollution, you might define that you'll only take the studies that used a validated COVID test rather than self-reported cases. Different things of this nature. These inclusion and exclusion criteria are further control knobs to help you dial in your search. Typically, at the end of locking this in, I want you to end up with, for most systematic reviews, maybe between 20 to 40 final papers for analysis. That keeps things...
I find that's just the sweet spot. There's no hard right or wrong answer, but that's where I want you to be. So, if you're projecting you're getting too many, you need to tighten up your inclusion and exclusion criteria. If you're getting too few, you might need to relax them. On another note, for this piloting, after removing duplicates (you're going to get some of the same papers in different databases), I want you guys to be at around 2,000 papers that you'll go through to see if they meet your criteria. Look, I won't be able to cover everything in this session, but we're getting through a lot. The next thing you need in our simple system is a setup for doing step five. Once you've got this inclusion and exclusion right, you're going to write up your methods. We write the paper from the inside out. I see so many people getting stuck writing their introduction first; we only want to write as we go along. Once you get through this process, you'll take what your pilot produced, get rid of some papers, and arrive at a final set for analysis. Here you are at the halfway point; the star marks the halfway point. Then you'll get to step five, which is your data extraction and analysis. The next step in our simple system is the writing up, and finally, you're going to submit like a pro, like you've done it a million times. I know many of you watching are just starting out, so I want to get you started, and I want you to take these tools. If you're doing a systematic review, this is already going to help you. I'd want you to go back, check your topic, and see if it meets these criteria. If you've been struggling, this might unlock progress for you. It also helps to know where you are on this roadmap. If you're doing a review, even if you're doing a narrative review, where are you?
Because in a well-done narrative review, we actually try to replicate these steps. To prevent people from getting lost, we want to set up inclusion and exclusion criteria to draw these boundaries. Maybe some of you are stuck in this piloting phase, and it really helps to know where on this roadmap you are and where you're getting stuck. Maybe you're doing data extraction and analysis and you don't know how to organize the papers or what material to pull out. This can trace back to earlier steps. Your data extraction and analysis should be pulling out all the information from your PICO, and if it doesn't have that information, you're not going to be able to answer your paper's overall question. So this all ties together. As a first step, if you are struggling, figure out which step you're struggling with, then check all the steps before it to see if you can identify the root cause. "I have too few papers now" might be the symptom at the inclusion and exclusion stage, but what you actually needed to fix was back in your original search strategy. I hope that makes sense. This is a very broad overview of our system. Again, I say 90% of your success comes from the topic; that's why I like to focus about 70 to 90% of our time on getting that topic right at the beginning. All right, I'm going to pivot to the questions that you guys may have, and I hope you found this helpful. I'm glad it looks like our LinkedIn user here has enjoyed the session so far. I think it's going to be a lot of fun to go through some of the submissions and comments that we got. Oh dear, my camera has decided to be buggy. Give me two seconds, guys. This does happen to me sometimes. Buggy camera, great. I've got an older camera, and sometimes it just says, "I don't want to play anymore. Give me a break. This is too hard work." Let's see if we're back. Oh, one second, guys.
I'm going to have to tweak this. I'm unable to access it. Give me a second, guys. Technical hiccup. Here we go. All right. Okay, I brought the camera back to life. This is a good break point for you guys to ask any questions that you may have. There we go. All right, guys, if something like that happens and I don't notice, do let me know. Okay. Right, let's go to the questions that you've submitted this week. The first question is from Samiha, who is having some trouble. I'll copy and paste this into the chat. She writes: I'm working on the introduction for a narrative review, effects of vitamin K on, et cetera, et cetera. The usual formula (why it matters, what's unknown, how the study fills a gap) doesn't feel right here. She goes on: since the whole point of my review is to summarize what's unknown and highlight gaps, restating all that in the intro, after the abstract already does it, feels repetitive and awkward. Is there a different scheme for a narrative review introduction that feels more natural? She continues, and I'll post this as well: I was thinking about skipping most of what's known and focusing on recent discoveries and gaps, to highlight that the review is useful because it summarizes it all. Okay. Well, this is important, right? And this is actually something we cover in step one, the topic. I think this is a point of confusion. If you had your conceptual nearest neighbor paper, we'd need to establish: what is your paper? What value is it adding compared to the existing papers that have been done? You've kind of said what that main gap is; you need to focus on how this review is useful.
You said here, and this is the key thing: your review is useful because it summarizes them all and puts them in a broader context. So what does that mean? Your introduction will have overlap with your abstract; the abstract is like a mini snapshot of the paper, and we'll cover abstracts later on. But in your introduction, you usually have what we call a three-part play introduction: why are we having this conversation now, meaning why is this topic so important; what's been done and not been done; and how your study fills the gap. Well, you've actually already said how the study fills the gap: you're filling it by summarizing all the papers and putting them in broader context. So what does that mean for the middle section? In that prior-work section, you need to set up that these prior studies, or these prior lit reviews, have only looked at maybe a few studies in the absence of their context, or that they haven't looked at something or other. You need to roll out the red carpet and say: why, Samiha, do we need your review? And you've got the answer to that. That's what your introduction needs to focus on. You don't need to go through the introduction summarizing everything that's unknown and all the gaps; you're right, that's what your paper is going to do. So just keep this clear: the gaps that you identify from your review belong more in your discussion section, but the gaps that motivate your review, the reason you're doing the review in the first place, are the ones your introduction needs to focus on. I hope that makes sense. There's the gap that motivates your review, and then there are the gaps in the research that your review identifies and that point to future research; those go in the discussion. And again, on the abstract: it's 100% going to overlap with your introduction. It should be a mini snapshot of the whole paper. Thanks for asking that.
That is a great question. Okay, let me go to the next question, from Susan. Susan's a more advanced member of our community; excited to cover your question here, Susan. She says, and I'm just posting this in the chat: I want to find out whether the proposed study is feasible by running the advanced search in online databases based on nearest neighbor papers; see attached file. So cool to see you're using our systems, Susan. Let's take a look, and I think some of you know the answer. Kubra just posted a question; Kubra, I'll come to yours in a second. Let me share my screen and find what Susan sent, and you'll see a nice example of how Susan's been using our systems to make progress. This is why these templates really help: for each of those steps, they show you what you need to do and help you troubleshoot some of the most common mistakes researchers make. So Susan wants to find out if the study is feasible, and her topic is gamifying digital lifestyle interventions for patients with chronic conditions. This is a very cool topic. Who doesn't like gamification? That's always a lot of fun. Digital lifestyle interventions, good. This is well-defined; you guys can see how this fits into a PICO very quickly, and Susan's got her PICO defined. That's awesome, Susan. This is very clear. You've got nearest neighbors here, and you can see that this one focuses on just physical activity and a more narrowly defined population, and this one on a narrower platform. I can see that from your nearest neighbor papers without even reading them fully. This is great, Susan. This is what makes our feedback effective, too: we get the actual data we need to give you laser-like feedback. This is awesome. Just want to make sure you guys can still see the screen. Okay, great. Sometimes the screen is too narrow and you can't see it.
This is a good set of nearest neighbor papers. Well done. So Susan, I just want to see how you define your gap against these. I have some ideas. You haven't shared your gap with us, but on feasibility, based on these conceptual nearest neighbor papers, I see space here. You've got a scoping review here, and it's more about self-management, and my guess is almost all of these are going to be lifestyle interventions; could be, could be not. So I don't know if that's giving you much discrimination. Patients with chronic conditions: you need to pin that down. In your inclusion and exclusion, is that going to include all chronic conditions? There are many. Is it going to include mental illness? Are you going to capture HIV, which is now a chronic condition? I don't think that's what you mean here necessarily, but it could be, so check that. And for gamification, it doesn't look like you have a specific medium. As I look at this, I think this works really well if you do want to focus on self-management, as many of these do; you just need to define it pretty clearly. And some of this definition is going to come from the digital lifestyle side, which you didn't actually put in your PICO. So I think you just need to clarify: by lifestyle, I presume you mean diet and physical activity, not anything medication-related. As a side note, I don't love the term "lifestyle" in this field; it's not always that clear. You can still use it, but make sure you've got that clarity underneath it when it comes time to decide which papers are in and out. So yeah, I see space. One question, too: if they did a systematic review here, they probably had eight to ten papers, and that's a narrow subset. So this tells me that the way you're constructing this, it should probably include all of these. I don't know; you might check that, but that's eight to ten.
You might end up with too many papers based on what I'm seeing here; I don't know how many papers this one had. So another criterion you can sometimes add later on is a design criterion: you might only want to look at randomized controlled trials, for example. But on the first part, it looks to me like you're not duplicating an existing study. I would encourage you to get more clarity on the specific gap, the value you're trying to add over and above the nearest neighbor papers. And just foreshadowing: I think you might have to find a way to tighten up. You're going to run the risk of too many papers, and you can tighten up in different ways at the inclusion and exclusion stage of the paper. Let's look at your terms. Oh, let me check: do you guys have any questions about that, about how I just analyzed this? Susan says it was on the thin side for meta-analysis. Yeah, I don't recommend doing a meta-analysis if it's your first paper. Basically, at the data analysis stage, you have two ways to go: a qualitative synthesis, which doesn't get into heavy quantitative tools, or a quantitative synthesis, which is meta-analysis. If it's your first review, I recommend qualitative, because with a meta-analysis you're going to have to learn two skills: the systematic review itself, plus getting really up to speed with the quant skills. So this looks like a good term for chronic diseases. I can see you don't have HIV in there, so you just have to carefully define that. This looks quite good, and I imagine you took it from other papers. I don't know if I'd include long COVID here, but you definitely can; I have my reservations about that. These are good. I think these are optional; I can see you've highlighted them because you might have had something like that. I didn't know "exergame." See, this is another reason you need these nearest neighbor papers; I would have never guessed "exergame."
"Exergamification" is a term, so that's quite useful. I like this gamification term with the star; that's quite good. The star is a wildcard on the root word: gamifying, gamification, everything after the root gets captured. So this looks quite good. I would combine these terms, however. I won't get into the full nuances of what Susan did here, but you might get some type one errors from "play" or "playing" or "players," so I don't know about those. And then here you're looking at the digital medium of the gamification. I think that's good, though it might actually be overkill to include it, and this is where the piloting stage comes in handy. I don't have your numbers; you might be fine with just your gamification and chronic disease terms, and then maybe only looking at RCTs. That said, this is quite well-defined. I'm assuming you're going to search the topic field in Web of Science, and it looks like you did do that, so that's good. There's already an established, tested, and validated keyword cluster search filter for randomized controlled trials in Web of Science, Susan, so I recommend you use that. The main thing that jumps out at me is that we've got way too many keywords going on here, so this doesn't work as is. I also don't like you filtering by article document types in English; I would get rid of that here. Those filters don't work very well, they make errors, and you can't track the numbers, and you need to be able to track the numbers later. So I would get rid of this. Way too many keywords. I would just go with this term, the gamification term, and the randomized trial filter, using the pre-existing Web of Science filter that we have in the course, I believe in week four. Let me show you where that is. Hope that makes sense, Susan; ask me a question if not. I'm going to show you where that filter is, by the way.
Because this is a common step, and your intuition was spot on in already saying it's going to make sense to perhaps only look at RCTs. So it's right in here, Susan. If we go into... where is it? Common search filters, randomized controlled trial filter. Go to the downloads; it's right in here. This one comes from a paper that's already validated it, and I highly recommend it. Find the Web of Science one. Yeah, so this one here is the Web of Science filter that I recommend you use. You'll see there's a different one for PubMed. And this would just be a mess to build yourself; who's going to come up with this on their own? So rely on the studies that have already done it for you. Cool. Yeah, Susan, thanks for sharing. That's awesome. You don't know how to do an advanced search? Okay, I'd go into the Web of Science training. I won't be able to do a Web of Science demonstration right now, but we can definitely cover that at our next workshops coming up. You want to go right into advanced search in Web of Science, go into the query builder, and go one term at a time. That's the goal in all of our feedback: keep things moving and point you to where you can get the right resource. We're not doing our job if you're stuck. That's why there are multiple avenues to reach out in our circles. We have 24-hour coverage because people all over the world are there helping each other, which is really cool, and we have about five or six different feedback workshops throughout the week. So you literally will never feel stuck again, which is awesome. Okay, let me go to the next one. I wanted to cover Corinne and Miriam, so let me take Corinne's abstract. All right, we're shifting gears for a second. I'm going to try to zoom in so you guys don't go blind here. Corinne wants to submit this to a conference. A couple of things: she said that the paper's not finished.
She wants to submit to a conference, and that's great. I think that's really helpful to do; I always encourage you guys to submit to conferences. Some people think they already need to have the paper finished. That's not true. I've submitted things to conferences where the results changed later, and I just presented the updated results at the conference. Just getting that submission off is a win; I encourage you all to do it. I'll put some track changes here. So: "Are children used as proxies in parental stalking?" I like titles that are questions. That's good. Let's take a look. Read this together with me and see exactly what Corinne's proposing here. I'm going to see if I can create a little more room for this to breathe. So yeah, read with me for a second. And I encourage you to think actively, because often as researchers, you don't get space to practice. If you were playing basketball, you would shoot lots of shots by yourself; you wouldn't just play games. But often when people are doing research, they're only doing their own research, and they never really get that space to go train and upgrade their skills. That's exactly what this does: you learn through working with others and through observing how others troubleshoot these kinds of problems. Observational learning is huge in sports, and I believe it's fundamental in learning skills like research. So, okay, what's going on? Hopefully you've been reading this in the background. Stalking is bad. We've studied lots about ex-partner stalking, but we know less about children's exposure and victimization. So this is already a phrase that's signaling the gap. Oh, here's another gap phrase: prior qualitative studies suggest children are at risk, but the tactics are unmapped. So this is saying, okay, maybe you're going to tell me what the tactics are and how children are being exposed.
The study aims to examine the extent to which court verdicts recognize children's exposure and risk. Okay, I'm already spotting some incoherence here. Why are court verdicts going to tell us that? It may be that you want to use court verdicts to map out the tactics, but I don't know if the court verdicts will tell us those tactics; usually they're issuing a protective order or something to prevent the stalking. I'm not sure this is set up in the right way. And the question, "are children used as proxies in parental stalking," isn't linked to the gap you've set up. Okay, but let me come back to that. "We conduct a mixed-methods retrospective observational study on all 650 verdicts issued under the Swedish stalking law." This is good, but I don't know what this acronym is; I guess it's the law, but it's strange. You spell out acronyms at first use. The year is there; that's great. Keep the numbers consistent, and you probably want digits. "80 cases met our inclusion criterion that both the defendant and the complainant were parents." I'm confused about this; it's confusing as written. To keep it clear, I would just say that the potential stalking victim was a parent of at least one minor. "We identified, using a deductive...": it's a little unclear what you did here, including the nature of the stalking behaviors and this stalking assessment and management tool you used. But that's fine; if it's a common tool, that's okay. It doesn't fully show what you did, but you don't have a whole lot of space for the methods. One other thing striking me is that this might be a little long on the word count, so I'm already thinking about that. Maybe the limit is 350; check the formatting, because sometimes these limits are 250, not 300. So I might already be thinking about trimming things down.
I would get this down to maybe two or three sentences. This doesn't look bad; just spell out the acronym. Results: "most verdicts..." I'd like to see more quantification here. How many is "most"? Was it 400? Was it all 650? Was it two-thirds? I think you can quantify here. "Only..." okay, so you're getting to the main results: they contained accounts of children's exposure to stalking across cases. So in here, you're getting the tactics. The verdicts didn't... this is an important point. Actually, when I look at this, I think one of your important messages... hope you guys are still with me. Yeah, it's a really nice topic, Marina says. I really like it. Coincidentally, Marina is also working on a topic of childhood violence, so that's a cool thing; I definitely want to link you two up. So, "this is the first study to systematically..." Okay, wait a second. Your payoff is: we're the first to analyze court verdicts outlining children's exposure. This is good stuff; this is your most powerful material, and I'm just highlighting it. This is powerful. But it's not linking to your title, and it's not linking to your gap. So instead, I think the case for court verdicts is to say: this has been very difficult to study, because people can't just go get a dataset on children who have been exposed to stalking tactics. You've got to play up your best stuff and make it clear in your abstract and title. So, all right, I'm going to start doing some track changes. Let's say: "a systematic analysis of Swedish court verdicts." I like putting your method after the colon, after the question. And you could even ask a different question here. To me, just reverse engineering it, your question is: are the courts protecting children in their verdicts?
Children who are at risk? That seems like a pretty big question. A second-order question is: what tactics are the stalkers using? I think you might have multiple papers here, because if this is the first time anyone has leveraged this, it's a hugely valuable dataset. So again, you've highlighted a fantastic gap; this is definitely going to be publishable. Based on what you've told me, and I haven't tested and validated it yet, instead of this "proxies" framing: you've got different questions, and the proxy thing isn't really answering them. Your bigger question is, do courts protect children exposed to risks of parental... we'd have to check the language of this, but something like that, I think, is a stronger question. And then I would say: ex-partners are common victims of stalking behavior, yet often their children are neglected, or not protected, or something along those lines. I think this would be the framing. And then you get to the prior work: prior studies indicate children often become deeply entangled in stalking targeting an ex-partner, yet (here's what's unknown) the tactics, and the protection offered by judicial systems, are not well understood. "Here we analyzed, for the first time, court verdicts on ex-partner stalking to identify protection of children and potential exposure to stalking tactics." Something like this, I think, ends up being a lot clearer and higher value, Corinne. I hope that makes sense. With an abstract, you've really got to spend time getting the wording right. I think this is the bigger question, and what you did was not wrong; it just had some frictions and incoherence in there that I think we can clear up. Marina, thanks for pointing this out: the results section is not very clear either. I agree. I think we need these numbers. I'd say, "of the 650 verdicts about stalking..." but then you need to say what the verdicts actually were, because as it stands it's kind of confusing.
Was this a successful verdict, saying "we proved they were stalking" with some protection ordered, or were some of these verdicts saying no, there was no stalking, and nothing happened? We haven't described the data very well, but I'm going to assume all these verdicts found stalking. You need to clarify that these were all cases where there was a penalty, or where evidence of stalking was found. "Of the 650 verdicts on ex-partner stalking, none explicitly addressed risks to children or protective or risk-management strategies." That's your shocking stat. I think that's huge. Lead with your good stuff; people are going to look at this at a glance. I would maybe say "most verdicts (N equals whatever) were simply limited to noting that the ex-partner victim was a parent, listing children's names and ages." And when you say "the remaining," we don't know how many that is. "A few verdicts, however, detailed children's exposure to stalking and the associated tactics." By the way, "exposure to stalking and the associated tactics": I'm writing it this way to create flow in the text, to make it easier to understand. "Revealed how stalkers..." This is a powerful note. So what are we doing? You had two big questions. You answer the question about the tactics, but the bigger question, I think, is: are the courts doing enough to protect the children who are exposed? So you set up here, if you had a stat like "90% of ex-partner stalking victims have children" or something: the children are also victims of stalking, but it's not clear whether our courts adequately protect them. "Although prior anecdotal reports indicate children often become exposed to stalking themselves, it is unclear whether judicial systems adequately recognize these risks and protect children from harm." I think this will make it even stronger. Okay. And then here, Marina, does this read better? I think this follows a little more.
Notably, you haven't proved that they place children at the center of stalking. You haven't said that; I think you're overstepping what your study can do. Let's just leave it here. This is fine, though it's not strong enough. I think the strength... oops, I accidentally deleted something relevant. Whoops. Hang on, let me go back. Oh dear. This is why it's good to save multiple versions. So, "the associated tactics revealed how stalkers..." Okay, let me see if I can fix this. As they say in Texas, it's not my first rodeo making those kinds of mistakes. That's why you always want to save multiple versions. I'm going to save this and send it back to you, Corinne. I hope this is helpful. Corinne, I think you're with us here privately. Let me see. Corinne is saying, "Wow, cool. Was everyone in the 650 charged with stalking?" So just clarify that. Assume your readers know nothing. This is an important point: was everyone charged, or convicted? You need to say that and make it clear for us, because your readers aren't going to know. If anything, I actually think you've underplayed the importance of your findings in the abstract. Oh no, my camera's gone funky again, guys. Two seconds. I'm going to fix this camera and then come back. You guys have to let me know when my camera starts bugging out like this. It's unfortunately an older camera, so give me two seconds; I need to tweak it once more. OK, I think I've got it fixed. Let me reset. Here we go. OK, all right. So that's a very important point to make, Corinne. I hope that helps. Yeah, I know your camera got lost too. I really appreciate this. Good luck. Again, my assessment: 100% publishable, really important. I think you've got multiple studies here. Don't try to put everything in one study; that's another common error.
You've got great stuff, and you don't necessarily need to milk it all in one paper. So, yeah, I'd encourage you to also chat with Marina in the group and with our other criminologists who are deeply interested in this topic. Susan writes, "practice makes progress." Absolutely. Good writing involves a lot of rewriting: 100% true. Yeah, Zakia, we've got a lot of people on Team Replay, so I recommend you check that out. Hayukistan asks, "Will the same method work on chemistry projects?" Yeah, 100%, especially our writing system called PEER. Of course I'm biased, but I think we've got some of the best training in the world; it breaks down all these things you're otherwise just supposed to figure out on your own. Oh, wait. Hey, Corinne, I can actually see you here. Corinne, I'm going to pull you on for a second.
[01:15:56] Speaker 2: Nice to meet you. Thanks for having me.
[01:15:59] Speaker 1: I need to connect my headphones to be able to hear you, but I think others can hear you. Give me two seconds. Corinne, did that make sense, by the way? I've got you now.
[01:16:08] Speaker 2: Yeah, it did. It's just a little bit tricky because all of the legal stuff, there is a bit of a gray zone where I think that the prosecutors should have prosecuted their children's victimization, too. But we're saving that discussion for a legal dogmatic study.
[01:16:34] Speaker 3: Yeah.
[01:16:34] Speaker 2: So here it's more about mapping the behaviors so that the courts, hopefully, and other authorities in the future will be able to spot these behaviors.
[01:16:46] Speaker 1: Okay. I mean, I still think it's quite shocking that none of the verdicts included the children. I would still focus on that, and then go second into the tactics. As a limitation and a caveat, you could say, well, maybe the courts weren't doing this because they were going to deal with it in a separate case. So you might just want an answer to nip that in the bud. Again, you're going to have more subject knowledge than me, but that's what jumps out at me as the big strength and big message of your paper. So it's just bringing things into alignment.
[01:17:22] Speaker 2: Thank you.
[01:17:23] Speaker 1: Like the title with a gap with what you actually deliver. So good luck. Let us know, Corinne, how it goes because I know you're under a tight deadline. Yeah. So it's really good timing to get this submitted for today's workshop. But yeah. All right. Good to have you with us, Corinne. And again, encourage you to link up with Marina. Cool. All right. I'm going to get back and wrap up the session. Yeah. So you guys, just to add here, if you guys are struggling or are feeling lost, I would just encourage you to not go around circles and silence. Even just having a sounding board to work through your ideas will help you get so much more clarity. If you don't have a system to help you publish, it's all too easy to get lost. If that's you, if you have been going in circles, you have been struggling or maybe you're feeling good, but you just want to accelerate and use shortcuts that are not cutting corners, so to speak, but are saving time and really start flying and accelerate your progress, I really would encourage you to click the links below and have a chat with myself or Marina or another member of our team. And let's see if you're a good fit to work together more closely in a more intimate way. I always encourage people to say, if you feel our systems resonate with you, if you feel they could help you, then let's have a chat and see what we can do for you. If not, that's okay too, but it is fundamental. Nobody makes this journey alone to becoming a highly proficient researcher in the same way professional athletes. They don't get there on their own. They have some of the world's most expensive coaches, often multiple coaches, a whole team of coaches. So just encourage you. I hope this take a message for you that you can save a ton of time, headache and frustration. I wish I would have done that by getting and tapping support that's available to you. Ideally, you're going to be getting that from your professors, supervisors, peers around you. 
But in practice, I just hear so many stories of that not happening the way it should, for a range of reasons I don't want to get into. But guys, again, if you want to participate in a session, do get in touch with us. I'd encourage you to submit video questions. I'm going to try to do these workshops with a little more frequency. We've been doing them about once a month; I'm going to try to get back into the habit of doing them weekly. Overstretched as ever, but I look forward to seeing you in the next session, and I hope everybody has a great weekend, wherever you are.