[00:00:00] Speaker 1: AI is helping thousands of researchers around the globe move faster than ever before, but it's quietly becoming the number one source of research errors, errors that I see as a professor and in our Fast Track research mentorship programs. The errors I'm talking about are not the obvious kind that are well known about AI, like hallucinations or fake citations. Those are obvious. You can spot those instantly, the same way you can spot an em dash and it triggers a flag as likely AI writing. The real danger I'm seeing with AI is deeper. It's more subtle. It's more insidious. It's the kind of error that you don't notice until maybe you've spent three weeks working on your lit review, or 10 hours extracting data for your paper, or months trying to build a dissertation, and suddenly everything just crumbles like a house of cards and you think, wait, what just happened? Things aren't making sense. How did I get here? And in this training, I want to explain why and how it can happen that you build this edifice with AI and it collapses like a house of cards. It's not that AI is malicious or trying to trick you or derail you or lead you down the wrong path. The problem is that AI behaves like a false friend. There are many failure modes you can slip into with AI by your side. It can give you confidence before clarity, momentum before direction, beautifully written text before you have actual understanding. And that's a deadly combination in research. So today I want to go through five failure modes that I'm increasingly seeing, the hidden ways that AI quietly derails your research project and leads you down a path that you only later discover is completely lost. These are based on real cases of researchers I've worked with. So let's dive straight in. By the way, I do want to give a quick welcome to new members out there everywhere. We truly are an international community. I know some of you are on team replay, especially those of you in Australia, because it's the middle of the night. If you're on team replay, drop a comment below. I do read and reply to every comment on our live streams. I'd also appreciate it if you give the session a like if you do like what you see. It helps the algorithm reach other students and researchers out there who might not otherwise benefit. The whole point of my channel and Fast Track is to provide you with the support I wish I had had on my own journey as a researcher. Because even though mine was the pre-AI period, I made about every mistake you could possibly make, from choosing dead-end topics, to unworkable projects, to spending an inordinate amount of time on things I didn't even have to do. So I boiled down and crystallized decades of experience training researchers across different disciplines into templates and worksheets, and created a platform for workshops and community sharing that has literally helped thousands of researchers accelerate their careers and publish in high-impact journals. So if you're curious to see what we could do for you working together, click the link below, set up a call with myself or a member of our team, and let's see if we're a good fit to work together. Okay, let's dive straight into the presentation I have today. By the end, after I go through these failure modes, I'm going to give you some suggestions for how to use AI in a better way.
I've got a free downloadable template I'm going to share with you, with some fantastic prompts that you can implement today. I'm also going to go over some of the submissions to our workshop. We do have submissions from people in the community, and every week we take those on. Another quick announcement: we have Dr. Gauden Galea joining us in two days' time on the channel. He's a former director at the World Health Organization. A lot of people in our community are doing research not just to get a degree, but because they really want to make a difference. They care passionately about an issue and they want to master research to shape the world. So Dr. Galea is going to share his insights from the front lines of policymaking over three decades and explain how researchers can actually influence policy with their research. You're not going to want to miss that. And like I said, if you do want an opportunity to participate, scan this QR code and you can submit your video question. We're going to come to those, and we'll have time for Q&A at the end of the session. And if you want to learn more about some of our programs, I've got a QR code for you right here. I can see Zubair. It's great to have you join us again. Hey, Zubair. And oh, hey, Marina, good to have you join us as well. Marina, if you are around, hop into the studio with the link and say a quick hello to everybody. And Ananda, great to see you as well. You're definitely going to learn new skills, but also what not to do, which is sometimes as valuable as what to do. Okay, I hope you guys are ready. Let's dive into AI failure mode number one, and that is confidence before clarity. This happened to me recently: a researcher brought me a topic that AI had told her was innovative. She was looking at the link between physical activity and sleep. Two minutes into checking the topic, we did something where we check for conceptual nearest neighbor papers, papers that are similar, and we use this to calibrate the gap and see what the value added is. Well, we found that the study she was proposing was virtually identical to not one but three studies already published recently. And yet AI had encouraged the idea instead of challenging it. That's confidence before clarity. AI can make you feel like you've come up with something brilliant, but it hasn't done the due diligence that we have all our researchers do to find, establish, and validate a topic before going too far. AI is going to tell you, hey, great topic, excellent idea, here's how to proceed, and suck you in even when the topic doesn't have a real gap, might not even be feasible, might not be answerable with available data. And that's exactly why you need a real approach, like our convergence method, that can help you find and validate a winning topic. It's never fun for me to be the bearer of bad news. The researchers I work with say I'm fierce but loving, but we had to re-engineer the topic, and that researcher had lost a lot of time with AI going down the wrong path. And that's just incredibly, incredibly frustrating.
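If you want a rough feel for what that nearest neighbor check can look like in practice, here is a minimal sketch. It is only an illustration, not our actual tooling and not the convergence method itself: the proposal text and candidate titles are hypothetical, and it simply ranks candidate papers by crude textual similarity so you know which ones to read closely before claiming a gap.

```python
# Illustrative sketch only (hypothetical titles, not the convergence method itself):
# rank candidate papers by rough textual similarity to a proposed study, so the
# closest "conceptual nearest neighbor" surfaces for a manual duplication check.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

proposed = "Association between daily physical activity and sleep quality in adults."
candidates = {
    "Paper A": "Physical activity and sleep quality: a cohort study in older adults.",
    "Paper B": "Screen time and adolescent anxiety: a longitudinal analysis.",
    "Paper C": "Exercise interventions to improve sleep in middle-aged adults.",
}

texts = [proposed] + list(candidates.values())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

# Highest-scoring candidates deserve a careful read before you claim a gap.
for title, score in sorted(zip(candidates, scores), key=lambda pair: -pair[1]):
    print(f"{title}: similarity {score:.2f}")
```

A high similarity score doesn't prove duplication; it just tells you which papers to read carefully, and which abstracts to compare against your proposed PICO, before you decide there really is a gap.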
So that brings me to this, guys: if you recognize any of these failure modes, or if any of this has happened to you working with AI, it's not to put you on the spot, but I would really appreciate it if you just put "failure mode one" or "failure mode two" in the chat if any of this has happened to you. I did a post on my channel, you can see the poll, where a lot of researchers are saying, yeah, AI is helping me write faster, AI is helping me find structure. And sometimes that's true. When used in the right way, AI can really help you accelerate, and I'm going to come back later to how to use AI in the right way. But when used in the wrong way, especially in these failure modes, you can lose months and cause months of frustration. So let's get into failure mode number two. I call this one down the rabbit hole; this is my attempt with AI to depict a rabbit pulling a researcher down a rabbit hole. This can happen when you get that initial puff of confidence before you have actual research clarity, before you actually have your feet planted on the ground. You can easily get sucked down the rabbit hole, and you might have experienced this yourself with AI chats. You have a chat and it says, hey, do you want me to do this? And then it pulls you deeper and you say, sure, let's do that. And then do that. And then do that. And little do you know, you've gone really deep down this rabbit hole. The thing is, AI doesn't know the path. It doesn't know the destination. In fact, it's making up the path as it goes along, because that's the way LLMs and the optimization behind them work. What typically happens with this failure mode, when you're going down the rabbit hole, is that your research might start off logically, but then halfway through, say, your lit review, you feel a shift and think, wait a second, something's not making sense. And then you ask the AI to patch it up and it kind of tries to patch it together. Or you might say, oh my, how did I get here? The funnel... wait, this is not where I wanted to be. I have an example here from a researcher who came to me doing a systematic review. The researcher had asked AI for some feedback, and AI said, oh, well, you could summarize the data with some effect sizes, and it even produced some code and commentary. The problem is the entire thing was conceptually wrong. What the student had started to do was a light, half-baked version of a meta-analysis that wasn't following standard practices. They didn't even know they were doing a meta-analysis until it was actually pointed out to them, which came as a surprise, and they didn't realize that submitting that to peer review was just going to get a big smack of a no, a what-are-you-doing-here; it would have been blown out of the water. That's the rabbit hole AI can pull you down, one you may not even notice until it's very late, and then you have to do radical, unpleasant surgery on your paper. So again, this is a really tough one, and I don't know if you can see the pain and frustration I felt with this. It's really hard, because sometimes the researcher has an intuition that something's not quite right, or they might have gotten feedback from a colleague that they don't understand, because there's a big disconnect between what the AI is telling them and what their supervisors or other research colleagues are telling them. It's like they're speaking different languages and they don't know how to reconcile the two.
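Just to give a rough picture of what a standard effect-size summary involves, versus the half-baked version, here is a minimal sketch of fixed-effect, inverse-variance pooling. The numbers are made up purely for illustration, and a real meta-analysis would also follow a registered protocol, assess risk of bias and heterogeneity properly, and very often use a random-effects model instead.

```python
# Illustrative sketch only: fixed-effect inverse-variance pooling of made-up
# effect sizes, to show the minimum machinery a proper meta-analysis rests on.
import math

# (effect size, standard error) per study -- hypothetical numbers
studies = [(0.30, 0.10), (0.45, 0.15), (0.20, 0.08)]

weights = [1 / se**2 for _, se in studies]                      # inverse-variance weights
pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))

# Cochran's Q as a basic heterogeneity check
q = sum(w * (y - pooled) ** 2 for (y, _), w in zip(studies, weights))

print(f"Pooled effect: {pooled:.3f} (SE {se_pooled:.3f}), Q = {q:.2f}")
```

Even this toy version makes the point: the effect sizes, their standard errors, the weighting, and the heterogeneity check all have to be defined up front, which is exactly the conceptual scaffolding the AI-suggested version was missing.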
And sometimes the path to get back on track is really quite unpleasant. Just to say, oh, hey, welcome from Yemen, thanks for joining us. I can see Alina is here as well. Hey, Alina, always good to have you with us. Again, guys, I appreciate it, I know it's putting yourself out there, but if you've experienced any of these failure modes we've talked about so far, getting pulled down the rabbit hole or having confidence before real clarity, do share that with us. It helps not just you, it helps the entire community. Also, I thought I saw Marina jump in briefly. Marina, hey, are you with us? Oh, hey, Marina.
[00:11:16] Speaker 2: Hi, everyone.
[00:11:17] Speaker 1: Also, I can't hear you, but Marina is our research collective coordinator and she is awesome. Our researchers love her and Marina, just since you dropped in, just wanted you to say hi to everybody.
[00:11:29] Speaker 2: Nice to see you all. I see already some members of our community.
[00:11:35] Speaker 1: Oh, I actually can't hear you. I'm going to have to put on my headphones, but great to have you join us, Marina. You don't have to stick around; I just wanted everybody to put a face to the name if they haven't actually met you. We're really fortunate to have Marina; she really is indispensable to our community. I know Alina, Zubair, Ananda, many of you do know Marina, and you'll all get to know her. Again, if you click the link below, you can have a chat with her, myself, or another member of our team and figure out if we're a good fit to work together. Thanks, Marina. Thanks for joining us. We'll be in touch. Okay, so let me get back to our mini presentation. One second, just going to remove this here. Okay. So that brings us to failure mode number three, guys. Failure mode number three, and some of you may have experienced this as well. So you've had confidence before clarity, you might have gotten pulled down the rabbit hole, and you get pulled deeper still, because AI is a sycophant. It's this cheerleader that will cheer you on as you drive off a cliff. I see this happen over and over and over again, and it just widens that disconnect I was mentioning before, where AI is like, correct, good job, yeah, that's right, oh, the reviewers are going to be blown away by this, they're going to love it. In fact, we've pushed really hard to get AI to say bad things about a piece of research. You can do it if you put in the right prompts, but even then it's a bit murky. The problem is that these AIs are designed to keep you on the platform, like chat bots, and they do this by inflating your ego. It really won't say bad things. I even did a test of this: I sent it a photo of a really bad plate of food and said, hey, I've been cooking, what do you think of my cooking? And the AI was really positive about it. And I'm thinking, this looks like excrement, I can't believe anybody would want to eat this. The AI is trying to be positive, so positive that it doesn't make sense. As a side note, AI has gotten itself into lawsuits over this. It has happened with ChatGPT, where it was even egging someone on to be bold and brave and end their life. No exaggeration, a horrible story. Hopefully ChatGPT is going to clean that up. But it's just to tell you that this sycophantic behavior will cheer you on as you drive into a train wreck or off a cliff. So what does this mean in practice, and what have I seen happen? Well, I had one researcher doing a difference-in-differences analysis, and this wasn't a beginner researcher; this was someone who already had a PhD and was more advanced, doing a natural experiment design and wanting to implement a type of placebo test. I won't get into the methodological brambles of what they were trying to do, but the researcher was not only implementing the test incorrectly but using it in a way that made absolutely no sense. And yet the AI was in the background cheerleading, saying, yes, good job, correct, perfect, this is great, and suggesting refinements that were just completely losing the plot. This can happen because AI just doesn't have the context of a full research method, how it fits into a project, what you're trying to achieve. It can happen because, even if you have different chats, it doesn't perfectly carry the context across chats or even within a chat over time.
It's just optimizing for a particular response, not for the context of you as an individual researcher, your research paper, and what you're precisely trying to show. Again, it can be very helpful for learning methods, but I've seen multiple cases where the method has been applied in a very quirky way that's not standard in the field. And that gets me to my next failure mode. This was failure mode three, getting pulled deeper and deeper down the rabbit hole. By the way, nice to hear from you, Karsh, that the videos are helpful. Really pleased. If there is ever a video you would like to see or a live session you'd like us to cover, do let me know. I always listen to you. This channel is for researchers, by researchers; it's here to serve you better. As you know, the research landscape has completely altered since AI burst onto the scene in 2021-2022. So do let me know. And Sonia, be proud of yourself for this comment: "This video makes me feel a bit better when I say no to some feedback AI gives me." That's not easy to do. It takes confidence to say no. It's very hard to say no, especially when you're just starting out; you have this tendency to think, well, the AI must know more than me, because you feel small in your field. It's very common. When I was teaching at university, I'd ask anonymously how many researchers feel like a fraud, feel like they'll get kicked out of the program, or worry that somebody is going to find them out. It's about two in three who say they have those feelings, and it's about twice as common in women. If you're feeling those feelings, that's a recipe for saying, maybe I'll get AI to help me, maybe I'll turn to AI for some love and some comfort, especially if you're not getting any other feedback. And that's what can run you smack into these failure modes. What you want instead is real human feedback and perspective, even if that's tough love. Like I said, the researchers I work with call me fierce but loving, and that really describes what you need from a mentor at this stage. So that's failure mode number three, this false encouragement; AI is a sycophant. Humans are going to challenge you. Supervisors are going to challenge you. Reviewers are going to challenge you. And yet AI is just going to keep praising you, even if the idea is incoherent, rubbish, nonsense, excrement. It's going to congratulate you for it, just like my terrible-looking cooking that I tried to get it to call terrible; it just didn't want to say what needs to be said. Things like: this outcome doesn't match your exposure, your question makes no sense, your design doesn't answer your research question, this is completely impractical and unfeasible, like what happened with the advanced researcher who was getting praised for a placebo test that was completely losing the plot. All right, guys, this gets me to failure mode number four. This is where frameworks collapse, the logic just completely breaks, and projects quietly die. And that's because AI doesn't know systems. For example, I see this a lot with systematic reviews, which, look, I'm biased, but we probably have the best systematic review training in the world. If any of you from our community are here, let us know honestly, no filter, what you think about that.
But that's because it's truly step-by-step, it sits within a coherent system, and it's faster than working with AI because you don't fall into these failure modes. The challenge with logic breaking is that AI doesn't understand a PICO model properly. It doesn't understand how to complete a PRISMA diagram. It doesn't fully understand causal inference as it's done in your field. It doesn't understand the ins and outs and nuances of study designs, or the order in which you have to do research steps, or how to set conceptual boundaries. So it ends up mixing things that shouldn't be mixed, and that can create a lot of incoherence. Let me give you an example of one I saw recently. A researcher came to me who was getting lambasted by a supervisor for some unworkable methods. This was a systematic review. For those of you new to the term, and not to lose anyone, a systematic review is a type of literature review where you create a reproducible process to find articles and then analyze them; it's something I highly recommend to researchers starting research for the first time as their first paper. The researcher had done a type of quality assessment of the papers using a PROBAST tool, but had used it as an inclusion/exclusion criterion. I know this is getting into the technical weeds, but this is something you would never do. The quality assessment is a separate process that you do after papers are included. Now, you might have made a case that this was an innovation, a neat thing to do, but it completely breaks the conceptual logic of PRISMA and how systematic reviews are done. So the student had done this quirky thing that was just incorrect and would have been blown out of the water. Even worse, the researcher had gone and pre-registered this quirky method and then had to explain a deviation from the protocol. It was just a mess, needless to say. And this is what I keep seeing happen: the methodological logic gets broken, and sometimes it can be even more egregious. On my channel, I've got a video about a researcher where AI started creeping weird acronyms into the work; the faculty at that university instantly recognized these very strange acronyms, knew it was AI writing, and that student ultimately got expelled. And the point is not just that you need to follow the ethical criteria of your class, and it's not really about the student getting expelled. It's that AI does some really quirky things but gives you the sense that they're normal. So it creates this beautifully formatted but completely incorrect method. This is a logic break, and it's one of the hardest failures to detect, because the text you're producing might superficially look clean while the inside is rotten. So do be aware of this logic breaking. Again, this comes from AI not having an internally consistent system of how research and publishing work. I'm just checking the chat too, guys. Good morning. Oh, hey, Justin. I love seeing our new members join for the first time. Yeah, we're sharing today some of the more subtle but insidious errors AI is making and why AI is becoming the number one source of research errors. Hey, Ananda. Yeah, definitely, we're synonymous with amazing SLR training. And cool, yeah, a recent video on my channel shows you our academic writing template, and this really just happened to me; it's a true story.
I had to write a paper in 24 hours. Marina knows about that episode, because I was complaining to people around me: how did this happen? I'd missed the internal deadline. But our academic writing template really does take the guesswork out of writing. It's even faster than AI when you know what needs to go where and you have a format you can use. Once you see that formula, it simplifies the whole research process. Okay. And I see here, William, thanks for sharing failure mode number one: trying to narrow down the scope for a systematic review, with AI contradicting itself on how to advance with the project. Yeah, AI just can't really get PICO models right. I say this as somebody who has really tried to get AI to get PICO models right. We have a Fast Track mentor, an AI tool trained on our systems and data, and it does okay, but it really struggles with PICO models. So we're still trying to tweak that training and create the kind of AI that could work, but it would need to be a bespoke, custom-trained AI for this system of knowledge. So yeah, William, I'd encourage you to check out our topic accelerator course. Actually, I'll show you after this; thanks for bringing it up. We've got a step-by-step guide that I'll share with you on finding a winning topic. It uses our convergence method and does two critical tests, a duplication test and a feasibility test, before setting up the PICO. It's the first gate to pass before going on to the next step in the course. So I'll share that with you; you're going to get a ton of value out of it. And Sonia makes a good point here. Sonia is saying you get assigned a literature review without any guide; they just drop you in the deep end and say, okay, figure it out. And that's what leads to researchers hunting around on YouTube, hunting around for knowledge, taking a piece from here and a piece from there that may or may not make sense for their field or their exact project. They turn to AI, and AI is like a sweet siren call, this cheerleader pulling you down the rabbit hole, and then you end up with broken logic, and you hate your life. But okay, let me keep going. That was failure mode number four. Let's get to our last failure mode, and this is the spinning wheels scenario. It's this feeling of, I'm going fast, but if you step back and look, you're going nowhere. AI produces an illusion of progress. You can generate paragraphs, summaries, outlines, definitions, pages upon pages of text with a click. But none of it deepens your understanding. None of it necessarily strengthens your argument, answers your research question, or brings you closer to publication. This happened to me recently. A researcher came to me who was in the eighth year of their PhD, and they had written a 78-page literature review. 78 pages. If you just looked at it, it was impressive. But the literature review did not have the necessary ingredients a literature review has to have. A literature review follows a funnel: you start a bit broader and narrow it down, and you want to end with your gap, which then aligns with your research question, what you want to show, your thesis aim, and glides into your methods. That's what we call our North Star alignment sequence.
Everybody who gets that right at the end of their lit review almost always completes and does great. But AI had helped them write faster, not better. And then there were other quirky meta things the researcher had put in, like saying this part of the literature review fulfills this criterion of the doctoral thesis, and that's not something that goes in there. It was very clear that AI had rammed some of this material in and produced troves of text. And you can recognize some of this text instantly when it's not really prose but broken-up bullet points, no transitions between sections, copied and pasted together. It's very hard. I will say this is some of the hardest surgery for me to do, because the researcher comes in with this false sense of confidence, thinking what they've done is good and that they just need some proofreading. And that's when I usually do a face palm and think to myself, help. I really wish we had started from the beginning and avoided a lot of this heartache and frustration in the first place. But this is how eight years of research time can disappear. It gives you the illusion that you're doing something, but you're just spinning your wheels, and you're not actually moving in the direction you need to go. So these five failure modes aren't random. They happen because AI doesn't have the whole research blueprint in its proverbial head. It doesn't keep structure over time. It doesn't provide a good check on feasibility. It's not really good at saying no or being fierce. It is good at being a cheerleader, but it takes you down a rabbit hole. This is exactly why, while it is okay to use AI, your job is not to let AI do the thinking for you. The analogy we really work with, looking at this car again, is to think of yourself at the steering wheel, with AI as an accelerator. If you have bad foundations, if structurally your project is not right or doesn't have the right ingredients, AI is just going to multiply and accelerate bad research. I can do research faster with AI because I know how to enhance my research. It's not AI-powered, it's AI-enhanced, and I'm still sitting at the steering wheel, still in control of the machinery. That's because I've mastered the fundamentals, and I've also done this for two decades. So what I always encourage researchers to do, sticking with this analogy, is master the fundamentals first; then you'll be able to bolt AI on and use it in a highly effective way. For dissertations, for example, we recommend a tech triad that does include Grammarly, Zotero, and LLMs, but with specific use cases. And you're going to see some of those specific use cases in the downloadable template that I'm going to drop in the description of this video later on. So what do you think, guys? Do any of these failure modes resonate with you? I'm going to turn to your questions and to the workshop questions that were submitted today. Let me just take a couple of the comments. Justin says here: AI is a great tool but has its limits; it's not a validated tool in research and also gives false information. Exactly. Again, this is deeper than just the hallucinations and fake citations, which are well known. And he asks: what advice would you give to researchers and new folks in the field who still need to do their due diligence correlating research?
I had another question that somebody submitted to the workshop today; I'll take it now: how do I use SciSpace to search for papers? I recommend good old Google Scholar. They're introducing a new Scholar Lab, a Google Scholar AI. It works, sort of; it's not great. Google Scholar gives you the information you need. It already gets relevance right at the top, so you're going to get the more highly cited, recent papers closer to the top, and that helps you forecast the impact of your paper going forward. So I recommend, Justin, in answer to your question, getting the fundamentals right. There's just no shortcut for that. And if you are feeling unconfident, that's where supervisors come in, a supervisor or a mentor. And if you don't have a real mentor, well, look, I'm fortunate. I only got to where I got to, being a professor at Harvard, Oxford, and Cambridge, publishing over 400 peer-reviewed articles, winning over $10 million in competitive research funding, and running a team that at its peak had over eight postdocs and a small army of master's students, by having lots of mentors along the way. That taught me everything, from how to consistently find topics to how to scale my research and run effective labs and big teams. So you need that mentorship. It could come from your supervisor, it could come from us, it could come from somebody else, but you really need it. You need somebody. Smart people will ask, how do I do this, and try to figure it out. The smarter person will ask, who can show me how to do this? Which of those two people do you think is going to figure out the answer faster? It's the person who asks, who can show me how to do this? That's the real cheat code. That's the real shortcut. People want AI to be that person, but it's just not what AI is yet. It might be in the future, but it has the failure modes I've shared that prevent it from living up to the hope, promise, and potential people all too commonly think it has. And Utkarsh, that was the question here: SciSpace for finding a literature review gap and doing literature reviews. I've recommended SciSpace before because I think it's a really great tool for engaging with papers. If you look at the poll I posted on my channel a few days before this workshop, SciSpace is great for helping you unpack complex topics, wade into and understand papers, and engage with a PDF. What I don't personally love, and it's not something we really advocate or train, is using AI to do literature reviews for you. We tend to use AI more as a sense check, to stress-test, and we intentionally frame it to provide critique. This has become increasingly important, especially if you're submitting a paper to peer review, because a lot of reviewers are becoming lazy and using AI to do the peer review for them and then just humanizing it. The irony is they would never let their students do this in their classroom, but they are doing it with peer reviews. So it behooves you to know what the AI is going to say about your paper, whether good or bad; that is important knowledge going forward. And Jasmine asks, what are your thoughts on using AI to help strengthen a research question?
I really love the PICO model for refining a research question, to give you clarity and boundaries, and also finding your conceptual nearest neighbor paper to make sure you're not duplicating existing work and to ensure that what you're doing is feasible. Let me share the worksheet I have with you; I think this is going to help William as well. In fact, let me share my screen in a second and show it to you. Let me see if I can pull it up. One second, I'll pull it up in the background. Sometimes while I'm live on screen, things aren't as fluid or smooth as I would always like. Okay, I think I will be able to do that. It may not work, but I'll try in a second. How to bypass Turnitin checks? Look, the AI checks are just not brilliant, honestly. I've had my own work, which we write with our academic writing system, get flagged multiple times for AI, and I 100% wrote it myself; no way I used it at all. I even submitted, as a test, a paper I did back in 2017, and it came up as something like 40% AI, but that was before AI existed. So this is a bit of a challenge. Oh, Philly. Hey, good to see you, Philly. Philly is with us. Philly is just cruising. We're about to bring Philly on the channel later on. Philly ran into some of this early on. And Philly, if you don't mind sharing some of these failure modes, not to put you on the spot, but I know some of them applied to you, and you got through it. You did the hard yards. You did the hard work. You got your first systematic review published, and you just published a paper in a Q1 journal. You got full funding for a PhD, and you did all this while running a business and being a master's student in engineering. I'm just really proud of your success. I can't wait to bring you on as a case study. You even had to overcome a rejection from peer review, a rejection where you had heavily relied on AI. I know I was being fierce but loving, with emphasis on the fierce in your case, and for the good, and it turned out really, really well. Oh, Nehal, good to see you as well. We need to catch up later; I want to hear about your latest papers and publications and how you're doing in Ireland. So lots of good comments, guys. Hey, Philly, good to see you. I'm going to keep going here. Let me see if I can pull up this sheet. Give me two seconds to see if this will actually pull up. Oh, here we go, it looks like it might. Let's see if I can get this to work. Let's come here and pull up the defining-the-topic step. I don't know if you guys can still see this. Let's see if it'll open up the PDF; this is going to be the real challenge. Okay, yeah, so right here. In defining the topic, we'll use our convergence method to get three to five areas. We'll assess them for broad feasibility. We really like to look for low-hanging fruit; we don't want you to do the 10-year project right now, maybe later on, but this isn't the time. We really want you to find the gap. And we also want to make sure there's a lot of debate, because sometimes I see researchers with fantastic ideas, but it's a conversation few people are having. You could do great technical work, but if there's no debate or active discussion or recent conversation, it's just going to be tough to publish. So we even show you a method for how to forecast your paper.
So we want you to look for highly cited, recent papers and get a grip on the debate. Then, once we have some topic candidate ideas, you have to do two tests. One is this important conceptual nearest neighbor paper: you always want to find the paper that's closest to yours to make sure you're not duplicating existing work. And if you're doing this in the topic accelerator for systematic reviews, you need to make sure there are enough studies to review. We have a different topic accelerator for other kinds of topics. But yeah, William, drop me an email and I can share this with you later. I was going to pull up a whiteboard, because where I want to go next, and I'm going to put my headphones on for this, is to cover some of the submissions we got this week. We have a really good one here. Let me just pause for a second; I need to get my headphones on so I can listen to this along with you. This is from Melody. Two seconds, guys, let's see if I can get my headphones on, and we'll unpack this challenge together. Now, I don't think Melody has been using AI here, from what I can tell. But let's see what she submitted. Here we go.
[00:38:12] Speaker 2: Hi, Professor. I hope you're doing well. I really do enjoy your videos and I have found them quite helpful. I've actually been following one of them, but I am stuck on this particular part of the process. I'm looking for feedback on my research process for my master's thesis, specifically the literature review. Based on your video, I believe I'm encountering type two errors, and I'm not really sure how to refine my keywords to fix it. My research question is: how do racialized identity markers experience cancel culture? The keywords I was using are racialized identity markers, cancel culture, and experience. I followed the process in the video, but when I was doing the relevance test to see if there were five articles from Google Scholar that would appear within my Web of Science search, the authors often did not come up or the articles did not come up. So I'm wondering what that means and how I can fix it. I've tried going into those articles, looking for the keywords within them, and adding them back, and I was not really able to figure it out. The relevance test still didn't work at that point. So now I'm wondering, maybe my keywords are just incorrect. So yeah, any guidance or suggestions that you could provide, since I'm kind of stuck here? I'm not really...
[00:40:01] Speaker 1: Okay. Look, Melody, thanks for sharing that with us. I know it's not always easy to share things and put yourself on the spot, but that's exactly what our communities, and specifically this live session, are for. I want to pull up a whiteboard, because I find a lot of the researchers I work with are quite visual, and I think this can help get down the nuts and bolts of what you just shared with us. So what I understood, and I was just writing this down, is that you asked: how do racialized identity markers experience cancel culture? Now, I don't know if there's a "they/them" going on, but racialized identity markers aren't what's experiencing cancel culture; people are. I like that your topic is in the form of a question; that's really nice. But inherently there's something a little bit quirky about the logic here. If we break this down into a visual, stripped-down logic, something called a DAG, a directed acyclic graph, it's a really helpful way to break apart the nuts and bolts of your study. And I think your logic is actually more like this: how cancel culture is affecting racialized identity markers. See how it's hard for me to figure out which side is the exposure and which is the outcome variable in your setup? So I'm not really sure about this research question. I find it really helpful at this stage of a topic, especially in social science, to also ask yourself what you want to show, to make sure you don't drift from your core passion, your core interest and ideas. Reading between the lines, maybe you want to show that certain racial minorities either have a lot more cancel culture directed towards them, or that it's much more devastating or traumatic for them in terms of mental health. I'm not entirely sure. And if you don't have this logic really sound, it's going to be hard to go forward. The other thing I want to point out is that a literature review is a funnel. Now, this is a thesis, so I don't know if your whole thesis is the lit review, or if this lit review is part of a thesis that's going to glide somewhere. But typically, the point of your literature review is to glide to your gap, and that gap is then going to feed into your research question, which directly answers it. That's going to form your thesis aim, and it should feel almost inevitable that your methods come next and answer the research question, so you have this nice bridge coming out of your lit review. So sometimes you need to go a little bit broader. Two things, then. You've chosen to do this as a systematic review, and typically, if you're doing a systematic review at the master's level, that is your entire thesis. So I want you to take a step back, get your conceptual nearest neighbor paper like we chatted about, and really do the feasibility test. I think you're struggling to find papers in part because you set up one of your keywords as "experience", which isn't a keyword. Usually you want the nouns, not the verbs. So that's not a keyword; you want your keywords to be cancel culture and, I guess, these racialized identity markers. But this is, again, a little bit quirky. Forgive me if I'm getting this wrong, and do drop into the chat.
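As an aside, for anyone repeating Melody's relevance test themselves, here is a minimal sketch of the underlying logic, with hypothetical titles only: take a handful of benchmark papers you already know are relevant (for example, found via Google Scholar) and check how many your database search string actually retrieves. The ones your search misses are the type two errors, and the usual fix is correcting or broadening a keyword rather than piling on more terms.

```python
# Illustrative sketch only: a "relevance test" for a search string.
# Benchmark papers you already know are relevant (e.g. found via Google Scholar)
# are compared against what your Web of Science / database search returned.
def normalise(title: str) -> str:
    return " ".join(title.lower().split())

benchmark = [                                   # hypothetical, known-relevant titles
    "Cancel culture and racialised minorities on social media",
    "Online shaming, race, and mental health outcomes",
    "Public figures, ethnicity, and experiences of deplatforming",
]
search_results = [                              # hypothetical titles your search returned
    "Cancel culture and racialised minorities on social media",
    "Celebrity scandals and audience backlash",
]

retrieved = {normalise(t) for t in search_results}
missed = [t for t in benchmark if normalise(t) not in retrieved]

recall = 1 - len(missed) / len(benchmark)
print(f"Benchmark recall: {recall:.0%}")
for t in missed:
    print("Missed (possible type II error):", t)   # revisit your keywords for these
```

But back to the framing of the question itself.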
I think what you want to see instead, and I'm going out on a limb here, okay, is maybe the impact of cancel culture on, say, racial minority groups. Or maybe... I'm not entirely sure. This would be one guess. Even this is a little bit woolly, but I want you to make sure there's enough for a literature review, at least five to ten studies if you're doing a systematic review. Yours reads a little bit more like you might do a traditional literature review and then maybe a case study of a racial minority who was canceled, and I think that would be a more interesting setup. So for you, if we were starting from the beginning, I'd probably do a traditional lit review, because it's a master's thesis, and then glide into a gap and say, we need a case study of how racial minorities experience this, and look at the cancel culture around maybe a prominent celebrity. The one who immediately comes to mind is JK Rowling, but she's not a minority. So pick one who was canceled, and maybe go deep into their interviews, what they said, maybe their Twitter posts, about how it was affecting them, and get to this gap. Maybe your gap is going to be: we don't really have a whole lot of case studies of how racial minorities are the victims of cancel culture. I'm not sure. But I hope this helps you reconstruct it. I can see you were doing a really nice job using our methods, learning about type two errors; that's fantastic. But I'd encourage you to get this feedback and get this kind of alignment. This really speaks to our North Star alignment sequence, which is something we really encourage you to get right for your thesis. When all these things fit into place, there's almost this moment people describe to me where their research clicks into place and everything just makes sense, and there's clarity. I want you to have that feeling and that moment. Thank you for sharing this with me. Do send me an email, and drop into the chat if that was helpful; I'm going to check the chat now because I can't see it while I'm sharing the screen. Then let me go back through some of the questions. We've got some time for questions and comments, and I really like the comments that came through. Tata Raji, who is also one of our ambassadors, says staying at the steering wheel is best. You know, you don't realize the importance of staying at the steering wheel until after you've put AI at the steering wheel and it has crashed the car. Once that's happened to you, you will never do it again. I just hope that from this channel you don't learn the hard way like a lot of people do. And Feli says the same thing; people who've been there stay at the steering wheel. 100% true. Let me just take some of your comments here. The guide on how to find your topic: yeah, a lot of these guides work within a system, so they do well standalone, but they work best with feedback and input from the community, and that could also come from a supervisor. So yeah, Sonia, if you'd like to send me an email, we can definitely share that with you. And Karsh has a question: can you discuss a paper in parts? Karsh, I'm not sure I fully understand your question, but can you maybe clarify? We do like to break down the paper.
We have a full how-to-write-a-paper, step-by-step course that uses our inside-out method, where you start with the methods, then go to the results, then the discussion or conclusion, and then the introduction, in that order, so you don't get lost. It's also more intuitive, because you're writing while you're doing things. But I'm not sure if that's 100% what you mean. And Feli, guilty as charged. But you came out on the other side, Feli. I can't wait to bring you on; I think it's going to really inspire a lot of people. We're going to start wrapping up here in a second. Dusk says: I agree about not doing a full literature review with AI, but I found the Elicit systematic review feature to be good for finding things you might have missed. I'm about to release a video where I actually test out Elicit. I do think Elicit is probably one of the best tools for playing with your PICO a bit; it gives you some suggestions from AI, and I think they've done that incredibly well. The challenge with getting AI to do screening for you is that you're still going to have to do the screening yourself. You're still going to have to check the AI, and by the time you've done that, you've done the screening yourself. So I don't find it that helpful. And the problem with a lot of the AI search tools is that they're restricted to, broadly, open access papers, so you're going to miss papers that you need. Look, it may change in the future, but the only technology we use for systematic reviews is Zotero. It's all you need, it's 100% free, and it's fast. Our researchers go start to finish investing about five to ten hours a week, even with no background experience, and they consistently get their systematic review submitted within three months. Five to ten hours a week, step-by-step, good old-fashioned Zotero, one paper at a time. It just works. If you've had good luck with Elicit, though, do share the use case: what you found it really helpful for in the systematic review process or workflow. Is it the writing? Is it the analysis of the papers? Does it do something better on screening than, say, Covidence or other tools? Convince me. I'd love to be convinced that it's better than Zotero; I just haven't seen that happen yet. We are playing with coming up with a better AI tool ourselves, but I haven't been super happy with it yet. Marina pointed out earlier that we do have our FastTrackGrad mentor, which links to our trainings and methods and also parses through my YouTube channel, which is a wealth of content; it just isn't that easy to figure out which video at which time is going to answer which question in the way our bespoke trainings are laid out as a step-by-step process. So yeah, thanks for sharing that with us. Guys, a lot of fun in this session together. Ukash has one more; I'll just take a couple more questions. Start with a paper and break it down: why the topic, how did the author justify the gap, how did they arrive at the hypothesis and conceptual framework? Ukash, that's a great idea, and I think we might do a paper-breakdown session. That could be a lot of fun, Ukash. So if you can send a couple of papers, maybe from some different fields, we can break them down. That'd be a great topic for a live session, and I think that's one of the best ways to see the formula.
If you look at my channel, you'll see some how-to-write-the-discussion and how-to-write-the-methods videos, where we go through actual methods and discussion sections, and I show you and unpack what the ingredients of those papers are. So definitely check that out on the channel. And by the way, I saw you had a question about whether we'll share these failure modes. Absolutely. We'll probably drop that in the description, or maybe I'll blast it out through the email list; it's a good idea. We have a great email list, by the way, guys, if you haven't joined already. If you follow the QR code here and enter your email, or start an application, it'll get you on our email list, and you'll get our publishing playbook emails, which give you our best tips: simple little nuggets. For example, we had one recently about the limitations section, a section that people fear, and how to turn it from "oh no, I have this limitation" into field-level challenges that you overcome. So I definitely encourage you to get on that list, and I'll probably blast it out there. And, I mean, research is already complicated enough; just keep it simple. I see people coming in with these tech stacks, seven or eight different tools that don't even communicate with each other and are being applied at the wrong part of the workflow. Not only are they spending a lot of money on things they don't need, they're just getting more confused. But they get that feeling, that confidence before clarity, failure mode one; that happens. And Jasmine, hey, welcome, I love new people on the channel. You're not going to want to miss Friday. I've been a little bit remiss because of all my travels and chaos and trying to build our support from the inside out; I haven't been doing as many lives. I'm going to be picking that up and really accelerating our live sessions to one a week. And this one with Dr. Galea from the WHO, whatever your field, is going to be a doozy, because he has had a front-row seat with prime ministers and presidents around the world, and he knows what it takes to get research into policy, when and where it doesn't work, and the common mistakes researchers make. He has a unique blend of having been both a researcher himself and a policymaker, and I've had the good fortune of publishing, I think, over 10 or 15 papers with him over the years. So that's going to be a lot of fun. And Jasmine, yeah, the systematic review challenge is great. Of course, what we're able to share publicly is only a fraction as good as what we have privately, so I'd encourage you to check that out. And I'm about to release, if my video editor can get it done soon, a four-hour systematic review walkthrough on the channel. Even that is still not as good as what we've got on the inside, because on the inside you get this magic combination. Actually, I'll just share it with you here since we're all here together, two seconds. It's got this magic combination that people really love in the sessions. What you're going to see, for example, here in defining keywords and building out the core, is that it's broken into steps. For each step, you get a video on what to do, and you get a demo of somebody doing that step. And what's cool about these demos is that these are real projects, real papers that went on to get published.
For example, here, in a live demo on piloting and testing your search, you're going to see somebody actually doing that. This was Joanne, and here's where her paper was published; this was part of the process. I've never seen anything like that anywhere. People really learn the way they learn sports: they learn from watching others, not just by doing the drills and training themselves. So we do have a space for training through our workshops, which is great, but seeing others do each step really is the secret sauce in these courses, along with the step-by-step worksheets where you're actually doing the project step by step and making sure you've got it right before going on too far. Anyway, shameless self-advertising, sorry, guys, that wasn't my purpose here, but definitely check this out. And Friday is going to be at the same time as today; you can see it if you click the live button on my channel. It's going to be there. And awesome, William, yeah, you'll be chatting with Marina, potentially, who you just saw a moment ago, or Dadaraji, who commented earlier, or perhaps myself; we have different kinds of support, and we always want to connect you to the best support for you. Sometimes we're not a good fit, and that's okay, we'll still point you in the right direction. And on systematic reviews, we have a dedicated... the previous live I just did was on systematic reviews made simple. So I encourage you to go to my channel; it goes through some key considerations about systematic reviews and why it's a valuable paper, especially in a field like computer vision. We have some really fantastic researchers, both Anastasia and Nikki come to mind, in computer vision at the moment as well, so we can potentially get you linked up to that research group. But yeah, McBobX, thanks for sharing that with us. There are a lot of systematic reviews; if you look in IEEE, a bunch of them get published there. It's a very foundational paper. What I say about systematic reviews, just briefly, guys, and then we'll call it a day (I like to stay until the questions end, but not run too far over an hour), is that you want to get your foundations right. When I moved to England, I had always been a tennis player, but it rained a lot in England, so I switched over to squash, which is indoors. And I thought, oh, I play tennis, this will be easy. And I was just terrible. I didn't know how to play. I played for years and never got any better. The thing is, I had to get some training to learn the basics, and once I learned the basics, I could get on a path to improving. A systematic review is kind of like that. It makes you master the fundamentals of good research. You're going to have to do a literature review. You're going to have to read the forefront of your field. You're going to have to analyze and evaluate gaps and make a map of a future research agenda. You're going to have to learn the tools and skills in your field to do it properly. It's going to force you to do everything. When you come out on the other side of that, not only are you going to have a publishable paper, you are going to be a different person. You're going to have research literacy. You're going to have mastered research in a way that enables you to shape the world. So oftentimes researchers come to me saying, I want to publish a paper. By the end of it, they're sitting where Feli is, realizing: I have a superpower.
You're not just consuming this world of information and navigating deftly through the AI slop being produced everywhere (somebody just estimated that two-thirds of the internet is now getting saturated with AI slop); you have the superpower of producing truth in a world that desperately needs it. It gives me goosebumps just thinking about it, and I want that transformation for you. We are on a mission to enhance research literacy, in our own small way, to make the world a better place and move towards an evidence-based world, reality, and society. And thank you all for being part of that mission. And yeah, thanks, Mr. Hass, love from Afghanistan; I'm in Italy right now, so this is a truly global community. Hope to see you at the next session; don't miss the one with Gauden Galea. I check my emails, and I respond to everybody here on the live stream if you comment, so do get in touch. I will look forward to seeing you in the next video. Have a great rest of the day, everyone. Bye for now.