The 5-Part Equation That Makes a Paper Publishable (Full Transcript)

A practical checklist to test research gap, novelty, alignment, clarity, and journal fit—plus a simple way to forecast impact before you submit.

[00:00:00] Speaker 1: Before you finish your draft, before you write the cover letter, upload everything, click submit at the journal, and hope for the best, you need to answer one critical question: is your paper actually publishable? By that I don't mean, is my paper good? Is it decent? Did I work hard on it? Did my supervisor say it's the best thing since sliced bread? You need to know whether your paper has the critical, necessary ingredients that make the grade, so that independent reviewers out there will say it passes the bar. To help you do that, by the end of this session I'm going to leave you with a publishability equation and a five-question test that you can use today on your manuscript, or even on your topic idea, to know whether it's publishable. If you can answer yes to all five criteria, you're ready to submit. If not, the test will show you exactly what you need to fix. If you're new to this channel, I'm Professor David Stuckler and this is Fast Track. We are a fast-growing community of people all over the world, and what I aim to provide on this channel is the support I wish I had had when I was just starting out. Flash forward: I've been a professor at Harvard, Oxford, and Cambridge, I've had the great benefit of fantastic mentors along the way, and I've passed that knowledge down to my students at those leading institutions over the years, many of whom have gone on to become tenured professors. Now I want to make that implicit logic open access and available to all. So with that, let's dive in. As always, we start with our quick tip of the week, and today it's: define your community. Times have changed for PhDs, especially after COVID.
Things are much more online, dispersed, international. When I was doing my PhD at Cambridge, we had a true cohort effect: researchers literally sitting alongside each other, working together, helping each other. Today that feels rare, and many of the researchers I talk to feel like they're working in isolation, figuring everything out on their own. So I encourage you to find a community. There's an African proverb: if you want to go fast, go alone; if you want to go far, go together. Research is a long game, and you want to go far, so find your community. It could be through our communities, through your university, or through other programs. Find that tribe so you're plugged into a rich network in your field. Believe me, times get hard in a PhD, and it helps to have people in your network you can count on. With that, I also want to welcome those of you watching on Team Replay. Definitely hit like; it helps the algorithm reach other people who might need to hear this message today, and it helps our community grow. If you're on Team Replay, just comment "Replay". I read and reply to every comment on the channel, and as ever we'll have time for questions at the end of the session. So let me dive in. I'm going to pull up a whiteboard, as I like to do, because many of you are quite visual. Some of you are listening in the car or on the drive to work like a podcast, so I'll also explain the logic so you don't need the visual. Okay, let's start: what are the five essential ingredients of a publishable paper? Let me share my screen; I hope you can all see this. We like to keep things simple, so I think of publishability as a function of a few main ingredients. One is the gap.
One is the value of your paper. One is its alignment, and I'll explain that in a minute. Another is its clarity. And the fifth part of the equation is the fit, specifically with the journals you're targeting. I'll go through each of these ingredients in a second, but you can think of this like multiplication. If the value of your gap is zero, or the potential value you add to the field is zero, or the fit is zero, you can have all the others in place, but you multiply by zero and your publishability is zero. You have to have all of them, and again, it's multiplicative. If you have a very weak gap, you can have everything else, a very clear paper, a great fit with the journal, but if the gap is very small, you're going to have low publishability. That's why these are indispensable ingredients for your paper. Let me know if you're struggling with any of these as I go through them, and again, by the end I'll give you a five-question test. So let's go through the first, something we spend a lot of time on on this channel: do you have a real gap? Sometimes I see people come with a very amorphous, weakly defined gap, like "there's just not enough research." That's a start, but you need a specific gap. What exactly is missing in the field? It's almost like you've taken your reader right to the edge of knowledge and said: this is where the cliff drops off, and we want to build a bridge and get over there. You need to take them to that gap.
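The multiplicative logic described above can be sketched in a few lines of Python. This is purely illustrative: the 0-to-1 scores and the function itself are a hypothetical way to visualize the idea, not a validated metric.

```python
# Hypothetical sketch of the multiplicative publishability idea.
# Each component is scored from 0.0 (absent) to 1.0 (excellent);
# the scale and the scores below are illustrative, not a real metric.

def publishability(gap, value, alignment, clarity, fit):
    """Multiply the five components; any zero zeroes the whole result."""
    return gap * value * alignment * clarity * fit

# A clear, well-aligned paper with zero journal fit is still unpublishable:
print(publishability(gap=0.9, value=0.8, alignment=1.0, clarity=1.0, fit=0.0))  # 0.0

# A weak gap drags the result down even when everything else is strong:
print(publishability(gap=0.2, value=0.9, alignment=0.9, clarity=0.9, fit=0.9))
```

The point of the multiplication, rather than a sum, is that the components cannot compensate for one another: a strong fit cannot rescue a missing gap.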
The gap could be a missing population, a missing timeframe, a methodological weakness, a conflicting theory, contradictory evidence, or a poorly understood mechanism. But you should be able to articulate your gap very cleanly; reviewers and editors both need to see it. If they don't, you'll fall prey to one of the most common reasons for rejection, one of the biggest threats to publishability: they don't see novelty. And that brings us to the next point, so I want to make sure you have this clarity about your gap. One more thing: gaps have different values. Not all gaps are the same. I mentioned a missing population; researchers sometimes say, this research hasn't been done in Ethiopia before, or it hasn't been done in Canada. Sometimes that's valuable for the Canadians or the Ethiopians, but merely confirming something we already knew from, say, the US or India might not carry big international value for publishability in major American journals. It might be valuable for the Ethiopian journal of something-or-other, but you have to recognize that some gaps are inherently worth more than others. At an early career stage, if you want to aim for Q1 papers, you need Q1-level gaps; really, you need to be at Q1 level in all of these areas. So when you're making sure you have a real, specific, well-defined gap, you also need to make sure it's meaningful. For researchers just starting out, that can be hard to judge: is this meaningful? Is this valuable? There are ways you can start to see that.
I'll share that in a second, because it takes me to the second point. Okay, you've got a real gap, but do you have a clear value add in your paper? Defining a gap is good, but you also have to deliver on it and actually fill it. Saying something's missing is step one; your research needs to come directly in contact with that gap. A clear value add means your paper changes what we know about that gap. Imagine your paper were gone tomorrow and had never existed: would we have lost valuable knowledge? Your value add could take many forms. In the clean example from before, it would be: this hasn't been studied in Ethiopia or Canada, so that's the value I deliver. But there are other routes: a more up-to-date analysis, a stronger synthesis, a new conceptual lens, or new methods that shed light. There can be all sorts of value add, but you need to be able to articulate your contribution and novelty in relation to that gap. If you cover these two, you've gone a long way toward optimizing your publishability formula. Now, how can you tell whether these two components of the equation are meaningful and important to the academic literature? There's a little trick: forecasting your impact. I often cite a statistic that gives me nightmares: the median number of citations a research paper receives in the first two years after publication is zero. Sometimes you think, well, I published, so I'm good. But you want to do better than merely published.
Many of you are doing this because you want to do something important and meaningful, not just pad your CV, so it behooves you to take some time to ensure that what you're doing will have impact. The easiest way is to look for what we call a conceptual nearest-neighbor paper: the paper in the field most similar to yours. That paper is often a benchmark for your gap anyway. To claim a gap, you took us to the edge of knowledge, so you should be able to point to the paper most similar to yours and say, that's where our knowledge drops off. When I talk about forecasting your impact, I simply mean: go into Google Scholar, look at those papers, and get a sense of their citations. If the papers closest to yours are getting zero citations, guess what: your paper is probably going to get zero citations too. The best correlate of your paper's likely impact and value, which you can infer from these two components of your publishability equation, is the performance of similar papers. Is that a perfect correlation? No. Are there exceptions? Yes. But it's an objective indicator that gives you guidance when you feel unsure. All right, any questions? Do pop them in the chat. Tom says, hey, Prof. Hey, everybody, good to see you. Tal says, hey, Prof, how are you? Hey, Tal, good to see you again as ever. Edo says, I'm skeptical about this page. Edo, let me know what you're skeptical about, because sometimes that means I haven't done a clear enough job of explaining things. If you're confused about something, it might not be clear to somebody else either, so let us know. I really like those tough, challenging kinds of questions.
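The forecasting heuristic described above, using the citation performance of your nearest-neighbor papers as a crude predictor, can be sketched as follows. The paper names and citation counts are made up for illustration; in practice you would read the numbers off Google Scholar by hand.

```python
from statistics import median

# Hypothetical citation counts for your conceptual nearest-neighbor
# papers, as you might read them off Google Scholar (illustrative data).
neighbor_citations = {
    "Nearest-neighbor paper A": 0,
    "Nearest-neighbor paper B": 2,
    "Nearest-neighbor paper C": 1,
}

def forecast_impact(citations):
    """Use the median citations of similar papers as a rough forecast."""
    return median(citations.values())

print(forecast_impact(neighbor_citations))  # 1
```

The median, rather than the mean, keeps one outlier hit paper from inflating the forecast; if the median here were zero, that would be the warning sign discussed above.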
We have IndiaCountry4569: for the latest research in any field, kindly suggest websites. Yeah, for running these kinds of tests along the way, I just recommend good old-fashioned Google Scholar. I see people stacking up AI tools, all these fancy tech stacks and gizmos, and often that's a sophisticated way of avoiding doing the thing you actually need to do. Research is already complicated enough; keep things simple here. Let's go to the next part of the publishability equation: alignment. Alignment is really important: you need a method that actually answers your research question. I'll abbreviate it RQ here. This is critical. Sometimes people say, there's this gap and I just happen to have this data, or, I like using this method because it's what I always use. But the method needs to deliver on your research question and your gap, and deliver that value. I do sometimes see misalignment in papers. Your methods should flow logically and line up with what you're aiming to deliver. Believe me, look at your paper and make sure these two sync up, because I sometimes see things promised in the introduction that the paper can't actually deliver. If you've done that, you've defeated your publishability. The next ingredient is clarity, and this is always the case: you can have the best results and findings, but if the reviewers don't get it, your paper is not going to be publishable. A lot of beginners fall into this and get very frustrated when reviewers come back and ask for things that were actually in the paper.
I used to get very upset about that and think, oh, these dumb reviewers; you have all those Reviewer 2 memes for exactly that reason. And yes, there are some curmudgeonly reviewers out there who are going to be difficult. But over time you realize: maybe I could have been clearer about that, maybe I could have made it easier. So have sympathy for your readers and reviewers. Make things so clear, so blindingly obvious, that they can't miss it. That's what you want to aspire to. Make sure your gap, your value add, and your methods alignment are showing. They can be in the paper, but if they're tucked away where readers can't see them quickly and easily, you're going to get rejected, either desk rejected or later, when the reviewers can't make heads or tails of your paper. Clarity especially needs to come through in your results section and your abstract, and your results and abstract need to line up. The abstract is supposed to be a mini snapshot of your whole paper, and you'd be amazed how many papers I see where the background of the abstract doesn't line up with the introduction of the paper, or where the abstract highlights things that aren't actually the highlight reel of the results. There should be synchronization; if those are out of sync, you've undermined your clarity and weakened your publishability. So at a minimum, make sure your results and your abstract line up. That's clarity. Let's go to the last criterion in our publishability formula: journal fit. This is another top reason for rejection: the paper is simply outside the scope of the journal.
Your editor asks two questions when looking at your paper. One is: is this of interest to our readers? Does it fit our journal? Journals have limited resources; yours could be a fantastic paper that simply needs to go somewhere else. Your publishability is zero if the paper doesn't fit the journal you're submitting to. There's a whole process we go through to help you find and optimize your journal choice, so that your cover letter, your introduction, even some of your citations line up with where you want to publish. It's an important step that a lot of people gloss over, so you need to get that journal fit. The other thing editors look for covers the earlier steps: is there a clear value add? Do the methods line up? Is it sound? Are the results important? Is it clear enough to follow quickly? But on journal fit: if you don't have fit, you're dead on arrival. So I hope you can see that in our five-part formula, these are five things you have to nail. If you've struggled with desk rejects, or with harsh reviews, see whether you can spot where you were weak in these areas; it will help you diagnose what to do. And that brings me to your final checklist, which you can implement today. One: can you state your research gap in one sentence? That is the level of clarity I want you to have. Two, and this should be linked: can you explain the value add of your paper, its contribution, also in one sentence? This is an exercise for you, and it should go in your paper as well; it's a diagnostic. Three: do your methods directly answer your paper's main research question, aim, or hypothesis?
Different fields set this up differently. I know some of you will say, well, that's obvious, but take a hard look: I see this out of sync all the time. Four: are your main results and findings, the real novelty and power of your paper, merging clearly with your gap and value add, easy to spot? I don't want readers to have to dig and comb through the paper to find this; I want it blindingly obvious. Assume they're in a hurry and make it impossible to miss. Five: can you clearly explain why this paper belongs in your target journal? Do you have evidence that your paper is a fit, not just "I think this journal looks good, it has a nice title"? Evidence like: they published similar papers just last year, there's an ongoing debate in this journal, or the editor has a call for papers on this topic. If you answer no to any of these, and be honest with yourself, you know what you've got to do. And that clarity is power. So let me know if your papers pass this publishability formula, or if you've struggled with any of these on your own. And remember, there's sometimes a view that the value of a paper is the hard work that went into it. Unfortunately, there's not necessarily a direct link between more blood, sweat, and tears and a better paper. Believe it or not, the fastest paper I've ever written, start to finish in less than a week, was later published in The Lancet. It had very high publishability metrics. There just isn't a direct correlation between hard work or sophistication in and value out, because so much of your publishability formula depends on the strength of your gap and the strength of the value you're adding. Consider a very powerful gap.
It could be something really important in the field that somebody has finally been able to shed light on. You can do that quickly and it will still have huge publishability, provided you deliver it with clarity, fit it to the journal, keep internal alignment in your paper, and deliver the value. All right, I'm going to take some of your questions; we've got a bit of time today, since I had fewer submissions this week. I encourage you all to take advantage of the session. There's a QR code on the screen if you want to participate and have me look at your video questions, drafts, manuscripts, anything at all. This is time I've blocked off and dedicate every single week, so definitely scan the QR code and take part. And of course, I'll leave a link up about our research communities if you want to check out what we do and see if it resonates with you; you can take a trial and give us a test drive. All right, back to the chat. Ama19128 asks: I would like to know if a method that's used in location A can be used for location B. This seems to come back to the population-gap question, if I'm understanding you correctly; I'm reading between the lines. That can be a low-hanging-fruit paper: this approach hasn't been applied in this population or setting, so I'm going to replicate the method but extend it to a new location. The better way to argue for the value is to ask: why is location B so interesting? What does it tell us that location A doesn't? Is there a reason to believe we'd get a different result in location B? If you have that kind of argument, the paper becomes of broader interest than just to the people in location B.
Otherwise, it can look marginal or small in value, derivative, a minor extension of what's out there. And that's okay: science mostly builds brick by brick, with only the occasional big punctuated break. It's not necessarily a bad thing, but it will shape the value of your gap. I hope that answers your question; let me know if I got it right. I'm interpreting short text messages, so sometimes I miss things. Next we've got Reverend David Laliakal. Hey, good to have you join us today. He says: I'm writing on the indigenization of military music in Kenya, and I'm having a problem accessing papers related to this topic. Okay, interesting. When you say you're having trouble accessing papers, I wonder if your topic is so narrow that there just aren't any papers written on it. I can't imagine a lot of ink has been spilled on the indigenization of military music in Kenya; I don't even know that there's much research on military music in Kenya at all. You're shrinking your space with each of these parameters on your topic: Kenya, then music, then military music, then indigenization of that music. My first thought was that maybe you couldn't access papers because you needed open access journals. But thinking about it further, I suspect your topic is way too narrow. I commonly see this, and it relates to the model we often talk about. If you're new to research, and forgive me if I'm wrong, I'm going to assume you are, sometimes the way people start is with a lit review.
Think of the lit review as a funnel, a strategic funnel that makes an argument for your study. You want to end up at the narrow end, where maybe you do something on the indigenization of military music in Kenya. If you're going to derive that from your lit review, that's what the lit review has to distill and spit out, so you need to go broader when you do it and try to access papers, as you described it. Look at the debates around indigenization of music; look at the debates in military music; think about the stops along the way. Maybe you then narrow to Sub-Saharan Africa, or African countries generally, or developing countries. I don't know exactly what your stops will be, and there's no right or wrong way here. Maybe you start with military music and the role it has played, then the indigenization processes, then what's been looked at in African countries, and finally you arrive at the gap. That rolls out the red carpet for your future studies. I hope that makes sense; I suspect that's where you're at at the moment. ToolsAndBusinessHack9841 says: hey David, I just joined your program today. I'm interested in biomedical tracking. How do I get the best three to five research topics I can submit from this area to my supervisor? Well, if you're in our program, I really encourage you to go through the Finding a Winning Topic mini course. There's a great step-by-step guide and worksheet, and we've got private workshops you can join to test your ideas out. You'll follow a two-phase approach.
I actually covered some of that in the last live, but the internal training you've got is even better. Go step by step through that worksheet, then take the ideas you come up with to our workshops. It will force you to run a model to make sure your topic is clear and well defined, force you to define your conceptual nearest-neighbor paper (it's built in), and ensure you satisfy the publishability criteria before you go down a dead end. So definitely check that out; it's step by step, so you can't go wrong even doing it for the first time. And Ama says yes to the earlier point; okay, cool, that sounds like I answered your question about location A and location B. Bungie001 says: always start with a literature review if it's your first time (I think you're quoting me back here). If you have a solid idea in mind, do you think I should go for a research idea instead? Secondly, how many authors do you recommend at maximum? This is a great question. Look, you can skip straight to a research idea; just think about the flow. If you do a lit review first, you will uncover gaps, so you have clarity that you've got a good topic. If you skip the lit review, you just need to make sure you've done some kind of review and know the field well enough to be sure there's a gap there. Often, when I have a solid research idea, it's because I've already done a bunch of lit reviews and know my field intimately, so I don't need a lit review to harvest ideas. I have a thriving research agenda and pipeline, but I've been doing this for 20 years. And sometimes a good research idea is handed to you by a supervisor.
It still behooves you to make sure the nuts and bolts of our publishability formula are in place: a solid gap, an identified value add, and a forecast of the impact your study could have, so you don't write a paper that scores zero on one of the components and is dead on arrival. I don't want that to happen to you. The other consideration with research ideas is feasibility in a short period of time. Sometimes I see people taking on projects that are vast, just too huge for the stage they're at. I strongly encourage you at early career stages to go for low-hanging fruit. So no, you don't have to start with a standalone literature review; just compensate for it. The paper itself will contain a mini literature review, your introduction, which is still a strategic argument for why your study needs to exist. Thanks for asking that. The second part of your question: how many authors do I recommend, minimum to maximum? The important thing is that you're the first author at this stage of your career. Later on you'll become the senior author, as I've done, or in some fields the authors are simply alphabetized, depending on what you do. I tend to be quite liberal with co-authorship. You can get quid pro quos: you bring someone onto your paper, they help improve it, and if they like your paper, they have an incentive to bring you onto theirs. That can multiply your publication output in the early stages, and it also links to the tip of the week: it embeds you in a community. You're no longer flying solo. Co-authors can make you more productive and spot things that aren't clear in your paper. So I do like co-authors, as long as they're meaningfully contributing, and you want to leverage them effectively.
I like getting full professors, those with more experience, to help with the framing of the paper, the narrative, the story. Use them at strategic points. You don't want your full professor helping you clean your dataset or write the rudimentary statistical code you could have asked ChatGPT to help with. You can, by the way, get ChatGPT to troubleshoot your statistical code and certain other methodological steps; it's a very good use of AI. So there's no hard and fast number. I tend to be quite liberal and inclusive with co-authorship to build my network; I was early in my career, and that's persisted ever since. That's my personal philosophy, though; not everybody will agree with me. Okay. Reverend David Laliakal says: thank you, Prof. David. This is my first time on your platform; I'm glad you picked my comment and responded, and I hope to follow through. You're very welcome, and I hope that captured the spirit of your question and where you're at. I would definitely encourage you: it's a common mistake in literature reviews. You need to go one level up, broader, up the funnel, so you have enough material to review. If there are no papers, there's nothing to review. And here's a very good question. UserGQ7up5HD9Z (does YouTube just assign names to you guys? I don't know how that works, but good to have you with us) asks: based on the five questions you discussed here, how can a systematic literature review or meta-analysis paper be framed? Great question. Same thing: you need to say why we need a systematic review, or a meta-analysis, on this topic.
With a systematic review, you've got two types. These are lit reviews on steroids, done in a systematic, step-by-step way, which is why I recommend them as a very first paper: the process is nearly foolproof, great for beginners. They're also more publishable than traditional narrative lit reviews, they deliver all the benefits of a literature review, and they end up being faster and easier to do. I'm a big advocate of them, and all the PhD students I've had at Harvard, Oxford, and Cambridge started with these reviews. Trace my students over the years and you'll see I put my money where my mouth is; it's been a straightforward formula for success. Okay, so in the systematic literature review world, there's qualitative synthesis and quantitative synthesis, the quantitative often being meta-analysis. So, for example, you could say we need an SLR with meta-analysis because that quantification hasn't been done before; that would be a justification: for the first time, we're doing a meta-analysis. Or maybe there have been reviews, but not on your topic. Or maybe there are a lot of scattered findings out there, but no review has brought them together to figure out what the overall body of evidence says. You still need to justify it the same as any other paper. What's the paper closest to yours? In your case, your nearest-neighbor paper is going to be another systematic review or another lit review. What did they not do that needs to be done? That rolls out the red carpet and makes the case to justify the existence of your paper. That's what the editors are looking for. So, user GQ7, does that help? Does that make sense?
If you want, add a follow-up question, and we can actually go look for that nearest neighbor paper to help you define that in your introduction. This is a critical part of your introduction, and it's probably going to feature in your cover letter as well. Thanks for asking that. And Mainari, you know, I don't get feedback in real time, and you're laughing about something. It could have been that I was just goofy or curmudgeon-like, but yeah, I make small little professorism jokes along the way. Not sure, I may have lost you there, but glad we can provide some amusement along the way. I think research is actually a lot of fun, and if you're not having fun while you're doing research, something has gone wrong. I'm not exaggerating, right? You guys should be getting into research because you're passionate about it. That's not to say it won't be a hard grind. It's like, well, you know, I'm having fun at the gym, I'm lifting heavy weight, it's painful, it's hard, I'm grinding through it, but it's fun, makes you feel good. There's a joy of discovery in research. Where it stops being fun is when you feel like you're making no progress. Imagine if I stopped feeling very good at the gym: I was lifting heavy and I didn't gain any muscle, or I didn't feel good afterwards. Well, yeah, I'd get pretty frustrated and not want to go to the gym anymore. So I think sometimes when researchers are left on their own to figure things out, and they spin in circles, and here you hit a landmine, here you hit another wall, here you hit another barrier, well, yeah, then you're going to get demoralized and frustrated and eventually feel powerless and want to throw in the towel. So yeah. And user GQ7 says, thank you, Prof. Okay, cool. So hope I got your question answered there. I would love to hear what you find out. We've got Bungie, or Bungie001, back.
Thank you for the advice and amazing videos. Sometimes we know we're making a difference, but we forget. Just wanted to remind you that you definitely are. Oh, thank you. Well, this is why I show up week after week after week. This is why I take the time out from being a professor to make these videos and provide the support. And if there's something that you guys would like to see, let me know. That's exactly what I respond to, because there are definitely gaps out there in the way people are trained. And again, I know that so many of you come to me and say, I've just been told to figure this out. I'm smart. I'm capable. I mean, you don't have to be a genius to do research. You just need somebody to show you how to do it. You need feedback. You need structure. You need guidance. So yeah, definitely. But thanks for showing up, Bungie. And Dadaraji, who's in our community, says, it's a lot of fun if you have guidance and support. Absolutely. And I know, Dadaraji, you too. You've had a tough ride over the years, and you were in that kind of cluster of having to figure things out all on your own for a long time. And I think community, too, makes things just so much more fun. And it helps with discovery. You want to be able to bounce ideas off friends and colleagues. You've probably noticed in yourself how you get the best ideas sometimes in those moments when you're in the shower or going for a walk. Having that sense of community enables those kinds of connections to happen as well. Oh, Mainari is saying, YouTube made up a cute name. Mainari, I just thought that was your name that you chose. Okay. But, yeah, definitely a cute name. So Tal says, my new co-author fell in love with my capability to do a lightning-speed SLR thanks to Prof's methodology. I think I've done five in the past two months. I mean, dang, Tal, that's impressive. It is cool.
I mean, Tal is a friend to our community and you can see his story. If you go on my channel, go to the playlist; Tal shared his story with others there. It was really inspiring to me and others because, remember, Tal, if you don't mind me sharing, you were about to throw in the towel and give up because you'd had a bunch of papers rejected, and you flipped it very fast to being the go-to person in your department for SLRs. And, yeah, once you crack the code and you see the hidden system in research, a lot of which you'll see I'm sharing on the channel, and if this stuff resonates with you, I encourage you to check out our systems, but once you see that, you can't unsee it, and it just unleashes a lot of productivity. And, yeah, Tal, you're an awesome example of that. That's very cool. You've got to share some of those with me. I love celebrating your successes. But five in two months? I'm not sure I could go that fast, honestly. I could go fast, but that's impressive. I'm a 19-128 says, I've really understood, and thank you. Okay, awesome. Thanks for that feedback. I like to make sure that it's all clear. Just a couple more here and we'll start wrapping up today. I'm doing my FYPRN currently literature review. Your videos helped a lot. I don't know what FYPRN stands for; maybe RN is registered nurse? I don't always know all your acronyms, and this is atlylakimv5s, but pleased to hear it. Yeah, somebody commented the other day, wow, your lit review training from four years ago is still accurate. Yeah, a lit review is a lit review. The process is kind of a century old. It hasn't changed. So this stuff is timeless. It's almost like the third rail that drives things forward that you don't always see. Tal says, I'd love to show you these SLRs. Yeah, I'm pretty busy, but you can submit them to our workshops. I'd love to take a look. And yeah, Tal, and Mainari, you're saying final year project.
Yeah, submit that final year project. I'm going to put up the QR code one more time for you guys to submit stuff to the workshop I'll be doing next Friday. I haven't decided the topic yet. We might do one on PhD by publication; that's one of the options I'm flirting with. But if you guys have a theme you'd like me to cover for the next workshop, do let me know. As ever, we do invite guests onto the podcast as well. We have Professor Martin McKee joining us, Professor Courtney McNamara, and I've got some special guests coming up as well that I'm really excited to share with you. I'm not going to lift the lid on that just yet, but it's someone really, really prominent in the field who's going to have just great insights for you. I won't say too much more, but I'll give you a hint: someone similar in stature to the editor of The Lancet, to put it in perspective. All right. Okay, guys, that is a wrap. I hope you all have a fantastic weekend, and I will look forward to seeing you next week. Think about that publishability formula. Be honest with yourself. Let me know in the comments how you stack up.

AI Insights
Summary
Professor David Stuckler explains how to assess whether a manuscript is truly publishable before submission. He proposes a “publishability equation” with five multiplicative ingredients: a clearly defined research gap, a clear value-add (novel contribution), alignment between research question and methods, clarity of writing and internal consistency (especially abstract–results synchronization), and fit with a target journal’s scope and audience. He advises forecasting likely impact by identifying a conceptual nearest-neighbor paper and checking its citation performance as an objective proxy for whether the gap/value are meaningful. He offers a five-question self-test: state the gap in one sentence; state the value-add in one sentence; confirm methods directly answer the main question/aim; ensure key findings are easy to spot and match the abstract; and justify with evidence why the work belongs in the chosen journal. He also emphasizes building a research community, being pragmatic about co-authorship, and avoiding overly narrow topics by broadening the literature funnel, especially for early-stage researchers. Q&A covers replicating methods across locations (justify why the new setting matters), difficulty finding literature (topic may be too narrow), choosing topics and literature reviews, author count norms, and framing systematic reviews/meta-analyses via nearest-neighbor reviews and clear novelty (e.g., first meta-analysis or updated synthesis).
Title
A Five-Factor Publishability Test for Academic Papers
Keywords
publishability, research gap, novelty, value add, journal fit, methods alignment, clarity in writing, desk rejection, peer review, conceptual nearest neighbor, citation forecasting, systematic literature review, meta-analysis, PhD publishing, co-authorship
Key Takeaways
  • Treat publishability as multiplicative: weakness in any one of gap, value, alignment, clarity, or journal fit can sink the paper.
  • Define a specific, meaningful gap—not just “more research is needed”—and connect it directly to what your study delivers.
  • Articulate your contribution (value-add/novelty) in one sentence and ensure the paper would be a real loss if it didn’t exist.
  • Check alignment: methods must directly answer the main research question/aim; avoid overpromising in the introduction.
  • Maximize clarity: make the gap, contribution, and key findings easy to spot; ensure abstract and results are synchronized.
  • Ensure journal fit with evidence (recent similar papers, calls for papers, ongoing debates) to avoid ‘out of scope’ rejection.
  • Forecast likely impact by finding a conceptual nearest-neighbor paper and using its citation pattern as a reality check.
  • For location-to-location replications, justify why the new context is theoretically or practically informative, not just new.
  • If you can’t find literature, broaden the topic and build a funnel from general debates to your specific niche.
  • Co-authorship can accelerate learning, quality, and network-building if collaborators contribute meaningfully.
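The multiplicative framing in the first takeaway can be sketched as a simple self-test: the overall score is the product of five yes/no answers, so a single "no" zeroes it out. This is an illustrative sketch, not part of the talk; the criterion wording and function names are paraphrased from the summary above.

```python
# A minimal sketch of the five-question publishability self-test.
# Multiplicative idea: one failing criterion sinks the whole paper,
# so the score is the product of the five yes/no answers.

CRITERIA = [
    "Can you state the research gap in one sentence?",
    "Can you state the value-add (novel contribution) in one sentence?",
    "Do the methods directly answer the main research question?",
    "Are the key findings easy to spot and synchronized with the abstract?",
    "Can you justify, with evidence, why the work fits the target journal?",
]

def publishability_score(answers):
    """Return 1 only if every criterion passes; any False zeroes the product."""
    score = 1
    for passed in answers:
        score *= int(bool(passed))
    return score

def failing_criteria(answers):
    """List the questions that still need work before submitting."""
    return [q for q, passed in zip(CRITERIA, answers) if not passed]
```

Under this framing, `failing_criteria` doubles as the fix-it list the speaker promises: a zero score tells you the paper is not yet publishable, and the returned questions tell you exactly where to work.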
Sentiments
Positive: Encouraging and supportive tone, focused on practical guidance, empowerment, and actionable diagnostics; acknowledges common frustrations (rejection, isolation) while offering clear steps to improve outcomes.