[00:00:00] Speaker 1: Today's session is a special one. I'm joined by Professor Martin McKee, one of the world's most published public health researchers, a former journal editor, and someone who's seen firsthand how manuscripts move from submission to real-world impact. Today we're doing a live question and answer session on publishing, peer review, and how research makes a difference beyond academia. The session is built around the questions you've submitted, and they're ones that people don't always get straight answers to. We're going to be talking about what editors notice when they first open a paper, whether cover letters matter at all, how reviewer responses can quietly sink a paper, and how journals handle multiple submissions from the same authors, among many other questions. So if you're trying to publish, or to avoid wasting months on the wrong moves, stick with us. If you're new to the channel, this is Fast Track. I set this up because I had a lot of difficulty along the way. In fact, it's only thanks to real mentors, like Professor Martin McKee, who's joining us, that I was able to accelerate my research and fly. What we've done with Fast Track is capture the insights that are often passed down in the ivory tower, from mentor to mentee, and make them available to all. So this channel is valuable wherever you are in your research journey, whether you're just starting out with no research experience at all, or you're already on the track as an early-stage or mid-stage researcher trying to scale your research systems. Before we dive in, we always start with one quick tip, and today's is about supervisor alignment. I'm seeing a lot of researchers coming back from holiday, refreshed with renewed energy. We just had a researcher in our collective community who had written an entire chapter draft. The problem wasn't the chapter; nothing was wrong with it. It just wasn't what their supervisor wanted, and it's going to lead to months of rework. So as many of you get back into research with your team, some of you doing dissertation work, take a breath before going too far and make sure that what you're doing is aligned with what your supervisors have in mind. That's going to save you enormous time later. So with that, Martin, welcome back, and great to have you with us.
Speaker 2: Thank you. Great to join you. Happy New Year to everyone.
Speaker 1: Hope you had a good, refreshing break yourself. And I hope your supervisees and students are also getting aligned with what you have in mind before going down the rabbit hole. We've got some really good questions, Martin, so I'm just going to dive straight in and hit you with them. Starting off, Andrea asks, and I'll see if I can post this on the screen to help others follow along, verbatim: what is the first thing that catches their eye, I assume she's referring to editors and reviewers here, when they open an article and gives them that signal that it is a good article?
[00:03:08] Speaker 2: OK, so whether it's a good one or a bad one, there are a couple of things that we look for. First, and probably the most important, is whether it's within the scope of the journal, because as a journal editor now, there is just so much material out there, and you get lots of papers that simply are not going to fit at all. So before good or bad, the first thing that catches your eye is whether it has anything to do with the scope of the journal. And then, and I should say different editors will be looking for different things, so in all of this I'm speaking personally, I think you're looking for something that is actually important and is going to have implications for policy or practice. Journals can't publish everything, and you have to be a bit selective. So that, for me, is important. Other people will have different priorities, and they will focus on maybe the methods or the results or something like that. But my first view is that a really good paper doesn't have to change the way we do things; maybe it changes the way we think, or adds to it. But it should be important, and it should be used and read. I think that's really key. Then, of course, we start looking at the methods, in particular whether the methods are robust, whether the sample size is large enough, whether it's generalizable. There are so many issues here where, as editors, we vary a lot. One view I've always taken is that I try to compensate for the fact that many journals will immediately be interested in a paper from the UK or the United States, but less so in a paper from another country they've maybe never heard of. I've always said we should be trying to redress that balance. That's a view of my own; not everybody sees it that way. But ultimately I'm looking for a paper that's going to be read by people, that's going to be interesting. And then you look at some of the other things. But other editors will take other views.
[00:05:33] Speaker 1: Martin, in doing that, you've covered the two biggest reasons for rejection: one, not being a good fit with the journal, and two, not having a novel or substantive contribution. When you talk about the methods, do editors let the reviewers take that on, or do they just do a sense check, a sanity check, does this look coherent, and then let the reviewers do the work?
[00:06:04] Speaker 2: Well, most journals now, simply because of the volume of submissions, have a high level of editorial rejects. Typically, you'll reject about four out of every five papers that come in, either because they're out of scope or because you know they're not getting through. It is becoming incredibly difficult to get reviewers. I'm no longer an editor-in-chief, but I'm an associate editor on another journal, and I'm currently handling papers for a special edition. This is a themed issue where we know who the reviewers might be; we have a good idea of who would be interested and so on. And typically, you're asking about 18 or 19 reviewers before you get one to agree, and then you've got to get them to actually do it and send it back. So if you feel a paper is not going to get through, you're going to reject it outright. First of all, because it's only fair to the author, since then they can submit it somewhere else; but secondly, because otherwise you're adding more pressure to your pool of reviewers, on whose goodwill you're depending.
[00:07:17] Speaker 1: Makes sense to conserve those scarce editorial resources. It is volunteer work, and the system is not sustainable as currently constructed; we just haven't found a better one yet. I saw an interesting stat the other day that made me think of this: someone had done an audit at a few journals and found 53% of peer reviews had high AI detection scores.
[00:07:42] Speaker 2: Yeah, I'm sure that's the case now, more and more. And then you make the point about whether the model that we have is actually fit for purpose, and I've been asking myself this quite a lot. I get maybe three or four requests to review every single day. Now, I can only do it for journals where I have a link. By chance, I got two today from the Lancet family of journals, which I did accept, because I know the editors. But honestly, it is three or four every single day. Nobody could do that.
[00:08:18] Speaker 1: Martin, here's the interesting thing on that, and we'll come to the other questions later. When I thought about this as a grad student, I reasoned that for every paper I submit, if that consumes three to four peer reviews, then for every paper I publish in a year, that's how many reviews I should be doing just to keep the system in balance. And at my peak, and you too, with 30 to 40 papers a year, that's about 120 peer reviews we should be doing to be fair to the system. For many of us, that's just completely impossible. So if you do get invited to do a peer review, it is a fantastic experience, and I recommend all of you do it. It's really good training; it's very hard to get that kind of practice as a scientist, and doing a peer review is one way to really sharpen and refine your skills without the emotional involvement you have with a paper you're very, very close to. Martin, let's take the next question. Andrea, thank you for sending that really excellent question. We've got several from Margarita, and we've also got a video submission coming up from Swati, but I wanted to continue with Margarita's because it fits this theme. She asks: how do you view multiple concurrent or near-concurrent submissions by the same authors to the same journal when all the manuscripts clearly fit the journal's aims and scope? She goes on to say: are these evaluated independently, or does acceptance or rejection of one manuscript implicitly affect the handling or perceived risk of the others?
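To make the back-of-the-envelope "review debt" arithmetic David describes above concrete, here is a minimal sketch in Python. The three-to-four reviews per submission figure comes from the discussion itself; the function name and the example paper counts are just for illustration.

```python
# Reviews you would owe per year to keep the peer review system in balance,
# assuming each submission consumes roughly 3-4 completed reviews.

def reviews_owed(papers_per_year: int, reviews_per_submission: float = 3.5) -> float:
    """Estimated number of reviews to complete per year to stay in balance."""
    return papers_per_year * reviews_per_submission

for papers in (4, 10, 35):
    print(f"{papers} papers/year -> ~{reviews_owed(papers):.0f} reviews owed")
# At 30-40 papers a year, this lands around the ~120 reviews quoted above.
```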
[00:09:54] Speaker 2: Okay, so first of all, this can become an issue whenever you're doing a theme issue or something like that, like the one we're doing at the minute in one of the journals, where we have a whole series of papers coming in and we want to review them both individually and for coherence with the theme issue. Now, some editors will not bother about the second part, but I think it is really important. What we decided to do there is that the papers are all reviewed independently, but we have invited two other reviewers who have agreed to look at the whole series, to look across the papers when we get them in. Now, that adds a layer of complexity. It's also challenging because ScholarOne and the other manuscript handling systems don't really lend themselves to that, so it's an extra degree of work. The reality is, say you get four or five papers from the one study sent to the same journal, I think the editor might want to say: do you want to do a theme issue on this? And it would have to be funded and so on, because trying to find reviewers who are willing to review four or five papers is going to be almost impossible without a special effort. So that's going to be very, very challenging. Generally, and there are plenty of examples, the European Journal of Public Health, where I was the editor and am still on the editorial board, does special theme issues for Horizon Europe projects and things like that, but the papers are handled separately. They go through the same peer review process, but we handle them that way rather than as just a block of four or five papers. I've been involved in a number of examples where there are paired papers, including two recently, where there's just too much to get into one paper. We did one pair in the Journal of the Royal Society of Medicine looking at the UK National Health Service 10-year plan, one on the undergraduate implications and one on the postgraduate implications. We did another two on the future role of the hospital, the first paper looking inside the hospital and the second at the hospital within the wider health system. Those were paired, and we discussed it with the editors; in those cases, you can do that. But beyond that, you'll maybe get a reviewer who'll review a paired paper, but getting somebody to do four or five, that's a lot of work.
[00:12:27] Speaker 1: I'd take from that, Martin, that if you have a lot of papers you think you're going to send to one journal, it may be worth reaching out to the editor first, especially if the papers fit better and have more impact as a suite. We did this in the European Journal of Public Health, where we had a series of natural experiment papers that we had set up with the World Health Organization, and that created a smoother process. If that's not the path you want to go, I recommend diversifying journals. Bear in mind, you might get the same peer reviewers for some of these papers if they are quite similar, because the editors will be following a similar algorithm to find reviewers. So it's not surprising that if you submit two relatively similar manuscripts, you might end up with the same peer reviewers, who may notice that crossover. And don't ever submit the same thing to two journals; it's just unethical, and you'll get yourself in trouble.
[00:13:22] Speaker 2: And just to mention, ScholarOne and the usual manuscript handling systems will use algorithms to suggest reviewers based on the content of the paper. We often ask the authors to recommend reviewers, and we will certainly look at those, because it makes things a little bit easier for us, but obviously you have to use your judgment. Then you try to get maybe somebody from the ones they've recommended, and somebody from somewhere else, either from your own knowledge, from the system's list of suggestions, or often from a PubMed search or something like that. So it depends, but given that in many of these areas you're going through 18 or 19 reviewers before you get one, it's pretty difficult.
[00:14:15] Speaker 1: Yeah, let me take the second one from Margarita. I think this is a really helpful one; it's another thing that isn't really taught, that you learn from a mentor. She says: I've heard contradictory advice about cover letters. Some say it's your chance to pitch the paper; others say they're routinely ignored. In your experience, how much weight does the cover letter carry?
[00:14:38] Speaker 2: Well, you know, this comes down to the editor, frankly. Some editors read them, some don't. I have to say, I never put terribly much weight on them, because by reading the abstract and so on, I usually had a good idea. Obviously people are going to push their paper. Given that you don't know what the editor is going to do, I think putting a bit of effort into it is worthwhile. I have to admit that with the papers I submit, I usually don't put a huge amount of effort into it myself; usually I'm working with somebody else who is willing to do it, and I'm happy for one of my co-authors to do that. But remember the workload of the editors. Unless they're at journals like the Lancet, the BMJ, or the NEJM, they're all doing this in their spare time, which is why you get so many decision letters on a Sunday morning; people are doing them on weekends, and it is more work to read the cover letter. So some do, some don't.
[00:15:44] Speaker 1: Yeah. With that, I just want to share that we've actually got a cover letter guide that crystallizes some of this knowledge from over the years. I see cover letters, Martin, that have a lot of boilerplate text, and I think that signals to an editor to just ignore them. But if you write it in a very human, jargon-free way that's succinct and to the point, it tends to have a better chance. And it is a way to explain your paper; at this stage you're trying to get past desk reject.
[00:16:16] Speaker 2: Yeah, you are.
[00:16:18] Speaker 1: You can be bold and say why this paper is important to this specific journal. That is absolutely key. But on the format, yeah, please, Martin, go ahead.
[00:16:30] Speaker 2: Going back to my earlier point about the first thing I look at: is it important that this paper actually gets out there because it will be relevant for policy or practice? Often, just by looking at the abstract, you can see that it is. But if you have any doubt as to whether the editor will get how important this is, if you have any suspicion that it will seem a little bit obscure but actually really is important for some audience, then say that, but say it shortly. Don't write a two-page cover letter, really.
[00:17:08] Speaker 1: Exactly, this is in our guide here under common mistakes. Keep it short; I like to keep it to one page. Don't try to squeeze in every detail, just your main good stuff. All of that I agree with. And if it looks very generic, then the journal is thinking: oh, well, maybe you didn't even know why you submitted here; it might have been shopped around lots of different places. I had an example here of a paper that got sent out, just to give you a real example; I've anonymized some of it to show you how this might look in practice. You want to get very quickly to the debate, the big question this journal is going to care about. Well, the American College of Obstetricians and Gynecologists recommend this. Then get very quickly to your value add: the gap, your research question, and how you delivered something important on it. Martin, I also put suggested reviewers in the cover letter, because one principle here is that I want to make it as easy for the editors as possible going forward. We've got other questions coming through. Any last thoughts on the cover letter, Martin?
[00:18:20] Speaker 2: No, no, all I can say is: keep it short, keep it focused. It's worth putting one in on the basis that some editors will read it, but not all.
[00:18:32] Speaker 1: Great. We've got some people joining us here. Malcomu, hey, great to have you with us. And I see we've got Wizard Polar and Hamza12456 here. We'll take your questions later on, so keep those questions coming in the chat and we'll get to as many as we can, while being mindful of Professor McKee's time, who, as you can imagine, is incredibly in demand. But let's go over here to Swati's question. Hopefully you can hear this, guys. I'll try to get the volume up, Martin, and I'll pause halfway. Okay.
[00:19:03] Speaker 3: Good morning, sir. My first question is: how can we submit our research to unpaid journals? Okay. So I want unpaid- Okay.
[00:19:14] Speaker 1: Yeah. Just in case others couldn't hear, Swati's asking a very important question: how can I submit my work to an unpaid journal? These processing fees, article processing charges or APCs as they're called, can be huge. I've even heard of them being over $10,000 at Nature journals now.
[00:19:33] Speaker 2: Yeah. God, I wish I knew. So first of all, I know the arguments for open access publishing; I get all of that. But I've always been concerned that this was not fully thought through, in particular because it creates a potentially damaging power dynamic in academic departments, where the head of department can use the provision of funding for papers as a means of controlling their research team. That's fine if they're altruistic and good and supporting early career researchers, but it is open to all sorts of things like: well, I'll only pay for it if you put my name on it, or I'm not going to provide any funding because you were difficult with me, and, to be brutally frank, we're getting into abuse and all sorts of other things there. There are lots of motivations why people might hold back the funding. So I have a real problem with that, because it disempowers and damages early career researchers, particularly female early career researchers in some settings, and it can reinforce other social hierarchies that we know exist. That said, that's not answering the question. First of all, some of you will be working in institutions that do have publishing agreements; the London School has a whole range of them. Some of you may be working with collaborators at institutions that have publishing agreements, but that requires that the corresponding author, not the first author, not the senior author, but the corresponding author, be from that institution. So it can be handled by submitting the paper through your partner, usually in a high-income country. I recognize all the problems with that; it's not fair in many ways. I don't get too fussed about author order myself, but obviously we live in the real world where people do, and the person doing the work can still be first or last or whatever. That's another way. There are waivers for some low-income countries; that's another possibility. And there are hybrid journals, although a diminishing number of them, because there is huge pressure from the research funders to have everything open access. But I do have a problem with this, because it makes it very difficult for people who are trying to get started, people who may be in an organization in a middle- or high-income country that isn't covered by these waivers and doesn't have the links. I also think the whole process has created lots of other perverse incentives, particularly with the growth of journals that are just cash cows, basically. So there are lots of things wrong with the system, but I think the answers are: see if you can get a waiver because of where you live or where you're working; see if you can find a way of doing it through some of your collaborators; or look for a hybrid journal. That's basically all I can say there.
[00:23:10] Speaker 1: Yeah, Martin, really helpful points. Just like you, I get countless emails, several through our mentorship program, saying: my paper's been accepted, can you help me get funding? And it makes me terribly sad, because the researchers, often early stage researchers submitting for the first time, didn't realize going in that they were going to end up with a bill on their hands. Let me share what we do to avoid this. In our community, we have a kind of final hurdle, a journal selection validation step. Part of it is what Martin talked about: finding the fit of your journal, which you can do from the papers you cite in your introduction, papers similar to yours. But we always have everybody go through certain legitimacy checks as well, especially if they're just starting out, to make sure they're not going to scam journals. You can find journals you know are going to be legitimate through recognized, curated databases: if a journal is in PubMed or Web of Science, those are curated and do hold the gates, so the scam journals won't get through. If a journal is outside these, that doesn't guarantee it's a scam journal, but make sure it uses real metrics that are indexed and recognized; you'll often see scam journals use fake indices that try to sound real but aren't. You can also look at the editorial board. And if you see anything that promises 48-hour publication, run away. Then on APCs, it's what Martin said: check if your university or country has an agreement. For example, in Italy, all Springer journals are covered. There might be a waiver, but also check the journal's website to see if it has a non-open-access submission route; sometimes it's called a subscription model, which is the hybrid model Martin was referring to, and that means you have the option. If it's fully open access, you don't have the option, and you will most likely face an APC, in which case you'd need a waiver. There are some other considerations in our validation step, but I wanted to share that with you. And Nagamani, hey, I know you're in our collective community, and you have something here. Let's see if we can understand the spirit of it: publishing papers in open access will degrade PhD scholar CV weightage; for scholars, which is best, open access or subscription? I think what Nagamani means is that maybe the open access journals are given less weight. I'm not sure; obviously the prestige of the journal factors in a bit. Martin, how would you interpret that?
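As one concrete, minimal sketch of the indexing check described above: you can query NCBI's public E-utilities against the NLM Catalog, the database behind PubMed's journal records. The endpoint is real, but the exact field qualifier used here is illustrative, and zero hits should prompt a manual check rather than a verdict; this also assumes the third-party requests package is installed.

```python
# Rough legitimacy signal: does a journal title appear in the NLM Catalog?
# Zero records is a red flag worth investigating, not proof of a scam.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def nlm_catalog_hits(journal_title: str) -> int:
    """Return the number of NLM Catalog records matching a journal title."""
    params = {
        "db": "nlmcatalog",                   # NLM Catalog database
        "term": f'"{journal_title}"[Title]',  # fielded title search (illustrative)
        "retmode": "json",
    }
    resp = requests.get(EUTILS, params=params, timeout=10)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

if __name__ == "__main__":
    title = "European Journal of Public Health"
    print(f"{title}: {nlm_catalog_hits(title)} NLM Catalog record(s)")
```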
[00:25:45] Speaker 2: No, I don't think that's the case at all, unless it is one of those open access journals that is somewhat more questionable, and of course there are a number of them. At the end of the day, it's a bit like asking what an editor will want: what will a PhD examiner want? PhD examiners come in many, many different shapes. I learned very early that one of the first priorities in selecting an examiner for my students' PhDs was to find somebody who didn't have an enormous ego and wasn't trying to persuade everybody that they were cleverer than the candidate. There are some pretty dispassionate people out there in academia, and the whole climate in academia often encourages bad behavior. But going into a PhD as an examiner, and I know there are exceptions to this, there definitely are, my point of view is generally that if the student has done all the work to get a PhD, there needs to be a very good reason to fail them. I have done it once, actually, but that's very, very rare.
[00:27:02] Speaker 1: Yeah. Also, some faculty will have incentives not to fail, because it's much more difficult to fail a student than to pass them. But that's another conversation.
[00:27:12] Speaker 2: Yeah, in that case the problem is generally not the student, it's the supervisor. It should never have got to that stage, but that's another issue.
[00:27:20] Speaker 1: Let's keep going. Thanks for asking that, Nagamani, and good to see you again. We've got another one here that I also really liked, from Margarita, where she asks: is it true that junior authors have a lower likelihood of getting a review paper published compared with an empirical paper?
[00:27:40] Speaker 2: No idea, I don't know. There's no reason why it should be; there's no reason why anybody should be judging on that basis. Okay, let's face it, there's the famous example of when Albert Einstein sent a paper to a journal, the Proceedings of the Royal Society in London or something like that, and the editor wrote back and said: dear Professor Einstein, thank you for sending your paper; we have sent it out to reviewers. And Einstein wrote back and said: what do you mean, you've sent it out to reviewers? I didn't give you any permission to do that; you should make the decision yourself. Things have obviously moved on from that. But if you have a Nobel laureate who's sending a paper in, it is likely to get noticed.
[00:28:27] Speaker 1: I think what I find in this situation is that systematic reviews tend to be submitted unsolicited. The question here, I think, is that some lit reviews in some fields, especially narrative or traditional lit reviews, are invited, and some journals just won't publish many of them. So the gates are closed before you even start, and you kind of earn the right, and this is fair, Martin. I've seen your lit-review-style commentary papers, and people want to hear your voice because you are you. But it is an unfair comparison; a postdoc would struggle to place that kind of piece. That kind of synthesis is interesting because it's not robust or replicable in the way a systematic review is, but it's a very interesting take that adds to the field. So you need to look at the type of review paper you're doing and not compare apples and oranges here.
[00:29:25] Speaker 2: And for those commentaries, as far as I can, I usually involve a junior researcher with me. But that's the way I do it personally.
[00:29:34] Speaker 1: I think as a supervisor, that's just a great way to build confidence and get some early wins for your research team. Unfortunately, some supervisors are not as flexible or open with their co-authorship. You know, a few prima donna types in the field who-
Speaker 2: Oh, I know.
[00:29:54] Speaker 2: Well, I mean, there are people, unfortunately, who get their trainees or their junior staff to write the paper and don't even put their name on the paper. We both know of people like that.
[00:30:06] Speaker 1: Yeah, and I know you well, and I've adopted this philosophy from you: be very liberal with co-authorship and build the careers of others. We're going to turn to an actual paper someone has shared with us to get Martin's take as an editor. But before that, I wanted to answer Suleymanian, because we got your question, Suleymanian, but we were struggling a bit to decipher it, and you've made it much clearer here. He asked for feedback on networking, job opportunities, the cover letter, which we've covered a bit, and recommendations for obtaining a scholarship once enrolled in the DBA program and in the thesis year. So, yeah.
[00:30:49] Speaker 2: I'm not sure I can answer that one, to be honest. On networking: if you present papers at conferences and things like that, you'll get visibility, but then you've got to get the funding to get to the conferences, of course. There is another issue, going back to one of the earlier points: if you do publish a paper and you're a junior researcher who is not well known, writing it with a much better known supervisor, the reality is that the community who know that supervisor will say, oh yes, that was a very good paper by Stuckler and colleagues, or by McKee and colleagues, even though it was the junior researcher who did all the work. That's just name recognition, and there's not much you can do about it. And again, all of these things give visibility. Social media helps now, although I would strongly recommend against being on X, for obvious reasons; I use Bluesky personally, but LinkedIn and others are perfectly good. Those all help.
[00:31:57] Speaker 1: I think, Martin, on this one I'd just say your best opportunity for scholarships, networking, job opportunities, everything, comes down to one word: publishing. Look, no world is a perfect meritocracy, but the university is one of the closer ones to it, and the currency is papers. Martin, you've often emphasized that papers are like money in the bank. If you're in your PhD and you get three papers published in the Lancet, you're going to get an academic job. Okay, that's not easy to do, and very few will accomplish it, but it is a merit-based world in that way. We look for that proverbial fast track of aiming to get about four papers out a year. That's a way to plant a flag and start cementing your name in a space, so that you get invited to conferences, you get seen as a rising star, and you might even have some of the great and the good, like Martin McKee, reach out and want to fold you into their research networks because they see you're doing amazing things. So, Suleymanian, thanks for sharing that with us. Okay, Martin, let's pivot. I want to look at the submission we got from Sonny. Some researchers I work with say that I'm fierce but loving, and I want to give you, as an editor, Martin, that same permission to be fierce but loving with this paper. Let's try to apply some of the things you said earlier to help this paper's prospects. We won't have time to go through the full paper here, of course, but I think it's useful to get your reactions as an editor. We don't have the cover letter, so imagine you're just looking at this for the first time, starting with the title page and the abstract. Can you see this? Is it big enough?
[00:33:50] Speaker 2: Yeah, we've got the title. It has an awful lot of words in it.
[00:33:52] Speaker 1: It's good you mentioned that. I often use a breath test for titles: if I'm out of breath trying to say the title, it probably needs to be changed. So: hybrid expert system integrating rule-based classifier and ensemble learning for musculoskeletal disease diagnosis. Okay.
[00:34:11] Speaker 2: Yeah. Maybe for a highly specialist journal where this sort of thing is very familiar to the editor, but all I can say is that at a general journal, it would immediately put me off, just because it's too many words. And it doesn't tell me what the paper is; the title should really tell you what the paper is.
[00:34:36] Speaker 1: Yeah, it's kind of vague. What did they do here?
[00:34:39] Speaker 2: Yeah. I mean, are they describing an expert system? Are they evaluating an expert system? Are they designing it? Well, they say design, implementation, evaluation at the end, but I don't know. I just think it's a bit off-putting, actually. Too much work to try and work out what they're actually doing.
[00:34:58] Speaker 1: Exactly. Remember how busy the editors are; you want to make it easy for them. Typically, I like titles to put what the study did after the colon, and before it, the big question, or what it found or showed. That way you can get a picture straight away.
[00:35:20] Speaker 2: What's the gap? What's the problem? Exactly.
[00:35:23] Speaker 1: So let's see. I suspect based on the title, we'll see this come out very quickly in the abstract.
[00:35:27] Speaker 2: Okay. So let's just see the background. "So represent-"
[00:35:38] Speaker 1: All right. Martin, I've got to say something before you jump in, as you're digesting: this reads to me like some AI was involved, and not in a good way. Yeah.
[00:35:56] Speaker 2: "The diagnostic process is often complicated." Yeah, sure, all of that is true. But so what? What's the problem? Speaking as a physician, most musculoskeletal disorders are relatively easily diagnosed, simply because if you know the anatomy, you know what can go wrong. If you've got pain on the outside of the hip, it's likely to be a trochanteric bursitis. If you've got pain on moving the knee, there's a limited number of things it's likely to be, and so on. So yes, there are harder cases, particularly with multimorbidity and so on, but what is the particular problem being addressed?
[00:36:39] Speaker 1: This is the problem. Your background is like a mini snapshot of your introduction, and an introduction needs to establish why this conversation is important, what we know about it, and bring us to the edge of what we don't know, opening up the space to justify the need for your study. It's not doing that job here. None of it is inaccurate or incorrect; it's just not doing the job it needs to do. And then let's stop here, Martin, on methods.
[00:37:11] Speaker 2: Yeah, they're not really methods. I mean, they are a bit.
[00:37:15] Speaker 1: To me, it looks like a bunch of jargon smushed together, telling us rather than showing us. What did they actually do? "This study presents a hybrid expert system" is not a method. It needs instead to say: we designed a system. So, to plug the gap you highlighted, if the problem is that existing approaches aren't reliable, then: we designed a system to overcome past reliability issues, using these algorithms, which matter because, say, they're more stable than the alternatives that use some other algorithm; this is why ours is better; and we tested its stability. How are you going to show that it's more reliable? Reliable than what? More reliable than what? Exactly, and that doesn't come out. So this reads a little more like a feasibility study, but even so, these numbers on their own are numbing; they don't mean anything. What does this mean? It's just tons of numbers.
[00:38:23] Speaker 2: I mean, is the problem to be solved that these patients are in a setting where there are no other health workers? Well, that has its own problems. I'm really not sure what this is actually trying to do. So I think this is a paper that would be an editorial reject for me, though maybe a much more specialized journal can see something that I'm missing.
[00:38:48] Speaker 1: I get a lot of people coming to us with papers like this, with a sense of confidence before clarity, which is one of the AI failure modes we've noticed: the AI has been telling them, this is great, they're going to love this. And it's a hard conversation to have when some radical surgery is going to be needed to bring a paper up to the top-tier journal mark. Unfortunately, Sonny, I think this is going to take a lot of work. It doesn't mean the core idea is wrong or bad; it's just not being presented in a way that's accessible or that fits with publishing norms in top-tier journals right now. So Martin, thanks for that very honest feedback. I've got a question directly for you from Olaida Anigba; I only know them from the email. As you've published over a thousand papers, he asks directly: what are the combinations of strategies that will help me publish up to a thousand papers?
[00:39:54] Speaker 2: Okay, I've been asked this before: how do I become the next Martin McKee? Well, first of all, have a broad range of collaborators. And this is true; other people in my position, Sandro Galea and others, have been asked the same. Have a broad range of collaborators. Mentor junior, early-career researchers; support them; really work with them. And the other thing is, work very hard. David and I work extremely long hours. Now, that's not good advice for everybody, but the reality is that for most of my life I've worked 14-hour days and maybe five or six weekends. That's not something I would advise for other people.
[00:40:37] Speaker 1: Martin, you're supposed to be able to do this while sipping a margarita on the beach somewhere, I thought.
[00:40:43] Speaker 2: No, no, you're not. It is hard work. Even over the holiday, when we were off for a few days, I was saying to my wife: I've got quite a few papers in progress at the minute, and there was just doing the proofs, which can take a bit of time, signing the licenses, contacting coauthors, small things like that, that all take time. But if you don't do them, things don't move on to the next stage. So the answer really is hard work, a wide range of collaborations, and support for junior staff and junior colleagues.
[00:41:24] Speaker 1: Yeah, I really like that. And Martin, what you're pointing to is something I'm going to be releasing a video about on the channel: the importance of building your own research operating system, so that you're not in the trap of one researcher, one project, going down a burnout pathway, because that doesn't scale.
[00:41:44] Speaker 2: So yeah, there are a number of issues here. One is that many academic institutions want you to focus and become narrower and narrower, to be really, really focused. That's been an issue here at the School; many of my colleagues have been told, don't be like Martin, be really, really focused. There's a perfectly good argument for doing that, but if you want to publish a lot of papers, it's not the way to do it. There are other reasons why you might want to be really focused on a particular molecule or something like that, and that's good. But then, if your molecule has the good fortune to be involved in a whole series of different disease processes, work with the people who study those. The other thing is simply efficiency. Obviously we are moving into a time when we still haven't worked out quite what the place of artificial intelligence is. I think we're going to be writing papers in a different way in the future. There are huge problems if you just want ChatGPT to write a paper for you; I wouldn't recommend it. But we are moving to a stage, and with another of David's colleagues who has been on here, Richard Rosenbach, we've been talking a lot about this, of interaction between the human and the machine. The other thing I would simply say is referencing: use EndNote or something. If you can get access to EndNote, it's the best one for collaborating. If you don't have money for that, there are Zotero and Mendeley and things like that. But don't put in references manually; that just wastes far too much of your time. And make sure you do things like spell checking. There are now packages like Grammarly, and I don't have any financial interest in any of these, but although I've written so many papers, I now use Grammarly because it simply is efficient.
[00:43:38] Speaker 1: Grammarly and Zotero are two of our tech triad, tools that are indispensable for research. Especially with track changes, grammar quirks just creep in, no matter how good you are.
[00:43:51] Speaker 2: But even with Grammarly, don't accept everything. There are some suggestions I wouldn't accept. Use it, but don't just go through and accept everything; use your judgment as you go. It can save you a huge amount of time.
[00:44:07] Speaker 1: So yeah, be smart with the tech that can save time without getting you into any ethical quagmires. Martin, okay, I know we've got to be courteous with your time, and you're going to have to jump off in about five minutes, so I just want to blast through some of the questions that came through in the chat very quickly. This one came from Sav, Age of Aquarius, who, oh, thank you for the tip, has shared a coffee with Martin and me. We appreciate that. Did Robert Maxwell make scientific peer review mandatory for all journals, and not just medical journals, as was the case before he monopolized them? Martin, you're more of a history buff than me, so I don't know the answer to this.
[00:44:40] Speaker 2: I don't know the answer to this either. Robert Maxwell set up, was it Pergamon Press? And he made a lot of his money out of publishing journals that were accessible in Eastern Europe, before the transition from communism; he was Czech by background. And he did a lot to advance scientific publishing. But the answer is, I don't know. Peer review was always there. As I mentioned with the example of Einstein, that was in the 1920s, I think, when he was offended that a journal sent one of his papers out for peer review. So it's been around for a very long time. And it's not mandatory; there are still a few journals in some disciplines where the editor will just decide themselves. And of course, for things like commentaries, they often don't have to go to peer review; the editor will make the decision.
[00:45:38] Speaker 1: Yeah, I don't know the history of peer review either. That is fascinating. It just seems like the least bad system we have, and people are now trying, with the preprint archives, some other kinds of crowdsourced models of peer review. Unclear. Let me take a few more questions here. One asks: any tips for those who aim to be an editor themselves at a scientific journal in their discipline?
[00:46:00] Speaker 2: Well, we just interviewed for a new editor-in-chief of the European Journal of Public Health, and we had several very, very good candidates. I think you need to talk to editors and see if there are any openings for associate editors. You're not going to be an editor-in-chief straight off, but you can build up by editing special theme editions, things like that. Getting lots of experience as a reviewer is important.
[00:46:29] Speaker 1: What about being on an editorial board, Martin?
[00:46:31] Speaker 2: Yeah, if you've got an opportunity to get onto an editorial board, take it. All of these things, getting experience as a reviewer, as a member of an editorial board, things like that, help.
[00:46:46] Speaker 1: I wouldn't do it, Martin. I would not want to be an editor. It's a thankless task, a lot of work. You do get field esteem, but nobody likes you, especially because you're rejecting more papers than you accept. I don't recommend it, personally, but I don't know. Did you enjoy your experience, Martin? Would you do it again?
[00:47:06] Speaker 2: Yeah, I did, I did. I did it for about six years, I think, though I'm not sure I'd do it again. It's a lot of work, and I admire people who do it, if they do it properly, that is. The thing about it is, I was making a decision on a paper today, actually, in this associate editor role. Some editors just simply send out the review comments without any additional editorial work. I don't do that. I will read through them, I will consolidate them, and I will tell the author which among them to pay attention to and which to ignore. There are times when you read a comment and say: actually, although the reviewer said that, I disagree with it, and that's frequently the case. If you're a really good editor, you'll give the author a very good steer. You'll say: these are the things I need you to do; these are the things you can do optionally; these are the things you don't need to do. And be aware that if you've got a word limit of 3,500 words and you've got review comments that run to 5,000 words, you're putting the author in an impossible position if you ask them to add all those things in. So that's what it takes to do the job of an editor well. But a lot of editors, sadly, are just overwhelmed; they're doing it in their spare time.
[00:48:32] Speaker 1: So be sympathetic to your editors. Agbor says: I'm enjoying it. I've got a last couple here. Someone asked a very practical question: to publish a paper in a journal, is it necessary to add references that belong to that journal?
[00:48:49] Speaker 2: It's unethical if you're putting them in just because they're from that journal. There are some reviewers who will say you must cite particular papers. I had one recently, actually, for a journal whose standard boilerplate text explicitly said: if reviewers ask you to cite particular references, be assured that you do not need to do that. The editors know that reviewers sometimes ask people to cite the reviewers' own publications. There have been examples in the literature of journal editors who actually do that too. It is unethical. So the answer is no.
[00:49:34] Speaker 1: Okay. In terms of necessary, no, but I think it can be quite a strategic sign. If you find you're citing a lot of one journal in your introduction, that journal might be a really good fit; it means that's probably where your hosting conversation is.
[00:49:50] Speaker 2: If it's relevant.
[00:49:52] Speaker 1: If it's relevant. If you're bending over backwards to do it, no; but if it is a good natural fit, that locates your conversation there. Last one, guys, and look, if we haven't caught your question, I'll go back afterwards and reply, and you can also join our private mentorship communities, where we hold private workshops five times a week. But let's go here, Martin, the last one. Some have criticized Google Scholar metrics like the h-index and i10-index. What are your thoughts on this, and how might we better evaluate the impact of a scholar in the field? And I think this is a bigger question, Martin, for you to reflect on, because impact is what motivates a lot of people to do research. How do you judge your own career looking back, and what it's done, beyond just an h-index? But Martin, yeah, take us home.
[00:50:40] Speaker 2: While you're asking that, I'm just looking to see what my current h-index is. The answer is I don't know, but I can quickly find out.
[00:50:48] Speaker 1: What, you don't have it to hand? It's like deer's antlers, the academic equivalent of: my antlers are bigger than yours?
[00:50:57] Speaker 2: It appears to be 199.
[00:51:00] Speaker 1: Oh my gosh, well, see, now I feel very small, Martin. Now I'm gonna check too; I haven't checked in a while. Yeah, see, I'm very small, I'm a 107. Shoot, Martin, I'm never gonna catch up to you.
[00:51:11] Speaker 2: Yeah, you know, that's pretty good, David, actually; 107 is very good. But the thing about it is, I don't take it terribly seriously, and I think people are wrong if they do. It's interesting, but it's not a good way of evaluating the impact of a scholar. I've often said this, because I'm getting to the end of my research career: what is the impact that I've had? It's not in any of the papers at all. It is the fact that people I've mentored in the past, like Professor Stuckler and others, have done so incredibly well. That's the legacy; that's what's important. And people my age, many of us say the same thing: it's the generation that we have mentored and supported, who in turn are mentoring other juniors. This morning, I had a meeting with one of my former PhD students, Dina Balabanova, and one of her current PhD students, who is doing brilliantly. That's what makes you feel good; that's where you feel the impact is. And then, of course, there may be ways you've had an impact on policy through the work you do; there are a number of those, and that's good too. But it's not these metrics. I can say that because I do well on them, but I don't take them too seriously.
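For readers who haven't met the metric being debated here: the h-index is the largest number h such that an author has h papers with at least h citations each. A minimal sketch in Python, with made-up citation counts purely for illustration:

```python
# Compute an h-index from a list of per-paper citation counts.

def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3: the fourth paper has only 3 citations
```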
[00:52:37] Speaker 1: That's a legacy, Martin. And I can say personally that it's transformational when you work with someone like Martin; it shapes how you see the world and how you navigate truth in a world that seems to struggle with meaning at the moment and is full of noise. But that would get us digressing. It's exactly why, Martin, we're very grateful to have you as a friend. I've tried to take the insights I've learned from great mentors like Martin and others along the way, and things I've learned in training students, and encapsulate them in a format that makes them accessible to everybody. That's what this channel is about. I see many of you have questions about AI; we're going to do another workshop on AI very soon. Next on the list in the coming weeks, we're going to have our mindset coach joining us, and we're going to do some more systematic review workshops. So keep sending us, as ever, like you've done today, your best questions, and we'll do our best to give you our best answers. Martin, thank you so much for joining us.
Speaker 2: Thank you very much.
Speaker 1: We'll see lots of-