[00:00:00] Speaker 1: If you step back and look at academic publishing, it really makes no sense. Researchers do the work, write the papers, review each other's work for free, and then often pay to publish it, only for universities and the public to then pay again to read it. To put hard numbers on the scale of this, it's estimated that between 2019 and 2023, researchers paid over $8 billion globally in article processing charges just to publish their work, and then universities paid millions more per institution per year to access those same journals. Harvard, the richest university in the world, is one of the institutions publicly warning that, even for them, the costs are becoming unsustainable. So we pay for knowledge twice, once to create it and once to access it. If you were a Martian looking at Earth's research system, you'd think the system is a mess and completely broken. But here's the thing that people miss, and what I want to explain in this video. Academic publishing looks irrational until you look deeper and see that the way it works is a structural response to a deep economic contradiction. I'm Professor David Stuckler, and in this video, I don't want to rant. I don't want to say, let's burn it down and tear up publishing. Right now, it appears to be the least bad system we have. I want to offer you some explanations so you can better understand the publishing paradox, what it means for you as a researcher, and how we might think towards the future of building a better system. So let me start with the paradox in its purest form. Research is publicly funded, publishing is privately controlled, and quality control depends on unpaid expert labor. That combination shouldn't work, right? In most systems, production and dissemination are in greater alignment: private inputs produce private outputs, public inputs produce public outputs. Academic research violates that alignment. So you have public agencies, typically, that are funding the science.
You have the NIH, the European Research Council, the UK Research Councils, and other national research councils. But the final product is dominated by a small number of private publishing firms. So to give you a concrete example, Elsevier, the largest academic publisher, reported margins of roughly 38% in recent years. That's higher than Apple, Google, or Pfizer. And that mismatch is where the core tension comes from. Now, I realize that private institutions, like pharmaceutical companies, fund research too, but the core of research production worldwide is publicly funded. Just a quick interruption of this video by today's sponsor: me. I want to share something incredibly exciting that you're going to love. If you want to work with a real researcher, a real professor, to get feedback on your work, save time, and optimize your chances of publication, click the link below. What I'm doing is taking 10 researchers who I'm going to work with intimately, and I'll go so far as to offer a personal publication guarantee. That is, if you show up, you work with me, you do the research, I'm not going to leave you hanging. I'm going to work with you directly until your paper gets to the finish line. Again, I'm keeping it small and intimate and only opening it up for a select few researchers to have the opportunity to work with me. If that's of interest to you, and you want to work with a real person, not AI, click the link below. Let's jump on a call and see if you could be a good fit. So seeing these data about the publishers, it's tempting to conclude, well, the publishers are just greedy. And yes, of course, profit matters. And like any responsible private institution, they're optimizing wealth for their shareholders. But that's an incomplete explanation, right? The deeper issue is that research is what economists call a public good. It's like a park.
Once you set up a park, everybody can benefit from it. And knowledge is similar, an intellectual commons. Once the knowledge exists, everybody benefits from it. And we know that when that happens, markets left to their own devices structurally underprice and undervalue these public goods. They're very good at pricing scarcity, exclusion, and private benefit, but really bad at pricing things with long-term social value, prevention of harm, shared truth. And that's why research needs public funding to exist at all. But here's the crucial tension. Public funding goes into knowledge production, but it stops there. It doesn't create the institutions to curate, validate, and maintain that knowledge. Peer review alone represents an enormous hidden subsidy. One estimate found that in the year 2020, researchers globally spent over 100 million hours reviewing manuscripts, something like 15,000 years of full-time labor. And when somebody calculated the shadow price, what that donated time is worth in economic terms for these highly trained professors, it was estimated at over 1.5 billion dollars in the U.S. alone. That labor is not directly compensated. It relies on the goodwill of these researchers. And that institutional gap is where publishers have entered; something had to fill this space. So a lot of people look at this and say, well, why can't we just crowdsource peer review, just open-source it? That runs into another problem. Expertise is not democratic. You can't simply pool everybody's opinion and get a reliable judgment. The judgment of one expert will be more valuable than the uninformed opinions of ten thousand novices. That's why the Food and Drug Administration uses expert panels, courts rely on expert testimony, and engineering certification and standards bodies pay specialists to sit on them. The bottleneck in peer review is this scarce attention from experts.
Right, so peer review captures this scarce, valuable judgment to maintain the system. And that big subsidy is not being paid for directly. It's being paid for through a status economy. So we compensate peer review loosely through prestige, career advancement, symbolic capital. It's not a cash economy. And status economies don't scale well under load. And to be fair, the status or benefit you get from doing a peer review as a researcher might be a little line on your CV saying you were a peer reviewer for various journals, but definitely not something that's going to move the needle in terms of getting you a promotion or international recognition. So it's incredibly valuable, but barely compensated at all. So you look at this and think, structurally, this system is going to collapse. It's built on a three-legged chair: publicly funded research production, publishers privatizing and profiting off the dissemination side, and a volunteer subsidy of reviewers who help curate and maintain the quality of that research. And for a while, it sat in this stable, unpleasant equilibrium. Journals need the researchers for review. Researchers need the journals for legitimacy. Universities aren't controlling quality themselves. Governments are funding production and distribution. Everybody complains. Everybody participates. It's the least bad system we've got. It's a systems problem. But that equilibrium also depended on the market clearing: a certain number of manuscripts coming in, matched by a certain number of researchers with the goodwill to review them. And that has rapidly changed. I think a lot of that is attributable to AI, where the production of manuscripts has surged. People have been complaining since the time I was a graduate student about the publish-or-perish culture. It's only gotten worse, and it doesn't show any sign of getting better.
Publication timelines have compressed, and there's ever-growing pressure to produce a greater quantity of articles, despite many calls to focus on publication quality instead. That's just not where academic incentives are, especially the numerical ones that emphasize citation counts and h-indices. So there's been an estimate, looking at the sum of published articles, that global research article output increased by about 50 percent from 2016 to 2022. And AI is ramping that up. Another analysis, published recently in 2025 and looking at the open-access platform arXiv, found that those who use generative AI tools increased their paper output by about 33 percent, and by over 50 percent on platforms like bioRxiv and SSRN. For non-native English speakers, the boost has been even larger, estimated at 70 to 90 percent more manuscripts coming in. And that may be because AI is helping with the language components that might have held them back on the writing side. Okay, so that's a lot of papers being dumped into the system. But the number of qualified peer reviewers on the other side hasn't kept pace. Editors now report needing to invite 18, 19, 20 reviewers to get one single completed review, and most papers need about two to four peer reviewers. Historically, there was a much better match in the system, where maybe two to three invitations per completed review were needed. So that is a lot of strain. And with that strain, the volunteer system of peer review is breaking down. By late 2024, nearly 80 percent of top medical journals had begun to issue formal policies on AI use in peer review, reflecting the reality that reviewers were using AI tools to cope with the load. That doesn't mean they were doing it irresponsibly or badly, but it does give us a sign that the system is being pushed to a breaking point.
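The reviewer strain described above is simple multiplication, but the numbers compound quickly. Here is a minimal sketch of that arithmetic, using only the figures quoted in the transcript (18 to 20 invitations per completed review today versus roughly 2 to 3 historically, and 2 to 4 completed reviews needed per paper); the function name and midpoint choices are illustrative, not from the video:

```python
# Rough arithmetic on reviewer strain, using the figures quoted above:
# ~18-20 invitations per completed review today vs. ~2-3 historically,
# and 2-4 completed reviews needed per manuscript.

def invitations_per_paper(invites_per_review, reviews_per_paper):
    """Total reviewer invitations one manuscript generates."""
    return invites_per_review * reviews_per_paper

# Taking midpoints: 19 invitations per review, 3 reviews per paper.
today = invitations_per_paper(19, 3)        # 57 invitations per paper
historical = invitations_per_paper(2.5, 3)  # 7.5 invitations per paper

print(f"Invitations per paper today: {today}")
print(f"Invitations per paper historically: {historical:.1f}")
print(f"Strain multiplier: {today / historical:.1f}x")  # ~7.6x
```

So under these assumptions, a single submission now lands in the inboxes of several dozen experts instead of a handful, which is why the goodwill economy is buckling.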
And peer reviewers are trying to find shortcuts, hopefully without sacrificing quality, to complete the peer reviews and keep the system stable. So one natural solution would be to simply call for more public investment in research. But looking at the US, the UK, and continental Europe, that investment has stagnated over the past several decades, and in real terms the funding going into the system has in several places even declined. So although that would be a natural solution, it doesn't seem likely to happen in the near future. So here's a practical takeaway for you as researchers, because I often see many getting infuriated by the process when it's really a symptom of a broken system, not an individual failure of the reviewers or editors. First, don't take rejections personally. The system is broken. Mistakes get made. Delays aren't conspiracies. It's not that they don't care; it's that a voluntary, status-based labor system doesn't really work. And even success doesn't mean that the gatekeeping that's happening is fair. You're operating inside a system that is deeply structurally constrained. It was never meant to handle this kind of volume, speed, or level of automation. When you see that clearly, you stop getting frustrated at imaginary villains, the horrible Reviewer 2, and you stop blaming yourself for the systemic frictions that enter into your research. So I don't have a silver bullet. No one does, to fix this broken system, to resolve this paradox. And as researchers, you don't have to love it, but you do need to understand it, because this equilibrium is not stable and it is starting to change. And the people who survive periods of great change like this are the ones who are able to adapt to the changing conditions and see clearly enough that these flaws aren't a reflection on their own performance, but an inherent issue in the system. And so what does that mean practically?
It means sympathy for editors when they can't get peer reviewers, and setting up your manuscripts to make it as easy as possible to conduct peer reviews. For yourself, it means setting boundaries on what scope of peer reviewing is sustainable. If you benefited from two to three peer reviews on each of your own papers in the current system, then out of fairness, for every manuscript you submit, you would aim to complete a similar number of reviews for others. I know not everybody is going to do that, and for some of the people who run publication mills, we're not expecting the PIs to peer review 120 papers when they're submitting 40 manuscripts a year. So understanding the system doesn't mean accepting or being happy about its flaws. It just means knowing where to push, where to protect your energy, and where not to waste it. Let me know in the comments below if you have an idea for how to solve this publishing paradox, and what you think the future of publishing might be as these pressures from AI ramp up in this unstable system that we've created.
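As a footnote to the fairness point above, the reciprocity arithmetic can be made explicit. A minimal sketch, assuming each submitted manuscript consumed two to three reviews (the function name and the three-reviews default are illustrative, not from the transcript):

```python
# Reciprocity arithmetic from the fairness point above: if every manuscript
# you submit receives ~2-3 reviews, fairness suggests you complete a
# comparable number of reviews for others.

def reviews_owed(manuscripts_submitted, reviews_per_manuscript=3):
    """Peer reviews you'd complete to match what your submissions consumed."""
    return manuscripts_submitted * reviews_per_manuscript

# A researcher submitting 4 papers a year owes about a dozen reviews.
print(reviews_owed(4))    # 12

# The publication-mill case from the transcript: 40 manuscripts a year
# implies ~120 reviews, clearly unworkable for a single PI.
print(reviews_owed(40))   # 120
```

That second number is where the fairness norm breaks down, which is exactly the strain on the volunteer system described earlier.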