Speaker 1: People think academia is boring, but it's like a soap opera out there. The first thing you need to know about is this: a ChatGPT detector catches AI-generated papers with unprecedented accuracy. There's a cat-and-mouse game going on between AI and AI detectors, and in academia this is a very hot topic at the moment. Essentially, this article is about a machine learning tool that can spot chemistry papers written using the chatbot ChatGPT. The tool identified sections written by ChatGPT with 100% accuracy. That is pretty impressive. This is the paper here: Accurately detecting AI text when ChatGPT is told to write like a chemist. And they have all of these graphs. The most interesting one is this one, where they have two sets of human-written content, then GPT-3.5 and GPT-4 with the first prompt, and GPT-3.5 and GPT-4 with the second prompt. And you can see that their tool, in purple, detects AI writing significantly better than the other tools, ZeroGPT and the OpenAI one. Now, here's the question I've got about this. It's all well and good; we want to know if something's written by AI. But is it that important? Clearly, we don't want AI coming up with results. But if there are AI tools that can write better than a human, that can take some of the really horrible writing tasks academics have to do and turn them into better writing that's easier for humans to read, why are we fighting this? Yes, it's a good academic exercise to be able to detect it. But if it's resulting in better papers, if it's resulting in people not having to write the boring parts of a paper that no one reads anyway, why are we bothered about detecting it? Sure, there should be a little acknowledgement section at the bottom of a paper that says this was written with the help of AI large language models. But this paper only says that we can detect it.
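As an aside, detectors like the one described here typically work by extracting stylistic features from text and feeding them to a standard classifier. This is only a hypothetical sketch of that general idea, not the paper's actual code or feature set; the function name and the three features chosen are illustrative assumptions:

```python
# Hypothetical sketch of a feature-based AI-text detector (NOT the paper's
# actual method): turn a paragraph into a few stylistic numbers, which could
# then be fed to any off-the-shelf classifier (e.g. logistic regression).
import statistics

def style_features(paragraph: str) -> list[float]:
    """Extract simple stylistic features: sentence count,
    sentence-length variability, and comma density."""
    # Crude sentence split: treat '?', '!' and '.' as sentence boundaries.
    normalized = paragraph.replace("?", ".").replace("!", ".")
    sentences = [s for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return [
        float(len(sentences)),                                  # number of sentences
        statistics.pstdev(lengths) if lengths else 0.0,         # length variability
        paragraph.count(",") / max(len(paragraph.split()), 1),  # comma density
    ]
```

The intuition (which the paper's results support in spirit) is that human prose tends to vary sentence length and punctuation more than chatbot prose does, so even very simple features can separate the two classes surprisingly well.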
I think we need to go a bit further and ask: does using ChatGPT actually make things better? We should be looking at whether ChatGPT and other large language models can help academics, especially those with English as a second language, create better papers. That's surely more important than just saying, yeah, we can detect it. So what? People are going to use it. We should stop trying to fight the future. The expectations on PhD students are increasing. When I was a PhD student, back in the old days, I was just expected to produce papers if the situation arose. But now there's a huge expectation on PhD students to produce as many peer-reviewed papers as possible. In China, this recent report has just come out: Publish and Flourish, investigating publication requirements for PhD students in China. Essentially, it says that every Chinese PhD student needs to publish a paper indexed in the Science Citation Index for it to count towards their degree, and that first authorship should be a mandatory requirement for graduating. I think it's a little bit unreasonable to expect first-author papers from PhD students before they graduate. The fact that they need to be first author also means they're less likely to collaborate, which is a huge issue, and it means we'll be gaming the system even more. It should be about making sure these PhD students graduate with the knowledge and skills to enter a workforce, not just about producing peer-reviewed publications. And I think a lot of the time the real reason is hidden. We say to these PhD students, sure, get a first-author paper because it looks good for you, when in fact it's the supervisor who benefits the most, by sucking up all the credibility from the PhD students who work underneath them.
And it's this kind of fake face we put on it: well, it's good for the students. No, it's good for the university and the supervisors primarily. I think this is probably a step too far. If a first-author paper happens, brilliant. If not, it shouldn't be mandatory. Arguably, the world is turning against science, and I don't think that's surprising when we see all of these high-profile retractions, lawsuits, and falsified-data claims in the newspapers. But now we're looking at this: an existential risk to research from failure to demonstrate impact. It says here that a survey of 400 global academic leaders found that 68% agreed that an inability to demonstrate researchers' impact could become an existential risk. The problem is that we've been focused so much on what makes us happy as scientists and researchers, that is, getting publications and getting grants, while putting very little effort into demonstrating that what we're doing is actually beneficial. "If universities are not seen to be delivering benefits to society, their funding will be at risk," said this Dr. Fowler. "The public might say, why do we need these universities? Why don't we fund healthcare or something else instead?" Well, of course that's going to happen. We need to focus not only on doing the research, but also on making sure the research we're doing has benefits and outcomes that affect everyday people. And the problem, as it says here, is that as academics we recognise and reward publications in certain journals and the citations that follow, but pay little attention to everything else that happens around them: the teamwork, the collaboration, the infrastructure, and the support that enables those publications. We're too focused on the things that further our own careers. And of course, if you give clever people a metric, they will game that metric for their own benefit.
So the conventional approaches can incentivise poor research practices, encourage less creativity, and limit the diversity of what gets researched and who succeeds in a research career. Absolutely. It's been suggested that maybe we can look at other subsections of the public to help us formulate research ideas. Maybe there should be a lay audience that comes onto grant panels and says, you know, I understand why this is important, we should definitely do this. But that's not happening. It's all insular. We're all looking at each other's work going, yeah, this is brilliant, and no one on the outside really understands what we're doing. That's a huge risk to research. It also says here that 48% said they were concerned about the lack of consensus on what constitutes impact. Of course, if we don't have a target, we can't hit it. Here's some more news on that impact: Impact or perish, gaming the metrics and Goodhart's law. This is all about how we've got a little bit lost in gaming the metrics as academics. It says here that the desperation to publish, retain tenure, and advance in academia has spawned a number of unethical practices, and I love this list: data fabrication, falsification, plagiarism, and salami publications. That's when you take a huge salami of work and chop it up into smaller publications, so you end up with all of these bits of information out there, which increases your h-index and your citations, when in fact it would have been better put out into the world as one huge lump, this huge sausage of findings. We've also got gift authorship, ghost authorship, and many more. And Goodhart's law says that as soon as an indicator is introduced to measure an output, gaming of the metric follows, so that it ceases to be a valid indicator. For too long, the h-index has dictated academic careers, and so it's time to think beyond the h-index or the i10-index.
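The h-index itself has a precise definition: it's the largest number h such that the author has h papers with at least h citations each. A minimal sketch (my own illustration, not from the article) shows how salami slicing can inflate it without adding any real impact:

```python
# The h-index: the largest h such that the author has h papers
# with at least h citations each.
def h_index(citations: list[int]) -> int:
    """Compute the h-index from a list of per-paper citation counts."""
    # Rank papers from most to least cited.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# Salami slicing in action: the same 30 citations spread over more
# papers raise the h-index, even though total impact is unchanged.
print(h_index([30]))            # one big paper -> h = 1
print(h_index([10, 10, 5, 5]))  # same work, sliced up -> h = 4
```

This is exactly the Goodhart's-law dynamic described above: once the indicator becomes the target, the rational move is to optimise the indicator rather than the underlying work.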
What actually constitutes impact, and how do we measure it? It's not an easy question to answer, but it needs to be answered, because we've got too good at gaming this metric. And the strange thing about all of this is that there's no punishment for gaming the h-index. The article concludes that the literature has stated that the most common outcome for those who commit fraud in science and research is a long career. Well, isn't that annoying? In an ideal world, science and research would be a pure meritocracy: you do well, you get promoted. But those of us in academia know the reality is so, so different. There's bureaucracy, there's a lot of politics, and this recent article shows how crazy it can get. This is the story of Christy Kosky and why she hasn't made tenure despite being a very promising academic. She was listed by the Journal of Physics among the top 50 rising female stars in physics. And although she gained the respect and admiration of some of her colleagues, she clashed with an important professor who had been there for a long time, Jared Shore. The problem is that even if you do everything right in academia, and I know this is true for other parts of the world and other workforces as well, there are a lot of people who hold a lot of power and can destroy your career if you say the wrong thing to them. So sure, there's bureaucracy, but there are very powerful people who have been around for a long time in academia, and no one can say anything about them because they bring in a lot of money and a lot of publications to the university, so they sit there bulletproof. And if you end up annoying the wrong people and getting on the wrong side of the bureaucracy and the administration, you can really shoot yourself in the foot. So here, someone says there seems to be a real deliberate effort not to resolve her tenure decision and her appeals.
And someone else said the administration wrecked the career of one of the most talented young physical chemists in the nation. It just goes to show you can do everything right and still fail because you annoyed one or two people. It's a terrible state for academia to be in, and this isn't just one case; it's just one case that made the news. I know of many academics who were bright, who were more than capable, and who were pushed out because of politics. If you like this video, remember to go check out this one, where I talk about why most PhDs are actually fake. You won't believe it. Go check it out. So there we have it: all the wild academic news you need to know about. Let me know in the comments what you think. And as always, there are more ways to engage with me. The first is to sign up to my newsletter: head over to andrewstapleton.com.au/newsletter. The link is in the description. When you sign up, you'll get five emails over two weeks, everything from the tools I've used and the podcasts I've been on, to how to write the perfect abstract and more. It's exclusive content, available for free, so why wouldn't you sign up? Also, remember to check out academiainsider.com. That's my project where I've got ebooks, resource packs, blogs, forums, and courses coming out soon, everything to make sure academia works for you. Alright then, I'll see you in the next video.