Speaker 1: There is a war going on between academia and AI. Academia is a sector where things move so glacially slowly that people are very uncomfortable with new technologies like AI. They don't know what to do with it. So in this video I want to share with you what's going on, why they're scared, and why ultimately, spoiler alert, academia is fighting a losing battle.

The first thing I love about this is that within academia you have scientists and researchers who are renowned for hating being told what to do. If you say don't do something, they're more likely to go, well, what are you going to do about it? Well, here we've got it: we are using it. Publishing decisions are increasingly aided by AI, and that's not always obvious. It's essentially saying that academics are busy people. They're run down with loads of different jobs all the time. So why wouldn't they use AI tools? They've just got to be a little bit secretive about it.

Here we've got the Associate Director of Academic Technology and Learning Innovation, and he's saying, I'm sure there are people who still print out papers and read them by the fireplace on a Saturday, but AI tools are helpful for those who need to be more efficient. Absolutely. There are fewer peer reviewers and more papers than ever, so why wouldn't you use AI tools?

And the one thing I love about this is that large publishers are actually taking up these AI tools themselves, but they're saying to the people who are writing the papers, no, no, no, you can't use them, but we can. We can use them because, you know, we're ethical, and we're only using them to find out who is using AI, but you can't use AI. So just write like a normal Neanderthal by smashing your hands on that keyboard. The Chronicle contacted 15 major publishers for this story. The five who responded, which account for up to 7,700 journals, emphasized that AI tools, if used at all, are never the sole decision makers. That means editors remain responsible and accountable for the editorial process and the final decision. Absolutely.

And then you've got this other maverick who I love. Here we are, Marie. She's saying we are all using it, so just be open about it. That way people can learn and start to trust these tools. I'm completely in agreement with Marie. For some reason, academia has put up these barriers and is just saying, no, don't use these tools, when in fact we should start using them and start teaching new scientists and researchers how to use them properly. It's like handing someone an axe and saying, this is how you use the axe properly, rather than saying, oh no, that axe is dangerous, don't use it at all, just sort of rip at the bark until the tree falls down.

This new rush on AI tools means that grant funding bodies have to be really specific about how they want AI tools to be used. Here's one from the Australian Research Council. It's just saying, you need to be careful: the ARC advises applicants to use caution in relation to the use of generative AI tools in developing their grant applications. That's just because generative AI may be based on the intellectual property of others and may also hallucinate, and we know that. I think this is at least an attempt at being progressive with a new policy. Unfortunately, not everyone is that progressive. The National Institutes of Health is saying that reviewers are prohibited from using AI tools in analyzing and critiquing NIH grant applications and R&D contract proposals.
That's because they're saying they're not sure where the data is sent, saved, viewed, or used in the future. So here they're trying to create guidelines, but ultimately they're saying you're banned. Once again, the people are like, we're using them, and they're like, don't. But we are. No. You're bad. Well, I guess we're bad then. Anyway, let's carry on.

Besides the peer review process, there are loads of different applications that scientists and researchers can use AI for: generating text, cleaning up ideas, using AI to act as a research assistant that never sleeps, doesn't need coffee, and won't talk shit about you behind your back. Clearly, using AI to write an entire dissertation is bad. This is the subtle distinction we need to make between generative AI, where the AI uses its base model to generate all the information for a dissertation, versus using it to tidy up, to put in data, to summarize something, or to come up with different ideas about how that data can be presented or analyzed. That's a really good use of AI. Why wouldn't you use it?

So China is trying to ban the use of chatbots to write dissertations. A practical issue here is how to define the difference between artificial intelligence assistance and artificial intelligence ghostwriting, which is a challenge for the degree evaluation committees and the relevant detection technologies. If AI tools can get past the degree evaluation committee, surely it's time to think about how we evaluate those degrees. No longer is it useful to only produce a thesis. There must be evidence behind it to show that the candidate wrote it, that they did the experiments, all of that stuff. We need to start looking at more ways to capture the evidence that satisfies the requirements of a degree. No longer should we rely on a chunk of text that can now be gamed. I think now is the time that academia really needs to focus on this, tackle it head on and say, you know what, everything's got to change. But they're scared to do it because no one likes change, especially academics.

At the moment, trust in academia is down. Arguably, it is the lowest it's been for a very long time. So where do we go from here? The way we tackle this is with openness. We tackle it with transparency. And the problem is that there are large corporations holding all of the scientific knowledge hostage unless you pay a fee to access it. This is why we need to get away from the paper-only dissemination model and start looking at releasing our data. And I think universities are slowly catching up to this.

Even Google has removed the "written by people" phrasing from its guidance for website owners, and that is just a sign of the times. We cannot detect AI content, and if we start trying to ban AI writing, it's just going to be an arms race that we'll always lose. AI tools will always be faster and more advanced than the tools used to detect them. So we need to stop focusing on producing papers as the sole metric of value. Going forward, I think it's important to recognize that ChatGPT can produce papers that are suitable for peer review, i.e. they pass peer review. Experts can look at them and go, yes, this is good enough. And to be honest with you, a lot of the time the sections we're using AI tools on in a paper are the ones that people don't often read anyway.
It is important as researchers that we focus on the things that are the most valuable in those papers: the results, the analysis of those results, and our interpretation and future directions for our research. In The Conversation, a panel of 32 reviewers was asked to each review one version of an academic study generated with the help of ChatGPT. Reviewers were asked to rate whether the output was sufficiently comprehensive and correct, and whether it made a contribution novel enough for it to be published in a good academic journal. And the big take-home lesson was that all of these studies were generally considered acceptable by the expert reviewers.

There's no doubt that ChatGPT is a tool, a tool that can be used for bad or good. But we need to start understanding how it can fit in with academia. We cannot put it back into the box it came from. Banning it is not going to help. It's about training people in how to use it and embracing new technologies and new tools. Only then do I feel like academia will be in a position where it can take that next step forward, focusing on results, focusing on the things that are important, not just the text that produces papers so that we can further our own careers. It's a really interesting time. I don't have all the answers, but I know that banning it outright is certainly going to cause more harm than good.

If you like this video, remember to go check out this one where I talk about fully autonomous AI papers. There was a research group that went from data to paper, and everything is explained there. The results are so-so. So there we are. That's everything you need to know about academia versus AI in the really messy state we're in at the moment. Let me know in the comments what you would add.

And also remember there are more ways you can engage with me. The first way is to sign up to my newsletter. Head over to andrewstapeton.com.au forward slash newsletter. The link is in the description. When you sign up, you'll get five emails over about two weeks, covering everything from the tools I've used and the podcasts I've been on to how to write the perfect abstract and more. It's exclusive content, available for free, so go sign up now. And also remember to go check out academiainsider.com. That's where I've got forums, blogs, resource packs, my books, and courses coming, and everything over there is designed to make sure that academia works for you. All right then, I'll see you in the next video.