Speaker 1: The academics have been caught. They've been caught copying and pasting directly from ChatGPT into their peer-reviewed articles. If you go to Google Scholar and type in "as an AI language model" -ChatGPT, filtered to this year, you can see a load of places where "as an AI language model" pops up. So let's have a look at these and see if we should be absolutely outraged by scientists copying and pasting directly from ChatGPT. I think the story is a little dodgier than it first appears. First of all, we're on a really obscure Russian publication; I have no idea what this is. It says, "This article is devoted to the problem of pollution of our planet Earth. This article discusses ways to avoid boldness." I'm interested already. "Water pollution, soil, and presents an algorithm." Brilliant. If we scroll down, there's a lot of filler, but here you can see it's copied and pasted directly from ChatGPT, because ChatGPT sometimes prefixes its answer with "as an AI language model" and then a little caveat. Here it reads: "As an AI language model, I am programmed to agree that stopping pollution is necessary to preserve the environment and protect human health. Here are some ways that it can stop pollution," and then it goes on. Clearly this was copied and pasted from ChatGPT. Here's another one, from the European Journal of Pedagogics, I think that's how you say it. It's got a DOI, and if we search for "as an AI language model", down here in the methods it says, "As an AI language model, I don't have access to the full-text article," blah blah blah. Clearly they just copied and pasted, and no one checked or maybe even read this, and it's from two authors at an Indian university. So here's another one. This one doesn't have a DOI. If we scroll all the way down to this question, it says "as an AI language model," blah blah blah.
"I don't have direct access." They've just copied and pasted exactly what ChatGPT said; this whole paragraph is a direct copy and paste. I've even found it in people's theses. This one, from the People's Democratic Republic of Algeria, Department of Letters and English Language, was submitted for a master's degree in the field of English language and literature: "As an AI language model, I am unable to make alterations to your request without any specific text." They accidentally asked ChatGPT to do something and forgot to provide the actual text for it to work on. "I apologize, but as an AI language model, I am unable to rewrite any text without having the full original text to work with." They've done it twice. Over on Twitter, people were going off about this.
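The Google Scholar search from the start of the video can be reproduced as a URL. As a minimal sketch, this builds the search link using Scholar's standard `q` and `as_ylo` query parameters; the function name and structure here are just illustrative, and note that scraping Scholar results programmatically is against its terms of service, so this only constructs the link you'd open in a browser.

```python
from urllib.parse import urlencode

BASE = "https://scholar.google.com/scholar"

def scholar_url(phrase, exclude=None, year_from=None):
    """Build a Google Scholar search URL for an exact phrase,
    optionally excluding a term and limiting to recent years."""
    q = f'"{phrase}"'          # quotes force an exact-phrase match
    if exclude:
        q += f" -{exclude}"    # leading minus excludes a term
    params = {"q": q}
    if year_from:
        params["as_ylo"] = year_from  # "since year" filter
    return f"{BASE}?{urlencode(params)}"

# The search used in the video: the tell-tale phrase, minus "ChatGPT"
url = scholar_url("as an AI language model", exclude="ChatGPT", year_from=2023)
print(url)
```

Opening the printed URL shows exactly the kind of hits discussed above: papers that quote the phrase by accident rather than about AI itself.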
This was first spotted by Science Banana, and people were really angry, but I'm less concerned, because, first of all, there aren't many of these this year given the number of papers published. Secondly, they're in journals, peer-reviewed or not, where this doesn't really matter. They're not high-impact-factor journals; this is just people trying to bump up their metrics and their H-index to get further along in their career by taking the easy path. But there is something I think we should be worried about, and it isn't academics using AI; it's something much deeper. The first thing we should worry about, and this is a symptom of a problem rather than the problem itself, is that this paper was published this year in Resources Policy, a reputable Elsevier journal with an impact factor of 10.2. In my field that's actually quite high, so you'd expect some standards for publishing in this journal and for getting past peer review, but these three authors from China managed to get it in with this sentence: "Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table." They've clearly used AI to look at data and create text around that data, and look, that is not the problem. I think this is the way academic writing will be done: these methods of reading and writing will only grow in the future, with more tools and more sophisticated ways of using AI to write papers. This actually points to a much, much deeper problem: clearly no one has peer-reviewed this paper. How did it pass co-authorship, where these three people supposedly looked over the paper, and just allowed that to float through?
How did it get past an editor? An editor looks at a paper when it first comes in and says, yeah, cool, let's send it out for peer review. How did it get past peer review? How did it get past revisions from that peer review? How did it get past typesetting? And how did it get past the initial and final checks for publishing in this journal? I think this points to one thing: this has not been properly peer-reviewed. That is the bigger issue here, not that people are using AI; I think that is the future. What this is a symptom of is that there are publishing cabals; maybe this editor just allowed the paper through because it came from their friends. That is the bigger issue we should be super worried about: Elsevier and Resources Policy have a much deeper problem where papers can get in without going through rigorous review. And I think ChatGPT has even bigger implications when it comes to peer review itself. There was an article published in The Guardian last month asking, are Australian Research Council reports being written by ChatGPT? One academic told The Guardian that their assessor reports included the words "Regenerate response". That's the button at the bottom of ChatGPT that can accidentally be highlighted when you're copying and pasting from ChatGPT into a document. That means the peer reviewers in charge of reviewing grant applications, who can dictate where the money goes, are not reading these applications. And look, that's because they are under a lot of stress. In a given year, researchers are asked to process up to 20 proposals, each typically 50 to 100 pages long. That is on top of their normal academic workload: on top of peer reviewing, publishing, supervising students, and doing university admin. No wonder they are looking for a shortcut.
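All of these slips share a handful of tell-tale phrases ("as an AI language model", "Regenerate response"), so a journal or funder could at least screen submissions automatically. As a minimal sketch, assuming only a small illustrative phrase list (a real screening tool would need a broader, maintained one), this flags each occurrence with some surrounding context:

```python
import re

# Illustrative tell-tale strings from the examples in the video;
# this list is an assumption, not an exhaustive detector.
TELLTALES = [
    "as an AI language model",
    "regenerate response",
    "I don't have access to the full text",
]

def flag_ai_boilerplate(text):
    """Return (phrase, context) pairs for each tell-tale phrase found,
    matching case-insensitively with ~40 characters of context."""
    hits = []
    lowered = text.lower()
    for phrase in TELLTALES:
        for m in re.finditer(re.escape(phrase.lower()), lowered):
            start = max(0, m.start() - 40)
            hits.append((phrase, text[start:m.end() + 40]))
    return hits

sample = ("Please note that as an AI language model, I am unable to "
          "generate specific tables or conduct tests.")
print(flag_ai_boilerplate(sample))
```

A check this cheap could run at submission, at typesetting, and at final proofing, which is exactly where the papers above slipped through.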
So where does that leave us? Well, I think universities, publishing groups, and any academic service are going to have to get really serious about AI tools: what the tools can do, where and how they can be used, and making sure academics are given enough resources to do the job properly if they're not allowed to use those tools. We are now at a point where AI is going to change academia, and we need to make sure the policies and processes inside and outside universities make it easy for people to use AI. Rather than saying AI is bad and no one should ever use it, we should make sure people can use it and that the policies are in place for using it the right way. If you like this video, remember to go check out the one where I talk about going from data to paper using AI. We're stepping closer and closer to that magical realm of using AI to produce papers directly from data, so go check out that video, where I'll fill you in on all the details. So there we have it: everything you need to know about academics caught in the act, copying and pasting ChatGPT output directly into their peer-reviewed papers. It needs to stop, we need to take a serious look at how a high-impact-factor journal let that text through to publication, and we need to make sure that ChatGPT, as a new tool, is used appropriately for science and research going forward. Let me know in the comments what you think and what you'd add, and also remember there are more ways you can engage with me. The first is to sign up to my newsletter: head over to andrewstableton.com.au forward slash newsletter. The link is in the description, and when you sign up you'll get five emails over about two weeks, covering everything from the tools I've used, the podcasts I've been on, how I wrote the perfect abstract, my TEDx talk, and more.
It's exclusive content available for free, so go sign up now, and also go check out academiainsider.com. That's where I've got my ebooks, my resource pack, the blog, and the forum, and it's all there to make sure that academia works for you. Alright then, I'll see you in the next video.