Speaker 1: I started hearing some really interesting things about Claude, the competitor to ChatGPT, a large language model that could be better for research. So let's check it out. I wanted to see how it would perform across a range of areas: writing, talking with research, data analysis, and generating research questions. We're largely going to stick to the base models and ask them questions, but I'm really interested to see what kind of responses come up and whether Claude is actually better than ChatGPT. It's important to note that I've subscribed to Claude Pro and to ChatGPT Plus, which should give us the best comparison between the two.

The first thing I wanted to check was writing and its ability to debate and handle ideas. The prompt begins "Your task is to engage in a debate", and I made it fairly long to make sure it gave me what I wanted, then put it into both Claude and ChatGPT. Claude's response was short. It definitely had a debate with itself, but it didn't get into the nitty-gritty and it lacked structure. ChatGPT, given exactly the same prompt, actually recognised that it was having a debate, going backwards and forwards: we've got opening statements for the pro-AI argument and the anti-AI argument, plus the main points of discussion for both sides. I think this is really what sets ChatGPT apart when it comes to debating. It understood the brief a lot better than Claude did.

Now for academic writing, how does each perform? For both I used the prompt: write me an introduction suitable for a literature review about organic photovoltaic devices. I'm not asking it to go and find data, papers or science, just to work from its base model, so I wonder what each will be able to do. I'm going to click go on both and see what happens. Once again, ChatGPT has started to create a structure: it's got "Introduction" as a subheading, and even that small thing gives me a little more confidence that it understands the brief. Claude has produced a good introduction with some facts in there, and overall it has done an okay job; you could use it as a basis. ChatGPT has also covered the appropriate things, though it leans on the usual telltale phrases. I think every large language model has little tells: "however" is one, and they also like saying "firstly" and "in conclusion", which I don't think appears in this one anyway. Claude has given us a bit of a structure too: it says the review will begin with a brief background, then look at the photoactive layer, the electron donor, the electron acceptor, and the interfacial layers, which are really important for efficiency in OPV devices, and then examine recent work. So overall it has done a good job. A win for both, really, in different ways, but I feel ChatGPT understood the brief a little better, and it even says you can then go on to detail sections on the history, materials, mechanisms, breakthroughs, challenges and future prospects of OPV devices. It has actually helped me out a little, which I like.

Another really important thing that large language models can do is look at research.
You can give it a PDF, give it research, and say, hey, summarise this for me. It's something I would have done a lot when I was first going through the literature, trying to separate what's important from what's not. Here we can see we can upload files: Claude accepts PDFs, text, CSVs and so on, but only up to a maximum of 10 megabytes. So if we go in and have a look at my papers, here are some papers I produced during my PhD and postdoc. Let's go with the pathway to high throughput, low cost indium 3... there we are, so we'll click that open, it's going to do some processing, and then it always does this. I tried it before: "Text extraction failed for one of the uploaded files. Please try again." It does not like PDFs from academic institutions and journals, and I don't know why. With ChatGPT you do still require a plugin to access PDF documents, so I think that's a small win for ChatGPT, but neither of them can take that information in unless you copy and paste it. I got a bit frustrated with Claude. I tried numerous academic documents and PDFs, but it just kept saying the text couldn't be extracted. I think I would go to my favourite place, which is HeyGPT, where you can chat with PDFs. I've chatted with PDFs there in the past: you use plugins, chat with files, select them, there's a PDF, you apply it, it goes in, you say use. It's just so much easier. Neither Claude nor ChatGPT does well here, which is why, unfortunately, we still have to rely on external companies to provide large language model interaction with PDFs and academic documents.

Both of these large language models allow you to upload data, which I like. However, there's a quirk with Claude that makes it almost unusable for large data sets. Here we go: we've got Claude up, I'm going to go to my desktop and select my data. I've uploaded it, but the problem is that, down here, you can see it's 724% over the maximum length I can put in, and it wasn't really that much data. With ChatGPT Plus, if I go to Advanced Data Analysis, put in my file and use a really short prompt that says "write a short report highlighting the major changes and trends observed", we can set that one on its way, whereas with Claude you just can't get it to work. I had to make the data set so small that it became annoying. If I had to do this for my data as part of my research, I think I would just get frustrated. In this case ChatGPT is simply winning when it comes to the code interpreter, or what they now call Advanced Data Analysis. It knows what it needs to do with the file it's uploaded, and you can see it beginning to formulate ideas about what's in there. It's working hard to produce different types of analysis and different graphical representations. It may not always be amazing, but considering it's one very small prompt, it's doing a pretty good job. If we try to do the same thing with Claude on a much shorter data set, and I would have to split it up over multiple data series to get it all in there, you can see it has taken a fair amount of time to go through the data, but now it's producing output. It's not trying to do anything other than give me the major changes and trends.
It has done a good job of picking out the information, but if you're looking for something more advanced, something that really works with you to find the most interesting things from a data analysis perspective, I think ChatGPT is trying its best. Just from putting in the data and asking the question, we've got the five countries with the highest pollution-related deaths and the five countries with the lowest, and once it has done that initial check over the data, that's when we can start asking it questions. I feel like this is a win for ChatGPT.

Now, I was interested in what these large language models would do, based on their base data, if I just said: hey, formulate a research question for me. They need to understand what a good research question is; they don't necessarily need to go out and find information. Let's see how well they do. I've got a simple prompt to put into both: given the rise in reported mental health issues among teenagers, propose three research questions that delve into the relationship between smartphone use and adolescent mental health. Let's send that off, do exactly the same with ChatGPT, and see what happens. Once again, Claude has done a really good job and given me three potential questions I could look into, but it's a relatively short answer, and as I read over it I notice that ChatGPT has done that structuring thing so much better again. It has questions and rationales, and I think it understands on a much deeper level what a good research question is. Claude may be a great tool for that first attempt, but I don't think it provides much more for the extra money, whereas with ChatGPT, the access to GPT-4, Advanced Data Analysis and the plugins makes it much better for research at the moment. Let me know in the comments what you think, because your mileage may vary. If you liked this video, remember to go and check out the one where I talk about how ChatGPT unlocks research genius; I think you'll love it.

So there we have it: that's what I think about Claude versus ChatGPT for academic research at the moment. There's no doubt, based on all the testing I've done, that ChatGPT just understands the prompts and what researchers and scientists really want from the information they're gathering. It seems to structure the output much better, whereas Claude gives much shorter answers, doesn't have the capacity to take in a lot of data, and always feels a little stunted. It feels like it could give a bit more but doesn't, whereas ChatGPT, especially when it came to data analytics, really started to go deep, really started to understand the sort of thing I wanted, and I feel I could rely on its base model a lot more to understand what a good research question is all about. So let me know in the comments what you think and whether you've experienced something different when comparing Claude and ChatGPT; I'd love to find out. And remember, there are more ways you can engage with me. The first is to sign up to my newsletter: head over to andrewstoughton.com.au/newsletter (the link is in the description), and when you sign up you'll get five emails over about two weeks.
Everything from the tools I've used, the podcasts I've been on, how to write the perfect abstract, and more. It's exclusive content, available for free, so go and sign up now. Also remember to check out academiainsider.com. That's my project where I've got ebooks, resource packs, the blog and the forum, and everything over there is designed to make sure that research and academia work for you. All right then, I'll see you in the next video.