Speaker 1: The AI king is dead. Apparently there's a new leader on top of the AI orgy: Claude 3 Opus has just surpassed OpenAI's ChatGPT. But how good is it for research? Let's check it out. When you head over to Claude, this is what it looks like. "Good morning, Andrew" — so polite. Good morning to you, Claude. It's very similar to ChatGPT: you just put in your question, and you can attach files. It says "upload docs or images", which is great — we're going to be testing images in this video, so stay tuned. You can attach up to five, which is a bit of a limitation; we'll see that a little later.

The first thing I wanted to do was check its text generation. I said: please provide me with an outline of a literature review about OPV devices. It's probably too simple a test now, but I wanted to know — does it even understand what I mean by OPV devices? Yes, it does, and all of its suggestions are excellent. It would be a long, quite in-depth literature review, but I love how much information it gives. It's not just a handful of short sections; it's essentially everything you could ever write about for OPV devices, and I love that it's so detailed. That's a great start.

The next thing I wanted to do was see whether it was any good at actually recommending papers rather than completely hallucinating them. So I said: I've just started a PhD and I'm scared of the hairy ears of my supervisor — I've seen some hairy ears in the academic world, let me tell you — can you suggest three papers about transparent electrodes that I could use to start my literature review? It replied that it understands starting a PhD can be intimidating, but try not to let anxieties about your supervisor distract you from your work. Oh, that's so nice. Thanks, Claude. It's best to focus on your research. And then it actually gives me the answer: here are three good review papers. I felt heard in that response. It lists the three papers with a little blurb about what each one covers.

I wanted to know: is it hallucinating? That was a big issue in the early AI days, and it's getting much better. You can also see that this chatbot's knowledge cutoff is August of last year, so these should be relatively up-to-date papers if it chose new ones. But it didn't really give us new ones — one was 2012, one 2014, one 2010. I didn't ask for up-to-date papers, so that's OK. But is it hallucinating? Let's have a look. No, it's not hallucinating on this one, and it wasn't hallucinating on this one either. So it looks like it really is capable of finding actual papers in the literature. Now, if I ask it for up-to-date papers, let's see what happens: please give me recent papers. Please. Thank you very much. I like you. Done — you've got to be polite to these AIs. And it immediately apologises: I apologise for not providing more recent papers. Now it's just telling me it has a cutoff date. Well, give me the latest within your cutoff date, then. Don't just tell me where to find papers — although that is a little bit helpful. It also gives me keywords here: transparent electrodes, transparent conductors — good — and specific materials. It knows about graphene, metal nanowires and conducting polymers.
Hmm, that's good, but it isn't great. Let's see if it can do a little better with the next questions. One thing that really impressed me with ChatGPT was its ability to look at images and work with visuals. So I've uploaded this schematic from one of my papers, with no context other than the image itself, and let me tell you, ChatGPT does a really good job with it. Does Claude do any better? I just said: explain this schematic to me. And it does pretty well. It can clearly read all of the text, and it can clearly follow the arrows, but it missed some of the details. Here you can see it's got single-walled carbon nanotubes, silver nanowires and deionized water — brilliant, it got all of that from this little bit here. Love it. But down here it gets a bit mixed up about the order: it says the film is flipped over and brought into contact with an epoxy substrate, but that doesn't happen until this later step. So overall it has given me all of the different steps, but the intricacies that ChatGPT is a little better at deciphering haven't come through.

So what happens when I upload a load of figures? That was one thing I really liked about ChatGPT: you give it a load of figures and say, hey, suggest a structure for me, or explain this figure, and it just works with all of them. Let's see what happens. I uploaded five images. Why five? Well, Claude has a limit of five, which is a little disappointing, because with ChatGPT, you can see that I was able to upload all of the figures — one, two, three, four, five, six, seven, eight figures — and it worked with them perfectly. Whereas Claude says no, no more than five, thanks. That's OK for some fields of research, but in my field I would quite often have far more than five figures, so that was a little disappointing. Nonetheless, I said: here are some figures for a paper that I'm writing. I have not had my morning coffee yet and need to go to the toilet, so while I'm there, could you place them in a logical order for my paper? And this is what it came up with. Look, I am more than happy with the paper structure it recommended. It said begin with figure two — this one — so let's look at structure first, then this one. And it gives you its thoughts and the reasoning behind the order, which is a nice little touch, because then you can argue with it and say, no, I think it should be this order for these reasons. It was a good enough answer.

But I did exactly the same thing with ChatGPT — there it is, the response. These are some figures for a paper I'm writing, blah blah, morning coffee, pooping, all of that stuff. And it says, OK, let's look at these figures. First of all, ChatGPT laid out what it was going to base its recommended structure on, and then it actually gave me an image of all of the figures together, which is a fantastic little visual prompt. Then it says: put them in this order — which isn't the order I uploaded them in — grouping the figures by process, microscopy and characterization. And then it explains why: the figures should be ordered according to the progression of your experimental narrative.
So it gives you its reasoning and its logic. So who won this one, Claude or ChatGPT? Hmm, I think ChatGPT has the edge for research at the moment. Love you, ChatGPT — not sponsored. There was another finicky thing I didn't quite like with Claude, and it's this. I've got this prompt — I'm a little bit hungover, can you help me understand this paper? — and I went and uploaded one of my papers. This was the issue; I hope it does the same thing again. I click open, and this little message pops up in the corner: text extraction failed for one of the uploaded files, please try again. It did that for a number of the papers I was trying to feed into it. So if you want to use it like ChatPDF, a chat-with-document tool or an explain-paper tool, it doesn't do a great job of replacing them. With some papers, like this one, it works: I can upload it, it pops up — there we are — and then I can click go. I'm a little bit hungover, and it says, yeah, of course, no worries. So it has issues with text extraction on some papers — I guess it's a formatting thing with some journals — but it's a little frustrating if you're going to rely on it alone. I've never encountered that with ChatGPT. But once you do get something in there, you can see it gives you a really well-thought-out response, with the important dot points you would need to understand the paper fully. Overall, it's just a little bug that, if they tidied it up, would make it that little bit better.

OK, I wanted to know how it dealt with analytics. Analytics in ChatGPT is just so awesome: you upload some data and you can ask questions about it. I wanted to know if Claude could do the same thing — and, spoiler alert, it can. So here we go: your PhD experiences. This was from the questionnaire I put out to all of my subscribers asking about their PhD experiences, and I said: here are the results from my questionnaire about the PhD experience, what are the take-home messages? It was able to go in there, find them, and it knew the different questions I'd asked based on the columns the data was in: best parts of doing a PhD, toughest parts of doing a PhD, typical day, use of AI tools, would they do it again. So it was able to go into a substantially sized Excel document, extract all of this information and summarize it, which is quite the task — it would take me, you know, an hour to do manually. So it is capable of doing this, and I absolutely love it. But will it replace ChatGPT? For me, not yet. I think the next video you should watch is this one, where I talk about using Perplexity AI for research. Go check it out. Thank you.