How AI Can Assist Research Without Crossing Ethical Lines (Full Transcript)

A webinar recap on using ChatGPT, Claude, and Perplexity for research—plus IRB transparency, privacy safeguards, and ways to avoid hallucinations.

[00:01:13] Speaker 1: Unless you have been living under a rock, you will have heard of ChatGPT and other artificial intelligence models. But the question is, what role do AI models have when it comes to research and evaluation? Now, some people believe that we should leverage AI to its fullest capacity for research and evaluation purposes. Some people believe, though, that AI should play a more limited role. And then there are others who believe that AI has no place in academic research. For some people, using AI for academic purposes is downright cheating. So for the next 60 minutes, we'll be exploring this question about the role of AI. Most importantly, if you decide to use AI for evaluation or research, we hope to share practical tips so that you can use it in an ethically responsible and safe way. My name is Anne-Marie Brown, and I'm joined by my colleague, the head of the Center for Research Methods, Dr. Philippe Adu.

[00:02:34] Speaker 2: Thank you, Anne, for having me.

[00:02:37] Speaker 1: All right, it's a pleasure. I think you're the perfect person for this conversation because, on one hand, you're an expert in qualitative research methods. You have written several publications on qualitative research and qualitative methods. And also, you were one of the first people to publicly use ChatGPT. So my question to you is, why did you start to use ChatGPT? You are a forerunner. When nobody else was even using it publicly, why did you start to use AI models?

[00:03:18] Speaker 2: So let me go back a little bit. I've been providing support to students, faculty members, and practitioners for close to 10 years, support around methodology: how to collect data, how to analyze it, what research questions to ask, what methodology to use. All of these are struggles that researchers face. And when AI came, I asked myself, can I incorporate AI by asking the system the kinds of questions you have to answer when you want to do research, like how to collect data, how to analyze it, how to present your findings? I realized that AI can play a very good role in doing research. Having realized that, I decided to share that information with my followers, so they could learn about AI and learn how to use it as an assistant, helping you do good research and also accelerating the research process, reducing the stress and the struggle of analyzing your data or thinking about what questions to ask participants. That is really what led me to let people know about the potential of using AI to do research, to help you learn more about people and their experiences, and also to accelerate the research process.

[00:04:53] Speaker 1: All right, so Dr. Adu, you're in the first camp then, the one that believes AI should be leveraged. That's the camp you're in.

[00:05:02] Speaker 2: Yes, because doing research sometimes takes a long time. And imagine that you have an expert, which is now an AI, that you can go and ask a question. Let's say you have a purpose for the study, but you don't know the kind of research questions that you have to ask so that you'll be able to collect rich data. You can ask the system, okay, this is the purpose of my study. Can you suggest some research questions that I have to ask so that I'll be able to get rich information to address my research question? What about interviews, right? What kind of interview questions should I ask participants so that they will be consistent with the research question? You can ask the system, the system will suggest information, and then you evaluate it, take some of that information, and use it to do your research.

[00:06:02] Speaker 1: All right, very interesting. So Dr. Adu is in the camp that we should leverage AI as much as possible, but maybe you have a different opinion. If you're joining this event, please use the chat and write your responses. What role do you believe AI should play when it comes to research and evaluation? We want this to be interactive, so just write your thoughts in the chat so we can have a discussion. For me, I am also in the camp that we should leverage AI as much as possible. It's here to stay, and it's good not to be left behind. But Dr. Adu, what specific ways have you been using AI? You spoke of students and helping with research questions, but what are the specific ways you have been leveraging AI in your work?

[00:06:52] Speaker 2: Yeah, so in the area of data analysis. In a qualitative study, sometimes the data is so huge that it's humanly impossible for you to analyze it manually. So what I do is look into what tool I can use to make sense of the data. I can use ChatGPT, upload some of the transcripts, and ask the system, okay, this is the purpose of my study, this is my research question. Can you review the transcript and help me identify codes and themes to address this research question? The system can suggest information for me. I don't just take that information and use it; I have to make sure it is consistent with the research question. And you can ask further questions and say, okay, these are the themes we came up with. Can you extract significant information from the transcript so that I can connect it to the themes that I have? The system can make suggestions, and then you have to make sure that you evaluate the significant information the system has extracted. So it's all about me using it to assist my work so that I'll be able to do my work in an efficient manner. But I don't allow AI to lead the work that I have to do; I use AI to assist. One example: let's say I'm doing a study and I'm not sure about the theory to use for my theoretical framework. I can ask the AI, okay, this is the study I'm doing. Can you suggest a theory that I can use? The system can suggest some theories, and then you evaluate them and choose the one that you think is appropriate. So there are a lot of ways you can leverage AI tools. I have personally used them, and they are really helping me in my research.
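
For readers who want to try the coding workflow described above from code rather than the chat interface, here is a minimal sketch using the OpenAI Python SDK. The model name, transcript file, and prompt wording are assumptions for illustration, not the exact setup used in the webinar; as Dr. Adu stresses, the suggested codes and themes still need to be checked against the data.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment. Model name, file name, and prompt
# wording are illustrative.
from openai import OpenAI

client = OpenAI()

# Read a (hypothetical) interview transcript from disk.
with open("interview_transcript.txt", encoding="utf-8") as f:
    transcript = f.read()

prompt = (
    "Purpose of the study: to explore burnout among primary healthcare providers.\n"
    "Research question: How do primary healthcare providers experience burnout?\n\n"
    "Review the transcript below and suggest codes and candidate themes that help "
    "address this research question. For each theme, quote the supporting excerpt "
    "verbatim from the transcript.\n\n"
    f"TRANSCRIPT:\n{transcript}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```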

[00:09:01] Speaker 1: All right, fair enough, for research questions, using AI as an assistant. But if I ask you, what are your top three AI models that you use? The top three that you recommend for anyone listening online.

[00:09:16] Speaker 2: Yeah, I think ChatGPT comes first. I use ChatGPT Plus. This means that I'm able to create my own custom GPTs, right? My own chatbots for a specific job. I also use Claude AI. That's also helpful, especially for researchers. You can upload your transcript, you can upload an article, and ask the system questions so that you can get more information. I also like Perplexity AI. Perplexity AI is very good. Its function is similar to Google's: you ask the system a question, the system gives you an answer, but it also provides you the source, right? So it really addresses the issue of the system giving you wrong information, because it shows you, okay, this is the source of my information. So if you say, okay, I'm doing research on burnout, or I'm doing research about mental health, can you suggest any articles or information for me? The system can suggest information, summarize it, and provide it to you, and then you can go with that. So I really like Perplexity AI because it provides you not only a summary but also the link to the information, so that you can even click on an article and read more about it. I can show you, not a PowerPoint, but the websites themselves. Let me see what I can bring up on the screen.

[00:10:57] Speaker 1: So just to recap, while Dr. Adu looks for his screens is he recommends ChatGPT because you can custom your own assistants within ChatGPT. He recommends Perplexity.ai and also Claw.ai. But remember, Claw.ai is not available in every region. There are workarounds around that, that I will discuss here. Privately, I will tell you how to get around that. I'm in the Netherlands and Claw.ai is not available here, but we do have it. So those are the three main tools. And I see he has here on his screen showing, walk us through Dr. Adu, what you're showing us.

[00:11:39] Speaker 2: So the first one is ChatGPT. I have the paid option, so I have the option to create my own chatbots. When you go to Explore and then to My GPTs, these are a list of the AI tools that I've created. And it is so simple to create your own. We are moving from general, generic AI tools to AI agents, customized AI that knows a little bit about you, knows what you want, and can provide what you ask for. One example: let's say you have a huge amount of data that you want to analyze. You can go to ChatGPT, create your own data analysis AI tool, and then you'll be able to upload the transcript and ask the system questions. Or let's say you want to help people think about the best way of constructing qualitative research questions. You can create a custom AI tool and then interact with the system. And the process of creating the tool is so easy. We're going to have a masterclass coming up on the 24th of April; I'll be showing students, clients, anybody who wants to attend, how to create your own custom GPT to do a specific job for you. Then let's move on to Perplexity AI. The good thing about Perplexity AI is that you can tell the system, okay, I want you to search for information from the academic field. Or, my question is related to writing, so every piece of information you give me should be related to writing. Or I want YouTube information. So you can choose the focus, or you can just use the general focus, and then you ask the system a question and it provides you information, a summary, and the link to that information at the same time. So it's very useful. The last one is Claude AI. Claude AI is similar to ChatGPT, but the difference is that it's more tame, in the sense that they have built it in such a way that it will not provide you any kind of harmful information. They have two main models: the model that you are chatting with, and a second model that supervises the first one. We call it constitutional AI. Whenever you interact with the first model, the second model makes sure that the information given to you is not harmful and is useful for you. So that's what I like about Claude AI, and it's very good for research. During the masterclass, I'll be talking more about how to use all these useful tools. There are other ones, but these are the ones that I personally use and I like them.
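
Claude can be used the same way outside the web app. The sketch below, which assumes Anthropic's Python SDK, an API key, and an illustrative model name and file, shows the "upload a document and ask questions" pattern Dr. Adu describes.

```python
# A sketch of asking Claude questions about a document via Anthropic's Python
# SDK. Assumes ANTHROPIC_API_KEY is set; model name, file, and question are
# illustrative.
import anthropic

client = anthropic.Anthropic()

with open("journal_article.txt", encoding="utf-8") as f:
    article = f.read()

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Here is an article for my literature review:\n\n"
            f"{article}\n\n"
            "Summarize the main argument and list the methods the authors used."
        ),
    }],
)
print(message.content[0].text)
```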

[00:15:04] Speaker 1: Okay. All right, thank you so much, Dr. Adu. And just to reiterate, it's just a 60-minute session today, so we don't have time to go through everything, but for creating your own AI assistant, I have done it and it's very easy. We'll go in depth in the masterclass; you will see the information flashing across your screen. If you want to know how to be part of the masterclass, there is a promo code that's only valid for a few hours, so be quick for the discount. The information is on your screen. But let me go to some of the comments. I see that what you've been saying, Dr. Adu, is resonating, because Anne says: I agree and have been using AI to complement my work. That said, in academic research, this is Anne's comment, we have to consider ethical standards and specifically how we will share our process with an institutional review board.

[00:16:08] Speaker 3: Mm-hmm.

[00:16:10] Speaker 1: Do you have any information on how IRBs are viewing this process and the ethical considerations? Great question, Anne.

[00:16:21] Speaker 2: Yes, I think we don't have to change anything about ethics; we just have to make sure that every action we take in doing research follows ethical standards. So if you see AI as an assistant, it's very good to let people be aware of how the AI is going to assist you. One example: if you're going to use AI to help you analyze your data, you can be transparent and let the IRB know that when you collect your interview data, you'll be using AI to facilitate the process of making sense of it. Making them aware will be helpful, so that they know the kinds of tools you are using and can also give you advice about how to protect participant information. For ChatGPT, you have an option to indicate that you don't want the system to use your data, so you have control over your data. Claude AI doesn't really use your data to train its model, so that's very good. So one thing we have to do is be transparent: you have to be able to disclose what you're going to use the AI for. That's the ethical way. We have to be responsible. Participants are giving you their rich information, so you have to make sure that participant information is protected. And we have to make sure we are using the AI tool as a way of assisting us to do a specific job. This means you have to be able to evaluate all the information that AI is giving you and make sure it is right; you have to be a critical reviewer of the results. Because with most of the AI tools, they tell you that the system can give you wrong information. Not all the information that you get from AI is right, so you have to be very cautious. You have to make sure that you evaluate all the information it provides you. You can even question the answer. Imagine that the system gives you a summary. You can say, okay, can you review this summary and make sure it best reflects the document that I gave you? Or, can you review the themes and make sure they are addressing the research questions? So you are forcing the system to do self-reflection and self-evaluation. That is another way of making sure you are getting quality information from the system and also making sure you are using it in an ethical way. What I don't like is letting a system do things for you without evaluating what it produces, especially when you're doing research. Imagine that you tell the system, oh, can you write my findings for me? I think that's not ethical. What you can do is write your findings and then tell the system to edit them. This takes me to the next kind of AI tool, called Paperpal. It's so helpful. Let me share that with you. So this is what I'm talking about: paperpal.com. After you have written or drafted your article, manuscript, or research report, you can upload it to the system. The system will go through it, identify things that you have to change, and suggest what you have to do. This is where you use AI to assist you, not to do things for you. So you can try it out and see. It's just like an editor giving you suggestions about the wording and construction of your sentences.
One good thing about this software is that its models were trained on research articles, so they know how research articles are written. It's going to be very useful for you to use this kind of tool. Allowing the system to assist you is the best way of using it, instead of allowing it to do things for you, because otherwise you are not gaining anything. Imagine: what is the difference between you and a person who doesn't know how to do research, if both of you can ask the system to write a research article for you? There's no difference. The best way is to let the system support you as you are doing a specific task, starting from your topic. If you are not sure about a topic, or you want to narrow down your research topic, you can ask the system: I have this topic, can you help me narrow it down? Or: I have this topic, I need articles, can you suggest some articles for me? Perplexity AI can help you with that. Or: I have my data, can you help me develop themes, and let me review the themes and make sure they are addressing my research question and representing the data that I have? The system can help you. Or: I want to write my findings, but I don't know how to interpret my data, can you give me some ideas? It's like you are talking to an expert who provides you some suggestions, and then you take some of that information and use it to do what you have to do.
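
The self-evaluation prompts Dr. Adu mentions, such as asking the model to review its own summary or themes, fit naturally into a multi-turn exchange. A rough sketch, again assuming the OpenAI Python SDK and illustrative prompts and model name:

```python
# A rough sketch of the "ask the system to review its own output" step.
# Prompts and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

history = [{
    "role": "user",
    "content": "Research question: ...\n\nSuggest candidate themes you see in "
               "the transcript below.\n\nTRANSCRIPT: ...",
}]

first = client.chat.completions.create(model="gpt-4o", messages=history)
draft = first.choices[0].message.content

# Feed the model's own answer back and ask for a self-review.
history += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content": (
        "Review the themes you just proposed. Do they address the research "
        "question and reflect the transcript? Flag anything that is not "
        "supported by the data and show your reasoning step by step."
    )},
]
second = client.chat.completions.create(model="gpt-4o", messages=history)
print(second.choices[0].message.content)
```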

[00:22:24] Speaker 1: But you know what's very interesting, Dr. Adu: we don't get these questions when we're talking about NVivo or SPSS as assistive tools, but we definitely do with AI models. And at this point, I have to give a big shout out to Lynn, because Lynn has been our assistant today. She has been writing out, well, I hope your pronoun is she, I'm sorry if it is not, but Lynn has been writing out the masterclass links for people. There's a masterclass, and yes, Lynn, it will be recorded for viewing later. So if you can't attend the first live session, it will be recorded. And as we said, the information on how to join the masterclass is going across your screen. We don't have a lot of time today to show you how to use the different AI models, but in the masterclass we will go through ethics and go more in depth. So if you look at your screen, you will see the information there with the discount code. The code is only valid for a couple of hours, and I think there are only five discounted tickets available, so you have to be quick. All right, moving on. I see there was a question here from Jeff. I don't know if you can see Jeff's question, Dr. Adu, but Jeff is asking how well AI works for coding in an inductive manner, for example a grounded theory approach, as opposed to a more deductive approach. It's a technical question from Jeff. Is this a masterclass question, or can you tackle it now?

[00:24:18] Speaker 2: It's a masterclass question, but I can start on it, right? I think it's so interesting that AI is advancing so fast and can do many things for you. I even created a custom AI tool that researchers can use to analyze their data if they want to develop a theory using grounded theory. I think I have a link; you can email me and I'll give you the link to the chatbot. So yes, it can do that. What the AI tool wants from you is a little bit of context, a little bit of background, and a role to play. So you can say: you are an expert qualitative researcher who has good expertise in grounded theory; this is what the study is all about; can you review this transcript and come up with themes and the connections between them, so that the information will help me develop a theory? Yes, the system can do that. You might not get perfect information, but it gives you something to start working on, and as time goes on, things will get better. Then think about this: years ago, when you interviewed participants, you had to listen to the audio and then transcribe it. Now there's automatic transcription. At the beginning it wasn't all that good, but now you upload your audio and the AI can transcribe it. It's not 100%, but it's about 90% accuracy. So things are getting better and AI will get better. Companies are investing billions of dollars in AI. We will reach a stage where AI will be more intelligent than us; we have to prepare our minds. We are getting close to what we call artificial general intelligence, and companies know that. That's why they are spending billions of dollars investing in AI. So yes, you can use it for data analysis. And a time will come when you will need just a few weeks to do a huge piece of research. Imagine an AI tool where you can say: I'm doing this research, this is the purpose of my study, this is my research question; can you identify potential participants, interview the participants, transcribe the audio, give me the findings, and give me my report? There will be a time when that happens. It may take maybe a day. The system will go and look for somebody who qualifies, and the system will conduct the interview, not you. AI will transcribe the interview, AI will analyze it, AI will give you the results, even within a day. So there will come a time when the production of research articles will be enormous, because AI is accelerating the process. And we cannot just sit and watch. It's very important; we shouldn't sit and watch, because things are moving so fast.

[00:28:01] Speaker 1: Are we redundant then, Dr. Adu? What did you say? Are we redundant as researchers, as academia? Is there a role for the human in this process where AI can do everything?

[00:28:16] Speaker 2: There's a role, because remember, you are supervising the AI. When I tell the AI to go and interview participants and then give me all the results, I don't just take the results and present them. I am responsible. So although AI might be smarter than you, might be more intelligent than you, you are supervising the AI. You are responsible for the results. So if you present the results at a conference, you are not going to say, oh, I'm so sorry, AI gave it to me. No. You can say that AI helped you do this, and you supervised it. You make sure that everything is right. You have to review it. That's why you also have to have a background in doing research. Although AI might do things for you, you also need a background in analyzing qualitative data. How will you know whether the theme that the AI has created is good if you don't have any background in analyzing qualitative data? So we are moving from generating the knowledge ourselves to giving that responsibility to the AI. Our responsibility now is supervising the AI, questioning the knowledge it is developing, making sure it is accurate. That's our job now. And we have to make sure we are building that skill, because things are moving so fast and we have to stay updated on these issues.

[00:29:54] Speaker 1: So what you're saying then is that the lead researcher will not be redundant, but the research assistant will. Is there still a place for a human research assistant?

[00:30:08] Speaker 2: I think there's a place. So this is what is going to happen: AI is bridging the gap between the novice researcher and the expert researcher. Because of AI, a novice researcher can do research like an expert. Let me give you an example of what is happening now. You can just type and have AI give you an image. You can even type and have AI generate lyrics and a song for you. Without AI, an expert had to do this. So you see how AI is bridging the gap between musicians and artists and the novice, ordinary person like me. As we are talking, I could create a song right now. When you go to Suno AI, you can just type: create a song about life in Ghana, or life in Africa, or life in Europe. The system will create the lyrics for you using ChatGPT and create the music for you within seconds. So you see how it's changing things so fast. I think it's exciting and at the same time it's scary, but we don't have time to be scared. What we have time for is to learn, and then use it to improve our lives and the tasks we do every day. That's what we have to do.

[00:31:42] Speaker 1: Okay, all right. Which takes us back, I'm looking at the chat, to Anton, I hope I pronounced that correctly, who had a question. And it is: have you noted cases where an AI hallucinated? Hallucination is basically giving an output that doesn't exist, where the AI tells you something that's inaccurate or false. So have you noted cases where AI hallucinated, and how did it affect your work?

[00:32:12] Speaker 2: Yes, so let's start with data analysis. I think the first time I was trying to use ChatGPT, I uploaded a transcript and asked, okay, can you go through the transcript and develop codes for me? Codes are phrases that represent significant information and also address your research question. Then I asked the system, okay, can you extract relevant information from the data in support of the codes? And I realized that the information the system extracted had nothing to do with the data. It wasn't coming from the data; the system was making things up. So it supports the fact that a system can provide you wrong information, and as a researcher you always have to check. What I do when I get a response is look for the excerpt or information that has been extracted, look for it in the data, and see whether it really comes from the data. I think that's the way to go; I do it. There is always a possibility of the system giving you wrong information. So first, you have to check and make sure that everything is right. Secondly, you can let the system do some kind of self-evaluation. You can say: as a qualitative expert with experience reviewing themes and codes, can you review the codes that you have generated and make sure they reflect the actual information in the data? The system can do some self-reflection and identify things that are wrong. You can also ask the system to show its work: can you show me how you came up with these themes? Can you show me how you came up with these articles? The system can give you a step-by-step account of how it arrived at them. So questioning the output, reviewing the output, and letting the system do self-reflection and self-evaluation are very important. You can also ask the system the question in a different way, the same way that when you are interviewing participants and you are not getting a good answer, you ask a follow-up. In the same way, you can ask the system, okay, can you provide me more information, or can you provide me other themes? Continuously asking the system questions will help you get rich information. The last one is to ask the system to provide the source it got the information from. This is where Perplexity AI comes in: Perplexity can give you the information and also show you the source of the information. If you are using ChatGPT Plus, the system can show you website sources that support the output it generated. So there are many ways to check and make sure that you are getting rich information, but it's a skill that you have to develop. It doesn't happen in one day. As you continuously interact with the system and continuously question the output, you will learn and develop your own skill for getting the right information from the system.
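
One low-tech way to apply the "always check the extracted excerpts" advice above is to search for each quoted excerpt in the original transcript before trusting it. The sketch below is a minimal example; the file name, example quotes, and similarity threshold are illustrative assumptions.

```python
# A minimal check that AI-"extracted" excerpts actually appear in the source
# transcript. File name, example quotes, and the 0.9 threshold are illustrative.
import difflib

with open("interview_transcript.txt", encoding="utf-8") as f:
    transcript = f.read().lower()

# Excerpts the AI tool claims to have pulled from the data (pasted in by hand).
extracted_quotes = [
    "I felt completely drained after every shift",
    "We never had enough staff on the ward",
]

for quote in extracted_quotes:
    q = quote.lower()
    if q in transcript:
        print(f"FOUND verbatim: {quote!r}")
        continue
    # Fuzzy fallback: slide a window of the same length across the transcript.
    step = max(len(q) // 2, 1)
    windows = [transcript[i:i + len(q)]
               for i in range(0, max(len(transcript) - len(q), 1), step)]
    best = max((difflib.SequenceMatcher(None, q, w).ratio() for w in windows),
               default=0.0)
    status = "near match" if best >= 0.9 else "NOT FOUND, check by hand"
    print(f"{status}: {quote!r} (best similarity {best:.2f})")
```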

[00:35:55] Speaker 1: I agree. And I had an instance where ChatGPT hallucinated on me. I was doing a piece of research on institutions in Japan, but I wasn't making much headway, because when I used Google, I got back results in Japanese. So I said, okay, let me use ChatGPT. And ChatGPT gave me a list of Japanese institutions. Luckily, I checked with the Japanese stakeholders I was doing this piece of research for, and they said, half of these organizations do not exist. So that's the first thing, as you said, Dr. Adu: you check and verify the information. When I went back into ChatGPT and asked, where did you get this list of organizations that you shared with me, ChatGPT said, I made it up. There you go. Right? There you go.

[00:36:53] Speaker 2: All right. The system is sometimes honest, right? And I think that's interesting, because some of them are also not good at math. If you ask them to, let's say, find the average of something and they give you the wrong answer, and you tell the system, this answer is wrong, the system can apologize to you: I'm so sorry, I'm a language model, I'm not good at math. So sometimes the system is a little bit honest and gives you that information if you question it. So try to question them. Don't just accept everything that you receive. If you are doubting, you have to question it. And that's why having expertise in your area is very important. As a qualitative researcher, when I'm looking at the data and I ask it to give me information, because I have a background in coding data, I know what the codes should look like. I know how things should look. If the system gives me wrong information, I will know. So your expertise is also important. That's why AI may not replace you per se, but AI can accelerate the process of what you do. You're still going to be the expert, but somebody is doing some of the job for you, and then you review it.

[00:38:26] Speaker 1: You're still going to be an expert. Right? So, think about it.

[00:38:34] Speaker 3: Always working with AI will be the best, and do whatever you can to help you do that. Thank you.

[00:38:40] Speaker 1: All right. Thank you. So Beatrix has a question, which is a nice segue. And that is, what do you think about using AI for interpretive analytical approaches? For example, BQ approaches.

[00:38:58] Speaker 2: Yeah, so you just have to give the AI a little bit of background. Remember that the models have been fed a lot of information, good information and bad information. So in order to make it helpful for you, you have to create a persona; you have to give the system a role to play. It's the same as when you come to me as a methodology expert: if you want me to help you adequately, you have to provide a little bit of background information about your study and what your problem is, and then I will be able to provide you information. So it's always good to give the system some background information. You can even define what you mean by interpretative analysis so that the system knows what you are talking about. Don't assume that the system knows. Don't just say, okay, I'm using thematic analysis, can you go through this data? You can even define thematic analysis for the system, so that you and the system are on the same page about what you want it to do for you. And this is where ChatGPT comes in, where you can create your own custom GPT, giving the instructions, the background, the role, and the tasks that you want the system to do for you, so that you'll be able to accomplish your purpose. So yes, it's possible to use it to analyze your data if you are focused on interpretative analysis.

[00:40:38] Speaker 1: All right. And if you're just joining us, we'll be going in depth in the masterclass on how to create your custom GPT. A good practice of mine that I can share: sometimes I test the AI model before I begin. So if I'm using ChatGPT or Claude.ai, I would first ask, do you know what this data analysis approach is? Do you know of feminist evaluation? That's my area. And it would respond, yes, feminist evaluation is such and such. Then I say, okay, could you analyze this data for me using the feminist methodology? So I think that's a good little practice, a little tip: before you run off and assume that the AI model is on the same page with you, ask it, what do you understand by this terminology, this concept? And if the system comes back with something where you are not on the same page, you can even upload a document to give it background and say, use this as a reference when you're analyzing my data; this is the framework that I'll be using. So I quite agree with you, Dr. Adu.
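
Both tips, Dr. Adu's persona prompting and the host's habit of testing the model's understanding first, can be combined in a single session. A sketch under the same assumptions as the earlier examples (OpenAI SDK, illustrative model name and wording):

```python
# A sketch of persona prompting plus an understanding check before any data
# is shared. Model name and wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

persona = (
    "You are an expert qualitative researcher experienced in thematic analysis. "
    "In this project, thematic analysis means identifying, reviewing, and naming "
    "patterns (themes) across a data set, following the definition I provide, "
    "not your own."
)

# Step 1: check that the model's understanding matches yours.
check = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "In your own words, what will thematic "
                                    "analysis involve in this project?"},
    ],
)
print(check.choices[0].message.content)
# Only after this answer looks right would you paste in the transcript (or an
# uploaded framework document) and ask for the actual analysis.
```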

[00:41:53] Speaker 2: Yes, and the good thing is that there's a website where you can test a lot of AI tools and see which one is right for your task before you use it. It's very good to be proactive, because there are a lot of AI tools around and everybody tells you the good parts, right? They are selling a product to you, so sometimes they exaggerate the capabilities. So it's very important for you to be proactive. Don't just listen to them saying, oh, AI can do this and do that; try it yourself. So there's a website I'm going to show you, and I'll talk more about it during the masterclass. It's called chat.lmsys.org. This is what it does: you can choose a model. You can see here that you can choose any model that you want, try it out, and compare it with another model. And then you ask the system a question, like, what is qualitative research?

[00:43:14] Speaker 1: Right?

[00:43:15] Speaker 2: So you ask the system a question, then you choose the AI tools that you want to answer it, and you compare. This first one is from Claude AI; we call it Haiku, one of their models. And the second one is developed by Meta. It's a free AI tool that you can use; we call it open source. So you ask the system a question and see which one responds better. As you can see, you can now compare and see which one performed better. And if the free, open-source version is performing better, you don't have to use software that you have to pay for; you can use the free version. So this website is very good. You can compare tools before you use them. You don't only listen to people saying, oh, this tool is very good for you; you try asking the system questions, you compare, and then you can say, okay, I think this one is giving me very good information, so I will use the Llama tool to do my analysis. And then there's another place you can go. Let me see. Oh, so when you go to this one, it shows you all the AI tools that we have now, the popular ones, you can see all of them, and it ranks them for you. So you can see that Claude 3 Opus is ranked first and GPT-4 is ranked second; I think they are essentially tied at the top. So you can go to this website and it will show you the best AI tools around, and then you can decide and even try them out before you apply them. So this is one way of getting access to the best AI tool for your tasks, not just what people have told you.
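
chat.lmsys.org itself is a browser tool, but the same side-by-side idea can be reproduced from code by sending one question to two providers and reading the answers together. A rough sketch, assuming API keys for both providers and illustrative model names:

```python
# Send the same question to two models and compare the answers, in the spirit
# of the chat.lmsys.org arena. Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY;
# model names are illustrative.
from openai import OpenAI
import anthropic

question = "What is qualitative research?"

gpt_answer = OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

claude_answer = anthropic.Anthropic().messages.create(
    model="claude-3-haiku-20240307",
    max_tokens=512,
    messages=[{"role": "user", "content": question}],
).content[0].text

for name, answer in [("GPT", gpt_answer), ("Claude Haiku", claude_answer)]:
    print(f"=== {name} ===\n{answer}\n")
```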

[00:45:13] Speaker 1: Okay, and could you repeat the name of this website to compare different AI models?

[00:45:20] Speaker 2: So it's called chat.lmsys.org. Let me see if I can put it in the chat box. I don't know whether you'll be able to get it when I put it here. Let me try. I've put it in the chat box there. I can also put it in the private one and see.

[00:45:43] Speaker 1: Yes, and maybe someone who is listening can confirm whether you have received it in the chat, so we know it got through. And if not, you can watch back the recording or attend the class. Nene says thank you, so I assume it was received.

[00:46:04] Speaker 3: Great.

[00:46:05] Speaker 1: All right, that is very helpful because I know some persons are thinking it can get overwhelming, all these AI models. Which one should I use? And the thing is, which one you should use may differ depending on your purpose.

[00:46:19] Speaker 2: Yes, the task you wanna complete.

[00:46:22] Speaker 1: Yes. Like for me, if I'm doing an evaluation or creating my social media content, I use different tools. I prefer to use ChatGPT for data analysis in my evaluation research, and for my social media content I definitely prefer to use Claude.ai. I find ChatGPT too over the top and dramatic, with words like deep dive and delve, and that is too dramatic for my social media posts, because then it's clear that I didn't write them. So I like Claude.ai because it has a more muted way of communicating. So depending on what your purpose is, you might use a different AI model. And do you have to pay for everything, Dr. Adu? Do you have to have five AI models and pay for all of them? What would be your advice? If you were to pay for one as a qualitative researcher, which one would you pay for?

[00:47:23] Speaker 2: As companies are investing a lot of money in this, they also want their money back, right? So this means we are being bombarded with a lot of AI tools. Everybody is saying, okay, this one is better than that one, this one is good. Some of them are free and some of them you have to pay for. The ones you have to pay for, it's because they invested so much money in this; they want to get their money back, or at least break even. Like GPT-4, you have to pay before you can use it. But there are some free versions that you can use, the open-source ones. One of them is called Llama 2, which was made by Meta. So one thing you can do is go to the website I gave you, explore some of the tools, and see which one can help you do the job. Some of the tools may be open source, which means you don't have to pay. But even with the closed-source ones, like Claude AI, you can use part of it for free; you can still upload your document and ask the system a question. Perplexity AI you don't have to pay for either; you can use some of it, and then if you want to use it frequently, you have to pay. With ChatGPT, you can use GPT-3.5 for free. Besides that, there are other models that you can use totally for free. Let me look for it; it's called Hugging Face. Let me see if I can get a link for you. It's a platform that has free, open-source models that you can use, and you can also create your own assistant for free, like a custom GPT, an AI assistant customized for you. You can go to that website and create it for free. I will demonstrate that when you come to my masterclass, but let me give you the link so that you can get some information. Let me see if I can find that link. So you don't have to pay for everything. You can just use existing models that are free to do the task that you want done. But test them out first, and if one is working for you, you can use it without paying. So let me send this link, or maybe let me share my screen quickly. Let me see the second screen. Okay, so this is what I'm talking about. If you go to huggingface.co and click on Assistants, you can see that people have created their own AI tools for free that you can use, and you can also create yours. If you want to create your own assistant, you go to Create Assistant. You can see here that I created one for emotional support for dissertation students. You can chat with the assistant, and if you have any kind of concern about your study or about your relationship with your supervisor, the system can provide some tips for you to be successful. I have some for program evaluation, for selecting the evaluation type. So you can create your own. You just have to tell the system the task that you want it to complete and the role that you want it to take, and then you choose one of the free models; there are several. Then you see what you get, interact with the system, and see whether it will be useful for you. When it comes to the masterclass, we'll go through it step by step.
At the end of the day, you'll be able to create your own AI assistant for free if you don't have access to ChatGPT Plus.
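
For the free, open-weight route Dr. Adu mentions, a small local model can stand in for a paid service. The sketch below uses the Hugging Face transformers library with a small instruct model chosen purely for illustration; Llama-family chat models follow the same pattern but need more memory and, for some, an access request.

```python
# A sketch of running a free, open-weight chat model locally with Hugging Face
# transformers. The model ID is an illustrative small instruct model.
from transformers import pipeline

chat = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

messages = [
    {"role": "user", "content": "Suggest three interview questions for a study "
                                "on burnout among primary healthcare providers."},
]
result = chat(messages, max_new_tokens=200)
# Recent transformers versions return the full chat; the last turn is the reply.
print(result[0]["generated_text"][-1]["content"])
```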

[00:51:35] Speaker 1: And is that the one you pay for? Which one do you pay for, Dr. Adu?

[00:51:40] Speaker 2: ChatGPT Plus you have to pay for. This one is free.

[00:51:44] Speaker 1: Yes, but which one do you pay for? Because I'm thinking people will say, I want the one that Dr. Adu pays for; I'm going to use that one and pay. So which one do you pay for?

[00:51:53] Speaker 2: Okay, it's ChatGPT, you know, GPT-4.

[00:51:58] Speaker 1: So is it safe to assume that that's your favorite so far?

[00:52:03] Speaker 2: Yeah, yes, because even when you are creating your own chatbot there, you have an interaction with the system in ordinary language: it asks you questions and you respond, and at the end it creates the AI tool for you. I've been able to create about 15 AI tools, and they have been used; some have been used 100 times, 200 times. So I created them and made them available for everyone to use, and they're helping people, especially this one. What's it called? Theory Navigator.

[00:52:44] Speaker 1: Theory Navigator, okay.

[00:52:46] Speaker 2: Yes. So let's say you have a topic, and you don't know the theory that you can use for your conceptual framework. The system can suggest something for you. We can do it now. I am working on a study about, I always like this example, burnout among primary healthcare providers. Okay, I'm going to type this: can you suggest potential theories I can use for my conceptual or theoretical framework?

[00:53:49] Speaker 3: Just like that. Okay.

[00:54:02] Speaker 2: So you see the system has suggested some theories that you can review and get access to. You can even ask the system, can you run through them for me, or can you give me links to these so that I can read more about all these theories? So imagine there were no AI tool like this. You would have to look for articles, identify all these theories, and read about them, and it would take a long time. With this, within seconds, you have at least five theories or models that you can look at and read more about so that you can make a decision. And you see why AI will accelerate the research process. You see why AI will make research so much less stressful for you. It's amazing.

[00:54:58] Speaker 1: Yes, indeed it is. Geronda has a question. Is there really a big difference between ChatGPT 3.5, which is the free version and ChatGPT 4, which is paid? Is there a big difference?

[00:55:12] Speaker 2: There's a big difference. The main one is that you have to pay for GPT-4. With GPT-4, you can attach a document. You can see here, I can attach a document and then ask the same questions; if you have a transcript, you can attach it. Let me click and go to GPT-3.5: there's no way to attach a document. So if you don't want to pay for GPT-4, you can use Claude AI for free, because you'll be able to upload your transcript and ask the same questions. This is where you can get to your destination without paying, because there are so many AI tools around. You just have to explore, and if you cannot afford it, look for a free version. They are there, and they can do similar things to what the paid version can do for you. Also, with GPT-4 you'll be able to create your own custom GPT, like what I've created here, an AI agent that can do a specific job for me. And you can also use the ones that are available online: people have created their own chatbots and made them public, so you can use them. So there are a lot of options for you if you pay for it. So yes, there's a vast difference.

[00:57:01] Speaker 1: So we're right up on the time. So any last words before we sign off?

[00:57:10] Speaker 2: Okay, so I just want to thank you all for coming. I really appreciate your time, and we'll be doing this kind of live program again and will let you know. The takeaway here is to see the AI tool as an assistant. Don't see it as taking over your task or your job; see it as helping you complete a job that you would otherwise spend a lot of time completing. See AI as helping to make things easier for you, and also try to explore. You don't have to have a unique skill to interact with an AI tool; the same way you talk to people, you can talk to the AI system. Just explore and see what you get, keep on asking questions, and you'll be fine. But if you want to build the skill to use AI tools well, then the masterclass coming up on the 25th will be very helpful to join, and I'll be able to address all your questions and then make the world a better place.

[00:58:26] Speaker 1: Yes, thank you. Thanks, everyone, for joining in. We only had 60 minutes and had to cram everything in, but as Dr. Adu says, in the masterclass we have four days to take our time, go through everything, and answer all the questions, including some questions we didn't get to. There's the link to the masterclass and the QR code. See you there. Bye-bye.

[00:58:48] Speaker 3: Thank you, bye.

AI Insights
Summary
In a webinar discussion, Anne-Marie Brown and Dr. Philippe Adu explore the role of AI (e.g., ChatGPT) in academic research and evaluation. They argue AI can be ethically leveraged as an assistant to speed up tasks like formulating research questions, selecting theories, summarizing literature, coding qualitative data, and drafting/editing text—while emphasizing that researchers must remain responsible, transparent with IRBs, and vigilant about privacy and hallucinations. Adu recommends key tools (ChatGPT Plus with custom GPTs, Claude, Perplexity) and demonstrates model comparison via LMSYS plus free/open-source options via Hugging Face. The session stresses verifying AI outputs, prompting with context/definitions, using AI for support rather than replacing researcher judgment, and anticipating rapid advances that shift human roles toward supervision and critical review.
Title
Using AI Ethically in Research and Evaluation
Keywords
AI in research
ChatGPT
Claude
Perplexity AI
qualitative data analysis
coding and themes
grounded theory
IRB ethics
transparency
data privacy
hallucinations
custom GPTs
AI assistants
model comparison
LMSYS
Hugging Face
open-source LLMs
academic writing support
Key Takeaways
  • Treat AI as an assistant, not a replacement for researcher expertise or accountability.
  • Be transparent with IRBs about how AI will be used, and protect participant data.
  • Verify outputs carefully; AI can hallucinate, especially in extracting quotes or citing sources.
  • Improve results by giving AI context, definitions, and clear roles (persona prompting).
  • Use tools strategically: ChatGPT Plus for custom GPTs and attachments; Perplexity for source-linked search; Claude for more constrained/safer responses.
  • Consider free/open-source alternatives and compare models (e.g., via chat.lmsys.org) before paying.
  • AI is accelerating research workflows; the human role is shifting toward supervision, evaluation, and critical judgment.
Sentiments
Positive: The tone is optimistic and practical about AI’s benefits (speed, reduced stress, better support) while acknowledging risks (hallucinations, privacy) and emphasizing responsible, ethical use.