Exploring AI in Peer Review: Objectivity vs. Bias with Alexandru Barbu
Join Alexandru Barbu from Sciendo as he discusses the impact of AI on peer review, weighing its potential for objectivity against the risks of bias.
AI-Driven Peer Review - Objectivity or Bias Interview with Alexandru Barbu from Sciendo
Added on 09/25/2024

Speaker 1: Welcome, everyone, to our special interview series celebrating Peer Review Week with ACSE. Today we have the pleasure of speaking with Alexandru Barbu from Sciendo to explore one of the most thought-provoking topics in scholarly publishing: AI-driven peer review, objectivity or bias. Thank you, Alex, for joining us today and sharing your expertise on this critical topic. So let's start with a quick and brief introduction.

Speaker 2: Thank you. It's a pleasure to be here today, and I hope the information I'm going to share will be as useful and relevant as possible. I actually like this topic. But coming back to who I am and what I have been doing in the past years: as mentioned, my name is Alexandru Barbu, and I'm the sales and business development manager at Sciendo, responsible for global markets. I have been in the publishing industry for 10 years now, so that gives me a little bit of insight into what happened, what's happening, and what will happen in the publishing industry, no matter the model of publication.

Speaker 1: Thank you. In your role at Sciendo, how have you seen AI influence the peer review process? Do you believe that AI brings more objectivity, or are there concerns that it could introduce new biases?

Speaker 2: At the moment we are considering AI but not yet using it, though we have done a lot of study on this topic, especially for the peer review process. My honest answer is that, in my opinion, it's both good and bad at the same time, and I'm going to tell you why I think it's a good thing and why I think it can also be dangerous and create issues in both the short term and the long term. I consider AI a good thing because it can have a significant influence on the peer review process by introducing new tools and systems that make the process far faster than it is now and make the workflow much easier. For instance, just to give you an example, AI can quickly analyze manuscripts for all sorts of issues: plagiarism, improper framing, or anything else you can think of. AI can do it far faster than a person can, and it can analyze an article while also taking the journal guidelines into consideration. It can speed up the process of analyzing and peer reviewing a paper probably a hundredfold. So that's one of the good things, in my opinion. You also have to take into consideration that AI can assist in the selection of appropriate reviewers by analyzing all of the reviewer databases that are out there and selecting the ones suited to that particular paper. Furthermore, AI algorithms can flag potential ethical issues, such as a conflict of interest between the reviewer and the paper, by cross-referencing data from multiple sources. But as I was saying, while this is a good thing, it can also be a bad thing at the same time, because AI software is only as good as the database it incorporates. That, in my mind, is one of the major issues: the databases that exist worldwide at this point will influence how an AI behaves. If the database is not proper, you are going to create many issues.
Issues regarding, say, geographical regions, representation, or research topics, or anything else you can think of, can all be influenced by the database on which the AI is built.

Speaker 1: Thank you for sharing your answer.

Speaker 2: It can be a blessing and an issue at the same time.

Speaker 1: Exactly, exactly. So from your perspective, what are the key challenges in ensuring that AI systems remain impartial in peer review, and how can we prevent the introduction of unintended biases?

Speaker 2: As I was saying in answer to the first question, the main challenge is the data used to train the AI. That's the first thing we need to look at and work on. If the training data reflects existing biases in the academic publishing ecosystem, those biases will be reflected in the implementation of the AI. And there are several methods of addressing that. First of all, recheck the databases you are going to use to train your AI before implementation. Second, I think one of the best practices is to regularly audit and adapt the database: audit it, and then adapt it if you notice that it's biased. So I don't think that at the moment AI can work on its own. I think it needs human supervision, and it will need human supervision for a long period of time, until the databases are completely suitable for this sort of use.

Speaker 1: Exactly. But with AI used in each segment, do you think there is a danger of over-reliance on AI in peer review, one that could potentially undermine human judgment? And how can publishers strike the right balance between AI-driven efficiency and human expertise?

Speaker 2: Oh, there's always a danger when technology meets humans, because it's our human nature to try to simplify our lives. Just to give you a short example, look how much we rely on our phones now; they are always with us, always present. There's the same danger with AI. As I was saying, AI can efficiently handle repetitive and objective tasks, making them faster and speeding up every process in which it is involved, and that can be addictive. So AI can end up aiding decision-making in areas where human critical thinking is necessary, and so on. So yes, the danger is real and it's there. I think the way to move forward is to accept that AI is never going to be perfect, because it will always work on databases created by humans. To address this danger, you have to keep that in mind and consider that a human evaluation and a human perspective are always needed. You cannot rely completely on AI.

Speaker 1: Exactly. So looking forward, how do you envision the evolution of artificial intelligence in peer review? Do you see it as a tool that will enhance fairness, or could it unintentionally widen the gap between objective evaluation and inherent biases?

Speaker 2: I actually think it can be a tool to enhance fairness, because in its purpose it's not biased in any way. The only biased part of the system, as I have mentioned so many times, is the databases on which it works. If that's corrected, the system on its own cannot be biased; it's just a program that decides what is to be done based on its initial programming. So the only biased part of the system is the human part. To be honest, I think that in the future AI has the potential to significantly enhance fairness in the peer review process, or, if you think about it, in any other process, because it can help identify potential issues early and ensure that submissions are evaluated against a standard set of criteria. However, if not carefully managed, AI could also widen the gap between objective evaluation and inherent bias. So, yet again, it depends on the human factor and how we are going to use it; it's nothing more than that. The risk always exists, it's out there, and I think it's up to us to mediate the impact it's going to have on our day-to-day work, how it's going to streamline everything, and how it's going to affect us.

Speaker 1: Insightful. So my last question would be: what are your thoughts on this year's theme of Peer Review Week, Innovation and Technology in Peer Review? And how do you see the Peer Review Week platform contributing to this effort?

Speaker 2: I think, first of all, it's timely and relevant, because it reflects a growing interest in leveraging advanced technology, including AI, to improve the peer review process; it's something relevant to our current situation. Peer Review Week, from my point of view, serves as a crucial platform for facilitating discussions that previously were not taken into consideration. It provides an opportunity to share all of our experience: my experience, your experience, and that of whoever else participates, because opinions and experiences differ, and that is going to shape the impact and, really, the whole vision of AI on this subject. By bringing together diverse perspectives, we can drive forward the conversation on how technology can be harnessed to our benefit.

Speaker 1: Exactly, exactly. Thank you so much, Alex, for taking the time for today's brief interview and sharing your insights on this interesting topic of objectivity versus bias in AI-driven peer review. Thank you for joining us today.

Speaker 2: Same to you. I hope this was something the market needed; I'm just sharing my honest opinion on what I think and where I think the market will go. I really think that AI will have a huge influence in the next couple of years on how the publishing industry and all its processes operate, how fast everything works, and so on. But at the same time, I'm still a little skeptical about its bias, because at the end of the day, we are the ones running the AI. Every AI, we are the ones programming it. So, yeah, as the saying goes, take it with a grain of salt.

Speaker 1: Exactly. Exactly.
