Ensuring Safe, Ethical, and Inclusive AI: Insights from the UN AI Advisory Body
Carme Artigas discusses the UN AI Advisory Body's report on making AI safe, ethical, and accessible, addressing global disparities and governance challenges.
U.N. Report Warns AI May Increase Global Tech Inequality | Amanpour and Company

Speaker 1: So now to artificial intelligence, which, as you know, is the next frontier of the technological revolution. But as it continues to evolve at breakneck speed, how can we ensure it is safe, ethical, and accessible for all to use? To answer this, Hari Sreenivasan spoke with Carme Artigas, co-chair of the UN Artificial Intelligence Advisory Body.

Speaker 2: Paula, thanks. Carme Artigas, thanks so much for joining us. You are co-chair of this UN AI Advisory Body, and you've published your final report. What's the top line? What are the findings that you're most interested in making sure people are aware of?

Speaker 3: First of all, I think we are all aware of the great possibilities that artificial intelligence is going to bring humanity in terms of efficiency in productive processes, the opportunity to expand public health or education, and of course scientific research. We are all aware of these great possibilities, but at the same time there are a lot of risks, in the short term in terms of fundamental values, but also in the long term in terms of safety. What we need to ensure is that all these opportunities are actually realized. If we leave this technology, which is very transformative, ungoverned, not only will we not be able to capture all these opportunities, but we will probably exacerbate some of the problems we have today, especially the lack of inclusiveness.

Speaker 2: When you talk about lack of inclusiveness, the report points out, in several different ways, the giant gaps in how unequally artificial intelligence is distributed today. One of the things you pointed out is that seven countries are party to all the different AI governance efforts happening around the planet, while 118 countries are part of none of them. Is there a risk here that the rest of the world, meaning the majority of the world, gets left behind?

Speaker 3: Yes, there is. In fact, there is a great risk of deepening the current digital divide with a new AI divide. We must ensure that the benefits and the costs of any technological revolution are equally distributed among different social classes and among different countries. The reality is that even though there are a lot of very important international governance efforts, in terms of ethics guidelines and even regulation in some parts of the world, we cannot leave all these other countries without a seat at the table, taking part not only in the development but already in the discussion. If we want equality in the benefits, we need to ensure equality in access. To ensure equality in access, we need to make them participants in all these new instruments we are proposing to ensure that AI is governed at the global level. We also need to provide these less developed countries with the tools they need to develop their own solutions, especially when we think that AI is going to be fundamental to achieving the sustainable development goals. By tools I mean the three main things that are needed: data, computing capabilities, and talent. That is why one of our proposals is a capacity development network and capacity building, funded by a global fund for AI.

Speaker 2: Some of this comes down to computing power and where that computing power is located. Right now, as you point out, of the hundred biggest computing clusters in the world, none is in a developing country. If the physical horsepower necessary to enable the talent in a smaller country to build applications on AI just doesn't exist there, how do we even begin?

Speaker 3: Exactly. That's the right question. These countries don't have access to those computing capabilities, and this is why, in the capacity building network initiative, we propose a global fund that can be financed by private and public entities, not only in money but also in kind. We need to provide the capacities these countries need to build their own entrepreneurial ecosystems. That's why we also propose a data framework, because a large part of the problem is that the large language models and general-purpose AI systems being developed in the global north are trained only on data from the global north. So there is a lack of representation, and therefore we cannot claim that this is a universally adopted technology that can benefit all, which is our aim.

Speaker 2: Just the other day, we saw an investment between BlackRock and Microsoft: they want to put $30 billion down to co-invest in data centers, and they even have NVIDIA as a partner. But most of that is America-centric. Given the suggestions you're making here, do you pick up the phone and call Satya Nadella and say, hey, listen, how about putting a couple of those data centers in a couple of other countries that could use them?

Speaker 3: We're talking about a problem that has a lot to do with geopolitics, and of course we're not getting into that. What we look at is: what are the gaps, what instruments need to be set up, and where should all these conversations take place? The point is that we don't yet have a multilateral platform for collaboration, for example on safety. Safety is very important for AI, for designing the safeguards and guardrails so that we can really trust the technology and therefore adopt it, because I think we are all interested in capturing these benefits and opportunities, and adopting the technology with trust. Trust for the consumers and trust for the citizens. We are not claiming to give answers to all the problems here. What we are proposing is the set of instruments that are not yet in place and are necessary to cover the gaps. The other important thing for me, one of the most important recommendations, is the scientific panel. We need transparency on the risks and on the opportunities, because without data and scientific evidence, not even policymakers can play a sensible role in guiding AI properly.

Speaker 2: How do you create that incentive for transparency? Right now, for example, when it comes to intellectual property, there is a lot of concern that many of the large language models have been trained on copyrighted material. If you have this ability to convene different countries, whose law do you agree on? Whose intellectual property law are you going to go by? Whose human rights law? What counts as freedom of speech in one country versus another? How do you get through those kinds of thorny issues?

Speaker 3: Well, I think there's a distinction that many people draw: one thing is ethics, another is regulation, another is governance. Ethics is about how companies or governments, because this also concerns the use of AI by governments, should behave in a morally acceptable way, in the way we expect them to behave. Governance means: what instruments do I need to put in place to ensure that these companies and these governments are behaving ethically? Regulation is one of those tools, but it's not the only one. I come from Europe, and I was an active negotiator on the European AI Act. We solved this in our European way, but it doesn't need to work for everyone. What we say here is that governance can happen through regulation, but also through market incentives, oversight boards, treaties, and many other means. We are proposing some instruments to make this happen. In terms of regulation, we cannot expect all the countries in the world to have the same regulation. But what we can expect is convergence on a very important minimum, which is that anything on AI is for the common good, grounded in the UN Charter, in international law, and in human rights. I think that's the very minimum we should ask of any country and any company in the world.

Speaker 2: You are, for our audience that doesn't know, the Spanish Secretary of State for Digitalization and Artificial Intelligence, so you've had these conversations across Europe. I wonder how you balanced the need to be comprehensive, to understand the technology, with the need for speed, because so often we find, at least in the United States, that regulation is, I don't know, five to eight years behind where the technology already is. By the time it gets litigated in the court system, the technology has evolved so fast, right?

Speaker 3: Well, exactly. That's the big challenge. How can we regulate a technology that is in continuous evolution? How can we make these laws, regulations, or best practices future proof? That must be embedded in the law's own mechanisms. In particular, the EU AI Act has its own renewal mechanisms, and a lot of what it proposes was designed together with the industry. We all follow the same principle: everything we are proposing here for global governance are very agile instruments that can evolve according to need. But what we cannot do is nothing, waiting until the harm is done. Governance must not be seen as an inhibitor of innovation; it must be seen as an enabler. If we give trust to consumers and users, people will adopt AI massively. I think that's what we are not seeing. And we don't need to wait five more years to know what the potential risks are. We are on time now to make things happen and to ensure that everybody does things right the first time, because we are probably not going to be able to revert the potential harm we can create.

Speaker 2: Looking at the report, there is a remarkable confluence of experts who are very concerned about some of the negative risks of AI when it comes to information integrity, to how people are able to tell fact from fiction. That's something we're thinking about much more closely here, right before an election in the United States. But I wonder, what conversations are necessary to figure out some baseline for ensuring that a surveillance state doesn't take over in a harmful way, or that information integrity isn't destroyed across different societies?

Speaker 3: Exactly, this is where we think there must be consensus. I would say that we can compete for market share, we can compete for talent, but we cannot compete on safety, and we cannot compete on human rights. Individual countries will have to put in place their own regulations to limit the power of governments or companies. Again, in the new EU AI Act, we identified five cases that we consider forbidden uses of AI: things that, even though they are technically feasible, we don't want to happen in Europe, for example social scoring, which we know is widely accepted in other parts of the world. So we don't claim, through the UN, to replace the role that government leaders need to play in their own countries. What we're saying is that whatever is done at the national level must be encompassed by a consensus on very important things: what the risks are, how we prevent unintended misuses of the technology, and how we set up guardrails, because a risk in one country is also a risk in another country. Also, how do we align with each other on technical standards, and how do we set up scientific panels so that all these risks you mentioned are not just fears without scientific evidence? Because we are focusing a lot on the risks and not, therefore, on the opportunities, which are huge. And I think we are all in the same boat, companies, citizens, and governments, in using AI for the good of humanity. And I think that is a great opportunity.

Speaker 2: Even if you wanted to focus on the potential benefits of AI, there are significant concerns about the amount of energy necessary to power the data centers where all of this computing would be happening. So here we are, on the one hand, in a climate crisis that carries a significant cost for the world. Are we making things worse by letting AI develop as it is today, without really any environmental guardrails?

Speaker 3: Absolutely. The current level of AI development, with its level of consumption not only of energy but especially of water, is not sustainable. And because we think AI can be very positive for the sustainable development goals, we need to place sustainability requirements on the software industry too, as they are required of any other industry. One of our recommendations is that the scientific panel shed some light on how to do this in a better way: how can we be more efficient in software development so that we don't have this excessive consumption, which is absolutely contradictory, using AI to improve energy efficiency while the technology itself is not sustainable. That's the way we want to go, and that's where this international consensus must take place.

Speaker 2: Most of what we as consumers think of as AI might be chatbots, but there are much darker uses of artificial intelligence that we're slowly starting to understand. One is autonomous weapons. This is a completely separate conversation that has mostly military stakeholders and heads of state involved. I wonder, in this kind of advisory model, whether you've come up with anything to suggest or alter the course of how AI could be used in defense.

Speaker 3: On this particular matter, we see that we don't need to provide a different instrument, because we already have the Geneva Convention. What we are recommending in the report is, of course, a call for a treaty by 2026 to ban these autonomous weapons. This is a proposal we make, but the place where it must be discussed is the Geneva Convention; we don't need to create a different instrument for that. There is already a multilateral platform to discuss this topic. Of course, when we say that AI must be for the good of humanity, we consider that it cannot be harmful to people.

Speaker 2: We've also already seen horrible cases here in the United States where the models artificial intelligence was trained on, especially for visual recognition, end up creating and exacerbating biases from the people who programmed them. Some of that might be conscious, some unconscious. How do you create any kind of conversation, much less a standard, so that companies in Europe, companies in the United States, and maybe even companies in China can say, here's what to avoid to make sure that your data, and the resulting output, is better?

Speaker 3: Yes. I would say that companies are very, very responsible. I see that the whole software sector is very responsible, but they need to do things better; they need to improve their products. That is why, for example, in the AI policy dialogue we include the companies. It's not only countries talking among themselves; we need to include academia, the companies, governments, and civil society. That's where the conversation is needed. I think all of this is going to progress from a technical point of view, but it is true that these models are general purpose and are then going to be refined with private data for specific use cases in different industries. One thing is the normal evolution of the products; the other is how we assess the risks this can pose, for example, to non-discrimination and fundamental rights or values. That's where, again, we need this conversation. We need to bring together the developers, the users, the governments, and policymakers, and come to this consensus and these standards.

Speaker 2: What do you think is the biggest obstacle right now to establishing something like this, even if it's not a hard agency, even the softer steps you're suggesting? And what does success look like in five years?

Speaker 3: I think the first immediate step is to gain the support of the member states in the vote they have on the Global Digital Compact, which is going to take place on Sunday within the discussions of the Common Agenda at the United Nations. The first thing is that this proposal comes from a group of independent experts; as you mentioned, it's not only the 39 members of our body, but more than 2,000 experts around the world who have participated in different consultations, more than 70 consultations all over the world. We are quite confident that these recommendations make sense and that they have gathered absolutely all the sensitivities. The next step is to gain the support of the permanent representatives at the United Nations to push these initiatives forward. But even if not all these recommendations are adopted, I think we will have started the discussion from the civil society point of view. We need to put all these challenges on the table, and we expect to create a conversation around these topics from now on.

Speaker 2: Carme Artigas, thanks so much for joining us.

Speaker 3: Thank you so much.
