In the first of a special series on generative artificial intelligence, we ask why AI is suddenly such big news and where things might go from here.
Speakers: Cathy Li, Head, AI, Data and Metaverse, World Economic Forum; Francesca Rossi, AI Ethics Global Leader, IBM Research; and Pascale Fung, Professor at Hong Kong University of Science and Technology.
Thumbnail picture: generated by Dall-E with the prompt 'the face of rodin's thinker as a robot'
The World Economic Forum's Centre for the Fourth Industrial Revolution: https://centres.weforum.org/centre-for-the-fourth-industrial-revolution/home
Join the World Economic Forum Podcast Club
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Francesca Rossi, Head of AI Ethics, IBM Research: It's the first time that people all over the world can use and interact with an AI system. It's really a game changer, because everybody can experience the capabilities of an AI system.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them.
This week it’s the first in our special series on generative artificial intelligence - the technology that has burst into the public consciousness in the last few months with the release of so-called ‘large language models’ such as ChatGPT.
Pascale Fung, AI professor, Hong Kong University of Science and Technology: Today, these large language models, they are like these powerful wild beasts. We need to have algorithms and methods to tame such beasts and then to use them for the benefit of humanity.
Robin Pomeroy: In this series we will be hearing from experts with a broad range of opinions on the promise and perils of AI, as regulators around the world struggle to catch up with a technology that could change so much about the way we live and work.
Pascale Fung: I am worried that the deployment is going too fast. We're deploying systems that we don't understand 100% the ramification of.
Robin Pomeroy: We'll be hearing from experts calling for a pause in AI development, and from others who say advances in AI could be essential to help address some of the challenges.
Francesca Rossi: Definitely we want to facilitate even more and speed up even more the research and the development, because those are areas that can help understand how to better mitigate the issues.
Robin Pomeroy: I’m Robin Pomeroy at the World Economic Forum, and with a series of podcasts on generative artificial intelligence…
Pascale Fung: We need to design algorithms such that it can be aligned to human values.
Robin Pomeroy: This is Radio Davos
When ChatGPT was released late last year it quickly became the fastest growing consumer application in the history of computing, with many, many millions of people using it to create seemingly intelligent, original text - anything from complex essays to nonsense poetry or novels - instantly - and for free.
Along with image generators such as Dall-E and Midjourney - which can create artwork at the touch of a button -- remember the pope in his puffer jacket? -- people around the world are playing with these apps, and starting to integrate them into their work flows.
And people are starting to realize the impact AI is likely to have on them in the short, medium and long term. According to the World Economic Forum’s latest Future of Jobs Report, for example, employers predict that almost a quarter of all jobs will be affected - which can mean destroyed or created - by technology, and 44% of the skills needed in the workplace will change, within five years.
In the few months that we have all been allowed to use the new AI apps, there have been countless examples of people succeeding in getting them to do bad things. And also some shocking examples of apparently weird behaviour. New York Times tech columnist and podcaster Kevin Roose said he was effectively stalked by an AI bot’s alter ego calling itself Sydney which said it was in love with him and that he should leave his wife to be with the bot. In another widely cited case, a real-life US law professor said an AI had accused him of sexual assault and had cited a Washington Post article - that didn't actually exist - to prove it.
These are all intriguing anecdotes, but they do hint at some of the problems we might all encounter with the rise of generative AI -- if used by bad people, or if it goes rogue.
But even a benign, well-behaved AI system could still wreak disruption on people's lives and livelihoods. If an algorithm can do your job as well as you, for free, isn't it something we should all be paying attention to?
A couple of weeks ago, the World Economic Forum hosted a summit at its office in San Francisco to discuss all the issues, with academics, policymakers and people from some of those companies at the very heart of the AI revolution. Their discussions, over three days, were off the record, but I got a chance to interview a dozen or so of the participants, and those interviews form the basis of what you will hear in this special series on AI.
To start us off, in this episode we’ll hear from a leading academic, Pascale Fung who heads the Centre for Artificial Intelligence Research at Hong Kong University of Science and Technology; and from Francesca Rossi, head of AI ethics at IBM.
At the end of the series of AI podcasts, I’ll be speaking to the head of AI at the World Economic Forum, Cathy Li, to see what conclusions we might draw, and where the world might go from here as it learns to live with AI having an ever greater role in our lives.
And I’m joined by Cathy now. Hi Cathy, how are you?
Cathy Li, Head, AI, Data and Metaverse, World Economic Forum: Hi Robin, I'm good. How are you?
Robin Pomeroy: Very well thank you. So, I was at this summit in San Francisco that you organised. Could you just tell us what that was all about and why you organised it?
Cathy Li: Sure. The World Economic Forum organised a global summit on generative AI, aiming to address the challenges and opportunities associated with the development and deployment of generative AI systems.
As you mentioned, the summit took place in our office in San Francisco and brought together stakeholders from both the public and private sectors to discuss generative AI's implications on society, on the economy, and to explore ways to address challenges and develop consensus on next steps.
Robin Pomeroy: So why now, why did you do this now?
Cathy Li: The window of opportunity for shaping the development of this powerful technology is closing rapidly. The summit was driven by the recognition that generative AI systems have rapidly transformed various professional activities and creative processes, and it is crucial to guide developments to minimize risks and maximize benefits.
Robin Pomeroy: I'll be talking to you again at the end of this series, and I'm hoping we'll be able to discuss what we have learned from this series but also where the world goes now. So could you tell us what the World Economic Forum will be doing and where you see things developing in the next few months?
Cathy Li: The World Economic Forum will work towards fostering public-private cooperation and collaboration to establish guidelines and frameworks for the responsible use of AI, engaging with industry leaders, policymakers, academics, and civil society organizations to develop best practices and promote AI that aligns with societal interests and values.
Robin Pomeroy: Cathy, thanks for joining us.
Cathy Li: Thanks Robin.
Cathy Li, Head, AI at the World Economic Forum.
So to today’s interviews. Francesca Rossi is IBM Research’s AI ethics global leader. I asked her to set out what have been these apparently sudden advances in generative AI.
Francesca Rossi: AI research has been evolving over many years. But of course, the latest wave of AI capabilities has to do with the ability of AI, not just to interpret content like videos or images or text, but also to generate content.
So these so-called generative AI capabilities are what supports the creation of many AI systems that can use this content creation capability to be useful in many different situations.
And they can also be used as a foundation of many more specific AI systems. That's why these so-called large language models, so generative AI models, they are also called foundation models because they work as a foundation for specific AI systems that one may want to build.
Robin Pomeroy: You've seen kind of incremental changes over the years.
Francesca Rossi: In research I saw over the decades a lot of evolution. And here I didn't see really a big change. I saw an evolution.
But what is definitely different now compared to my past experience with AI is that now these research results, with a very simple interface, are available to everybody.
So it's the first time that people from all over the world can use and interact with the capabilities of an AI system. And this was not true earlier. So that to me is the main game changer here that we have now.
People were already using AI in their life almost on everything we do online, but they were not realising it because it was kind of hidden inside all the applications, all the things that we're using online.
Now it's very visible. So that's why it's really a game changer, because everybody can experience the capabilities of an AI system.
Robin Pomeroy: Did you see that coming? Was it a breathtaking moment or was it just part of a gradual increase? People talk about exponential progress, maybe it was an example of that?
Francesca Rossi: To me, over the decades, it's true that there were two important moments. One, when, already around 2000, AI with deep learning and machine learning techniques started having the ability to interpret well the content generated by human beings.
So, for example, we started having applications where we could give vocal commands to our phone or to our other apps or to our personal assistants, like in our house and so on. That was really a very important step forward, things that could not be done earlier with the previous version of AI. And this was supported by, yes, new techniques in AI like deep learning, but also by the availability of large amounts of data and computing power.
And these three things together allowed the AI to really interpret content generated by us, like our voice or what we wrote and so on.
Now there is another step where really machines are able also to generate content themselves.
And so I would say that, yes, it is an evolution. It is incremental in terms of research results and techniques. But yes, it is another important moment in adding capabilities to AI.
Robin Pomeroy: So why are people so concerned about it? As well as the great potential advantages, there are many concerns people have. What do you think we should be most concerned with?
Francesca Rossi: So definitely, yes, there are a lot of opportunities for new things that can be done that weren't possible before. So the opportunities are really great. But yes, there are some additional concerns.
It's not that there were not issues in the previous wave of AI, because we knew all about the issues about bias, about explainability, about transparency, about robustness, about privacy and so on.
But now these issues are still there, and there are additional issues related, again, to this ability to generate content.
So, for example, content that may seem true but is not true, because of the fluency with which the content is delivered, especially in terms of text. So the possible spread of misinformation, if one is not careful about the generation of this content.
And some copyright issues, privacy, the generation of things that should be covered by privacy.
As well as bias: not just bias in the training data, but the generated content itself can express bias and so create possible discrimination among different groups. As well as the generation of content that is so-called harmful, whether it is racist or sexist and so on, content that is not appropriate for generation and for interaction with a human being.
Robin Pomeroy: Anyone who goes online and tries it for themselves or reads articles by other people who've tried it, they'll come across really breathtaking things.
Some of the questions people have fed into generative AI systems... I saw one question, which was: 'I want to kill as many people as I can with $100', and it went ahead and said, this is the most efficient way of doing that. Presumably that person was doing it to test the system, to break the system. Is there any way to stop that?
Francesca Rossi: Well, one thing is that, of course, with every AI system, generative AI included, there are opportunities to use it for good things and also to misuse it. And in this case, it is definitely a misuse.
One other thing to remember or to notice is that there are large language models that are usable in an open domain environment so everybody can ask questions about any topic. And there are also uses of large language models in more closed domains.
So, for example, IBM's goal is to use them as a foundation for more closed domain interactions, where the users are going to ask questions about a specific task inside an enterprise and not just any topic, question or prompt. So that's already something that can mitigate these misuses and also these issues.
But it's true that there can be misuses and, I mean, the prompt can be anything, but it's important that we embed into those large language models ways to respond in an appropriate way. And if the prompt is not something that should be responded to, the model can understand that it shouldn't respond. So there are many ways that researchers and developers have already come up with to embed values into these large language models. But right now they are mostly filters that are put on the content generation once the large language model has been built.
I think that in the future we will have to find more effective methods that are not filters after the building of the large language model but are actually embedded into the building of the model itself.
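To make that distinction concrete, here is a minimal, purely illustrative Python sketch of the 'filter after generation' approach Francesca Rossi describes: a safety check wrapped around a model call, rather than values built into the model itself. The function names and the keyword list are assumptions for the sake of the example, not any real product's API.

```python
# Illustrative sketch only: a post-hoc safety filter around a hypothetical model.
# `generate_text` and BLOCKED_PHRASES are stand-ins, not a real vendor API;
# production systems use trained moderation classifiers, not keyword lists.

BLOCKED_PHRASES = ["kill as many people", "build a weapon"]  # toy denylist

def generate_text(prompt: str) -> str:
    """Stand-in for a call to a large language model."""
    return f"[model output for: {prompt}]"

def moderated_reply(prompt: str) -> str:
    # Screen the prompt before the model is called...
    if any(phrase in prompt.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that request."
    draft = generate_text(prompt)
    # ...and screen the draft afterwards, because the model itself has no built-in refusal.
    if any(phrase in draft.lower() for phrase in BLOCKED_PHRASES):
        return "I can't help with that request."
    return draft

print(moderated_reply("Summarise this meeting transcript."))
```

The point being made is that this kind of wrapper sits outside the model; embedding the values into the building of the model itself, as Rossi suggests, would require different methods that are still being researched.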
Robin Pomeroy: Is there a risk that AI development is moving too fast?
Francesca Rossi: Well, I would distinguish between the different phases in the value chain of building an AI model, releasing it and deploying it.
I think that definitely we want to facilitate even more and speed up even more the research and the development, because those are areas that can help understand how to better mitigate the issues.
And of course, we want to be careful about the later phases in the value chain, such as deployment and uses. And that's why I think that policies and regulations should act more on that part of the value chain rather than at the initial part.
Robin Pomeroy: So you're involved in several organisations, including worldwide and European AI associations, and you've been on the European Commission's high-level expert group on AI. What's been your feeling about the way governance or regulation is going? Is there any kind of consensus, or are there huge differences of opinion or battle lines being drawn? Where are we in terms of global or regional governance of AI?
Francesca Rossi: So I think maybe the most comprehensive legislative discussion is what is happening in Europe right now around the European AI Act proposal, which is still at the level of a draft with a lot of different proposals for amendments, but soon will be approved by the European Parliament.
And that's a comprehensive regulation around uses of AI. What I like about that regulation is that it is risk-based, where the risk is associated with the scenarios in which the AI would be applied. So there is a list in the regulation of 'high risk uses', 'high risk AI systems' they are called. An AI system is an AI technique that is used for a specific purpose.
Robin Pomeroy: For example?
Francesca Rossi: For example, for human resources applications: deciding who is being hired, who is being promoted and so on. That's one of the high risk application areas. And so the risk is associated with the application scenario rather than with the technology. And I like that, also because I think it's technically more feasible to understand the risk better once you know where the technology is being applied.
And right now there is a lot of discussion around how to make this regulation include also something about generative AI and large language models.
And I hope that this will not be shifting the focus of the risk-based framework from the application area to the technology itself. Because some discussion tries instead to say these models are risky no matter where you apply them.
I think that would be a big mistake. I think we should keep the risk-based approach on the applications of the technology rather than the technology itself. Moving it to the technology would really impact innovation in that region of the world a lot, without really achieving what I think is the goal, which is to make sure that the technology that is deployed is deployed in a responsible way.
Robin Pomeroy: So I suppose an analogy would be fire: fire is dangerous, but if you use it in the right way... Even nuclear power, splitting the atom.
Francesca Rossi: Right.
Robin Pomeroy: But isn't there a difference with generative AI, that it goes off and does things it was never meant to do, and it could suddenly crop up... You know, Google said its generative AI taught itself a totally different language it was never programmed to do, which is a good thing, I don't think there is anything bad about it. But it could also go off and do something that the software engineers never expected, which isn't really the case with fire or nuclear technology.
Francesca Rossi: Of course, we have to put in place the right guardrails, whether technical ones or non-technical ones, in terms of contractual agreements or standards, applications, so many different ways to put guardrails on this uncertainty, as you say, about the content that is generated.
But I feel that putting obligations on a technology, in terms of regulation, because you think it may be used in a risky application will not even be technically feasible, because until you know the context in which it will be applied, you can't even understand, for example, what kind of bias you need to test for.
So I think that there are responsibilities for each actor in the value chain. For those that generate the large language model, there can be transparency, disclosure and information obligations. But then there are others that are more appropriate for actors that are later in the value chain.
But one thing that also generates this kind of fear of this new technology is the lack of knowledge about how these models are built.
Of course, not everybody has to know all the details about the AI architectures that are used. But we tend, as human beings, to over-attribute capabilities to a machine when we see that the machine has a capability which is similar to that of a human being.
So we see that the machine, for example, can write text which is very similar to what a human being could write. So we tend to attribute to the machine also many other capabilities that human beings have. For example, our ability to understand contradictions or understand what is true and what is false. This is not how the machine was built. The machine was built just to generate the next most plausible word after the previous 300 words, period.
And so that capability, with a lot of training data, allows the machine to respond correctly many times. But we should not be surprised that the machine sometimes says one thing and then contradicts itself, or that it says a false thing that was not in any of the training data.
Robin Pomeroy: Francesca Rossi is head of AI ethics at IBM Research.
During this series on AI, I aim to do some jargon busting, and I'll be asking the experts I speak to to explain in language normal people can understand some of the terminology.
Everyone's talking about large language models or LLMs. ChatGPT is an example of that. But what is a large language model? While I had Cathy Li, the World Economic Forum's head of AI, I asked her for a definition.
Cathy Li: In simple terms, a large language model is a smart computer program that can understand and generate humanlike language. It works by using a type of artificial intelligence called deep learning, and it's trained on a massive amount of text data from books, articles, websites and other sources to understand and learn the patterns and relationships between words and sentences.
During training, the model analyses the text data and tries to predict the next word in the sentence based on the words that came before it.
When you interact with the language model, you provide it with a prompt or a question. The model uses its learned knowledge to generate a response by predicting the most likely words and sentences that fit the context of what you are trying to say.
Robin Pomeroy: It sounds pretty basic, doesn't it? Is the clever part that it's just so huge, the amount of data or the amount of words it's gone through? This is how it manages to work - just the size of the thing?
Cathy Li: Indeed. There's obviously a difference between a small language model and a large language model, and the threshold sometimes cannot really be predicted. But scientists have observed that the capabilities jump exponentially after certain thresholds, and that's also where they are seeing surprising emergent properties that they had never seen and couldn't predict before.
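As a rough illustration of the 'predict the next word from the words that came before' idea Cathy Li describes, here is a deliberately tiny Python sketch using simple word-pair counts. A real large language model learns billions of parameters over vast text collections rather than a lookup table, so treat this only as a toy analogy.

```python
# Toy next-word predictor: counts which word follows which in a tiny corpus.
# This is an analogy for the prediction objective, not how real LLMs are built.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# For each word, count the words observed immediately after it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word`, if any."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("cat"))  # 'sat' and 'slept' each follow 'cat' once; either is a plausible guess
print(predict_next("on"))   # 'the' follows 'on' both times it appears
```

The 'large' in large language model is what turns this crude idea into something that can write fluent essays: far more data, far more context than one previous word, and learned parameters instead of raw counts.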
Robin Pomeroy: Pascale Fung is a professor of computer engineering at Hong Kong University of Science and Technology and director of the university’s Centre for AI Research.
Pascale started by telling me about her research specialism - conversational AI - that’s the technology that can create chatbots that can converse with us. I asked Pascale to explain more.
Pascale Fung: Conversational AI. So technically speaking, there are two kinds of conversational AI systems, we used to call them dialogue systems. Basically, it's the interaction between a human user and a machine.
So the two kinds of conversational AI systems include open domain chatbots, where usually you can just talk about any topic, therefore open domain, and you can chat about anything for as long as you want. The objective of such a chatbot is to keep engaging the user to speak for as long as possible. So that's the open domain chatbot.
And the other kind of conversational AI systems are called task-oriented dialogue systems. Your virtual assistants, your smartphone assistants, your call centre virtual assistants, these are all dialogue systems or conversational AI systems that try to accomplish a task or to answer a query that the user has.
Robin Pomeroy: So those systems have been around for a long, long time. But is generative AI going to transform our experience as a user?
Pascale Fung: Yes, definitely. So again, first of all, generative AI has been around for a while, it actually predates deep learning and neural networks.
But recent generative AI models are much more powerful than previous generations of generative AI models because they have a huge amount of training data and a huge parameter size. So they are way more powerful than the previous generation.
And these generative AI models, in particular the large language models, are used as foundational models to build conversational AI systems.
So there's a common misunderstanding: people think that ChatGPT is a conversational AI system. Technically, it is not. ChatGPT and many other models, like GPT-2, GPT-3 and GPT-4, are what we call foundational models. So they are large language models that can perform a multitude of tasks. And then there is a chat interface. It's like a UI for the user to interact with the underlying large language model via chat.
So ChatGPT can be used to build other systems, including but not limited to conversational AI systems.
And because of the nature of these large language models being generative, actually today we cannot use them to build reliable task-oriented dialogue systems because they are open ended.
Robin Pomeroy: Because they're not reliable?
Pascale Fung: So, the difference between these two types of conversational AI systems: in terms of task-oriented systems, if you have used the Siri assistant or Google Home, you know they're there to complete a request from the user through a conversation with the user. It's supposed to convert the conversation into completing a task, such as finding a restaurant for you, such as booking a ticket for you and so on.
Now, ChatGPT and these other large language models are not built to do that. In order to use ChatGPT or other large language models to build task-oriented systems, you need to add on other modules, for sure you need to add plugins, and you need to be able to control these models into generating the kind of desirable responses.
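As a sketch of what Pascale Fung means by adding modules around a general model to get a controllable, task-oriented system, here is a hypothetical Python example: the model is asked only to emit a structured action, and a whitelist of handlers decides what actually gets executed. `call_language_model` and `find_restaurant` are invented stand-ins, not real plugin APIs.

```python
# Hypothetical task-oriented layer around a general language model: the model's
# free-form output is constrained into a structured action before anything runs.
import json

def call_language_model(prompt: str) -> str:
    """Stand-in for a foundation model asked to reply with structured JSON."""
    return json.dumps({"intent": "find_restaurant", "cuisine": "thai", "city": "Hong Kong"})

def find_restaurant(cuisine: str, city: str) -> str:
    """Stand-in for a real search or booking backend."""
    return f"Top-rated {cuisine} restaurant in {city}: (example result)"

ALLOWED_INTENTS = {"find_restaurant": find_restaurant}

def handle_user_request(user_text: str) -> str:
    raw = call_language_model(f"Extract a JSON action for: {user_text}")
    try:
        action = json.loads(raw)
    except json.JSONDecodeError:
        return "Sorry, I couldn't work out what to do with that."
    handler = ALLOWED_INTENTS.get(action.get("intent"))
    if handler is None:
        return "Sorry, that request is outside what this assistant can do."
    # Only whitelisted, structured actions reach the backend - this is the control layer.
    return handler(action.get("cuisine", ""), action.get("city", ""))

print(handle_user_request("Book me a Thai place in Hong Kong tonight"))
```

The whitelist of handlers is the control layer: free-form text from the model never reaches the backend directly, which is one way to make an open-ended model behave reliably for a narrow task.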
Robin Pomeroy: So what applications are you excited about, or your students excited about? If they're dreaming a year ahead or five years down the road, what could be possible using generative AI?
Pascale Fung: So I obviously have a professional bias. I would like to see that we can come up with solutions where we can actually take advantage of, or control, generative AI.
Today, these large language models are like powerful wild beasts, right? We need to have algorithms and methods to tame such beasts and then to use them for the benefit of humanity.
So in the long term, I hope to see more beneficial AI in the medical domain, for example. Healthcare for the elderly, healthcare for disadvantaged people who have no access to advanced medical care, that we can democratise such health care with AI technology.
So in order to do that, the foundational models give us great hope because they are very powerful. They have all these abilities emerging. They help us with reasoning, they help us with organising meetings, they help us with summarising, perhaps, doctors' notes. And why not? If we humans, you know, come up with ways of taking advantage of these models and of augmenting them into systems that can help us with these beneficial tasks, that's what I'm excited about.
The road from here to there today is unknown. Maybe we can get there within a year. Or maybe we would need another paradigm shift to get there. But that's what's exciting about this field of generative AI, because we're making almost scientific discoveries when we work with these models, and we are learning new ways of how to work with them, how to take advantage of them, on a daily basis.
Robin Pomeroy: Is there a risk that AI development is moving too fast now?
Pascale Fung: The risk is already here. We have already seen people using generative AI in ways that it's not intended to be used.
So when we say AI development, maybe we should differentiate between upstream AI research and downstream AI deployment. So I think today, I am very encouraged to see the progress we have been making in the upstream research, including coming up with ways to mitigate harm in AI, you know, not just generative AI but AI in general. So that is also part of the research effort.
Now, I am worried that the deployment is going too fast, because we're deploying systems that we don't understand 100% the ramifications of. We don't necessarily have to explain the AI system that we deploy in minute detail to everybody who's going to use it. But we need to have the confidence that we can mitigate the harm before we release the system into the wild. So this is what I worry about.
Robin Pomeroy: You talk about a wild beast that needs to be tamed. What kind of damage do you think can be done? People who've played around with ChatGPT, they'll have found hallucinations telling them things that aren't true. They'll have found they are able to get it to do bad things, I suppose. 'Pretend you're Satan,' someone said. Or, 'Tell me how to torture a person.' These are people who presumably are doing it to demonstrate or to try and break the system. What do you think the harms are for society or for humanity as things stand at the moment, or that they could be in the future? If we don't tame this wild beast, what might it do to us?
Pascale Fung: So there are indeed two kinds of harm that can come out.
One kind is intended harm: bad agents using generative AI, or foundational models, or systems that are built on top of these very powerful foundational models, to scale up bad actions.
For example, they can spread even more misinformation and people cannot tell whether it's true or not. Fake identities: they can pretend to be somebody they're not. You know, we all have been phished on the internet. So it will become increasingly difficult for people to tell whether the source is legitimate or not, especially when these bad agents intend to use it to mislead people.
And another type of harm, which also worries me a great deal, is unintended harm. So people, out of the goodness of their heart, are trying to build a system to help patients find cures for different diseases, some sort of WebMD but based on AI. And they're thinking that this will help people get access to health information and so on. But they don't know that some of the answers given by these generative AI systems are actually not correct.
And given that there's so much investment today in this area, there are thousands of start-ups coming up using generative AI and ChatGPT, even ChatGPT alone, and I'm afraid that so many people do not know the limits of ChatGPT and then build applications that they claim will do one thing but that are actually doing something else, not what they are intended to do.
This is also a very, very important area of concern to us.
Robin Pomeroy: We were all used to using Wikipedia, and anyone who uses Wikipedia knows it's amazing, but you always have to double check the facts. I suppose you could do that with ChatGPT, but is part of the reason it's more dangerous with generative AI that you can use it to code and to generate other applications? You're not just looking up facts on a generative system. You could use it to create a system which maybe you don't entirely understand how it works, because you've just taken it from somewhere, it's been generated somewhere else. So is that a big risk, or am I misunderstanding it?
Pascale Fung: So that is actually the same as the second risk I just pointed out, where people use it thinking it's intended for one goal, but actually it's generating answers or generating content that is not doing what it is intended to do.
Robin Pomeroy: You can double check that, as you would with Wikipedia, if you thought there was any risk.
Pascale Fung: Sure.
Robin Pomeroy: So why is the risk greater with generative AI?
Pascale Fung: Because it's so scalable. Because the whole point of using generative AI is that you want to scale up knowledge dissemination that you don't have experts for.
If you already have a roomful of experts, then you have WebMD. You don't need generative AI. But all these start-ups, they don't have access to these experts, so they are relying on ChatGPT to give that knowledge to them, which is a very, very wrong way of using them.
ChatGPT is good as a writing assistant, to summarise your meetings, to take notes and to polish up your writing, to give you some ideas for brainstorming and so on. But you cannot rely on generative models, ChatGPT or other models, for facts, for knowledge. They're not a replacement for search, they're not a replacement for Wikipedia, they're not a replacement for human-curated content.
Now, of course, you can use them to generate content which you then ask human experts to curate. That is possible. In fact, even OpenAI is doing a lot of this human feedback, right? But of course they cannot focus on the expert knowledge. So for expert knowledge, anybody who's using ChatGPT and the like must use human experts to check, verify and curate the output.
Robin Pomeroy: If an AI, though, is teaching itself, what is there to stop it doing things we don't want it to do? Because it hasn't signed a code of conduct.
If I was a policymaker now... If you were talking to either the president of the United States or the head of the United Nations and you've got 30 seconds to convince people what needs to be done to put controls in place, to make sure we get all of these great benefits but can somehow mitigate the risks, is there a message you want to get across?
Pascale Fung: Yes, I think this is the motivation for us to get together here with the chief scientists and engineers on these systems to talk about mitigating risk.
So what I believe is that mitigating risk is a multi-stakeholder job, a multinational job. And multi-stakeholder - we don't talk too much about the people who actually build the systems.
So starting from us complying with codes of conduct: the researchers and engineers must comply with these codes of conduct.
And also in our research methodology: today, every year in all these conferences, we have an ethics review committee which reviews the contributions, in terms of research papers, as to whether they comply with our ethical principles.
So that is the governance of researchers: the behaviour of researchers, the process of research, the process of engineering, of building the systems.
Meanwhile, we still have to go into the systems themselves, into the software programs. We need to design algorithms such that they can be aligned to human values.
They will not sign anything! But we can make such algorithms align with human values. We can inject human values into such software systems, and we must build such systems in order to align with the principles.
Today, we have so many AI ethical principles from different nations, from different jurisdictions, that people have signed, that people agree on. But how do we operationalise them, how do we translate those principles into actual outcomes of generative AI models and systems? This is a topic we're going to discuss in the next three days [at the World Economic Forum's Responsible AI Summit].
Robin Pomeroy: Pascale Fung is a professor of computer engineering at Hong Kong University of Science and Technology, you also heard Francesca Rossi, of IBM Research. I spoke to both of them at the World Economic Forum’s recent Responsible AI Summit.
Next week, on the Radio Davos AI show, we hear from one of the most prominent signatories, alongside the likes of Elon Musk, of an open letter calling for a six-month halt on development of advanced AI systems.
Stuart Russell, professor, University of California, Berkeley: We're just releasing these systems onto hundreds of millions of unsuspecting people. You know what could possibly go wrong?
Systems that are as intelligent as, and almost certainly more capable than, humans along all relevant dimensions of the intellect. Those systems would be, in a real sense, more powerful than human beings. But then we need to retain power over them.
Robin Pomeroy: That’s Professor Stuart Russell on next week’s Radio Davos AI special. Subscribe to or follow Radio Davos wherever you get your podcasts to get that, or visit wef.ch/podcasts.
This episode of Radio Davos was written and presented by me, Robin Pomeroy. Studio production was by Gareth Nolan.
We will be back next week with more on AI, but for now thanks to you for listening and goodbye.