"AI will have some form of intelligence that will either compete with us or augment us. This is a question for us as a species. For the past thousands of years, we didn’t have a cousin or a brother and now we may have one. So it is how we understand that and how we deal with it."
On Episode 4 of our special series on generative AI, we consider the options for how we can govern the rapidly growing technology.
Guests: Amir Banifatemi, Director, AI Commons; Cyrus Hodes, Co-Founder of AIGC Chain and Stability AI, Harvard Kennedy School of Government
Co-host: Lucia Velasco, Lead, Artificial Intelligence and Machine Learning, World Economic Forum
The Presidio Recommendations on Responsible Generative AI: https://www3.weforum.org/docs/WEF_Presidio_Recommendations_on_Responsible_Generative_AI_2023.pdf
The World Economic Forum's Centre for the Fourth Industrial Revolution: https://centres.weforum.org/centre-for-the-fourth-industrial-revolution/home
Join the World Economic Forum Podcast Club
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Amir Banifatemi Director, AI Commons: I guess that is a matter of time where machines and automation and AI will have some form of intelligence that will either compete with us or augment us. I think this is a question for us as a species. For the past thousands of years we didn’t have a cousin or a brother and now we may have one. So it is how we understand that and how we deal with it.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them. On this, Episode 4 of our series on generative artificial intelligence, we ask: how can and should we regulate AI?
Cyrus Hodes, Co-Founder of AIGC Chain and Stability AI, Harvard Kennedy School of Government: There is no doubt that intelligence greater than us will be able to solve pretty much any of humanity's grand challenges.
Robin Pomeroy: The promises of rapidly developing AI are huge, but what about the risks?
Cyrus Hodes: It is extremely important now to think about regulation and to think about the impact on society - we're getting to another level now with generative AI.
Robin Pomeroy: But what kind of regulation? Some have called for a halt in advanced AI research.
Amir Banifatemi: We should not slow down research. This is a discovery. This is ingenuity at work. This is how we progress. But there should definitely be some form of guardrail.
Robin Pomeroy: Subscribe to Radio Davos wherever you get your podcasts, or visit wef.ch/podcasts where you will find our full series on generative AI so far, as well as our sister podcasts, Meet the Leader, Agenda Dialogues and the World Economic Forum Book Club Podcast.
I'm Robin Pomeroy at the World Economic Forum, and with this look at how we regulate generative AI...
Cyrus Hodes: This is a critical moment, I would say, in human civilisation.
Robin Pomeroy: This is Radio Davos.
Welcome to Radio Davos and our series on generative AI. And I'm joined for this episode by my colleague Lucia Velasco, who leads work on AI governance at the World Economic Forum's Centre for the Fourth Industrial Revolution. Hi, Lucia.
Lucia Velasco, Lead, Artificial Intelligence and Machine Learning, Centre for the Fourth Industrial Revolution: Hola, Robin.
Robin Pomeroy: Hola to you. Let's keep it multilingual on Radio Davos.
Lucia Velasco: Sounds great.
Robin Pomeroy: You're Spanish?
Lucia Velasco: Half Spanish, half Argentinian.
Robin Pomeroy: Which half knows all about AI? Both of them?
Lucia Velasco: Maybe the European one.
Robin Pomeroy: This must be a kind of a pivotal moment for AI. Why is this such an important moment in the history of this technology?
Lucia Velasco: Well, I would say it's extremely exciting to be working on something that almost everyone agrees is needed. And also because this era will be marked on the charts. We will remember this time.
We already feel that we are part of something really big, like a relevant moment in history. And that is where generative AI leapt from being a tool in the hands of a few to a transformative technology, just as personal computers and mobile phones democratised access to information and connectivity. We're witnessing a very similar moment.
And of course, this is, you know, the culmination of decades of research. And for some it won't be news. But for most of the population, like myself, this is a real tangible moment of disruption and it's having a great impact on different sectors, from creative industries to scientific research.
The capability that we're seeing in these models, in this generative AI, ChatGPT and similar tools, is that they can generate anything from writing, from essays to poems, to drafting code. We were told that if you learn to write code you will have a future, and now we will perhaps need to rethink that sentence, right?
Also, how accessibility has exploded. Previously, using AI models required a deep understanding of machine learning and computational resources and really knowing what you were doing. But now I can use them. You can use them, my dad can use them.
Robin Pomeroy: It's just this natural language access: we can just ask the machine to write this, or create this picture, or create computer code that will have a certain outcome, using our own language, without knowing the technical language, the programming language, without being able to draw, without potentially being able to write fluently.
Lucia Velasco: Absolutely.
Robin Pomeroy: And it's a sudden change. We just didn't have that a few months ago.
Lucia Velasco: Absolutely. It's like, take a spreadsheet: I don't know how to make it do complex things, but now if I just say, okay, calculate this or do that, in my own words, the system will understand what I'm trying to say and will come up with a proposal. And that is change.
What you will need is an internet connection. And we also need to bear in mind that half of the world doesn't have it. So we need to be mindful of that power imbalance and how we should try to provide access to everyone and not only to a few.
Robin Pomeroy: That's a really important point, because those of us who are lucky enough to have easy access to these tools, many of which are free to use, just assume everyone has them, when half the world does not have any access to them at all.
Lucia Velasco: Absolutely. So when we think that this is global, we also need to understand what the world looks like.
But yes, this democratisation of AI is also enabling a whole new wave of innovation and creativity. It's exponential, you know, and we can expect a lot of new developments on top of this.
And lastly, I would say that we are seeing a shift in how people perceive AI, although there are still some fears. I don't think that anyone who has been using ChatGPT feels that ChatGPT is going to replace that person, but probably augment their capabilities: you can do more things faster, and probably better. It's a way of being more productive and also being a better version of yourself.
And I don't think that that is going to be the same or imply the same reaction as we had initially with the fear to automation, but rather we will see it as a partner for problem solving or for creativity.
Robin Pomeroy: That's a very optimistic outlook, and I tend to agree with you. But governance has maybe not caught up with the rapid development of the technology. So that is something that needs to be looked at now, and in fact the World Economic Forum is helping bring the stakeholders together to discuss it.
Do you think we'll be able to keep up with these rapid changes?
Lucia Velasco: I trust we will. We have all the elements that we need to make that happen. We have the key industry players ready to contribute to that understanding and tackling the issues that may affect society. We have the governments open as well to public-private cooperation, realising that they need help to keep up with technologies that rapidly change and need adaptive frameworks and anticipatory governance.
Robin Pomeroy: Let's turn to the interviewees today. The first one is Amir Banifatemi. He's an investor, and I think perhaps the first investor I've spoken to in this series. What would you say is the role of investors when it comes to responsible AI?
Lucia Velasco: Well, investors, the people who put money into businesses, have a very big role in shaping how AI develops, because they can choose to support companies that are open about what they're doing, treat people fairly, and take responsibility for their actions.
Also, another example that comes to my mind is that investors can help make the industry more diverse and less homogeneous. They can do this by investing in companies led by people from different backgrounds, but also ones that are more inclusive in terms of gender parity. We think that, globally, only two out of every ten AI professionals are women, and 90% of leading machine learning researchers are men. So if we want to make this technology work for everyone, we should try to get everyone involved in its deployment and in its design.
Robin Pomeroy: So we'll be hearing from Amir Banifatemi, who was actually a co-chair of the event in San Francisco. And our second interview is Cyrus Hodes. He's been involved in several high-tech start-ups and he's been on a number of working groups looking at AI at organisations like the OECD. And he speaks a little bit about the work that has already happened in certain organisations. Are there parts of the global infrastructure that have made progress on the governance of AI?
Lucia Velasco: I think there have been many interesting and relevant conversations throughout the years, but there is space to improve. We need to speed up, because generative AI is something that is used across the world; this is not something that a few are using in their companies or in labs. This is something that has reached 100 million users in two months, and that is something that has never been seen.
Robin Pomeroy: So we'll be coming back to governance and the work of the World Economic Forum in that area in the final episode of this series in a couple of weeks' time.
But for now, we'll listen to today's two interviews. Lucia, thanks for joining us on Radio Davos.
Lucia Velasco: Thank you.
Amir Banifatemi: My name is Amir Banifatemi. I'm an investor and futurist and I work towards bringing technology closer to people and try to invest in impactful and sustainable organisations that bring us closer to a sense of abundance and democratising abundance and life for everyone.
Robin Pomeroy: Is that something that generative AI brings a promise of, that democratisation of progress?
Amir Banifatemi: Humanity is always looking for improvement, and what ChatGPT has shown us so far is this notion of augmenting our capability. It gives us a sense of not being alone in that search for answers to the questions that we have, and can be a companion, or, as they call it, a co-pilot.
And I think we all believe that ChatGPT is a turning point, like mobile was, like the iPhone was, like many technologies of the internet were, in bringing the promise of a digital life to almost anyone.
And that's one of the fastest growing product launches ever. And what we see people using it for, it seems that we are avoiding the blank page issue. And then we are probably seeing many, many, many innovations trying to translate our ideas of how we interact with others, how we communicate, how we plan, how we predict, by leveraging the AI capabilities.
So generative AI, I believe, will have a huge impact on everyone, on humanity, and on how we are thinking about cognitive capabilities in general.
Robin Pomeroy: Has there been a wow moment for you? Did you suddenly go on to an application and realise, wow, this is quite something? Or has it been more of a gentle, incremental growth of technology, in your experience?
Amir Banifatemi: When ChatGPT was introduced, I was personally aware of the prior versions of GPT and of language models, and was not so surprised. My surprise was mostly at the rapid implementation of a chat function on top of a language model, and how fast it was adopted. That was the watershed moment for me personally.
Robin Pomeroy: You’re surprised that that many people went onto it and started using it?
Amir Banifatemi: In the industry we're talking about applications of AI in many areas, whether it's industry or Sustainable Development Goals or applications of AI in enterprise or government.
It was not really part of our thinking that so many people will now have access to these tools so immediately and so easily. When we see how people onboard on ChatGPT and use it for a variety of tasks, that's the really surprising moment.
Robin Pomeroy: Do you have any idea of where it could take us? Let's look at the positives first. Where do you think will be the killer applications of generative AI?
Amir Banifatemi: Well, every guess is good. I believe that this notion of co-pilots or an assistant to various aspects of our life will be probably the killer application, whether it's to have some learning companion or a preventative health companion, or even allowing us to duplicate our thinking, our life, into multiple moments.
That is probably going to be the most important part because now it's going to free up almost anyone into discovery and creativity and the what if question. And although it’s trained on data that is in the past and we don't know what the future holds, we can still go into a myriad of scenarios without having to have the literacy or the training for that. That's revolutionary.
Robin Pomeroy: So you've managed investment funds that put their resources into AI. From an investor's point of view, what do you think are the big opportunities and what are the big risks?
Amir Banifatemi: Investment in AI is growing at such a rapid pace today.
I think enterprise is going to be the number one opportunity for investment, because many of the processes that organisations are developing and using with humans in them will have to be revised and reshaped, and there are enormous opportunities for investment there.
Beyond that, I think the interconnection between organisations will also create opportunities for collaboration, with the APIs and the different plugins that we talk about today. Without being technical, the concept of an app store, where we can have multiple areas of collaboration and implement solutions without coding, or without having to do a long cycle of development, will give rise to many investments. Probably B2B would be number one. We believe that there's going to be a gold rush of B2C startups trying to solve almost any problem with generative AI. How many of them will survive, we don't know.
Robin Pomeroy: It's an interesting expression you use - a ‘gold rush’. I suppose there was an internet gold rush as well, wasn't there, a lot of money flowing into, we're talking 20 years ago or so, into various things, some of which remain with us, maybe Amazon, for example, others of which have just disappeared into history. Is that what we're going to see? Another period like that? Is it going to be bigger and even more money flowing around this time? Can we compare the two?
Amir Banifatemi: The investment in AI is not new. It has happened gradually over the years, and over the last six or seven years it has increased. But look at how Web1, which was the original internet, came about, and Web2, which basically led to e-commerce and all the companies that were building platforms, Uber and others; then the promise of Web3, with decentralised approaches and maybe crypto tokens, had a huge following in terms of investment. And now, with what is happening in that market and the discoveries that we have made in generative AI, we can see two fronts of investment.
One is the investment that was basically going into cryptocurrencies moving more to the AI side. But also, AI itself is promising so much change in society that all the funds today have an AI or generative AI narrative and investment thesis. So I am personally investing in multiple generative AI companies, and I believe that most funds will have some form of association or adjacency to generative AI and to AI tools in general.
I don't see this slowing down. We're probably going to have in one year a better understanding of what's shaping up and what major new investments are coming up. We're going to probably identify the next Google or the next Meta in the next year, who's going to be the winners. And we are all learning right now. But definitely it is an interesting moment where the excitement goes beyond the existing capital.
Robin Pomeroy: And what about the risks, then? Obviously, there's a risk of you invest in a start-up and you lose your money. Is it any riskier than that, do you think, investing in AI?
Amir Banifatemi: The risk of investment is always the same. We have what's called a portfolio strategy, so we try to invest into a number of companies associated with the investment thesis.
There are risks, though, to an investment in AI. The first risk is the risk of fragmentation, where not every investment is a winner, and having so much competition will probably erase some of the investment.
The second risk is that the development of a solution needs to abide by some form of responsible AI ethical framework, fairness and transparency, and will probably be subject to some policy and regulation.
So many of the solutions may not have the capital to go through all the depth of development, but may also be stopped or slowed down by some form of national or regulatory context.
Robin Pomeroy: There are various ways that jurisdictions are looking at regulating AI. Have you come to any conclusions yet about what would be the best approach?
Amir Banifatemi: There are multiple conversations and debates about slowing down the pursuit of the development of language models, their training and the speed at which it is going.
Definitely technologists themselves don't always capture the amount of capability that these tools may have. But also regulators and policymakers need some time and buffer to really understand where all this is going.
I personally believe that we should not slow down research. This is a discovery. This is ingenuity at work. This is how we progress. But there should definitely be some form of guardrail in terms of the models themselves that are developed, how these models are used to build applications, who these applications are serving, how safe they are, and also controlled release.
We cannot take a general public that may not be fully trained or literate in the usage of these tools and expose them to models that are still in the works. So the risk is there, and I think regulators are looking at this. And if we had to slow down something, it would probably be the release: to have some form of context for responsible release.
Robin Pomeroy: I've heard this phrase 'responsible release strategies'. Is that what we mean by that?
Amir Banifatemi: I would qualify that as responsible release strategies, where we want policymakers, academics and researchers in enterprise to have conversations and to understand the limits of those releases, because we still don't know exactly how these models have to be deployed in all cases. There could be some niche deployments, for instance in healthcare and education. There are some contexts where there is less risk. But remember, there's always risk inherent to the models, risk in the applications, but also risk of bad actors and those who are exploiting this.
And then the responsibility vis-à-vis the general public is that we have to be careful and consistent in how we not only promote the research and the investment in those solutions, but also how we have a common understanding of what that release means to people, and what the trade-off between benefit and risk is, and evaluate that and communicate with the public. I think we should have an open public debate about this, and those stakeholders that are best placed to organise this public debate should probably act now.
Robin Pomeroy: Act now - so things have to move quickly then, don't they, in governance?
Amir Banifatemi: Time is accelerating. What we've been doing over the past few years, we see, is changing almost every week. So I think policymakers and regulators are probably going to be pushed to accelerate the way they understand, the way they come together, and the way they create frameworks for us to navigate better.
Robin Pomeroy: So from that experience, do you think AI is intelligent? Is that the word we would use? Some people say it's a misnomer: it's not intelligent, it's just doing lots of calculations or lots of predictions very rapidly. Is there real intelligence? And will we maybe even have, beyond intelligence, consciousness or something like that?
Amir Banifatemi: I think AI is intelligent. It is not as intelligent as us for now, but I don't see why, in the future, it may not be.
Consciousness is a different question: how do we define consciousness? Having awareness, and having an embodiment in life, will probably define consciousness more. Maybe with robots one day we'll get there.
But I guess that is a matter of time where machines and automation and AI will have some form of intelligence that will either compete with us or augment us.
And I think this is a question for us as a species. For the past thousands of years, we didn’t have a cousin or a brother and now we may have one. So it is how we understand that and how we deal with it.
Robin Pomeroy: Amir Banifatemi, one of the co-chairs of the World Economic Forum's Responsible AI Leadership Summit that recently brought together a range of stakeholders to debate these issues.
You’re listening to Radio Davos and our series on generative artificial intelligence. Here’s someone else who was at that AI summit in San Francisco.
Cyrus Hodes: My name is Cyrus Hodes. I work in AI and the governance of AI, both on the policy side and as an entrepreneur. As far as policy is concerned, I started something called the AI Initiative at Harvard Kennedy School, which is now part of The Future Society, a non-profit which I helped grow over the years.
We were one of the first entities discussing public policy and AI. And what I mean by that is educating policymakers about AI and its impact on society.
So back then, when we started in 2015, it was kind of an alien subject to policymakers.
Still to this day, there is a lot of work to be done on their end. But at least, you know, there is this bridge that we helped build between the technological community and the policy community.
And since then, I've been part of many discussions on shaping this governance, on making it a reality. I would say starting with the IEEE, which had a great initiative on getting towards standards for ethically aligned design of AI and autonomous systems. It's a mouthful.
Robin Pomeroy: The IEEE?
Cyrus Hodes: The IEEE. That's the largest and oldest association of engineers in the world, and they work towards standards. So for instance, they came up with the ISO standard; really global standards have been adopted. We're working on standards towards the governance of AI.
But after my work with the IEEE as a volunteer, we set this discussion within the international platforms. And I would say the most important one that I contributed to is the OECD. So I was part of the discussion on the governance of AI. The OECD embraced that very early - a fantastic platform for gathering governments around policy and policy recommendations. And we came up with a set of AI principles that have been adopted by all OECD members. But beyond that, most importantly, during the G20 summit they were adopted by the G20. So that means that not only OECD countries but also China and Russia have adopted these principles.
Robin Pomeroy: What are the principles? You don't have to list all of them, but can you give us a flavour of what are the most important?
Cyrus Hodes: Sure. The pillar, the core of it, is that AI has to be human-centred. So, you know, making sure that AI is getting us towards the Sustainable Development Goals of the UN, which are embedded in these principles.
Also fairness, accountability, transparency, high-level topics that have been discussed and adopted by many other countries. But the OECD was the first one to basically gather various thoughts around that and embed them into principles as recommendations for governments.
Another platform that is rather important, and that I'm collaborating with, is the Global Partnership on AI. The Global Partnership on AI - GPAI - started between France and Canada; its two headquarters are in Montreal and Paris. Same thing: looking at the beneficial adoption of AI for governments.
So I'm contributing there through AI and climate action and biodiversity. We published a roadmap for climate action during the COP in Glasgow. During COP15 in Montreal, we did the same thing for biodiversity.
I'm also working with an AI and agriculture group, which is another group within GPAI, on recommendations for farmers and that sector to use AI for the greater good. And obviously there's a strong angle on climate action there.
Robin Pomeroy: Are we at a pivotal moment now? You've obviously been working in the field for several years, as you've said, on the regulation or policy side. There seems to be a feeling that now generative AI is really starting to take off. It's been released into the wild, and probably policies haven't caught up yet. Do you see this as a pivotal moment where policymakers now really have to hurry to catch up? Where do you see the state of play?
Cyrus Hodes: You're 100% right Robin. This is a critical moment, I would say, in human civilisation.
We're creating what Tristan Harris calls Golems, which is Generative Large Language Multi-Modal Models. And by multi-modal, I mean this applies to text and image generation, but it goes beyond that. It goes into coding, which is extremely dangerous to let loose in the open. It goes into any kind of data set that you have. For instance, Tristan cites the example of a model trained on MRI scans of the human brain, understanding what the images represent and then giving a very clear representation of what a person is thinking about. So within the next couple of years, we're going to have these Golems understanding our thoughts. So it is extremely important now to think about regulation and to think about the impact on society - we're getting to another level now with generative AI.
So having all the companies that are working on generative AI around the table is something essential right now. And that's why it's important to have this discussion here with leaders in generative AI. Obviously there's a lot of tension, but there's no doubt that we are at a pivotal moment, and we should look at making sure that we have frameworks and safeguards in place - if it's not too late. I still hope it's not too late to make sure of that, as we get stronger and stronger AI systems that are going to transform our societies even more.
Robin Pomeroy: Tristan Harris, who you mentioned there, is executive director of the Centre for Humane Technology.
If someone comes up to you and says, I don't know anything about the technology, but could you explain to me the concept of artificial general intelligence? How do you explain it to them?
Cyrus Hodes: Well, so in a nutshell, you know, with artificial intelligence, we're trying to replicate intelligence. We're trying to replicate human intelligence. What we call artificial general intelligence is when we get to a human level intelligence.
Usually AI systems are extremely good at very precise tasks, and they beat humans. A great example is chess: when they beat us, they beat us very easily. The game of Go, which is more complicated, the same thing. And it happens pretty much across all the spectrums of society. And as AI is integrating all of this, which large language models and multimodal models can do, we are getting closer and closer to general intelligence, and this is why we should regulate.
Robin Pomeroy: But AGI must have some benefits as well, because for some of these companies it's their explicit stated aim: 'We are going to develop general intelligence'. So what is the upside? You've worked, for example, in climate tech over the years. We've got a few years left, perhaps, to prevent climate catastrophe. Maybe super computer intelligence could help us avert that.
Cyrus Hodes: In my mind and in pretty much all the AI researchers' minds, there is no doubt that intelligence greater than us will be able to solve pretty much any of humanity's grand challenges. Again, you know, the 17 SDGs can and will be addressed by a safe and beneficial system, a powerful AI system.
Robin Pomeroy: The Sustainable Development Goals.
Cyrus Hodes: Right, and climate action is part of it.
I believe this is an issue that is transformational to society, in a good way and in a bad way. In a good way, it can help solve the SDGs. Great. But let's make sure, before that, that humanity and civilisation stay as we are, or at least develop in a way that will be beneficial and appropriate to us and keeps us on this planet.
Robin Pomeroy: Has there been a moment in your life when some technological advance really took your breath away?
Cyrus Hodes: The reason I got into the field of AI is because I realised, with a lot of people, you know, early on, that we are creating an intelligence that is about to exceed ours. So basically, in essence, we're creating a new species.
We're talking about alien encounters right now and UAPs and UFOs are being acknowledged by D.O.D. (U.S. Department of Defense) and other government agencies. At the same time, we as humans are creating this intelligence that will surpass us. There's no doubt about that.
To me, that was the wow moment, when I realised this is something absolutely transformational and, you know, we should have the best minds in policy and technology working toward it.
Robin Pomeroy: So you've dealt with policymakers over the years. What is it they need to know now, and what is it they need to do now? When you're talking to them, what are the things you realise they don't understand and you have to tell them?
Cyrus Hodes: Well, most policymakers are concerned about re-election. They have a short-term view that is often, I don't want to generalise, but, you know, often selfish, often self-centred: how am I going to get re-elected in the next cycle?
At the same time, we don't talk about this most important topic, which is the rise of a new intelligence. There's a lot of education to be done among policymakers worldwide.
There was kind of a wake-up moment with nuclear safety, where the world decided to coalesce and agree upon safeguards, you know, creating the International Atomic Energy Agency and looking at the SALT agreements to ban the proliferation of nuclear weapons. This is the moment in time where policymakers worldwide should regroup and work towards such an arrangement.
So I don't have the answer yet, but I know transportation is being regulated. Pretty much any industry is being regulated. Why don't we regulate AI at a world level? It baffles me, but I believe, you know, with all my colleagues ringing alarm bells, this is the moment in time where it's going to happen.
Robin Pomeroy: Cyrus Hodes. Thanks to him and to our other guest on this episode Amir Banifatemi. I spoke to both of them on the sidelines of the Responsible AI Leadership Summit that the World Economic Forum hosted a few weeks ago at its office in the Presidio of San Francisco. And the Forum has just published an outcome from that meeting which you can read online, it’s called The Presidio Recommendations on Responsible Generative AI.
There's more information on that online: you can find links in the show notes to this episode. And we'll be discussing that, and all the work of the World Economic Forum on generative AI, in greater depth in the final episode of this series next week.
To get that, and to catch up on previous episodes, please subscribe to or follow Radio Davos on whatever app you use to hear podcasts, or visit our website wef.ch/podcasts where you can get our whole back catalogue as well as complete transcripts.
And if you like the podcast, please leave us a rating or even a review on your podcast app, and if you want to discuss any of the issues you heard here, join us on the World Economic Forum Podcast Club - look for that on Facebook.
This episode of Radio Davos was written and presented by me, Robin Pomeroy, with Lucia Velasco. Studio production was by Gareth Nolan.
We will be back next week, but for now thanks to you for listening and goodbye.