As investors pour money into companies developing or deploying artificial intelligence, what are the steps they should be taking to ensure that AI is safe and responsible?
The Responsible AI Playbook for Investors, published by the World Economic Forum and pension fund CPP Investments, sets out real-world examples of how investors can - and must - use their position to promote responsible AI.
Judy Wade, Managing Director, Head of Strategy Execution & Relationship Management, CPP Investments
Responsible AI Playbook for Investors: https://www.weforum.org/publications/responsible-ai-playbook-for-investors/
Check out all our podcasts on wef.ch/podcasts:
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Judy Wade, Managing Director, CPP Investments: AI is going to have - it already has - enormous benefits, but there are significant risks. And if you want to take advantage of those enormous benefits and mitigate those risks, then you really have to deploy this responsibly. And it is not as difficult as you may think it is to do so.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them. This week we’re looking at the questions investors should be asking about artificial intelligence.
This managing director at a fund that manages hundreds of billions of dollars of investments has helped write a ‘Playbook for Investors’ to help them navigate AI.
Judy Wade: The consequences of deploying AI without ensuring that it doesn't hallucinate, that your data is not biased, that your models are explainable, are actually a real potential risk to the bottom line.
Robin Pomeroy: AI bias and hallucinations might seem like esoteric terms - but investors should have no delusions - these are real hazards that could affect their investments.
Judy Wade: If your data is biased and you're therefore excluding potentially a whole set of customers from your target customer set, that's a long term consequence for you. That means you've cut out of your total available market a whole set of customers that could be terrific.
Robin Pomeroy: Subscribe to Radio Davos wherever you get your podcasts, or visit wef.ch/podcasts where you will also find our sister programmes, Meet the Leader and Agenda Dialogues.
I’m Robin Pomeroy at the World Economic Forum, and with this look at the role of investors in shaping responsible AI...
Judy Wade: There's much more to responsible AI than checking a legal box.
Robin Pomeroy: This is Radio Davos
We are well past the initial hype phase around generative artificial intelligence that swept the world with the release of ChatGPT in November 2022. But we are still in the early phases of this potentially revolutionary technology - and both the benefits and the risks of AI are still to be fully comprehended by most of us.
So as big investors - things like pension funds that are always looking for the best place to put the billions of dollars they manage - buy into AI, and into companies that are increasingly using AI - what is their role in helping the evolution of the tech to be beneficial rather than harmful?
That is the question that the World Economic Forum is seeking to answer in a publication called the Responsible AI Playbook for Investors.
The foreword to the report says the term Responsible AI, otherwise known by the inevitable three letter acronym RAI, should eventually become obsolete, and I quote: “as high-quality, trustworthy and safe AI becomes the norm. Just as we don’t distinguish between 'bridges' and 'bridges that don’t collapse', the qualifier 'responsible' will become an unspoken expectation. Today, we are at the beginning of this era, as new laws and regulations emerge to ensure that all AI applications are responsible.” End of quote. The foreword adds:
“Large investors can and should exercise the influence afforded by their capital to promote the use of RAI [Responsible AI] in their portfolios, in their work with investment partners, and in the ecosystem at large. This white paper offers a playbook for how investors can accelerate the adoption of RAI to help drive value.” End quote.
You can download the playbook from our website - link in the show notes to this episode - but to learn more about it, I spoke to Judy Wade, Managing Director and Head of Strategy Execution and Relationship Management at CPP Investments - that’s the Canada Pension Plan, which co-created the report with the Forum.
I started by asking Judy Wade to tell us more about CPP.
Judy Wade: We're a 630 billion Canadian dollar fund, and we invest on behalf of 22 million Canadians to help them build their financial security for their retirement. We're active investors, so we invest across all asset classes. And we have, I think, about 325 to 327 investing partners, and we invest in 56 countries. So we're global and we have a broad, broad ecosystem of partners. And our returns over the last ten years have been 9%. So we're quite proud of both our governance approaches and our performance.
Robin Pomeroy: So the reason we're talking today is you authored, with the World Economic Forum, this report that I'm holding called Responsible AI Playbook for Investors. So you are an investor. Your company is investing all that huge amount of money. There's a lot of excitement about artificial intelligence right now. A playbook for responsible AI. What were you trying to achieve with this report?
Judy Wade: So first of all, there is a lot of focus on the large language model developers - OpenAI and Meta. But there hasn't been a lot of focus on the actual deployment of AI in the real economy. And that's, quite frankly, where a lot of both the benefits and the potential risks are going to occur.
And as investors, we really felt that there was an outsized role for investors to play in helping accelerate responsible AI, both of those together, and that with WEF we had a fabulous platform to learn and develop what we thought responsible AI should look like for investors.
And that means adoption of AI, again within the real economy, that's helpful, harmless and honest. I think those are three easy pillars you can hang a lot underneath. And while we think there are extraordinary benefits to deploying AI in the real economy, doing so with appropriate guardrails is what both mitigates the significant risks and maximises the benefits.
So Boston Consulting Group showed that if you deploy AI responsibly, you're 30% less likely to have failures in your AI systems than if you don't.
So that was the first thing where we thought there was an opportunity for investors, broadly defined, to really amplify the opportunities and benefits of deploying AI responsibly.
And second, through this process, we learned that this really resonated with our peers and partners. We had over 100 stakeholders participate in these interviews. And through these working groups we had Sequoia, Norges, BlackRock, and they all felt that, as either direct owners of companies or as investors in general partners, we all had a role to play in learning what it meant to deploy AI responsibly.
And the third thing we wanted to do with this playbook is show that it's not that hard. I think that generative AI in particular feels like a bit of a black box, and people are nervous about it. And what we've learned through this process is that there are policies, procedures, playbooks and toolkits out there that people are developing and co-creating together, and that it's relatively easy to start on this path of developing AI responsibly.
Robin Pomeroy: So the first thing you mentioned there was that companies like yours will be using - I guess all of us are getting used to using - AI, and we need to know the risks as well as the advantages of it. But even more important, I think, is this idea that you are investing in companies, potentially startups - I'm sure there is lots of investor excitement about where good returns on investment can be found. But part of what this report is saying is that there's a responsibility here among investors to use their clout, their influence, to make sure that some of this technology is being developed and then deployed in a responsible way. Is that what's happening with this?
Judy Wade: I would maybe put a slight nuance on that, because I think we, and others in this ecosystem, are trying to use our clout and influence to amplify and share learnings. One of my colleagues said it quite well: given how early we are on this journey, it's really about a value exchange and an ideas exchange.
So I think Norges, for example, has laid out very clearly the principles they expect around AI, but they haven't put this down into anything that's punitive, if you will. So I'd say we're using our clout and influence to amplify, convene and share what we're all learning on this journey around deploying AI generally, as well as doing so responsibly.
Robin Pomeroy: In a nutshell, maybe you could give us a couple of conclusions from this playbook for investors. What can or should investors be doing on AI safety?
Judy Wade: So the first is we do have to do it ourselves internally. We're still on that journey of developing our responsible AI policies and procedures. We're also learning where the opportunities and benefits of AI are for us. And I think those learnings are generalisable out to the broader ecosystem.
So, for example, as part of this report, we interviewed eight of our banking partners, general partners, and a couple of our peers to understand how they were governing AI and how they were identifying where the use cases were. Were they doing this in a very decentralised way, letting every department do it as it may, or were they taking a more centre-led approach?
And those learnings were incredibly helpful for us, and I think they're incredibly helpful in terms of the role of investors in doing AI safely: do it with a more centre-led approach, and ensure you can trust what your models are outputting. In our case, by the way, we really believe in attribution. So the way we're deploying AI, we want to ensure that we can attribute everything that comes out back to the source. That helps create a near-zero hallucination environment.
So all of those things around AI safety, I think you have to start at home.
The second thing investors can do: investors are often asset owners. We, for example, have a significant portfolio of companies that we own, and obviously many of our GPs [general partners] are asset owners too. We have to work with our portfolio companies to ensure that they are, a, deploying AI - there are huge benefits in customer support and marketing and research and software development. And, by the way, I'd like to say it's responsible AI - you can't have one without the other. We can't focus too much on the responsible and not really get started on the AI, and we can't do the AI without thinking how to do it responsibly.
So asset owners have a significant role to play in working with their portfolio companies on where they are on their AI journey. Are they deploying these models responsibly? Are they ensuring there isn't bias in their data or output? And what are the guidelines and guardrails that asset owners are putting in place for their portfolio companies?
And I think we did a very good job in this playbook of laying out what some of those guardrails and expectations could be, again, in a range depending on both the asset owner themselves and also the company.
And then third, we also have general partners, where we have indirect ownership in portfolio companies. And while some of the same principles apply, I think it is really about a value exchange - learning with those general partners what expectations we should have around AI and responsible AI.
Robin Pomeroy: It's an interesting paper because it's quite concise and it has these case studies in it, and I like this one. I think this was UBS Asset Management's model for board engagement on responsible AI, with a list of known knowns, known unknowns and unknown unknowns. Because, as you mentioned, AI is very often seen as a black box, and it's true that there are risks that we're not quite sure if or when they might arise. But by setting it out in this way - and those are just the headings; it actually goes into considerable detail for someone who's trying to get to grips with the potential risks of this technology, wherever they might be looking - that's quite a useful checklist.
Why is it in anyone's interest to do this? Because people might just think, if you're an investor, I've got to rush in and invest in this thing. If you're deploying AI, you might think, well, I better get on and deploy it because everyone else is doing it. Why would you say it is in an investor's own interest to really take this seriously?
Judy Wade: Well, investors are about returns. I think we're privileged in the fact that we are obsessed about quarters - but quarters for us are 25 years, not three months. So we care about long-term returns. And quite frankly, I think that's true of our GPs and many of our peers and companies - our GPs are on seven-year fund cycles. They're not there for a three-month return.
We have to be thinking about what we want this output to look like long term, and it's very clear from much of the research that if you're looking at long-term returns, you need to be looking at sustainable returns. Think about where one of the biggest benefits of gen AI is - marketing: if your data is biased and you're therefore excluding potentially a whole set of customers from your target customer set, that's a long-term consequence for you. That means you've cut out of your total available market a whole set of customers that could be terrific.
So I think for us as investors, we're looking for companies and GPs that are thinking about sustainable returns - and I'm using that word broadly. And so that's why it matters to us. And I think it was quite resonant for all of the investors that participated in this playbook that just rushing headlong into AI without thinking about where the value is and where the risks are would actually also potentially affect near-term returns, not just long-term returns.
Robin Pomeroy: AI, for most of us who haven't been working in it, is a relatively new field. Do big investment companies and organisations take this approach to other areas? I'm thinking of long-term risks - climate change, for example. I wonder if there's almost a playbook here, with risks that we've already known about for decades, because to the layperson outside this world, we just imagine money flowing into an investment that's going to pay back as soon as possible, as much as possible. But as you say, you're investing on a decades-ahead kind of time horizon. So does this approach, this kind of risk assessment, already exist?
Judy Wade: Yes. I think, you know, one of the things about CPP Investments is we really strive to be at the forefront of identifying how regulatory, political, social and other changes, particularly disruptive changes, can affect the companies that we invest in.
And so we've applied this thinking to advances in technology like AI, just as we would to things like climate change. For example, we had a policy on sustainable investing, which we redrafted shortly after the launch of ChatGPT to identify the responsible sourcing and deployment of artificial intelligence as a sustainable investing factor. And that put us as one of the first institutional investors to do so. But I think for us, it made it relatively easy to think about and align with our portfolio companies and our GPs, because this was something we were already doing as it relates to areas like the environment and ESG broadly.
Robin Pomeroy: Have you seen the conversation evolve over the last year or two on this? When you produced that, did it take some people by surprise - why do we have to worry about that? Has the conversation changed, or was everyone super well informed and ahead of the game and realised this was a potential risk years ago?
Judy Wade: Well, I think it's the way in which generative AI - and AI more broadly - has been deployed in our own industry. We should not forget in all of this that 85% of today's benefits from AI and automation, inclusive of genAI, still come from traditional AI and automation. And obviously, with generative AI, the consequences and the potential risks are greater. And I do think that the speed with which it has been developed, and some of what I would call the quick-win adoption, has probably caught people by surprise.
I think the one thing we have to be careful about is that people are talking a lot about responsible AI and AI, but the actual adoption of it at scale is still fairly far behind.
Morgan Stanley does a quarterly survey of chief information officers and chief technology officers, and I think in Q1 2024, about 27% of CIOs said that they did not see a significant impact on their priorities and spending in the near term from genAI - and that had gone down by half. In Q1 2023, I think 57% said it would have no real influence, and it went down to 27% in '24.
So you would think that means everyone is now rushing headlong in. However, only 1% are deploying anything at scale, and only 7%, up from 2% a year prior, are actually developing pilots and models.
So I think we're still fairly far behind in seeing deployment at scale, and therefore the risks or benefits at scale.
Robin Pomeroy: Why is that, do you think? That is quite surprising.
Judy Wade: Well, I think it's a couple of things. There are what you might call off-the-shelf products - Microsoft Copilot would be a great example. But even getting the benefit of that: is it really applicable across all employees? How do you get through the training and adoption?
We've taken what I would call a bottom-up and top-down approach. We do have Microsoft Copilot deployed across the entire firm. We did a fund-wide hackathon where every single employee participated using Microsoft Copilot, just to demystify it. And we've also done bottom-up hackathons by department, really getting people to understand the use.
But it is challenging to change people's day-to-day habits - to click on the Copilot and say, help me write a memo, or look at this through my email. So even with something that's off the shelf, that you can sort of plug and play if you're a Microsoft customer - I think, on average, in the companies that have adopted and scaled it, about 30% of employees are using it actively.
So that's just the off-the-shelf products. And then you say, well, what about taking these large language models, using them and building our own products and use cases off of that? That is an entirely separate investment, and a significant one. Think about the ease of use of ChatGPT - a lot of companies have their own internal ChatGPT, and it can be great for quick answers and searches, but is it going to drive competitive advantage? Probably not. And getting to that competitive advantage is not an insignificant investment. It requires your own data, which you should ensure is not biased, and you need to ensure these models don't hallucinate - all of a sudden you get into all the things where responsible AI matters. But it's not easy to do.
Robin Pomeroy: With this paper, you're recommending investors tread with caution, but you seem very enthusiastic - they should certainly get into AI, but do it with caution. And with this notion of responsible AI, there is a risk that the word just comes to mean AI - all AI is responsible, it's a branding exercise. Or that doing some of the steps recommended in this paper becomes a box-ticking exercise, so we can all go around saying, we've done our best, you can't blame us now if something goes wrong. Is there a risk that everyone will brand themselves - I'm the responsible AI officer at this company, or I only invest in responsible AI? What would you say to a cynic who might suggest that?
Judy Wade: I'd say there's a risk, just as there's a risk of AI washing - 'we're doing stuff, it's one of our priorities', if you look at earnings calls, but is it really being deployed outside of, again, maybe the off-the-shelf products? But I would say, based on the working group sessions that we did - and again, that included companies and investors - there is a real understanding that the consequences of deploying AI without ensuring that it doesn't hallucinate, that your data is not biased, that your models are explainable, are actually a real potential risk to the bottom line.
I think if you look at climate change, in many cases we're asking companies or people to do something now for a benefit that's farther out into the future, even if it feels like it's coming closer and closer. It has still been somewhat politicised here in the US because of that. When you look at the S and the G of ESG, you're asking people to do something today where the benefits are harder to quantify. And with AI, you're asking people to deploy responsibly - to do something where those benefits and risks are fairly immediate and, for the most part, very easy to quantify. I mean, look at what happened when Alphabet launched Bard with some flaws in its training models.
So I think we feel that the responsible may go away, but it's because responsible AI is just good quality control. It's about making a good product.
Robin Pomeroy: Are investors paying attention, or is the incentive to pour money into AI just too great, meaning responsible AI stewardship is too low down the list of priorities? Is there a fear-of-missing-out factor, with people rushing in? You seem to be indicating that perhaps it's not the kind of risky gold rush that people outside your industry might think it is.
Judy Wade: We have not seen that, nor, through the World Economic Forum, have we seen this gold rush in the real economy. That's not to say that out here in the Bay Area there hasn't been a gold rush in investing in AI companies themselves, and there continues to need to be an important focus on how those models are being developed and what those applications are.
But the real opportunity is when they get deployed in the real economy. And there we have not seen that rushing in. And what we've experienced ourselves is that all the things that are going to create competitive advantage for us from AI - using our own proprietary data, ensuring there's attribution so our investors trust the insights that come back, using it to minimise the risks in our investment decisions - those are all the same things as deploying it responsibly. The guiding principles we use in investing, integrity and partnership, are the same principles we're using to drive the deployment of AI.
So I don't think there's been a gold rush in the real economy. And when companies are really investing - because it is, again, not as straightforward as one would think - the things they're investing in to really get the competitive advantage are, for the most part, the same things you would be investing in to ensure that the product delivers what you want: honesty, helpfulness, etc.
Robin Pomeroy: Can you give us any real-world examples - maybe there are some in the paper - of investors implementing responsible AI stewardship?
Judy Wade: Well, you know, I think what you had with UBS was one great example. Norges is another. Norges has developed and published a set of principles and expectations. They have a very clear description of board accountability - they believe the board of directors is accountable for ensuring that companies develop and use AI responsibly. They discuss explainability and transparency. They talk about robust risk management.
I used us as an example of what we have already - our sustainable investing expectations include expectations around vendors and the evaluation of these to ensure they meet your standards.
Temasek is another one. They see investing in AI as a core priority, and they've taken a very proactive approach on AI. They see it as a key global challenge - to use their words - to strike that balance between innovation and responsibility, and they've laid out how to do that.
Radical Ventures, more on the venture side, has a very clear set of expectations. So there are many out there.
I think we, as well, are in the process of asking: what does this look like in practice? How do you measure it? There's still more work to be done, but there are many, many in the community who are developing, sharing learnings and working on what this means, for themselves and for the industry overall.
Robin Pomeroy: Some would say the governance of artificial intelligence is a question for regulators and for governments. Why do you think it's so important for investors to get involved at this stage? Is it because we're waiting for regulators to catch up? Or is there perhaps always a case for both of those things running side by side?
Judy Wade: Well, obviously our role as an investor is not to make policy or regulatory recommendations, but we really think they should run side by side. Because if you think about it, the regulations are still quite fluid and they are going to evolve. I think there are 27 states here in the US developing regulations, and you obviously have the EU, so it's absolutely critical that we are not ahead of, but in line with, those.
But there's much more to responsible AI than checking a legal box. Once again, back to risk mitigation: implementing responsible AI can help companies identify and mitigate potential risks. And I come back to competitive advantage - doing so responsibly, with data that isn't biased and with the right models, is actually a competitive advantage.
Innovation is really critical, but even outside of the regulatory environment, you want to have that innovation done with safeguards so you're not releasing models that are flawed.
And one thing we haven't talked about: as we think about responsible AI, only 35% of global consumers trust how organisations are deploying AI. And from a brand and reputation perspective, having a set of principles, and being clear about those with your consumers, is actually, again, a competitive advantage.
So I think there's much more to responsible AI than regulatory compliance. It's inclusive of ensuring you are on top of and staying aligned with regulatory changes, but it provides many more benefits than that.
Robin Pomeroy: In a sentence, what would your words of wisdom be to someone coming new to this, trying to understand responsible AI and an investor's outlook on it? I've put you on the spot there. But, you know, to sum it up: why should I care?
Judy Wade: I think you should care because AI is going to have - it already has - enormous benefits, but there are significant risks. And if you want to take advantage of those enormous benefits and mitigate those risks, then you really have to deploy this responsibly. And it is not as difficult as you may think it is to do so. And read the Playbook.
Robin Pomeroy: Absolutely, yes. Links in the show notes to this episode. Judy Wade, thanks very much for joining us on Radio Davos.
Judy Wade: Thank you.
Robin Pomeroy: Judy Wade is Managing Director and Head of Strategy Execution and Relationship Management at CPP Investments.
You can download the Responsible AI Playbook for Investors on our website - link in the show notes.
And find many more episodes of Radio Davos about AI wherever you are listening to this, or at wef.ch/podcasts, where you can also check out the Forum’s two other weekly podcasts, Meet the Leader and Agenda Dialogues.
This episode of Radio Davos was presented by me, Robin Pomeroy. Editing was by Jere Johansson. Studio production was by Taz Kelleher.
We will be back next week, but for now thanks to you for listening and goodbye.
Thomas Crampton, Podcast Editor, World Economic Forum
21 October 2024