Generative AI is, by its very nature, a game-changer. It’s not about automation; it's about amplification. It augments our abilities, pushing the boundaries of what's possible, transforming industries from the inside out.
This is the full audio from a session at the World Economic Forum’s AI Governance Summit 2023, held on 15 November 2023.
Watch the session here: https://www.weforum.org/events/ai-governance-summit-2023/sessions/transformation-on-the-horizon
Gan Kim Yong, Minister for Trade and Industry, Singapore
Jeremy Jurgens, Managing Director, World Economic Forum
Daphne Koller, Founder and Chief Executive Officer, Insitro
Sabastian Niles, President and Chief Legal Officer, Salesforce
Ina Fried, Chief Technology Correspondent, Axios (moderator)
Podcast links:
Check out all our podcasts on wef.ch/podcasts:
Join the World Economic Forum Podcast Club
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Ina Fried, Chief Technology Correspondent, Axios: Good morning and welcome to the World Economic Forum's AI Governance Summit here in San Francisco. And also to all of those joining via livestream. If you are following along and want to share on social media, please use the hashtag #AI23.
I'm Ina Fried, Chief Technology Correspondent at Axios. It's my pleasure to welcome everyone for what I know is going to be a great discussion
As everyone is keenly aware, it's been about a year since ChatGPT changed all of our lives, but I think we're all still trying to figure out what it means, what it's going to mean.
I'm thrilled to be joined by an eclectic panel, as was discussed backstage. People that I think will give you a few very different glimpses. I think, again, we're all trying to figure out what does it mean for me, what does it mean for my job, what does it mean for my community, what does it mean for my culture? We have a lot to dig into.
Before we begin the discussion part, I'd like to invite Jeremy Jurgens, the Managing Director and Head of the Forum's Centre for the Fourth Industrial Revolution, to deliver a few remarks and then we'll kick off the discussion.
Jeremy Jurgens, Managing Director, World Economic Forum Geneva: Thank you, Ina. Thank you, everybody, for joining us here. We have over 250 people assembled for the AI Governance Summit. This is the first time that the World Economic Forum has actually convened a summit around a specific initiative. And in this case, it's our AI Governance Alliance.
We established this earlier this year following a convening of over 100 top leading experts in April. And out of those discussions, we understood how quickly the technology was moving, the dramatic effect that it would have on the economy, on society, on livelihoods. And how little was understood, even among the world's top experts: both how we might actually guide or even control it and put in place the necessary safety mechanisms, and how we would govern it, balancing the risks with the opportunities, and what it would take to fully harness and unlock the innovation benefits that might emerge there.
On the basis of those discussions, we published the Presidio Principles. This again attracted attention from a number of not only business leaders, but also policy-makers, civil society groups and academics. And with that we established the AI Governance Alliance.
So today is another step in this journey of helping us understand and go in depth on each of the different domains we have here. We'll have discussions looking at safety: what is the role, what are the technical pieces, that we need to put in place there. Considering governance, we've had discussions around the announcements out of the US recently, out of Bletchley Park in the UK, the Hiroshima Process and other groups. So a lot of different jurisdictions are navigating this governance framework.
And importantly also, how do we not lose sight of the opportunities that it does present and not only in developed countries? The US is in the lead. China has a very clear role, but also how do we make sure that those benefits extend to developing countries and even other developed countries? They may today not actually have the capacity to put their own models in place to do the necessary training.
So we'll be covering all of these topics over the next two days. With that, I'd like to thank all of you for joining us in these discussions and for the contributions that you'll make both here today and in the more in-depth technical discussions tomorrow. And I look forward to navigating this new era together. Thank you.
Ina Fried: Thanks. I think you hit on probably the three most important conversations that we're all trying to have simultaneously. How do we harness this incredibly powerful and incredibly fast-moving technology? How do we recognize its limitations, its risks, its biases? And then how do we reshape ourselves as humans? It's sort of ironic that the technology that we're able to most easily communicate with, the technology that finally speaks our language, may require more change than learning any programming language we ever had to learn.
I do want to tap into the expertise of each of the panellists here. Maybe we'll start with Minister Gan. Minister Gan, talk about how you're thinking of it from a public sector perspective. What does it mean? Singapore has obviously been on the leading edge in a lot of ways of technology and business. How are you looking at what we've all witnessed over the last year?
Gan Kim Yong, Minister for Trade and Industry, Singapore: Thank you very much. First, let me thank all of you for inviting me to this forum. Singapore is in a very unique situation, partly because our human capital is very scarce, and therefore AI actually promises to allow us to augment our human capital. It also promises to solve many of the challenges of our time.
But at the same time, it also brings about new risks that we will have to manage. The capability of generating content at speed and at scale will create new risk elements, including, for example, higher risks of misinformation and disinformation with fake news. And it can also be used by elements with ill intent. It is also an area where we will need to think about how we can encourage the appropriate use of this technology.
But if we use it properly, it has the ability to uplift our economies, our societies and also empower our workforce to do even better.
Our experience is that we want to embrace AI, and Singapore is one of the first nations to have rolled out a national AI strategy. This was done in 2019, and we continue to invest in building our AI innovation capacity.
But with the advent of generative AI, it's a more exciting era in that we can think of doing things that seemed impossible in the past. But it also requires us to pay specific attention to the governance of AI development.
Let me share Singapore's experience in three particular areas.
First, I think to fully embrace the potential of generative AI, we'll need to focus on preparing our businesses and our people. This is very important because it minimizes the risk of widening the digital divide. So we have to help our businesses transform, equip them with digital tools and at the same time provide them with the necessary support to harness the potential of generative AI.
But at the same time, we also need to make sure that our workers are able to acquire the skills needed to take advantage of generative AI. Based on recent reports on the future of work, Singaporean workers were found to be among the fastest in the world to adopt AI, and at the same time to pick up the skills necessary to implement it well. So preparing businesses and people is very critical for us to be able to harness AI.
Secondly, because of the speed at which it is developing and evolving, we will also need to adopt a nimble, flexible and practical approach to the governance of AI development. A one-size-fits-all set of rules and regulations is going to be very difficult, because by the time you roll the rules and regulations out, the technology has moved on. So it's important for us to take a different approach, in a nimble and practical way. And that is why Singapore has also rolled out an initiative called AI Verify. This is a testing tool that allows developers and owners of AI systems to test and demonstrate the governance of their AI systems. These are ways that we need to continue to evolve and to adapt to the new environment.
Thirdly, it is also important for us to come together collectively through a multilateral collaborative platform. Platforms like this allow us to share our views, our experiences so that we can collectively come together and address the challenges and opportunities of AI.
At the end of the day, I think it's important that, at the same time as we recognize the potential and benefits of generative AI, we also recognize the risks involved. It will be important for us to come together as a global community to address this issue squarely and to collectively steer the development of the technology, so that we are able to advance it for the public good and do so in a safe and secure way.
Ina Fried: Thank you, Minister Gan, and I hope you'll have more opportunities to share where Singapore sees itself and also some of the projects you're working on.
Dr. Koller, I was hoping that since you have this vast knowledge of the technology space and have been working in AI since before all of us realized this was a thing last year … You know, one of the things that struck me is that AI is not new. Many of the places we want to go have been laid out by researchers and science fiction for decades.
But the technological path that has gotten us to today has surprised a lot of people, including a lot of people in the field. What does it mean that it's generative AI that has suddenly come to the forefront versus some of these other technologies? You know, a lot of other AI and machine learning approaches are about trying to analyze, trying to predict – whereas a lot of people have called generative AI autocomplete on steroids. It's amazing the results we get from this technology. But I assume by its very nature there are some shortcomings to what it can do, given the way it operates.
Help explain really quickly how we got here. What does it mean that this is the technology, and how do we go forward with it?
Daphne Koller, Founder and CEO, Insitro Inc: First of all, thank you for inviting me here. It's a pleasure to be here. And yeah, I'm what you might call an OG AI person. I have been in the space since the early 90s, as the space was coming into being. And I remember chairing an AI conference, the biggest in the field, and it hit a thousand the year I was the programme chair and we thought that was a big event. So it just shows you how quickly things have evolved.
What seems clear when you look at the arc of the technology is that we're living in the midst of an exponential transformation, and that's what has made it hard, I think, to perceive and to predict the future. Because when you think about how an exponential curve goes, initially the time that it takes for a doubling of capabilities, whatever those capabilities might be, is a long time. So you're on an ambling, slow, almost linear curve, and then it suddenly starts to accelerate and you say: Whoa, wait a minute, what happened?
And I think this is where we are now: at this inflection point where we suddenly realize that we have been on this exponential curve all along. It's just that we hadn't noticed. And I think that exponential curve has been enabled by the convergence of three tidal waves that have all come together at the same time.
One, which I think is actually the most important – and I'll come back to that in terms of where things are going – is the availability of data.
When I started in the field a long time ago, a large dataset was a couple of hundred samples and you felt fortunate to have that. And now people are training literally on the web – on text and images and increasingly speech and video and so on. And it's that amount of data that allows the kind of models that we're seeing to be trained. Without that, all of the methodological innovation would not actually amount to anything. I know, because we were there at the time.
The second, of course, is enabled by the amount of data: more sophisticated, more complicated models can now actually be trained, and they become increasingly more powerful the more data you feed them, because of how the technology is adapted to dealing with the richness of the data.
And the third, of course, is compute on tap, which we also didn't use to have way back when.
And so those three have come together to enable the sort of incredible acceleration in progress that we're seeing now. As to why generative AI has been the thing that has taken over, I think there are actually two elements to the answer.
The first is just that it's so understandable, so relatable to people. If you're there and you actually have a computer that talks back to you, it feels amazing in ways that other, maybe less visceral, applications don't. And I think that's part of what we're seeing. But it's only part.
The other part is the somewhat surprising observation – and this was a surprise to pretty much everyone – of the extent to which this autocomplete-on-steroids task sort of creates almost a balloon of a model around it, one that's forced to predict in a very realistic way what the next word is going to be.
How much representation that has forced the model to learn – how much world representation, how much reasoning is required to do that task really, really well.
And so I think that is an important lesson learned, and it has certainly translated into how we think about how to solve other problems, in terms of the different pieces that need to come together: defining a really high-value proxy task like autocomplete, combined with enough data to actually make those models learn something that is a meaningful model of a particular domain.
And for those of us, like myself, who work in data-poor domains, you really need to be thinking all the time about where you get more data or how you generate it – we print data in what I do – and how you generate enough to leverage that sort of incredible combination of those methods.
Ina Fried: And I think, Dr. Koller, you hit on one of the things that I think is really a key part of this. Which is: These generative models, what they really are or what they accomplish, is taking very complicated tasks that used to require a specialized form of communication, whatever that was. So, for some people that was Photoshop and editing photos. You had to know how to use a very specific tool. In this case, now you can just describe what it is you want.
OpenAI had its event last week. They announced the ability to build custom GPTs, and I go to developer conferences all the time. It's rare that I go home from them and build anything because in the past they required knowledge I didn't have. I went home from that developer conference and was able to build a couple of really useful tools.
And what it reminded me of is that ChatGPT is, in a sense, a parlour trick. It's the introduction, it's the tutorial, but the power of these models isn't just reasoning against the whole world. It can be incredibly powerful when you're using that reasoning against a specific set of data. You know, for me it was planning my coverage for next year's Olympics. I wanted to get to you, Sabastian, because Salesforce has been partly at the forefront of how businesses make use of this. Businesses obviously bring a different set of concerns.
You know, it's one thing if I go home and use my data: I'm the IT manager, I'm the CEO, I'm all of those things. So I get to decide. It's really tricky when we start getting into businesses using their data. Salesforce has spent the year rolling out a bunch of tools, a lot of them in sort of very limited programmes so you can get a lot of feedback. You know, I've been to all the events where you've announced all the tools. In a lot of the cases, there seems to be a big emphasis on letting these systems generate a lot of information, but not letting them take actions, leaving that to the human.
My suspicion, and you're the chief legal officer so you can probably tell me if I'm right, is that there are two reasons for that. One, it's a best practice – we don't know these systems that well. But also it shifts the liability to the person doing it and not the company providing the software. Tell us about where we are. What have you been able to do in this first year? What are some of the concerns you're still grappling with, from a company that's been at the forefront? Where are things right now in terms of businesses being able to make use of these tools on their data?
Sabastian Niles, President and Chief Legal Officer, Salesforce, Inc: Sure. Thank you and delighted to be here with all of you.
I suppose I'll come at that in a couple of ways. You know, first, and to echo your comments, AI fundamentally is not new. In a way, though, this exponential transformation could not have come at a more important time.
We think that businesses need AI. Now more than ever. Governments, constituents, stakeholders need AI more than ever. And the broader society we think needs it.
Now, what do I mean by need? Well, we need the benefits. And then how do you achieve those benefits while taking into account the relevant risks?
Look at Salesforce: we decided to make a very significant additional investment in AI research around 2014. Working with our customers, we really were viewed as pioneering the AI era around predictive AI. And then we launched Einstein in 2016 and, you know, trillions of predictions, right? What's the next best action? X, Y, Z.
The funny point you're making around what we haven't yet fully deployed – you're referring to autonomous activity, autonomous agents. We do see waves: a predictive era, a generative era, an autonomous era, and then we can talk through what we think are the other sort of future eras.
I would note, you know, when we were sort of laying out what we call our Einstein GPT, one concept was around: could we let people build skills? What are the skills, the actions, the capabilities that a customer, a developer would want to create, using various software or different technologies? More to come on that front.
I think what I would also say, if we step back: at Salesforce, and for many companies and institutions and certainly governments, trust has to come first. We lead with trust; our core values are trust, customer success, innovation, equality and sustainability.
Asking the right questions now will enable us to create the future that we want to have, rather than the future that we may end up with. How do we ensure a trust-first motion – these concepts of responsibility, of ethics, of legality – how do you build them into the use, development and deployment of what we think are exponential opportunities in terms of technology? And build them in early?
We believe that this AI revolution, as it were, is a trust revolution, or it needs to be a trust revolution. And we designed what we call the trust layer, listening to our customers and also to regulators and the like. The idea of the trust layer is: how do we make this safe and effective, taking into account data – data quality, integrity and security – and how do we mitigate and monitor risks, but also fundamentally understand the risks.
I think there have been two items that I've been surprised by as this new phase of generative AI has developed. The first is how quickly we all appreciated and embraced that we need a multistakeholder lens on this new technology. And I felt it was very quick: how do you partner across academia, civil society, the private sector, the public sector, human beings, employees, the workforce? This idea of needing a multistakeholder approach is something we've always thought of in the broader Salesforce ecosystem – that business can be the most profound sort of platform to drive positive change.
The other element concerns governments and regulation. I do believe, by the way, one of the great opportunities for governments will be: How do you improve the constituent experience? So certainly AI is going to transform how businesses interact with their customers, and in time how businesses interact with other businesses. But what about for all of us just as human beings in our communities? How could you improve the constituent experience using AI and technology? I think that's an incredible opportunity for the public sector.
And so the other surprise – again, multistakeholder lenses being important, and you have to bring in diverse and different sets of voices – is that we need collaboration between the private sector and the public sector, so that we can move really fast but thoughtfully.
When you think of the legal landscape more broadly and the ideal role of law, we need to figure out how to use AI to accelerate the velocity of wise decision-making. Not speed for speed's sake.
I think the other concept I'm going to introduce here, that we're all really excited about, is the moonshot opportunities that can arise if we get AI right: diagnoses for various health conditions, enabling educational outcomes. You can go down this list of really exciting ideas.
For myself, and actually in terms of what we're also looking at very much at Salesforce, I want to talk about something that's a little bit more boring, but I think much more important when you think of scale. I want to talk about the problem of mistakes in society. I want to talk about the problems of inequalities, where people don't have access to the best of anything: the best healthcare, the best technology, the best software.
What if we all focus on it through the SDGs? With each of those different SDGs, we believe that should be available to everyone, right? What if we use AI to raise the floor, not just go for the moonshot? Of course, everyone's going to invest and focus on that. What if we think about raising the floor, reducing the impact and the likelihood of mistakes occurring that have horrible impacts on people and never really get written about or focused on? Just thinking about this idea of raising the floor in every potential venue and in every potential region – I think it's a tremendous opportunity.
And the other thing I would raise on this issue of accuracy and reliability and ethics and responsibility: we're all very concerned about hallucinations in these models. The area that I have just not heard enough about is to what extent these hallucinations are a feature rather than a bug. What I mean by that is: is there an element about which we're excited – being able to create new content – which then creates a whole host of new ethical issues? We must have accuracy, or at least transparency where something is just flat-out wrong. Is this hallucination element part of how the technology is able to create and generate new ideas, and how do we deal with that? At Salesforce, when we build the trust layer and other items, we're trying to grapple with some of those opportunities and challenges too.
Ina Fried: You raise a lot of the issues that we want to dive into and we will in a second. But I think before we get into the risks and the benefits, the other element that is important to address – and Dr. Koller talked about this in referring to it as an exponential amount of change – is just the speed.
We have to take a moment to talk about the speed with which all this is progressing, because I would posit that stuff is actually changing faster than many humans are able to adjust, let alone institutions. How do we deal with a technology that is moving this fast?
I mean, AI and generative AI are often compared to other big shifts in technology: We had the computer revolution, we had the internet, we had smartphones. All these were major shifts and changes. But at least in covering each of those technology waves – and I've been fortunate enough to get to cover each of them – the pace of change wasn't actually that great. They were huge shifts, but it took us quite a while to make those shifts. And the technology underlying those shifts wasn't actually changing that fast, in part because it was hardware.
I have been amazed and continue to be amazed over the past year at just how fast this is moving. I will talk to a company about where they're going, and three months later the amount of progress you'll see is dramatic. One example I saw at a recent conference was a slide of the same prompt in Midjourney a year ago, and it looked like something my parents might have – a Chagall, you know, very interesting and artistic, but nothing that any of us would call realistic. And the same prompt today generates something photorealistic. Similarly with other types of generative AI.
My question – and maybe we'll start with you, Jeremy, since you're managing this for the Forum – is: are we capable of keeping pace with this rate of change? Because even if we want to get to all the things that Sabastian's talking about, we have to not – you talk about mistakes – we have to not make mistakes as humans in what we allow and don't allow.
So I'm curious how all of you are thinking about this. If anyone disagrees with that – if the pace of change isn't that fast – I'd love to hear it. But how do we deal with technology that's changing this fast?
Jeremy Jurgens: That's a great question. It's something we've been focused on quite a bit.
Over the last year – actually before it became mainstream with the ChatGPT release – I was playing on Midjourney with my kids. I was like: okay, this is pretty cool, amazing, you don't need Photoshop there.
With the release of ChatGPT last fall, all of a sudden people woke up. And it wasn't that AI was new, right? AI had been ongoing, and it reminded me a bit of how companies approached digital transformation.
There was a period when every company was trying to go through digital transformation, and at some point it became mainstream. They took it for granted. They hired a chief digital officer and then they moved on to other things: ESG, sustainability. In the last year, they've all been focused on AI, and so there is this kind of high-level element there.
But to come back to how you address the speed – I think not only within organizations but across society as well, because there's a lot of fear that comes with this. We hear about the fear of killer robots and disasters and so on: quite exaggerated. I'm actually much more concerned about just exacerbating the imbalances that are already in the system today – around digital safety, around inclusion, around very simple and basic things – before we even have to worry about what's further down the road.
And to address these, the most important element that we focus on is inclusion. We talked about multistakeholder approaches here, and I can give an example of the work that we've done in India. We've been looking at how you can use AI for agricultural innovation for smallholder farmers in India. Over half the population in India works in agriculture – roughly double the population of the US – and 85% of those farmers are managing a little less than two hectares. So relatively small plots, and a lot of them have limited access; a lot don't have phones. So it's like, okay, how do you then leverage the technology for their benefit?
You know, one of the first places we started was actually bringing the farmers into the discussion through farmers' cooperatives. We brought in start-ups. We brought in large Indian companies that are at the forefront of agricultural production. We also brought in foreign multinationals. We brought in governments, we brought in agronomists, etc.
Now, that discussion took time, right? We spent over a year on that. But in the process – you mentioned the word trust – the trust actually came from including a much larger ecosystem in the dialogue. So it wasn't just some company coming out and saying, okay, we've got a wonderful solution for you, it'll solve everything. We actually involved the farmers and everyone throughout the value chain, all the different elements there, first.
Ina Fried: I want to press you a little on this because, on the one hand, the opportunity is incredible. I remember, again, in the cellphone revolution, giving subsistence farmers … typically, they had these decisions, of what crop to plant, of when to plant, of which market to go to – basically, the difference between feeding your family and not. And it made a huge difference when you could tell a subsistence farmer in India or in Africa which market to walk to to sell your crops on a given day. If you know which market is going to offer a higher price, again, that's the difference between feeding your family and not. So having more data and being able to deliver it in a conversational way, where the farmer doesn't have to do much more than ask a question, is amazing.
But I also want to challenge you. You were meeting for more than a year. I suspect that technology shifted in that time. So how do you have those long-term conversations? How do you build that trust and not have the result be: Here is the best thing we could have done a year ago that has less relevance than you would hope today?
Jeremy Jurgens: Maybe I would disagree a little bit on the speed. The speed in the lab is happening quite quickly. The speed in deployment does take more time.
I mean, we exist in very complex political systems, complex economic systems. And just to come back to this question of healthcare, for example: yes, you could get a diagnosis from ChatGPT or something. And the question is, how many of you want your diagnosis drawn from a combination of credit data, Twitter data, X data and so on, versus verified, validated, trusted sources?
We do have agency in this. We have agency in determining when we decide to use the technology, and in making conscious decisions about when we don't and delegate that to someone else. And so I think it's important that, as we navigate something that's happening very quickly, we move slowly enough that we're actually making conscious decisions, and not just allowing ourselves to be pushed along in that rush. And we'll actually get more benefits from that.
If I come back to the Indian farmers, what we saw as we ran the first pilot with 7,000 chilli farmers was improvement in yields and time to market, reduction in the use of fertilisers, which are an expense, especially in the face of the recent energy crisis, and increased profitability for each farmer. We're now looking to extend that to other states, and we didn't say, okay, let's now just roll it out to the whole country, work with the agriculture ministry and 600,000 farmer cooperatives. We're taking a step-by-step approach. And because the underlying framework is developed in a system that recognizes that the technology will continue to evolve, we'll be able to bring in new developments as they occur.
This is also where I think startups play an important role in those discussions, because they're often moving much more quickly than the large companies. But they don't necessarily have the capacity to immediately scale up their benefits there.
So I'm actually optimistic that if we consciously include the larger ecosystem, that we recognize the different individuals' agency, that we can harness the benefits even as we mitigate the risks along the way there.
Gan Kim Yong: I have a slightly different perspective. I think technology has great promise and can solve quite a lot of problems, but in itself it is not a solution. I think it is just an enabler.
So, for example, AI can provide answers to how to improve the crop yield, what kind of fertilizer is best for this type of crop, and which time of the year is the best time to plant the seeds and to harvest. Technology and AI can help in finding solutions and answers to that.
But ensuring that the farmers have a better quality of life involves many other factors, including socio-economic issues and geopolitical issues. And you need to also make sure that, even with the best crop yield, the logistics and the supply chain are in place. So you need to look at different aspects of the problem to be able to solve this issue and to deliver a good outcome for the individuals on the ground.
So I think it is a good tool, but it needs to be used in combination with all the other measures in place to be able to deliver outcomes.
Ina Fried: And you also brought up a point, Jeremy, that I wanted to bring up with the rest of the group, which is: no, I certainly don't want to rely on ChatGPT for major medical decisions. But I think that's one of the fundamental misunderstandings a lot of people have about generative AI: that it means I type into ChatGPT and expect a miracle response, whereas I don't necessarily want that. But I do want my doctor, who went to medical school ten years ago, to have access to a tool that's trained on really good data. Not Reddit, not everything that's been said on the internet, but the medical things. And by the way, spoiler alert, if you read my newsletter tomorrow, this will make more sense. Maybe let's start with you, Dr. Koller, since you're in both AI and healthcare: how powerful is the combination of this interface, these models, with very specialised data, particularly in healthcare?
Daphne Koller: That's a great question, and I think that interplay between a person and the computer is something that we should be leaning more into. There was a recent study that basically showed you could deploy the technology entirely on its own, or you could deploy it in partnership with people. Both of those give legitimate productivity gains. But when you have the partnership between a computer and a person, you actually have a considerably greater efficiency gain. And I think you're also able to avoid some of those issues of loss of trust and hallucinations and some of the other risks that you run into if you just launch the technology on its own.
In the field that I'm in, you oftentimes hear people say, oh, we have the first entirely end-to-end AI system for X, Y and Z. And my question is, A, do you really? What does it even mean? And is that a good thing even if you did have it? So that's a place where I would ask questions.
Now, to the other aspect of what you asked: I think there is clearly a need to create specialized agents that are trained on the right kind of high-quality data – not the Reddits, not X, not whatever – where you need to curate the data and make sure that it's actually representing truth. In some cases, you need to generate data. You need to invest in harmonizing and cleaning your data. Let's just say that garbage in, garbage out was always a thing in machine learning. But I think as you make the AI more powerful, it gets better and better at anchoring on falsehoods and inaccuracies in your data and amplifying them.
And so I think we need to make sure that, as we create what I would call an app ecosystem on top of whatever generative AI models are out there, we empower those apps with the right kind of high-quality, curated data. If you ask me, I would say the value is going to be split quite nicely between the baseline foundation models that create the core representations, whether it's text or images or whatever, and these verticals that are trained in a very thoughtful way, with the right feedback and the right input from humans, on high-quality data in a domain-specific way.
Ina Fried: You bring up the power of the technology and its limitation, which is: it's going to generate incredible insights, but based on the data it has – and a lot of these systems have been trained on the whole internet.

When we talk about the risks, I think a lot are tied to the fact that the internet, the data, was created by humans and is processed by machines, but it has all of our biases. It has all of our proclivities to division, and all the other problems that plague our modern world are well represented on the internet.
How important is it that we now, while we're just in the infancy of this, recognize those limitations? Where does bias rank? Maybe, you know, Sabastian – you're dealing with all these risks, and obviously you've got to prioritize them. Where does bias fit in, where does income inequality fit in, versus, as Jeremy was saying, a lot of talk about the robots taking over? I usually try to suggest people separate those conversations. I think they're both worthy conversations to have. But you can't really talk about these near-term concerns – misinformation, bias, income inequality – when you're saying: oh, but the robots are going to kill us.
So let's have a special time devoted to our fears around the robots killing us. I do think the more we talk about that and the more we recognize it as a possibility, the less likely it's going to be. But I also think we have these other problems that are here. They're present. Misinformation is probably at the top of my list.
But how do you move forward? And then I want to come to Minister Gan on the same question. How do you recognize these risks, work to address them and still move forward? You're not going to get rid of bias, you're not going to get rid of income inequality. How do we move forward? You talked about, you know, lifting the floor. That's certainly one way.
Sabastian Niles: I think it's so important that you raise this, because I think this is exactly the kind of conversation we need to spend more time on and resources on. And focus on these solutions.
What you're highlighting is what we see as a sort of three-pronged lens: trust, responsibility and impact.
You asked me where do these issues of bias, of toxicity, of inequalities fit? They need to rank really high.
So what do you do with it? Well, look at Salesforce: the way we looked at it is, we said we need to make sure we're always really shifting left on this trust type of conversation – which means, as we think of developing our product solutions, partnering with our customers, whether our customers are private sector, governments, non-profits, foundations or what have you, civil society …
Ina Fried: Explain "shift left" because it means one thing to developers. Salesforce is known for shifting left, and that means something totally different. Talk about what you mean in the development sense.
Sabastian Niles: So what I mean is: here's what we can't have happen, or what at Salesforce we say we cannot let happen – and hopefully everyone here says let's not let it happen. That trust is an afterthought, that anti-bias and toxicity issues are an afterthought, that ethics, responsible innovation and grappling with ethical and humane use are an afterthought that you figure out later, after products have been deployed and developed at scale.
Shift left means bringing these topics – bias, toxicity, misuse, the problem of humans misusing the technology – really early into your product development life cycle and your design, whether it's at your organization, your company, your government or your local municipality. Having cross-functional councils, say, as you're thinking about developing technology and solutions, or determining the impacts you're seeking to have – very, very early, when you're creating your product, impact or inequality roadmaps. Throughout the early stage, not at the very end, like: oh, by the way, shouldn't we have started to look at these risks? And then playing catch-up on it.
And so what does that mean for us? We said, okay, well, let's build toxicity filters, right? Let's grapple with these bias-related topics. Let's think about how to bridge the old digital divide, and how to also focus on enablement, upskilling and reskilling very early on.
We've been partnering with many folks across the globe around how to our Trailblazer platform: It's free content that we've just developed for training and enabling people on all sorts of items. And making sure that whether it's at our schools, whether it's our workforces, upskilling and reskilling, people understand what the opportunity is and then can deploy it really fast.
We do think there are technological solutions to some of these issues, but it also requires companies to decide: let's build in trust by design, compliance by design, responsibility by design, rather than, again: oh, someone else will figure it out.
One quick point I also want to raise. You know, you talked about the farmers. I think this raises an even broader and interesting question – and, Minister, this relates to what you were highlighting: what are the domains and contexts where, in order to achieve the benefits for all in an inclusive way that is also thoughtful about risk, we need to have adoption and embrace of the solution? And if you're in a domain or a context or a use case where, to achieve the really important benefits, you must have quality embrace and adoption, then you must do what Jeremy just outlined. Because it's not just someone from on high saying, okay, now it will occur. You have to have the human beings, the organizations, those that may question, that may be sceptical, that may doubt, that may lack trust. That's where you have to work really early and have eternal vigilance around making these solutions – the technology and the like – worthy of the people's trust, worthy of enterprises' trust, worthy of the public sector's trust. And it's really hard because it requires you to grapple with very serious trade-offs.
Ina Fried: And, Minister Gan, maybe that's a good time to come back to you: you are looking at this – AI's here, how does my country take advantage of it? How do I use it to improve the well-being of my citizens and our economic competitiveness? How do I make sure I'm not introducing a new dependency? I think we've all learned over the last five or six years that who you're getting your technology from matters a lot more than we thought back when we thought it was one global economy and it didn't matter geopolitically. Things have shifted quite a bit. How are you thinking about where the greatest opportunities are, and where do I need to be careful?
Gan Kim Yong: First, let me also address the issue of risk. I think with any new technology there is always risk, and particularly with technologies like AI or generative AI, there are risks that we are not familiar with, and therefore there's always this fear that we will not be able to manage them. But I think what we should do is our best to identify the risks and to tackle and mitigate them. Even human decision-making is a risky process. We have human biases; we have our own experiences. And even doctors make mistakes. As a former health minister, I must say that AI actually has great promise in the healthcare arena. For example, it will help to speed up the process of drug discovery.
Ina Fried: I believe Dr Koller is working specifically in that area.
Gan Kim Yong: At a much lower cost, and with fewer mistakes, fewer errors. I think this is very important.
Another area which Singapore is also looking at is how to make use of AI to encourage and promote public health and the things that you should do: the exercise you should be undertaking, the types of food you ought to avoid. And we can make use of AI to customize this for the needs of individuals.
At the same time, it is also a possibility for us to embark on precision medicine, to provide customized medicines depending on the make-up of individuals. So gone are the days when we had generic medicine that was a bit of trial and error – sometimes it works for you, sometimes it doesn't – whereas precision medicine has the potential to be a more targeted therapy.

Singapore is also looking at how we can apply AI and generative AI in the manufacturing context. How can I use artificial intelligence to optimize my manufacturing process and the management of the entire supply chain, particularly in today's world, when we are beginning to operate globally? It is very often beyond human capability to manage global operations; with the assistance of AI, we can do so more efficiently and in a more optimized way.

So I think there are many opportunities for us to develop AI capabilities and applications in use cases. But at the same time, we need to be conscious of the risks involved and have a robust governance system, and the system must evolve as the technology evolves. It cannot stay static. Therefore, Singapore has adopted a sandbox concept where we allow the technology to evolve, with very light-touch regulation. But at the same time, we keep a very close watch on the development of the technology and make adjustments to our rules and regulations as we go along. This way we allow the development and evolution of the technology, while at the same time making sure that its application is as safe as possible.
And if we discover risks, we must be prepared to adjust our rules and regulations and our governance system as quickly as possible. So we do need to have a flexible, practical and nimble governance system for the use of AI and the development of AI.
Daphne Koller: I love the examples that you gave, Minister Gan, about the use of the technology in drug discovery and healthcare. I want to use that to highlight a point that I think often gets lost, because the consumer use cases that we all find so immediate and relatable have taken over people's imaginations of what the technology can do. The examples that you highlighted are use cases where the computer actually does things that a human simply cannot do, in terms of assimilating tremendously large amounts of complex information and finding the patterns – whether it's identifying a new therapeutic intervention that a human would never identify, or finally dissecting the complexity of human biology to the point that we can actually identify which patient is going to benefit from which medication and which is not, which requires assimilating so much information about human biology, human anatomy, omics, genetics and imaging, and really creating a diagnostic and therapeutic path that a human clinician would never be able to get to. And I think those use cases often get lost by the wayside, because people are like: whoa, diffusion – we can create beautiful images and you no longer need Adobe Photoshop. Which is great, but there are use cases that are beyond the realm of human capabilities, and I think those are actually going to be the ones that in many ways are the most impactful of all.
Ina Fried: I agree. And I also think that's where our challenges get pushed even further: how do we as humans manage more and more work that's being done above our capability? Again, there's all this talk about AGI. I don't think any of these technologies are going to replace humans. But to address your point, I do think that already, and increasingly, they will be able to do things humans can do but at a pace and scale we can't, while also factoring in way more things than a human brain can process.

I had the opportunity a few weeks ago to ride in an autonomous vehicle with Toyota, but I also had the opportunity to drive one where it was the human and the car. We were both playing a role, but I wasn't playing the role that we're used to: we're used to direct manipulation. I turn the steering wheel, and all the wheels go that way. In the most advanced system, maybe only the rear wheels do. The computer's capable of saying: well, the human driver wants to go left, I know the conditions, I should turn the left front wheel this way. Anyway, my point being that there are actually a lot more factors a computer in charge can take into account, factors we've simplified away because humans can't handle them.

Sabastian, how are you thinking about how we manage responsibility when the work being done is – I don't want to say above our pay grade – but involves more factors than we can handle? And in moving into a world where, correct me if I'm wrong if anyone disagrees, we are letting computers take action? I think that's the most exciting part. It's the scariest part. Right now, we're using it to inform human decision-making. It's decision support. But it's going to be taking action. How do we prepare for that world?
Sabastian Niles: Now I'm smiling, because at Salesforce we've had for a long time – I think this is public information – a dedicated Salesforce futures capability that we really infuse into all of our discussions around what are all the potential future scenarios that could occur. What do we think is more or less plausible, which are the ones that we want to help support, and which are the risks to mitigate, through the lens of trust, responsibility and impact? So I think there are three interesting areas that we're focused on, and maybe other organizations and society could be focused on them too. There's an old line in the business world, but I think it applies to all organizations: culture eats strategy for breakfast.
So if we think about the pace of change and velocity of change, have you built an organization that is able to respond, has built the capability of innovation and a culture that can be fast, that can take in inputs, that can adjust, that can pivot, can take in new information and then make adjustments rather than have a culture that is totally dependent on the past?
I think number two, regarding the issues that you raised, is how you think about human capital management – and corporate strategy, or government strategy, or non-profit strategy – when you're managing a mix of human beings and what we call autonomous GPT agents. You know, when we launched our Einstein GPT, with its different skills and the like – I'll give you a brief little vision and bring in this issue of how you manage this broader kind of workforce, with hopefully humans who are amplified and augmented with a sort of co-pilot, partnering with AI to do their jobs better, to have their impacts be more meaningful.
And then also dealing with folks that aren't human, right? Autonomous agents.
But here's the interesting dynamic. Right now, we think about how people interact with each other: the idea that there's a human interacting with a human. And then maybe there's a human interacting with some sort of AI on the other side. Again, our view, our strong principle, our acceptable use policy and the like is disclosure. Be transparent. We should all know when we're interacting with an AI. But there will come a time where it's not a human interacting with an AI; it's an autonomous agent interacting with another autonomous agent, and then continuing to interact with autonomous agents. And it's all going back and forth, in any potential domain, certainly in business: whether it's in sales, in R&D, in marketing, in procurement, whether it's in all of this.
We're already, by the way, seeing the first legal contract - in the NDA space, negotiated solely by agents on each side, overseen, of course, by lawyers who hopefully are paying very close attention. But this whole issue of agents interacting with agents – we talk about the concept a human in the loop a lot. I think you raise an important point. Look at the current general sort of state of the world around in these technologies. It's actually not just human in the loop. Humans are making the decisions. With the benefit of the inputs, AI inputs, the analytics and whatnot. But humans are making the decisions.
This is very important, by the way. You could say maybe this is part of trust, responsibility, impact: how do you ensure that it continues to be the case that we have human oversight? But will there come a time where – hopefully, again, with a trust-first, trust-infused, responsibility-first, ethics-first, responsible-innovation type of decision-making apparatus – it is actually AI-led?
What do we have in a world where AI is making capital allocation decisions? And here you ask about the significance of change: if things are happening really fast, what happens with the law? The reality on the ground is that the technology outpaces the law. It outpaces public policy. In order to achieve the future that we want to have, rather than the one that we may get stuck with, we have to have strong law and sound public policy. But because these elements are going to continue to be driven forward by the private sector – and to some degree, perhaps an increasing degree, by the public sector – you also have to embed a voluntary set of actions, hopefully multistakeholder actions: this sense of responsible guidelines, acceptable use and trust, so that even as you protect and safeguard and maybe accelerate innovation, you're accelerating responsible innovation, and people are grappling with these issues really early.
Ina Fried: Jeremy wants to get in a last word, real quickly.
Jeremy Jurgens: I just want to come in because, on some of these questions, we're already doing it, right? If you look at financial services, a lot of you have already used a credit card or a payment app today – there's an AI behind that. If you think of high-frequency trading, we actually have safety mechanisms in place, so that when the agents are talking and basically trading against each other, someone can say: oh, wow, this is actually getting a little bit out of hand. I think what we can do is learn from some of the heavily regulated sectors like healthcare and financial services and say, now knowing what we know, we'll actually need to start applying this in more areas than we have previously.
So because AI is going in and being embedded in a lot more domains, we will need safety switches. We'll need to think about what happens when agents talking to agents get out of control. When do we want someone to say: well, hold on, put a pause on that? Do you want it to see your whole bank account, or just the payment for your Netflix subscription? There's a big difference in when you want human validation in that system.
And the last thing I would just touch on is roles. I recommend everybody look at our report from last September: we don't see jobs being displaced so much as specific skills. So we recommend all of you look inside your organizations and think about which specific skills or roles are most ready for automation. And also have that discussion with your team members, with your customers, with your organization. This will actually help us navigate – learning from knowledge and from both the successes and mistakes of the past – and actually help prepare for that better future.
Ina Fried: Well, we are going to have to leave it there. Thank you so much to all our panellists for the insights. I know we're going to have a fascinating couple of days at the conference.
If you enjoy discussions like this, quick plug: I do a daily free newsletter for Axios. You can go to aiplus.axios.com. I'm sure we're going to be talking more about this, not just here, but again in Davos in January where last year ChatGPT was the talk of everything. I think this year how we deal with a world that's been changed by ChatGPT will be at the top of the discussion.
I would ask everyone in the room to just stay in your seats. Our next panel is going to be on the new age of gen AI governance. It's going to begin in a couple of minutes after we quickly refresh the stage and then after that session, we'll have our morning break around 10.45. Thank you.