Robots have been used in manufacturing for decades, but the rise of physical AI means the machines can do many things that used to be impossible.
In this episode, co-hosted by Kiva Allgood, who heads the Centre for Advanced Manufacturing and Supply Chains at the World Economic Forum, we hear from two experts involved in deploying physical AI in the real world.
Podcast transcript
Tye Brady, Chief Technologist, Amazon Robotics: Physical AI is a relatively new term; it's really the age of AI combining with robotics.
Robin Pomeroy, host, Radio Davos: Welcome to Radio Davos, the podcast from the World Economic Forum that looks at the biggest challenges and how we might solve them.
This week - artificial intelligence is all virtual, right? Ever heard of physical AI?
Kiva Allgood, head of the Centre for Advanced Manufacturing and Supply Chains, World Economic Forum: So it's automation that can think on its feet. It's really taking artificial intelligence into the physical world.
Robin Pomeroy: Robots have been around in factories helping us make goods for decades - so what difference will it make that they are now smart robots?
Daniel Kuepper, Managing Director & Senior Partner, BCG: We have estimated that physical AI can automate 50% more tasks than traditional robots. My expectation is that we move towards largely self-controlled, autonomous factories of the future; by 2050, roughly 70% of all global manufacturing operations will be largely autonomous.
Robin Pomeroy: That’s in the future - but physical AI is happening now. We hear from the head of robotics at Amazon on how robots are now working in among a human workforce. And where did he get his inspiration for what sounds like science fiction?
Tye Brady: I loved R2-D2, I loved Star Wars. I was inspired by awesome robots that were out there.
Robin Pomeroy: And with the Annual Meeting in Davos on the horizon, you can imagine that physical AI will be among the talking points.
Kiva Allgood: I think a lot of the discussion at Davos is how do you unpack the power of AI and its impact on your business, on the planet, on people.
Robin Pomeroy: Follow Radio Davos wherever you get podcasts, or visit wef.ch/podcasts.
I’m Robin Pomeroy at the World Economic Forum, and with this look at smart robots…
Kiva Allgood: Physical AI in action.
Robin Pomeroy: This is Radio Davos
Welcome to Radio Davos where this week we're talking about physical artificial intelligence.
Later in the episode we'll hear from the head of robotics at Amazon and from an expert on the issue at the Boston Consulting Group.
But before we hear those I'm joined by a co-host. She's a colleague of mine. We're joining up by the wonders of the internet between Switzerland and the West Coast of the U.S. It's Kiva Allgood who is head of the Centre for Advanced Manufacturing and Supply Chains at the World Economic Forum. How are you Kiva?
Kiva Allgood: I am wonderful. Yes, we're watching the sun set by you and the sun rise by me. It's awesome.
Robin Pomeroy: Yeah, exactly. It's not quite setting yet, but it's that time of year where we've got a few minutes left of the day I think. Brilliant to have you. Thanks for joining us early in the morning in San Francisco.
This is a question I think I asked both of the people I've already interviewed, the ones I mentioned we'll be hearing later. The question I asked them is: what is physical AI? Because it's a phrase I hadn't heard before, I think, until you mentioned we should probably do a podcast about it.
Kiva Allgood: From my point of view, you can think of physical AI as having, you know, kind of three system types. You've got rules and learning and context, and they often work together, not separately.
So if you take a factory robot on an assembly line, it starts with rule-based precision, right? It's the if-this-then-that logic that ensures that every bolt is tightened exactly right. But this robot isn't blind anymore, that's the beautiful part about physical AI. It's constantly watching and listening through cameras and sensors and if something unexpected happens, say a part is missing or that bolt isn't tight enough or a human steps in its place, it doesn't freeze, it just shifts gears, it changes the context.
So that's context-based reasoning, and it's interpreting what's going on and deciding how to adapt safely and intelligently.
And then the last piece, because remember there's three types, there's rules, learning, and context. Once the situation is resolved, it goes back to its routine, back to just tightening the bolts. So in other words, today's robots don't just follow instructions, they understand what's happening around them and they adjust in real time.
And that's physical AI in action. So it's automation that can think on its feet. It's really taking artificial intelligence into the physical world.
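To make the rules-learning-context pattern Kiva describes concrete, here is a minimal sketch of such a control loop. It is purely illustrative and not any vendor's system: the `sensors` and `robot` objects and their methods (`anomaly`, `describe`, `tighten_next_bolt`, `plan_safe_response`, `execute`) are hypothetical interfaces assumed for the example.

```python
from enum import Enum, auto

class Mode(Enum):
    RULES = auto()      # deterministic if-this-then-that routine
    CONTEXT = auto()    # interpret an unexpected situation and adapt

def control_step(mode, sensors, robot):
    """One tick of a hypothetical physical-AI control loop.

    `sensors` and `robot` are assumed interfaces: sensors.anomaly()
    reports a missing part, a loose bolt or a person stepping in;
    robot.tighten_next_bolt() runs the scripted routine.
    """
    if mode is Mode.RULES:
        if sensors.anomaly():              # cameras/sensors notice something unexpected
            return Mode.CONTEXT            # don't freeze: shift gears
        robot.tighten_next_bolt()          # rule-based precision, every bolt exact
        return Mode.RULES

    # CONTEXT mode: reason about what happened and adapt safely
    situation = sensors.describe()                  # e.g. "person in workspace"
    plan = robot.plan_safe_response(situation)      # learned / contextual policy
    robot.execute(plan)
    if sensors.anomaly():
        return Mode.CONTEXT                         # keep adapting until resolved
    return Mode.RULES                               # resolved: back to tightening bolts
```

The point of the sketch is the mode switch: the scripted routine keeps running until the sensors report something unexpected, and the robot drops into a context-handling mode instead of freezing.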
Robin Pomeroy: You run the Centre for Advanced Manufacturing and Supply Chains, I guess this is the area where we're seeing AI go physical first.
In the future, we'll all have our own humanoid butler. I believe these are already available, actually. I was listening to the Hardfork podcast the other day and they were talking about humanoid butlers. I must admit, it's never appealed to me, but people are working on them and they will be available, even in months, though I don't think they're very good yet from what they were saying. Where physical AI does work very, very well is in the manufacturing sector, the sector that you're in.
Kiva Allgood: Yeah, no, real-world adoption is happening now. We get the benefit of walking through hundreds of factories every single year across the team and, you know, we get to see the best of the best. So it's not speculative. If you think about Amazon and Foxconn, these are lighthouse manufacturers, part of our Global Lighthouse Network, and they're already deploying physical AI at scale with measurable productivity, safety and time-to-value improvements.
I think those last couple of words are super important. Yes, robots are cool, we've all had a toy or two, I used to watch the Jetsons. But, you know, to your point, the butler is nice and interesting, but in an industrial landscape you have to be safe. You have to provide measurable productivity and, you know, real time-to-value improvements. If you aren't doing those things, if you're not helping a company make money, save money or, you know, innovate faster, they're not going to roll out the technology.
So yes, manufacturers, supply chain, logistics, those are all at the forefront of implementing and using AI for good.
Robin Pomeroy: Okay, should we hear the first of our two interviews today? Could you tell us who is Daniel Kuepper?
Kiva Allgood: Yeah, Daniel Kuepper is the Managing Director and Senior Partner at BCG. He's at the forefront of robotics, and he and I get to have a lot of conversations that are really super interesting, especially when you think about the topic you just raised, humanoids. He's out there, he's meeting with all the startups, has a really interesting point of view, but is also super hands-on. So you'll love his insights; super great guy as well.
Robin Pomeroy: So from the Boston Consulting Group, this is Daniel Kuepper, and I started by asking him what I started by asking you, Kiva: to define physical AI.
Daniel Kuepper: For me, in the context of manufacturing, physical AI enables robots to perceive, reason and act in ever-changing environments. And it's all about the convergence of robotic hardware that we have known for many years, vision systems and other sensory devices, and AI models.
So these physical AI systems aren't just traditional dumb robots. They are machines that can learn and adapt, allowing for a pivotal shift in automation.
Robin Pomeroy: So can you give us an idea then? You've obviously got some experience of this. Have you been into factories and seen these things work? How do you describe them to someone who's never seen them?
Daniel Kuepper: Yeah, look, I mean, first of all, I would say there are two major benefits coming along.
Number one, these systems can do substantially more than traditional robots could ever do. We have estimated that physical AI can automate 50% more tasks than traditional robots could do.
And secondly, the automation engineering effort that typically comes along is significantly less than it would have been for traditional robots: 70% fewer engineering hours are needed to deploy such robotic solutions, because setting up the entire system, including the training of the AI, happens in a virtual environment. In short, the implementation is faster and cheaper.
So it's both the capability to do more on the one hand, and doing it faster and at less cost on the other.
Robin Pomeroy: So what kind of industry sectors are we talking about? And is it that sectors with existing robotics will replace those or upgrade them to AI robotics? Or is it that whole new sectors will use it?
Daniel Kuepper: Yes, in part that's correct. Think of applications that we weren't able to automate before, like flexible components, textiles, fabrics for example. Those were tasks that we were unable to automate. Think of tasks with higher variability.
I typically differentiate three types of robotics that historically evolved.
First, rule-based robotics that is highly deterministic where every motion is repetitive and predictable. These are the traditional robots, incredibly fast and accurate, but not necessarily very flexible.
Then the second type of robots, I call them training-based robotics. That's where AI comes into play. Robots that learn tasks using methods like reinforcement learning. They can handle tasks with more variability. For instance, a robot can be trained to perform adaptive kitting or sorting, where it must respond to slight differences in how parts or packages come down the line. The training of those skills can now be done in virtual environments in which the robot can perform the task thousands of times before it touches the factory floor. So you basically have them trained to the maximum precision on complex tasks that are non-repetitive. And all you need for this is compute power.
And then a third one is robots capable of understanding the context. So these systems leverage multimodal large transformer models that we all know from large language models, such as ChatGPT. They will include language and vision to act appropriately in unpredictable situations.
Think of a robot that can take a new instruction in natural language and execute it in a novel environment without explicit reprogramming. In practice, this could mean a robot that, upon being verbally told to reorganise a shelf of mixed items, can reason out a plan and do it.
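The "training-based" type Daniel describes, learning a skill by practising it thousands of times in a virtual environment before it ever touches the floor, can be pictured with a toy reinforcement-learning loop. This is a deliberately simplified sketch under assumed conditions (a made-up `SimEnv` with one unknown calibration offset), not BCG's or any robot maker's method.

```python
import random

CALIBRATION_ERROR = 2.5   # unknown, systematic mm offset the policy must learn

class SimEnv:
    """Hypothetical virtual kitting cell: parts arrive slightly off-nominal."""
    def reset(self):
        self.nominal = random.uniform(0.0, 100.0)                  # where the part "should" be
        self.actual = self.nominal + CALIBRATION_ERROR + random.gauss(0, 0.2)
        return self.nominal                                         # the robot only sees the nominal position

    def step(self, action):
        error = abs(action - self.actual)
        return 1.0 if error < 0.5 else -error                       # reward: grasp within 0.5 mm

def train(episodes=20_000, lr=0.01, sigma=0.5):
    """Toy REINFORCE-style loop: practise thousands of times in simulation,
    learning a correction to apply before the skill ever reaches the floor."""
    env, correction = SimEnv(), 0.0
    for _ in range(episodes):
        nominal = env.reset()
        noise = random.gauss(0.0, sigma)                             # exploration around the current policy
        reward = env.step(nominal + correction + noise)
        correction += lr * reward * noise                            # nudge the policy towards rewarded noise
    return correction

if __name__ == "__main__":
    print("learned correction (true offset 2.5):", round(train(), 2))
```

The virtual environment is cheap to run, so the "thousands of attempts" that would be impractical on a real line cost only compute, which is Daniel's point about compute power being the main ingredient.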
Robin Pomeroy: Do those robots already exist? I mean, they're already doing that?
Daniel Kuepper: Yes, I mean, these context-based robotics, that is a bit of a vision. It exists in laboratories, it exists in academia, and I argue it's not yet at a level of maturity where we can deploy it broadly in industrial settings.
So ultimately the goal is to have multi-purpose, eventually even general purpose robotics here.
Again, many are experimenting with it. They are not yet at the maturity that we need to run in industrial operations.
Robin Pomeroy: So which industries do you expect to be the early adopters of that kind of technology?
Daniel Kuepper: Early adopters will be, as we have seen in the past, electronics companies. It will certainly also be automotive players in general: those companies where the pressure to automate is high, while at the same time there were hurdles in the path that they couldn't overcome and that this new technology now helps them overcome.
Robin Pomeroy: Why is this happening now? What was the breakthrough that meant this could happen now? Why didn't this happen 10 or 20 years ago?
Daniel Kuepper: I would say there are multiple aspects, three or four to mention here in particular. So first of all, there was an incredible acceleration of computing power that we have observed over the last eight years, by a factor of 1,000x, outpacing what we would have expected from Moore's Law by 25 times. Which is impressive.
I vividly recall discussions around 2010 where people were concerned that we wouldn't be able to keep up with Moore's Law. Actually, the opposite has been the case over the last couple of years.
Robin Pomeroy: Remind us, Moore's Law kind of predicts that compute will...
Daniel Kuepper: It predicts an exponential growth of compute power, and this is of course limited by the semiconductor devices that we use for it and by the developments over recent years that we have seen.
That's one of the foundational drivers for many kind of frontier technologies but also for robotics and physical AI.
It enabled a second key driver, which is closing, or narrowing down, the simulation-to-reality gap. That is also super critical for robots, because robots can now be trained extensively in virtual environments, which was not feasible just a few years back.
Advances in simulation, digital twins and synthetic data allow robots to practise tasks thousands of times in a photorealistic virtual environment, then transfer that learning to the real world, and this dramatically lowers the cost, risk and time needed to teach robots new skills, even when real data is scarce. That's a second important foundational trend.
And then a third one is Vision-Language-Action models, as they are often referred to, and they let robots interpret complex commands and unfamiliar situations by drawing on broad training. In practice it enables zero-shot learning: a robot can be given an instruction it's never seen before, like "sort these mixed items by size", and figure out how to do it by reasoning with its AI model.
So that is ultimately also what I was referring to before as the context-based robotics that are enabled by this key underlying tech evolution.
Finally, fourthly, we also have better and cheaper hardware available. So beyond the technical feasibility, the business case for implementing these technologies is also improving.
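The third driver, Vision-Language-Action models and zero-shot instruction following, is easiest to picture as a function that maps an unseen natural-language instruction plus the perceived scene to a sequence of primitive actions. The sketch below fakes the model's reasoning with hard-coded logic for one instruction, purely to show the input/output shape; `DetectedItem` and `plan_from_instruction` are hypothetical names, not any real library's API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedItem:
    name: str
    size_mm: float      # estimated by the vision stack
    position: tuple     # (x, y) on the table

def plan_from_instruction(instruction: str, items: List[DetectedItem]) -> List[str]:
    """Stand-in for the planning step of a vision-language-action model:
    given a natural-language instruction and the current scene, produce a
    sequence of primitive actions. A real VLA model would reason with a
    multimodal transformer; here the 'reasoning' is hard-coded for one
    instruction purely for illustration."""
    if "sort" in instruction and "size" in instruction:
        ordered = sorted(items, key=lambda i: i.size_mm)
        return [f"pick {i.name} at {i.position}, place in slot {n}"
                for n, i in enumerate(ordered)]
    raise NotImplementedError("out of scope for this toy planner")

scene = [DetectedItem("mug", 95, (0.2, 0.4)),
         DetectedItem("battery", 50, (0.5, 0.1)),
         DetectedItem("book", 240, (0.7, 0.6))]
for step in plan_from_instruction("sort these mixed items by size", scene):
    print(step)
```

In the context-based systems Daniel describes, the branching logic above is replaced by a learned model, which is what makes a genuinely new instruction workable without reprogramming.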
Robin Pomeroy: What does it look like when a company adopts this kind of technology? They make a decision, they're going to invest the money, they are going to put physical AI into their factories. What's easy and what's really difficult for them to do that? What are the stumbling blocks they have to overcome?
Daniel Kuepper: I would argue most manufacturers are experimenting more broadly with AI and also with physical AI in the meantime. However, many of them fail. And there are multiple reasons for this.
Number one, there is no clear target picture in place. And for me, that means there is no clear articulation and prioritisation of certain robotics use cases along the value that might be created with this. That's one.
Then, secondly, I would always argue a company needs to draft a holistic target picture; it needs to go beyond just an isolated view on robotics and ensure efficient layouts are in place to make it all work. And there are also virtual workflows that need to be automated to make it all happen.
And with that in place, my expectation is that we move towards largely self-controlled, autonomous factories of the future; by 2050, so by the middle of the century, roughly 70% of all global manufacturing operations will be largely autonomous.
And not to forget, in order to succeed, and that is maybe the second bigger theme in answer to your question, we also need to put some foundations in place. Companies need to go beyond piloting isolated use cases. We really need to implement a technology infrastructure, most notably an IT/OT [Information Technology/Operational Technology] architecture that allows for scaling, and we need to evolve people capabilities all along.
Robin Pomeroy: So let's imagine we're living in this future in 2050, when all the factories are automated. What happens to the jobs? Factories used to mean mass employment, and in that vision it looks like there are actually no people in them anymore.
Daniel Kuepper: I mean, there's a lot of talk around dark factories, or lights-out factories. And to some extent, that is how it will look, but job displacement will mostly happen in direct manufacturing jobs. Robots are increasingly taking over physical routine tasks and therefore freeing up humans for higher-value work.
Some of the jobs will change, and some new roles will also emerge. And higher productivity means that the same number of people can produce higher output, or the same output is produced by a smaller number of manufacturing employees. One of the two we will see.
So there will be a structural transformation that needs to be managed. And eventually and in some sectors and in some regions, it will also come along with significant job cuts, at least in the mid term.
However, shifting the view towards a macroeconomic perspective, when you ask me whether we will see mass unemployment just because of physical AI, my clear answer is no. We haven't seen that after the first industrial revolution. We haven't seen it after the second. And we also haven't seen it after the third industrial revolution. There have been structural challenges that needed to be managed. But when and once physical AI becomes broadly impactful, technology drives down cost, and thus new employment will emerge. So my overall outlook here is very positive.
Robin Pomeroy: On a human level, have you been into factories and watched some of these robots or even in laboratories or universities? And has anything kind of impressed you or have you seen it all now and nothing impresses you anymore? Has there been a moment where you've looked at something happening and thought, wow, that's the future right there?
Daniel Kuepper: No, I'm almost daily in factories and I'm observing how this evolves. And I've done that for the past 20 years or 25 years in robotics now. So the evolution is incredible. It is happening at an incredible pace and it impresses me every single day.
Now, what is also top of mind for many, many people these days is humanoid robots. I have two simple reasons that make me believe it will be difficult to deploy those in manufacturing environments. Number one, it's about system efficiency and the investment required to install such a system in a production environment. There are simpler and more cost-effective ways to organise transport and also manipulation of parts, for example using affordable wheel-based robotic arms. Since factory floors are typically designed without obstacles or stairs for safety reasons, there's usually no practical need for a humanoid form factor.
And the second reason that makes me sceptical about humanoids is the need to separate value creation from logistics; think, for instance, of robotic arms standing idle while a humanoid moves around the factory. In other words, capital costs become unnecessarily high and can be optimised by keeping specialised production robots and logistics robots as distinct systems.
What excites me? Of course, I think we are all excited by general-purpose robots: robots that you can eventually talk to and give a natural-language instruction, and then the robot does what you have asked it to do. So that's definitely exciting. But again, that is currently more at the experimental, research stage.
Robin Pomeroy: And do you think physical AI will first be pioneered and deployed in manufacturing and then go out to do other things? You think about humanoid robots in science fiction, you think about the robot butler or whatever. You've just made the case that there's no need for that in a factory, but people may wish to have it at home. Do you see this as the first stage, with other sectors, including domestic work, coming after manufacturing?
Daniel Kuepper: Yes, so with the exception of humanoid form factors, I believe indeed that manufacturing will be among the first users of advanced robotic systems of physical AI for various reasons.
There's a cost and margin pressure that companies need to address. Secondly, there is a labour and skill shortage; we expect the labour shortage could be as high as 200 million by 2040. So there's a need to find people, or to replace people with robotics.
Thirdly, in order to avoid significant inflation, we need to find ways to become more productive and therefore produce in high-cost countries at similar cost levels as we would in low-cost countries.
Then fourthly, there is the need for consistency, for consistently high product quality. A robot can operate 24/7 without losing focus or getting tired. That's another key dimension.
And take those together with the significantly improved return on investment: the payback period for the most advanced systems in manufacturing applications has decreased from five to seven years to one to three years in the meantime, and it will continue to improve. So all of this taken together, I believe manufacturing will be the first, or one of the first, sectors to implement physical AI.
Robin Pomeroy: Daniel Kuepper, who's a managing director and senior partner at the Boston Consulting Group.
Before we get into the next interview, can you just tell us what work do you do on this and kind of what is the output from it? Because you have these networks of partner companies that are involved in this, but you're publishing reports and research.
Kiva Allgood: So we've been working really diligently with our industry partners, with different governments and ministries, as well as academia, to really outline: okay, what is physical AI, in the kinds of terms and examples that I just gave. You know, it's enabled through breakthroughs around multimodal foundation models, reinforcement learning, tactile sensing, autonomous decision making. We define all those in our reports.
So I think what we try to do is create a framework for people to learn, and that framework then gives them the ability to apply it and drive impact on a factory floor or in logistics. And again, we're able to do a lot of that research collaboratively across all three of those dimensions, so we really get a holistic view of what's real and what's hype, and we try to demystify both.
But I think the report that we just published really gives leaders the ability to understand the technology and the impact that it has. But it also allows us to understand some of the things that are holding it back.
So we explore, again, if you think about physical AI on a manufacturing floor, some of these systems have been around for decades. They're old, you know, not as old as me, but they're old, and it's hard to collect some of the information. So some of the things holding physical AI back are a lack of data security, of 3D spatial intelligence, and of generalised dexterity.
And you're starting to see those things advance as well. So we don't just explore the tech itself, we explore the impact that it has on the business, and we give you a framework to really understand also what's holding it back.
Robin Pomeroy: What's the name of that report?
Kiva Allgood: Physical AI: powering the new age of industrial operations.
Robin Pomeroy: Wonderful, a link to that in the show notes of this episode.
Right, let's go to our second, the second of the two interviews for this episode, we're going to Amazon, which everyone knows what Amazon is, the company you order goods from, and it's handling lots of different packages, sorting them, sending them. Tell us about our next guest.
Kiva Allgood: Yeah, Tye Brady, he's the chief technologist at Amazon Robotics. And I think one key highlight here is that Amazon has really done a lot of work on publishing about the criticality of bringing in automation and robotics, but also about the benefit for the people who work there. So I'm really interested to hear Tye's perspective.
Robin Pomeroy: Well, let's hear Tye. This is me asking him, again, that basic question.
Robin Pomeroy: Tell us about physical AI. Does anyone ever say to you what is physical AI and if they do, how do you answer?
Tye Brady: I do hear that quite a bit. Physical AI is a relatively new term; it's really the age of AI combining with robotics.
So let me start with how I define a robot. A robot is when we blend together sensing, computing and actuation in a physical embodiment to perform a task across a spectrum of intelligent behaviour. The behaviour can be anywhere from a scripted task to very much being aware of your surroundings.
And what physical AI does is it's kind of bringing the mind to the body of robotics. It's allowing those bodies to be more reactive, more adaptive, more integrated into our everyday environment.
Robin Pomeroy: We're all familiar with robots on production lines, putting cars together, things like that. That's been happening probably for about half a century now, I would guess. Those are, I assume, programmed for the movements and the actual things they're doing. They're not thinking for themselves. They are in one place or they're programmed to move to certain places. How are your robots different from that?
Tye Brady: I've been in the industry for a bit and really robotics started out in the automotive industry. You can think of really large, bulky arms that are doing very fixed patterns of welding.
While cool, for sure, the car parts had to be exactly in the right spot. And if it wasn't where it needed to be at a certain time, then the weld wouldn't happen very well.
But boy, have we come a long way now. To give an example, inside of Amazon we have a world-class capability in mobility and in manipulation.
And let's just talk about mobility first. I think a great example is our Proteus robot. What it is, it's a collaborative robot that can move containers of packages from a sort chute to any outbound dock truck. So there's a truck waiting in our outbound dock and we want to go to the right truck. Well, what it does is it picks up this container of goods and it moves it completely around people. It's a safety certified robot that can move containers of goods around people, allowing people to do whatever they want to do. Maybe they're moving their own container of goods. It allows us to bring efficiency to the outbound dock process or actually any dock process.
Very adaptive, very in tune with the environment, and also in tune with the people it's working with, acting as a collaborative agent for people.
Robin Pomeroy: As you were speaking, I just googled it there. I'm looking, I believe, at a picture of the Proteus robot.
Tye Brady: Do you see a little green robot with cute little eyes?
Robin Pomeroy: It is green and it has cute little eyes. In the picture it looks a bit like my vacuum cleaner. I think it's probably a lot bigger than that, because I also see it picking up a big pallet of goods. I mean, what kind of size is this thing?
Tye Brady: It's bigger than your Roomba. It's probably a couple feet wide by a couple of feet deep and maybe a foot tall, something like that.
Robin Pomeroy: So how does it work? It is picking up something from position A and taking it to position B to get it on a truck to be delivered. How does it know what it's doing?
Tye Brady: Yeah, so it has a map of the environment, or what the static environment will be. But of course, it's a very dynamic environment. People are moving in that dynamic environment, we put different containers in different spots at different times, that's completely dynamic as well. And what it's doing is it's sensing its environment using a suite of sensors, understanding most importantly where people are and what's the trajectory of people so that it can route its course efficiently to the dock door.
But what's really unique about Proteus, and something I'm very proud of, because the many women and men here built what I would say is an amazing collaborative robot, is this: let's say we're at a cocktail party and Proteus was one of our guests, and it had to go from one side of the room to the other. The state of the art today is, if there's a bunch of people in front of you, you just stop and wait for all the people to get out of the zone, then finally speed yourself back up and do the job.
And Proteus actually does this a little differently. What Proteus does is it looks for kind of the direction of people, of where they're walking. And it'll slowly make its way from one corner of that cocktail party to the other in a very safe manner.
So what it's doing is that those eyes are telling people: this is what my intent is. I want to go right. I want to go left. I'm stopped right now. Maybe it can beep its horn at you, make a little sound, blink its eyes, ask for help. It's constantly just trying to make that path across.
And it's really neat to see in person and it's really incredible to see this at scale when you have a couple of hundred of these running around in collaboration with our employees.
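A rough way to picture the behaviour Tye describes, moving with the flow of people rather than freezing, is a motion chooser that extrapolates where nearby people are heading and yields sideways when a predicted position comes too close. This is a toy sketch with assumed data structures, not Amazon's navigation stack.

```python
import math

def predict(person, horizon_s=2.0):
    """Extrapolate a person's (x, y) position along their current heading."""
    (x, y), (vx, vy) = person["pos"], person["vel"]
    return (x + vx * horizon_s, y + vy * horizon_s)

def choose_motion(robot_pos, goal, people, clearance_m=1.2, speed=1.0):
    """Pick a velocity command: head for the goal, but slow down and edge
    sideways, instead of freezing, when a person's predicted position comes
    within the clearance radius of the robot."""
    gx, gy = goal[0] - robot_pos[0], goal[1] - robot_pos[1]
    dist = math.hypot(gx, gy) or 1e-6
    heading = (gx / dist, gy / dist)

    for person in people:
        px, py = predict(person)
        if math.hypot(px - robot_pos[0], py - robot_pos[1]) < clearance_m:
            # Conflict ahead: creep forward at reduced speed and drift
            # perpendicular to the person's direction of travel.
            side = (-person["vel"][1], person["vel"][0])
            norm = math.hypot(*side) or 1e-6
            return (0.3 * heading[0] + 0.3 * side[0] / norm,
                    0.3 * heading[1] + 0.3 * side[1] / norm), "yielding"
    return (speed * heading[0], speed * heading[1]), "cruising"

cmd, state = choose_motion((0, 0), (10, 0),
                           [{"pos": (1.0, 0.2), "vel": (0.0, -0.3)}])
print(state, cmd)
```

A real system layers safety certification, mapping and many more sensor inputs on top, but the core idea of predicting people's trajectories and choosing a polite path rather than a full stop is the same.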
Robin Pomeroy: Presumably, robots can work through the night and in darkness, there's things machines can do that humans can't do as well. Is that a factor for you at this point?
Tye Brady: 100%. We want to eliminate every menial, mundane, and repetitive task inside of our fulfilment processes. Bar none. If I can eliminate all those with robotics, we will and we're working towards that.
We also find that when it comes to especially the repetitive jobs, if we can have a robot do that job, that's better for people.
As a matter of fact, over the past five years, since we've really gone heavy on our robotics, we have reduced our recordable injury rate by better than 34%. It really does help. Imagine that instead of having to pick up a 50-pound box and move it from point A to point B, you can have a robot do that. That's the goal we are after: we want to blend and complement human capability with our machines.
Robin Pomeroy: I've heard about a robot also called Sparrow. Is that something you're involved in? What can you tell me about that one?
Tye Brady: Yeah, so Sparrow, so you're getting into our aviary set of robots here with Sparrow and Robin and Cardinal.
Sparrow is a picking robot, right? Our strategy is actually very straightforward. We aim to have a world-class capability in six areas: mobility and manipulation and storage and sortation and perception and also in pack.
And Sparrow represents one of our manipulation robots. What it does is it can pick up what we call an ASIN, a SKU, an object, right, a particular object type. And it can move and consolidate those into bins, right? A very straightforward task. But when you have more than 400 million-plus different objects in your inventory, this can be difficult, right? So what Sparrow does is it's constantly doing a consolidation task. So you can imagine we have two bins and they're half full. The best case, of course, is that we have one bin completely full and the other bin empty, because then you can put more goods into the empty bin. And what Sparrow is doing is it's actually consolidating and bringing those items together to make more full bins, picking up a huge variety of objects.
It's quite the task. It's at a world-class level given the variety of objects that we have to pick up.
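The consolidation task itself, turning two half-full bins into one full bin and one empty bin, can be sketched as a simple greedy pass over bin fill levels. This is a hypothetical illustration of the logic only; the real system reasons about physical items and grasps, not numbers.

```python
def consolidate(bins, capacity=100):
    """Greedy sketch of a consolidation pass: pour the emptiest bin into the
    fullest bin that can absorb it, so two half-full bins become one full bin
    and one empty bin ready for new inventory.
    `bins` is a list of fill levels in the same units as `capacity`."""
    bins = sorted(b for b in bins if b > 0)   # ignore bins that are already empty
    emptied = 0
    while len(bins) >= 2:
        smallest = bins[0]
        # fullest bin (other than the smallest itself) that still has room
        target = next((i for i in range(len(bins) - 1, 0, -1)
                       if bins[i] + smallest <= capacity), None)
        if target is None:
            break                              # nothing can absorb it: done
        bins[target] += smallest
        bins.pop(0)                            # that bin is now empty and freed up
        emptied += 1
        bins.sort()
    return bins, emptied

# Five partially filled bins collapse into fewer, fuller bins plus freed-up empties.
print(consolidate([50, 50, 30, 70, 20]))       # -> ([50, 80, 90], 2)
```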
Robin Pomeroy: Can it see? It's got cameras looking at things or are there other clever ways that it can perceive?
Tye Brady: Yeah, there's a lot of cleverness that our team has put in. So the first is that we have an end effector that Amazon developed, an end effector with various suction cups that we can deploy as an array on demand, depending on the size and the shape of the object, in order to help pick up that object. Once we pick the object up, we're also perceiving the object. If we haven't seen that object before, we actually do a scan and we get a model of the object, and we learn the technique of how we picked it up, and that gets propagated to all the other picking robots that we have inside of Amazon.
We're then trying to identify that object. Of course, if there's a barcode, we have scanners that can pick up that barcode pretty quickly. But not everything works that way: like when you go to the grocery store and you're scanning your objects across the scanner, sometimes it gets a little wonky because you can't quite find the barcode. So we have other techniques as well, what we call multimodal ID, where we'll actually look at the size and the shape of the object, we'll read text, we'll look at colour, in addition to trying to find that barcode, in order to identify the object.
By the way, that's AI. We're using tonnes and tonnes of AI on top of that in order to perceive and to identify these objects.
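The multimodal-ID fallback Tye outlines, barcode first, then size, printed text and colour, amounts to a cascading lookup. The sketch below uses a tiny dictionary as a stand-in catalogue with made-up entries; a production system would match against learned embeddings rather than exact keys.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    barcode: Optional[str]      # None if the scanner couldn't find one
    size_mm: float
    visible_text: str
    dominant_colour: str

# Hypothetical stand-in for the item catalogue (the real one holds 400 million+ items).
CATALOGUE = {
    "0000012345678": "paper towels, 6-pack",
    ("240", "pocket atlas", "blue"): "pocket atlas, hardcover",
}

def identify(obs: Observation) -> str:
    """Cascading identification sketch: trust the barcode when it reads,
    otherwise fall back to a multimodal signature of size, printed text
    and colour."""
    if obs.barcode and obs.barcode in CATALOGUE:
        return CATALOGUE[obs.barcode]               # fast path: barcode found
    signature = (str(round(obs.size_mm)),           # coarse size bucket in mm
                 obs.visible_text.lower(),          # text read off the item
                 obs.dominant_colour)
    return CATALOGUE.get(signature, "unknown: route to a person")

print(identify(Observation(None, 240.2, "Pocket Atlas", "blue")))
```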
Robin Pomeroy: Talk us through a bit of the history of this. When did you have breakthrough moments? I'm sure you had moments of frustration and failure, and moving on to the next thing. Were there any moments where you kind of cracked it? What were the early stages?
Tye Brady: I could take that question in two ways. Let me just start with, sorry, it's going to be an old-man alert, because it seems like it's taken 40-plus years to get here. So cracking that secret sauce in robotics has taken a long time. We're not there by any stretch; there's a long way to go.
But when I started in the 70s, I started programming and I fell in love with computers. I saw the power of computers and what computing can do. In the 80s, the personal computer revolution came around, and I got pretty good at programming and understanding how computers work. I started to get into hardware, into how we can task computers to do interesting things, particularly in the physical world. I was very fascinated by that.
I loved R2-D2, I loved Star Wars. I was inspired by awesome robots that were out there.
Then the 90s came around and I was at MIT, and that's when the internet came. And the internet allowed us to share our learnings with each other and to be better roboticists by building a community, particularly around software, around how we can task these computers to do interesting things.
So okay, so now we're starting to get the semblance of, I would say, robotics to do useful tasks, right?
In the 2000s, of course, we have the kind of the birth of the autonomous car and sensors and actuators are now coming onto the scene at a much lower cost point than they've ever been. So now the body is actually getting a little bit better and now we can, I'd say, have a little bit more utility, but the adaptive nature of our robots was not quite there.
Enter the '10s where we're seeing this rise of AI and machine learning, of course, is becoming very mature and things start to happen.
I joined Amazon in 2015 and when I joined, we just really had our mobile fleet, what we call Hercules, and this is the world's first goods to person fulfilment system. And what that robot would do is it'd simply move these pods of goods on demand to a station. And it was revolutionary, right? It was revolutionary for two reasons.
One is that it changed an entire process path. Instead of a person going out with a cart, picking items into the cart, going up the next long aisle, picking items and walking many, many long distances to fulfil the goods, the goods can actually come to the person, right? So that was very efficient in terms of time and in terms of distance, and we can also store about 40% more goods in the same footprint as compared to a manual building.
So that was the one part, the cool part of robotics, which is the utility of it.
And the second is that there was a drive unit that could carry these goods on demand very efficiently in a structured field. And the robotics learnings that we had, building that body up in the structured field, which means it's behind fences and it can move goods, really came along over 10 years.
In the last five years, we've seen just incredible growth inside of robotics. Not only are we able to move goods, but we can move them in a collaborative environment where we can eliminate the fences. Not only can we identify goods, we can identify many, many goods that we had never seen before and identify new grasp strategies thanks to generative AI techniques.
And even in the structured fields where we're moving goods: the algorithm, or algorithms, that we'd used for 10-plus years, just last year, using generative AI, we fed in our data. Of course, data is the fuel for any AI system. We fed that data into a generative AI algorithm maker, if you want to think of it that way, and out popped what we call DeepFleet. We just announced this, and it's 10% more efficient, so our drive times have been reduced by 10%. So 10 years' worth of effort, we pop it through our generative AI system, and now we have this operational AI system controlling our fleet. Really, really incredible.
Now we're here in the 20s. I think we're really starting to get the application right, now that we have, I'd say, the mind and the body together, kind of the robust application, and e-commerce in particular is set up well for this. And the growth that we've seen inside of Amazon has been incredible, right? We've created hundreds of thousands of new jobs, we've created new job types, and we have really expanded our portfolio of robotics to help our employees deliver on their tasks.
Robin Pomeroy: It's so complex, what you've built there: the hardware, the software, the AI, which casual users of AI know has its quirks and does things unexpectedly. That must have happened to you along the way. Do you remember any moments, or was it just the nuts and bolts that weren't working? Where were the bits where you were banging your head against the wall saying, we're never going to solve this problem?
Tye Brady: Yeah, well, it's constant, constant iteration, right?
So, the manipulation problem is a really, really hard problem. It's almost a holy grail inside of robotics. In Amazon fashion, we don't just take on a couple hundred different parts or even a thousand or even 10,000. We're taking on millions and millions of different objects. And there's so much that you take for granted.
For example, if you just picked up, you know, a cup of water, there's so much that you just take for granted, like how hard can it be? People have a model of what this is, a cup of water, of what the weight should be, how hard I should grip it, how many fingers I should be using, whether it's slipping out of my hand or not. And to get that in a robotic system is actually a really, really hard job.
So we started that a good 10 years ago, really, on our manipulation strategy: could we pick up objects in a way that doesn't damage the object, in a way that we can identify the object quickly, and in a way that is also learnable by our other machines?
I'll tell you, when we first started, it was a very hard job. And the ironic part is that one of the hardest things to pick up, and it still continues to be, is books. And that's how Amazon actually started. We started with books.
Robin Pomeroy: Why are they so difficult?
Tye Brady: Well, if a book has a sleeve around it, a little protective sleeve, and you pick it up by the cover, you can see the sleeve is just going to rip off and the book will fall. Now we've damaged the corner. We're not going to sell that, of course; we want everything to be perfect for our customer. When you pick up a book by the corners, it's either on the binding side or the paper side, and if you pick it up by the paper, you could damage the paper. You don't want to do any damage on top of it.
It's really hard. So now we think about grasping strategies, manipulation strategies that do not involve suction, right? So pinchers in particular, and tactile feedback.
So we actually just announced a system we call Vulcan. That's our first robot that has a sense of touch. And touch is a really, really awesome feature to bring to robotics. We take it for granted as people, but the capability, the tactile sensors, just hasn't been there. So when we first started, it was like, how can you even do this without a sense of touch? We were definitely banging our heads against the wall. But what we did is we helped invent a sense of touch in robotics.
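One common way tactile feedback is used in grasping, and a plausible reading of why touch matters here, is closed-loop force control: squeeze gently, and only increase grip while the item is still slipping. The sketch below assumes hypothetical `sensor` and `gripper` interfaces and is not a description of Vulcan itself.

```python
def grip_with_touch(sensor, gripper, max_force_n=40.0, step_n=1.0):
    """Sketch of force control using tactile feedback: squeeze just hard
    enough that the item stops slipping, rather than applying a fixed,
    potentially damaging force. `sensor.slipping()` and `gripper.squeeze()`
    are assumed interfaces, not any real robot's API."""
    force = 2.0                          # start with a gentle pinch
    gripper.squeeze(force)
    while sensor.slipping() and force < max_force_n:
        force += step_n                  # increase grip in small increments
        gripper.squeeze(force)
    if sensor.slipping():
        raise RuntimeError("cannot grip safely: hand off to a person")
    return force                         # hold at the minimum sufficient force
```

For an item like a book in a loose sleeve, this kind of loop is what lets a pincher hold firmly enough to lift without crushing a cover or tearing paper.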
Robin Pomeroy: I haven't asked you about the kind of the human computer interface. How do humans communicate with these and how might that happen in the future? Could I walk up to a robot and say, go and pick up that thing over there? Or is there someone in a control room typing something on a keyboard?
Tye Brady: Yeah, no, it's such a great question. So there's kind of where we are today and where we want to go. So let's talk about where we are today. We have programmed our robots to be in the human environment and to be very much aware of what people want to do. People are the lead dancer; we follow what they want to do. Our machines kind of clue into the human cues that someone will give.
For example, if you're walking with your back to the robot and you're going to take a left, we want to know that you're going to take a left so that we can go right in that volume. If you're coming a little too close, we're going to give you maybe some message. Maybe it's a glance of the eyes that they have. Maybe it's a blink of the lights. Maybe it's a sound that comes to signal the intent from the machine to the person, to say, hey, I need a change to happen.
But it's also, I think, imperative that we design our machines to be more understandable by people, right? So that people can really understand: what's the intent of this machine? How can I use it in a manner that benefits me? And how can it be a partner to me? And I think the work that we've done inside of Amazon speaks for itself in that respect: we work relentlessly to simplify our machines.
We have a tendency inside of robotics, for sure, to over-complicate things, to make things harder than they need to be. It's kind of like the Mark Twain line: if I had more time, I would have written you a shorter letter. So what we do, at least inside of Amazon Robotics, is convince ourselves that it works in the lab, and then we gather our employee feedback on how it could work better, all the way from the screen design, to how it looks, to how it operates and how it behaves in the actual environment. We take that back in and we iterate our designs to make them simpler to use, because if it's simple to use and it has true utility, then you use it more. And when you use it more, we get a better design, we get more productivity, and we continually enhance the environment for employees in terms of safety.
So that's kind of where we are. Now, where are we going? Absolutely, anything that is natural for you, whether it's going to be voice, and we have amazing devices inside of Amazon with Alexa that can bring in voice commands at will to understand the intent of a person. Could you take this box and move it over to the corner for me while I go do this other task? There's a lot in that statement. Which box are we talking about? Which corner do we want to go to? But it's amazing how people naturally understand the context in which that directive was given.
So I can see us using our AI systems to understand more of the context and the environment in which that command was given, in order to do the task for a person, for sure. Even more body language as well: you can point, you can use your hands to show where you want things to be. You could teach as well: this is how I want the task to be done.
These are all things that we're looking at inside of Amazon Robotics and are very bullish on because they extend human capability.
Robin Pomeroy: Tye Brady is the chief technologist at Amazon Robotics.
Kiva, Tye was talking there about robots that are perfectly comfortable in an environment where there are people walking around on the shop floor, the warehouse floor, and these robots can navigate their way around them. I believe your report looks into that kind of technology.
Kiva Allgood: Indeed it does, you know, super exciting times in the sense that we're crossing a threshold where intelligence isn't confined to screens and servers anymore, right? It's actually stepping into the real world.
So physical AI is turning machinery into collaborators, so you're starting to see that human-machine connection, and that's super important. It's now intelligent enough, as I mentioned earlier, to really see its surroundings, watching and listening with cameras and sensors, and to have context-based reasoning and interpret what's going on. And that interpretation allows it to really collaborate with the human in a more effective way.
Robin Pomeroy: You used the phrase, I think, unstructured environment. Is that the buzz phrase for this?
Kiva Allgood: Yeah, fundamentally what that means is the robot isn't necessarily anchored to the ground, or things can move in front of it. We've had automated guided vehicles in factories for quite a long time; they typically follow a certain path. It's similar to a Waymo now, which has enough perception, and Waymo is an autonomous vehicle, so if you haven't...
Robin Pomeroy: Yes, for those of us who don't live in San Francisco, yeah.
Kiva Allgood: Yes. Fantastic. Highly recommended. It's kind of like a highlight of coming to the city. But you know, there's no driver and they actually do a better job than humans do because they're fully paying attention.
And in this case, robots are the same. So in an unstructured environment, what we really talk about in the report is the technology shift: you now have advances in perception, decision making and dexterity that allow the robots to work around the operators, and they operate in variable, like we talked about, unstructured environments, which means that things can change. You can have a pallet someplace where you didn't have a pallet before. You can have a forklift that's moved, and now it's in the way of the robot. Or high-mix environments, where you've got a combination of all of those things.
An unstructured environment can also be outside, so you're thinking about mines and places where you've got lots of different types of equipment.
Robin Pomeroy: This all sounds marvellous. Of course, one potential downside is the disappearance of jobs. People used to be employed to do this. So I'm sure you're thinking about that a lot in the work you're doing. Have you drawn any conclusions?
Kiva Allgood: No conclusions, but what I do think is there's the hype and there's, you know, the reality.
The reality is that the technology is helping to create new types of jobs.
We have a huge jobs gap today between the work available in manufacturing and logistics and the people who want to do it. Depending upon the report you'd like to read, if you fast-forward to 2030 that gap is predicted to be over 10 million.
So with automation, the human side is really important. But it's creating new shifts in those roles, and in the perception of those roles. So you now have robot supervisors. You have AI trainers. You have system optimizers. People are getting to know and engage with the technology in a new way.
So the technology demands an investment in skills, an investment in safety, and in human-machine collaboration.
So what we've also found through our Global Lighthouse Network...
Robin Pomeroy: Remind us, what is the Global Lighthouse Network?
Kiva Allgood: The Global Lighthouse Network is a peer community that really celebrates and awards the best manufacturing sites in the world; it's the premier global community for the best of the best in manufacturing.
What we've learned through the last eight years of use cases and deep study is that when you invest in the people and the skilling of those people, as much as you do the technology, your productivity improves.
So it's not just about getting the latest and greatest technology on the shop floor or in the logistics or the warehouse. It's also making sure that you're investing in the human side. You're creating depth and breadth in the humans that are helping those robots learn.
Robin Pomeroy: I guess the next time we'll meet, Kiva, will be in Davos at the Annual Meeting of the World Economic Forum.
Last year everyone was talking about AI. I think that's been the case for two or three years now. What's going to be the AI topic of conversation that everyone will be talking about in January, do you think?
Kiva Allgood: Well, I think we go through these cycles on all topics. I've been in the space for a long time. It used to be IoT, Internet of Things, and it was the industrial Internet of things. Now all of those things are merging.
I do think AI is the new user interface.
I think a lot of the discussion at Davos is how do you unpack the power of AI and its impact on your business, on the planet, on people.
Because the technology is changing so quickly, I think a lot of the dialogue at Davos this year is going to be about how you unlock that new intelligent interface. It's not just a user interface, it's an intelligent interface. And with that intelligent interface, what are the insights that you can gain?
Robin Pomeroy: There's going to be a lot to talk about in Davos. There'll be lots going on. Is there anything in particular that you will be launching that people should be paying attention to?
Kiva Allgood: Yes, we're very excited about our new Lighthouse Intelligence platform. So this is an AI powered and driven platform that is capturing all the insights from the last eight years of our Global Lighthouse Network.
We've been able to really take all the data and understand the impacts across five different dimensions. So this new tool really allows people to go in and explore all those use cases. If you're trying to drive sustainability, you can look at ways to save money and drive productivity in sustainability, in talent, and in new product introduction. And the platform allows you to do that from behind a screen.
Robin Pomeroy: Kiva Allgood, you're Managing Director of the World Economic Forum and Head of the Centre for Advanced Manufacturing and Supply Chains. Thanks so much for joining us to talk about physical AI.
Kiva Allgood: Thank you, Robin, and I always love our conversation.
Robin Pomeroy: Me too. See you in Davos.
Kiva Allgood: You bet. See you there.
Robin Pomeroy: Thanks to Kiva Allgood and thanks to our guests Daniel Kuepper of BCG and Tye Brady of Amazon Robotics.
You can find the report Kiva mentioned. It's called Physical AI: Powering the New Age of Industrial Operations. Just search for that or look at the link in the show notes.
This episode of Radio Davos was written and presented by me, Robin Pomeroy, with editing by Jere Johansson and studio production by Taz Kelleher.
You can find all our podcasts at wef.ch/podcasts, or you can just search Radio Davos wherever you get your podcasts. We'll be back next week, but for now, thanks to you for listening and goodbye.






