Making sense of AI
Around 2017, there was a great deal of hype around autonomous driving. Taken at face value, it suggested that autonomous driving would be a reality by now. Clearly, it is not, and Alex Kendall claims to have known that all along. Still, that did not stop him from setting out then, and he is still working on it today.
Kendall is the cofounder and CEO of Wayve, a company founded in 2017 to tackle the challenge of autonomous driving based on a deep learning approach. Today, Wayve announced a partnership with Microsoft to leverage the supercomputing infrastructure needed to support the development of AI-based models for autonomous vehicles on a global scale.
Wayve grew out of world-class deep learning research at the University of Cambridge; the team built its first robot in a house-office garage. Kendall himself has a background in AI, with a Ph.D. in deep learning.
Kendall describes himself as being passionate about building intelligent machines that can really add a lot of value to our lives. For him, he went on to add, that involves building embodied intelligence. That’s not really how most people think about autonomous vehicles. Kendall qualified his statement as follows:
“Co-designing the hardware and software to build systems that have the ability to reason in complex environments — and I think there’s no better place to start than autonomous driving. Autonomous driving is going to be the first widespread example of intelligent machines that really transform the cities we live in”, he said.
That serves well as a gentle introduction to Wayve’s approach, which the company dubs AV2.0 (Autonomous Vehicles 2.0) as opposed to AV1.0, the term Wayve uses to refer to “classical autonomous drivers”.
As argued by the Wayve team in an arXiv publication, AVs today are designed around the same deliberative robotics architecture, an expansion of the sense-plan-act paradigm. The problem is broken down into a few key components: sensing, scene representation, planning and control.
Wayve’s team believes that most of these components are mature enough for driving, judging by their success on the respective benchmarks. While further gains may be had, none of these areas offers the step change needed to unlock scalable driving. What’s needed to achieve an autonomous future, according to Wayve, is to move away from that decomposition and solve driving with data.
The team drew inspiration from examples such as natural language processing with GPT-3 and games with MuZero and AlphaStar. In those cases, the task was sufficiently complex that hand-crafted abstraction layers and features could not adequately model the problem. Driving is similarly complex, which is why Wayve argues it requires a similar solution.
The solution Wayve is pursuing is a holistically learned driver — in other words, an end-to-end deep learning-based approach to autonomous driving. When asked to make a point-to-point comparison between the AV2.0 and AV1.0 architectures, Kendall responded that AV2.0 has just one component, so such a comparison is not meaningful.
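To make the architectural contrast concrete, here is a toy sketch — not Wayve’s code; every function and value is a hypothetical stand-in — of a modular sense-plan-act pipeline versus a single end-to-end learned mapping from camera pixels to a control output:

```python
# Toy contrast between an AV1.0-style modular pipeline and an
# AV2.0-style end-to-end learned driver. All functions here are
# hypothetical illustrations, not Wayve's implementation.

def detect_objects(frame):
    """Sensing: pick out 'obstacle' pixels above a threshold."""
    return [px for px in frame if px > 0.5]

def build_scene(objects):
    """Scene representation: a hand-designed intermediate format."""
    return {"obstacles": len(objects)}

def plan_path(scene):
    """Planning: a hand-crafted rule producing a steering command."""
    return 0.0 if scene["obstacles"] == 0 else -0.3

def av1_pipeline(frame):
    """Classical stack: fixed interfaces between separate stages."""
    return plan_path(build_scene(detect_objects(frame)))

def av2_driver(frame, weights):
    """End-to-end: one learned function from pixels to control.
    A toy linear model stands in for a deep network here."""
    return sum(w * px for w, px in zip(weights, frame))

frame = [0.1, 0.9, 0.2, 0.8]
print(av1_pipeline(frame))                         # steering from the pipeline
print(av2_driver(frame, [0.0, -0.2, 0.0, -0.2]))  # steering from the learned map
```

The point of the sketch is the shape of the system, not the driving logic: in the AV1.0 style, engineers design each stage and its interfaces; in the AV2.0 style, a single model is trained to map sensor input directly to control, and the intermediate abstractions are learned rather than specified.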
That’s all fine and well, but how does it work in the real world and where does Microsoft come in? According to Kendall, he could see in 2017 that this was the way to go, even if he knew that they were not quite there yet. Fast-forward to 2022 and the setup is now there for everything to fall into place.
“You need to build for what’s possible in five years’ time, for what’s possible in the future, to be in a position to really pioneer this,” Kendall said.
Wayve raised a $200 million series B backed by a prominent group of global financial and strategic investors — including Microsoft — in January 2022. That brought the company’s total funding to $260 million. The company is headquartered in London, with a small office in the San Francisco Bay Area as well. Wayve’s team currently includes just over 150 people.
Machine learning at scale is 90% an engineering challenge and 10% tinkering with algorithms, Kendall added. Besides doing in-house research, much of which is published in top scientific venues, a lot of the effort goes into things such as benchmarks, data infrastructure, visualization systems, simulations and compute.
When it comes to data, Wayve focuses on leveraging video collected via cameras in real-time, with radar data having a complementary role. To train its deep learning models, Wayve collects more than one terabyte of data per minute, Kendall claimed. The company has been working with Microsoft and its Azure cloud since 2020.
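Taking Kendall’s figure at face value, a quick back-of-the-envelope calculation shows the scale involved. The 1 TB/minute rate is his claim; the assumption of continuous collection is ours, and the rest is arithmetic:

```python
# Back-of-the-envelope scale of the claimed collection rate,
# assuming continuous collection at the quoted 1 TB/minute.
tb_per_minute = 1
tb_per_hour = tb_per_minute * 60      # 60 TB per hour
tb_per_day = tb_per_hour * 24         # 1,440 TB per day
pb_per_day = tb_per_day / 1000        # 1.44 PB per day
print(tb_per_hour, tb_per_day, pb_per_day)
```

At that rate, a single day of continuous collection reaches petabyte scale — which is consistent with Kendall’s point below about moving from kilobytes of text to petabytes of video.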
Since then, the team has seen an absolute acceleration in performance at a higher scale of training, Kendall said: more data, more compute, more parameters in machine learning models. This, he went on to add, is really starting to push the boundaries of what is possible for any commercial cloud offering today.
“If you think about a lot of the supercomputing technologies that are developed today, a lot of them are around large-scale text or natural language processing. But moving from kilobytes of text data to petabytes or exabytes of video data is really what’s required to make mobile robotics or autonomous driving work at scale with machine learning,” Kendall explained. “That’s what Wayve and Microsoft are setting out to build.”
The partnership between the two companies goes beyond the typical scenario in which commercial application providers partner with cloud vendors, according to Kendall. In that scenario, cloud vendors usually provide free or discounted access to their infrastructure for their partners. What will happen here, Kendall said, is that Wayve will work with Microsoft to push the boundaries of what’s possible in Azure.
That sounds like a win-win: Wayve gets to help develop the infrastructure it needs, and Microsoft gets to work closely on a use case that tests and pushes Azure forward. It also fits Microsoft’s strategy of hedging its bets on autonomous vehicles, and other high-end applications may benefit as well.
As for real-world deployment, Wayve has a plan and some successes to show for it. The plan focuses on commercial fleets. As Kendall explained, commercial fleets have an extremely large coverage of the world. Wayve has partnerships in place with the Ocado Group, Asda and DPD, three of the largest commercial fleets in the U.K.
Currently, Wayve’s partners help the company access large amounts of training data, via the data collection devices that manually driven fleets are equipped with. Wayve also leverages synthetic data produced in-house to be able to better deal with edge cases – situations not easily encountered in the wild.
In the future, partner fleets will be the first on which Wayve’s AV technology will be commercially deployed. That lets each partner focus on what they do well, Kendall said. It also mirrors the way technological breakthroughs are introduced — first deployed at enterprise scale, then trickling down to consumers.
Wayve’s ambitious goal is to be the first to bring AVs to 100 cities. Recently, a first step towards that goal was taken by the company. Wayve set out to test if their AV2.0 model that was trained in London could generalize its driving intelligence to new cities, with no prior data collection to influence model performance in the new cities.
The model was tested in five cities in the U.K. (Cambridge, Coventry, Leeds, Liverpool and Manchester) over a three-week period in September 2021. The company claims that its autonomous driving system drove over 610 km in previously unseen cities without any prior city-specific adaptations, demonstrating all the skills it learned in London.
Generalization is one of the grand challenges of deep learning in general and one that Wayve has identified among the seven grand challenges for learned driving too. The other six are vehicle adaptability, modeling real-world complexity, learning from accessible off-policy data, safety under uncertainty, interpretability of failures and driving rewards.
While none of those is a small feat, what it all comes down to according to Kendall is performance and predictability.
“We need to build intelligent machines that are first and foremost performant, that are safe, that provide value, that provide impact in our lives. Machines that are predictable, don’t do erratic things, are accurate with what they can and can’t do and meet or exceed expectations. That’s really what we need to think about,” Kendall said. “Interpretability [for example] is really important from a development perspective, from a validation perspective, from these kinds of perspectives. I don’t think we strictly need to solve causality and causal reasoning in deep learning to bring this technology to market.”
He went on to explain that causal reasoning is something that neither human brains nor AV1.0 systems can provide.
“I think the key thing that we need is a system where we, as engineering teams, can understand and fault-triage and ultimately improve the system so we don’t make the same mistake twice. That’s incredibly important. But the research shows that it’s not strictly important for building trust. If you go on an aircraft today, on an airline, you don’t get an interpretable understanding of how the aircraft works, but you trust it because it is performant and predictable. And primarily, these are the things that we need to assure ourselves of at scale to see this technology trusted and adopted,” he concluded.
© 2022 VentureBeat. All rights reserved.