Autonomy is complex. Much more complicated than originally thought. Several manufacturers expected the first self-driving cars to hit the market 3-4 years ago. In fact, Johann Jungwirth of Volkswagen met with Focus Magazine in April of 2016 amongst beanbags, blue suede shoes and skateboards to predict that the first autonomous vehicles (AVs) would be on the market by 2019. “I could have a car like that take me on vacation. The car drives, I sleep – and in the morning, I go to the ski slopes well-rested.” In the months surrounding that interview, Toyota, GM, BMW and Ford all also announced intended deployments in 2020 and 2021.
And then reality tapped the brakes. Chief Executive Officers started to either reset the expectations of autonomous vehicles, push out expected launches or, in the case of Ford’s Jim Hackett, admit their estimates suffered from the same optimism bias that plagues most projects. “We overestimated the arrival of autonomous vehicles,” he famously stated in 2019 to the Detroit Economic Club. Such difficulties with the unpredictable nature of real life were foretold back in 2015 by Chris Urmson of Google in his TED Talk. “Now, [extrapolating logic and algorithms] is all well and good for things that we’ve seen but, of course, you encounter lots of things you haven’t seen in the world before. And so just a couple of months ago, our vehicles were driving through Mountain View and this is what we encountered: … a woman in an electric wheelchair chasing a duck in circles on the road.”
And so getting to market with anything autonomous becomes an executive nightmare: either shoot for the moon and possibly go bankrupt like Starsky Robotics did in 2019, or select a much narrower target.
The Concept of “Somewhere”
Gameplay has, for decades, created areas where players can drive or navigate with virtual tunnels and defined rules, thereby allowing the designers to understand the boundaries and sculpt the logic. Multiplayer online games like League of Legends (2009) or Fortnite (2017) have absolutely become more complex since the days of Pole Position (1982), in part because of lessons reapplied to the next game.
Autonomous designs are evolving in a similar fashion. Pre-scripted bus routes. College campuses. Neighborhood grocery routes. All are examples of autonomy being launched in a quasi-arena. “We’re taking the problem of assuring safety at all times anywhere in the world – which is an infinite set of problems to solve and synthesize like the taxi or passenger car problem – and we’re reducing that to the ‘somewhere’ problem,” states Gavin Jackson, CEO of Oxbotica, an international provider of autonomous software.
And they are not alone. Seven different manufacturers (e.g., Starship Technologies, FedEx) have created autonomous or semi-autonomous vehicles to solve the last-mile delivery issue, but learn at speeds under 4 mph. More than 25 manufacturers are creating robo-shuttles for college and medical campuses to follow a scripted path with few unknowns. Even those few companies that have announced robotaxi services (e.g., Waymo, Pony.ai, Baidu) have confined their offerings to specific areas, e.g., San Francisco, Beijing, Wuhan. For example, the most recent robotaxi announcement by GM’s Cruise is limited to a third of San Francisco, stays below 30 mph, avoids highways, avoids extreme weather, and operates only during certain daytime hours.
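Deployment constraints like Cruise’s are often described as an operational design domain (ODD): the vehicle operates only while every condition holds. A minimal sketch of such a gate, using the constraints listed above (all names and thresholds here are illustrative, not Cruise’s actual logic):

```python
from dataclasses import dataclass
from datetime import time

# Illustrative operational design domain (ODD) gate. The constraint
# values mirror the kinds of limits described in the article
# (geofence, sub-30 mph roads, fair weather, daytime hours);
# the class and function names are hypothetical.

@dataclass
class Conditions:
    in_geofence: bool      # inside the approved service area
    speed_limit_mph: int   # posted limit on the current road
    weather: str           # e.g. "clear", "rain", "fog"
    local_time: time       # local wall-clock time

def within_odd(c: Conditions) -> bool:
    """Return True only when every deployment constraint holds."""
    return (
        c.in_geofence
        and c.speed_limit_mph <= 30
        and c.weather == "clear"
        and time(10, 0) <= c.local_time <= time(17, 0)
    )

print(within_odd(Conditions(True, 25, "clear", time(14, 0))))  # True
print(within_odd(Conditions(True, 45, "clear", time(14, 0))))  # False: road too fast
```

The point of such a gate is exactly the “somewhere” strategy: every condition the system refuses to handle is a family of edge cases it does not yet need to solve.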
The reason: deploy, learn, repeat.
“At Oxbotica, we go to market in a very intentional way,” states Jackson. “We time-order our market-entry points by the industry we’re serving. BP is perhaps our most diverse customer with …many different vehicles in their various energy plants ranging from refineries to windfarms. But they also do hub-to-hub trucking, airports to aggregate logistics and last-mile delivery. These have a very diverse set of on-road and off-road requirements.”
And so rather than trying to boil the whole ocean at once, providers are tending to create one solution, and then build from there. “In each of these scenarios we use the metaverse to synthesize known and unknown possibilities to give higher degrees of assurance before we drive autonomously in a place, whether that’s a city, mine, farm or refinery,” says Jackson.
Combining The Somewheres
After learning in the bounded environment, the system must be able to incorporate those algorithms into new scenarios. “In a finite set of problems, the AV system carries all of its experience on every journey,” says Jackson, “but what of the scenarios where the AV system has less experience? That’s where the unknown possibilities need to be found.”
So part of the greater strategy is learning those unknown edge cases for the relevant environment. Some manufacturers have roadtripped on massive quests and conquered millions of miles of on-road testing and data collection. Ford’s CEO tweeted a not-so-subtle jab at Tesla last year stating, “BlueCruise! We tested it in the real world, so customers don’t have to,” which simultaneously boasted of Ford’s herculean efforts and cast aspersions on beta testing safety-related systems with customers. Other providers have found alternate solutions. “The metaverse is a safer way to tutor the AV system and discover those edge cases,” suggests Jackson. “That’s a dramatically shorter cycle when driving ‘somewhere’ on a fixed route vs ‘anywhere’ on a random route, which accelerates time to commercialization.”
After collecting those petabytes of data, the final, not-so-minor step is using that data to incrementally enhance the design. In Urmson’s insightful talk, he spoke of encountering not just duck-chasing ladies, but gesticulating traffic cops, mobile construction crews and first responder vehicles. All of these inputs can vary significantly from country to country or geography to geography, which requires redesigning and retesting algorithms. Sometimes that’s painstaking: algorithm creation for each new impediment, followed by long-term machine-learned calibrations and retesting in the field. Interestingly, Oxbotica uses performance data of working AV systems in concert with an Artificial Intelligence engine as a virtual opponent within Oxbotica MetaDriver, its own “metaverse”, to test the system’s AI and attempt to poke holes in their own design. “We’ve measured it as 35,000 times faster than traditional verification and validation,” asserts Jackson.
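The “virtual opponent” idea, at its simplest, is adversarial scenario search: perturb simulation parameters, record which combinations break the planner, and feed those failures back into development. A toy sketch of that loop (the planner and scenario model here are stand-ins of the author’s invention, not Oxbotica’s MetaDriver):

```python
import random

# Toy "system under test": a stand-in planner that fails whenever a
# fast-moving pedestrian cuts in too close to the lane. Real planners
# and scenario models are vastly richer; this only shows the search loop.
def planner_keeps_safe_gap(pedestrian_speed: float, lateral_offset_m: float) -> bool:
    return not (pedestrian_speed > 2.5 and lateral_offset_m < 1.0)

def find_edge_cases(trials: int = 10_000, seed: int = 42) -> list[tuple[float, float]]:
    """Randomly perturb scenario parameters and collect failing combinations."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        speed = rng.uniform(0.0, 4.0)       # pedestrian speed, m/s
        offset = rng.uniform(0.0, 5.0)      # lateral distance from lane, m
        if not planner_keeps_safe_gap(speed, offset):
            failures.append((speed, offset))
    return failures

edge_cases = find_edge_cases()
print(f"found {len(edge_cases)} failing scenarios out of 10,000 trials")
```

Each failure found in simulation is an edge case that never has to be discovered on a public road, which is the source of the speed-up Jackson describes.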
And, in the end, that’s how these companies avoid a wild goose (or duck) chase and launch a product somewhere.