The big question mark at the center of Tesla’s self-driving Robotaxi
Tesla’s unveiling of its Robotaxi tomorrow should finally offer us a better look at CEO Elon Musk’s next big project. But it’s likely we won’t come away knowing much, or at least not enough, about the technology underpinning the self-driving system.
Tesla relies on neural networks, which mimic the way the brain works in silico, to comprehend the road and take actions in much the same way a human driver would. (Competitor Waymo, by comparison, uses machine learning to get its vehicles to first recognize common elements of the road space, from signs to pedestrians, and then behave in response to those cues.)
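To make that distinction concrete, here is a minimal, purely illustrative Python sketch contrasting the two philosophies: an end-to-end learned controller that maps camera pixels straight to driving commands, versus a modular pipeline that first detects labelled objects and then applies explicit rules. All names and numbers are hypothetical stand-ins for systems vastly more complex than this.

```python
import numpy as np

# --- End-to-end approach (a drastic simplification of the Tesla-style idea) ---
# One learned model maps raw camera pixels directly to driving commands;
# its "reasoning" lives in numeric weights rather than hand-written rules.
class EndToEndDriver:
    def __init__(self, weights: np.ndarray):
        self.weights = weights  # learned from fleet data, not written by engineers

    def act(self, camera_frames: np.ndarray) -> dict:
        features = camera_frames.flatten() @ self.weights  # pixels in, controls out
        return {"steering": float(np.tanh(features[0])),
                "brake": bool(features[1] > 0)}

# --- Modular approach (an equally rough stand-in for the Waymo-style idea) ---
# Perception names the things on the road; an explicit rule layer reacts to them.
def detect_objects(camera_frames, lidar_points):
    # Placeholder for a full perception stack; returns labelled detections.
    return [{"type": "pedestrian", "distance_m": 8.0}]

def rule_based_planner(objects):
    for obj in objects:
        if obj["type"] == "pedestrian" and obj["distance_m"] < 10.0:
            return {"brake": True, "reason": "pedestrian within 10 m"}
    return {"brake": False, "reason": "road clear"}
```

Note that the modular pipeline hands back a human-readable "reason" alongside every decision, while the end-to-end model hands back only numbers; that gap is the root of the black-box concern that runs through the rest of this piece.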
The Tesla method is seen as a quicker way to train cars to drive, but it also requires vast volumes of data, and the resulting system operates as a black box: you can’t isolate the individual decisions it makes along the way.
Tesla vehicles are outfitted with eight cameras, according to Deutsche Bank research, rather than the single camera that most production vehicles have. The fleet of Tesla vehicles running the company’s HW3 hardware, which Deutsche Bank estimates at 1.9 million cars in North America alone, also helps the system learn how to react to road situations. That means Tesla is drawing on a more intensive and constant stream of data against which to judge its system’s reactions, but it also raises questions.
Tesla’s decision to rely on camera-only systems and neural networks over more established technology like LiDAR, which builds a three-dimensional model of the world, is a key differentiator from competitors, says Jack Stilgoe, a researcher at University College London who specializes in autonomous vehicles. “Tesla committed a few years ago to a camera only system,” he says. “Lots of people in the self-driving world say, if you’re not using LiDAR, then you’re never going to be as safe, and you’re never going to be as reliable.”
The unique approach taken by Tesla also helps explain why it has not brought robotaxis to market while competitors like Waymo have. “Elon Musk has been talking about the imminent availability of self-driving cars for over a decade,” says Paul Miller, vice president and principal analyst at Forrester. “We’re certainly not there yet, but his company and others are hard at work on improving the technologies that will be required to take today’s interesting—but isolated—pilots and turn them into something that we might all see, trust, and even use in our daily travels.”
But there are other worries. Tesla’s “black box” approach raises concerns about transparency, accountability, and safety in the event of crashes. “The AI systems within the car are a black box,” says Stilgoe. “When they go wrong, we don’t know why they go wrong.” That’s in large part because of the method Tesla has taken in developing its self-driving software, which “thinks” on the fly rather than following rule-based systems whose decisions can be traced. (Neural networks’ decision-making is notoriously hard to reverse-engineer.)
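A rough sense of why that reverse-engineering is hard: often the best an engineer can do is probe a trained model from the outside, for instance by nudging inputs and watching how the output shifts. The toy snippet below illustrates that kind of post-hoc probing under made-up weights and sizes; it does not describe any real Tesla tooling.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "trained" network: two layers of fixed random weights standing in
# for millions of learned parameters.
W1, W2 = rng.normal(size=(64, 16)), rng.normal(size=(16, 1))

def brake_score(pixels: np.ndarray) -> float:
    """Opaque mapping from a 64-pixel 'image' to a braking score."""
    return (np.tanh(pixels @ W1).clip(0) @ W2).item()

frame = rng.normal(size=64)
baseline = brake_score(frame)

# Post-hoc probing: nudge each pixel and record how much the decision moves.
# The result is a sensitivity map, not a rule or a reason.
sensitivity = []
for i in range(frame.size):
    perturbed = frame.copy()
    perturbed[i] += 0.1
    sensitivity.append(abs(brake_score(perturbed) - baseline))

print("Most influential pixel:", int(np.argmax(sensitivity)))
# The answer is a pixel index and a number, never a statement like
# "braked because a pedestrian was detected" -- the kind of traceable
# explanation a rule-based system can offer and an end-to-end network cannot.
```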
The issue isn’t only a problem with robotaxis, but it’s one that could be made worse if fully automated vehicles, self-driving taxis among them, become commonplace on the road. “This has been a problem with Tesla since the death of Joshua Brown in 2016, when the National Transportation Safety Board said part of the problem there is that we didn’t know why the system decided to do what it did,” says Stilgoe. (Tesla has settled a number of lawsuits pertaining to the issue, most recently in April over the 2018 death of a driver who had engaged the car’s Autopilot system. The settlements make it more difficult to uncover information about how and why things go wrong.)
The neural network approach taken by Tesla is designed, if it can be pulled off, to enable full self-driving anywhere, under any conditions. But it is a far more opaque system to interrogate when issues inevitably arise. “When one of these things is involved in a crash, a regulator would ask what claims does anybody have over that data?” says Stilgoe. “How do we investigate that crash? How do we make sure that mistakes don’t get repeated again?” Answering those questions is trickier with Tesla’s approach to self-driving than it is with others.
Ironically, Stilgoe and AI ethicists have argued for another kind of “black box” to counter the issues that come with nontransparent AI systems: one borrowed from the airline industry. Flight recorders on planes log every step and decision so that, in the event of a crash, what happened can be understood and learned from. “The data is not owned by the manufacturer or the airline,” says Stilgoe. “The data is owned by crash investigators who are able to understand in an independently verifiable way what happened in the seconds leading up to the crash.”
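In software terms, the recorder Stilgoe describes is conceptually simple: a continuously overwriting log of sensor inputs and commanded actions, kept in a format an independent investigator can read. The sketch below is a hypothetical illustration of that idea, not a description of any existing automotive or aviation system; the field names and buffer size are assumptions.

```python
import json
import time
from collections import deque
from dataclasses import dataclass, asdict

@dataclass
class Snapshot:
    timestamp: float
    sensor_summary: dict    # e.g. speed, distance to lead vehicle, GPS fix
    commanded_action: dict  # e.g. steering angle, throttle, brake

class EventRecorder:
    """Keeps only the most recent N snapshots of driving data, like a flight recorder."""
    def __init__(self, max_snapshots: int = 600):  # e.g. 60 s at 10 Hz
        self.buffer = deque(maxlen=max_snapshots)

    def record(self, sensors: dict, action: dict) -> None:
        self.buffer.append(Snapshot(time.time(), sensors, action))

    def export_for_investigators(self) -> str:
        # Serialised in a plain, vendor-neutral format so a third party,
        # not just the manufacturer, can reconstruct the final seconds.
        return json.dumps([asdict(s) for s in self.buffer], indent=2)

# Usage: the driving loop calls record() every cycle; after an incident,
# investigators read the preserved buffer rather than asking the manufacturer.
recorder = EventRecorder()
recorder.record({"speed_kph": 52, "lead_vehicle_m": 14.0},
                {"brake": 0.3, "steering": -0.02})
print(recorder.export_for_investigators())
```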