What Barak Gila’s roughly 15-minute bicycle commute through San Francisco’s Castro and Mission Districts lacks in length, it makes up for in excitement. Gila, a software engineer, knows that on any given day his route will be dense with traffic—some pedestrians and bicycles, but mostly motor vehicles. Those come in every size and permutation possible, including the self-driving cars (also called autonomous vehicles, or AVs) that developers like Google’s sister company Waymo and the General Motors–owned Cruise are testing in the city.
Gila doesn’t mind riding around those. While vehicles from both Waymo and Cruise have been documented exhibiting alarming behavior in the city—swarming, blocking traffic, even rolling through an active firefighting scene—Gila says he’s “never had a self-driving car behave unsafely” around him. And it wasn’t an autonomous vehicle that right-hooked him in May 2021. It was a human-driven Porsche, whose driver told him the car’s blind spot detector hadn’t alerted him to Gila’s presence. Gila had been vigilant and was able to avoid injury, but the episode was a troubling harbinger.
Barak Gila rides in a bike lane in San Francisco. His commute is generally dense with traffic.
An article of faith among proponents of autonomous vehicles is that the vast majority (94 percent is the figure often cited) of traffic crashes are caused by human error. Cyclists make up a relatively small portion of overall road deaths in the United States, but they’re killed at higher rates than vehicle occupants. Aside from a slight dip in 2020 when we drove less early in the pandemic, cyclist fatalities have risen for over a decade, and in 2021 the annual total jumped five percent to an all-time high of nearly 1,000, according to preliminary data from the National Highway Traffic Safety Administration (NHTSA).
But an autonomous vehicle will never be distracted by a text message; nor will it drink and drive or road rage, says Anne Dorsey, a software engineer in Waymo’s behavior division. Removing humans from the driving task, or dramatically reducing their role, could save thousands of lives and countless injuries every year, especially among vulnerable road users like cyclists.
That’s the promise, anyway. But autonomy’s safety benefits aren’t yet proven, and even if they do pan out, a decade of halting development progress suggests that rolling out true self-driving vehicles on a scale that could achieve those gains will likely take far longer than the AV industry has promised. In the meantime, car manufacturers are pushing forward with advanced driver assistance (ADAS) technology, offering “autonomy lite” features like the ones in that Porsche. But that approach comes with its own issues: Studies and surveys suggest that misleading marketing and confusing tech terminology lead a frightening number of drivers to treat their cars as self-driving when they’re not.
As Gila found out, that technology—which ranges from blind spot detection to systems that can handle all driving tasks in limited conditions—is far from foolproof. “The [driver] was relying on it, but it doesn’t work 100 percent of the time,” Gila says. “That false promise of safety can be almost worse than nothing.”
And with bare-bones regulation of driver assistance features as well as the testing of fully autonomous vehicles, it’s difficult to tell which aspects of each approach work and which don’t, raising valid concerns that some of the very features intended to make the roads safer for everyone could be doing just the opposite. Meanwhile, in the search for future technological solutions, we’re ignoring existing—if less sexy—tools that could improve safety today and solve other problems as well.
Gila waves to a (human) driver who yielded to him before making a right turn.
The idea of cars that can drive themselves first emerged in the 1950s; in 1960, GM and RCA partnered on an experimental project that used electrical circuits in a road surface to control steering, acceleration, and braking and demonstrated the technology on a test track. Then, as now, safety was a central selling point: One advertisement from the time reads, “Science promises a future free of traffic accidents.”
That belief is so widely accepted today that even NHTSA agrees that improved public safety “promises to be one of automation’s biggest benefits.” But more than six decades after those early trials, we’re still waiting for that future.
In terms of perception abilities, the Level 4 autonomous vehicles that Waymo and others are testing are already vastly more capable than human drivers. (Level 4, or “high automation,” means the vehicle drives itself while the occupants are hands-off passengers. But it can operate only in limited service areas; full-on, go-anywhere autonomy would be Level 5, which is almost a sci-fi dream.) Level 4 vehicles use a sophisticated array of sensors, including high-definition cameras, microphones, radar, and a kind of powerful laser scanner called LiDAR (light detection and ranging) that can create three-dimensional maps of the driving environment. (For complete definitions of each level of vehicle autonomy, the widely accepted standards are from the Society of Automotive Engineers.)
A Waymo car navigating San Francisco’s hilly streets.
Together, these tools scan the vehicle’s surroundings dozens of times every second. Pointed in every direction, they capture a full 360-degree view around the vehicle (versus a human’s roughly 180-degree vision). All of that data can be stitched together into a kind of panoptic super-vision that can see hundreds of feet away, at night, and even through some solid objects.
Sensors are only one small part of the perception game. Autonomous vehicles use machine learning, a subset of artificial intelligence that employs algorithms and massive image databases to “teach” the driving software how to properly identify objects that the sensors detect, including all of their variations: a woman on a road bike; a dad pedaling kids in a bakfiets; a child scooting along on a balance bike.
A recent test drive video from Waymo offers a stunning preview of autonomy’s perception capabilities. In the split-screen video, the bottom half is the passenger’s-eye view; the top half is a graphic representation of what the company’s self-driving tech, called the Waymo Driver, “sees” from its sensors. As the vehicle navigates downtown San Francisco, dozens of pedestrians and cyclists show up as bright white silhouettes in the Driver view, clearly identified even blocks away, or partially obscured by parked cars or buildings. A human would struggle to pick out half of them.
Another area where autonomous vehicles have an edge over humans is consistent behavior. “If the Driver responds the same way every time, that can increase safety for cyclists because they know what to expect,” says Dorsey. Consider a four-way stop with human drivers: You never know whether someone will wave you through the intersection even when they have the right of way, or ignore you and cut you off. In the same scenario with a self-driving vehicle, the car should react the same way every time—and not only that car, but every car in the fleet. The cars share the same software, so it’s essentially as if they all have the same driver.
This fact makes safety scalable, says Clay Kunz, a robotics software engineer in Waymo’s perception division and a former mountain biker turned roadie and commuter. Say a car responds unsafely to a cyclist during testing. A software update can change the behavior of every vehicle with that operating system. “Once you solve the problem, the behavior is distributed across the fleet,” he says.
But proper behavior—which involves judging what other road users are doing and then responding appropriately—is exceptionally difficult because of the massive variety of possible driving scenarios. “Cyclists and pedestrians are challenging because they can be anywhere in the driving space,” says Stephanie Villegas, Waymo’s former lead of structured testing. They can be following traffic rules or going against traffic. They can be in a lane or between lanes, and the difference matters a lot when it comes to predicting what they’ll do next. “It’s hard to list all the ways they interact with drivers,” she says.
That’s a challenge even for human drivers, but we have a built-in advantage. “Humans are really good at predicting the intent of other humans based on things like posture and explicit gestures,” says Justin Owens, Ph.D., a research scientist with the Virginia Tech Transportation Institute. “We’re hardwired to do that, with capabilities we’ve evolved over millions of years.” As Sam Anthony, founder and former CTO of the autonomous vehicle software company Perceptive Automata, wrote recently in a Substack post, it takes just a quarter of a second for a human driver to see a pedestrian and process a massive amount of contextual information on the person’s age, attention level, and even emotional state, all of which influences how the driver responds.
Gila checks traffic behind him before navigating a tricky intersection.
Autonomous vehicles don’t have that ability, so developers program vehicle behavior partly by trying to test and catalog every possible interaction. That’s done using a combination of software simulation and on-the-road testing, including at places like Castle, a decommissioned military air base about 120 miles east of Waymo’s headquarters in Mountain View, California. Waymo isn’t the only AV developer testing there, but its 113-acre facility is the largest at Castle, with roads, intersections, street signs, and other elements of a basic cityscape where engineers can test a wide variety of situations in a controlled but real-life setting. That’s especially valuable for what are called edge cases, unusual events that would be hard to find, unsafe to test in public, or both: a cyclist emerging from in front of a box truck, for example, or riding through a cloud of sensor-obscuring debris kicked up by a leaf blower.
Waymo and other companies also test on public streets, most notably in San Francisco and parts of Phoenix. But there isn’t much in the way of independent oversight of that testing. And in at least one instance, the result of that was tragic.
At 9:58 p.m. on March 18, 2018, a heavily modified Volvo XC90 sport utility vehicle from Uber’s self-driving division, Advanced Technologies Group (ATG), accelerated northbound along a stretch of North Mill Avenue, a four-lane divided arterial in Tempe, a Phoenix suburb. The vehicle was in autonomous mode, on a testing loop that Uber vehicles had driven roughly 50,000 times.
The vehicle’s sensors scanned the roadway ahead, synthesizing the data against high-definition route maps, and its onboard computers crunched complex algorithms of object detection, prediction, and behavior response to calculate its speed and path. In the driver’s seat sat Rafaela Vasquez, an operator responsible for monitoring the system and taking over in any situation that exceeded the vehicle’s self-driving ability.
Up ahead, Elaine Herzberg, 49, walked a bicycle into the roadway from the raised median, several hundred feet from the nearest crosswalk. Herzberg was a grandmother and had been on the verge of securing an apartment that would have ended a period of homelessness.
The vehicle’s sensors registered Herzberg’s presence in the road from more than a football field away. But over the next 4.5 crucial seconds, the system became confused, bouncing between identifying her as another vehicle, a bicycle, and a category simply called “other.” Each time the classification changed, the vehicle recalculated her expected path and speed or determined that she was not moving at all. Vasquez, meanwhile, was looking down at her cellphone. Herzberg was just a few feet from the safety of the curb as the car approached at the 45 mph speed limit. By the time the Uber’s software recognized the collision risk, at 1.2 seconds to impact, the crash was unavoidable, but it gave no alert to Vasquez, who was still distracted. An audible alert finally sounded at 0.2 second, as the vehicle began to brake. Vasquez retook the wheel just 0.02 second before the fatal impact at 39 mph.
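Basic kinematics show why the late alert sealed the outcome. Here is a back-of-envelope check of that timeline, using the speeds and times reported above; the 0.8 g braking deceleration is an assumed value for a hard stop on dry pavement, not a figure from the NTSB report.

```python
# Back-of-envelope kinematics for the Tempe crash timeline.
# Speed and timing figures come from the account above; the braking
# deceleration (0.8 g) is an illustrative assumption.

MPH_TO_FPS = 5280 / 3600      # miles per hour -> feet per second
G = 32.2                      # gravitational acceleration, ft/s^2

speed_fps = 45 * MPH_TO_FPS   # 45 mph = 66 ft/s

# Distance covered during the 4.5 seconds the system spent
# re-classifying Herzberg:
confusion_distance = speed_fps * 4.5              # ~297 ft

# Distance remaining when the software finally flagged the
# collision risk, 1.2 seconds before impact:
distance_at_alert = speed_fps * 1.2               # ~79 ft

# Distance needed to brake to a full stop from 45 mph at 0.8 g,
# from d = v^2 / (2a):
stopping_distance = speed_fps**2 / (2 * 0.8 * G)  # ~85 ft

print(f"confusion phase covered {confusion_distance:.0f} ft")
print(f"alert came with {distance_at_alert:.0f} ft to go")
print(f"a full stop needs {stopping_distance:.0f} ft")
```

Even instant, maximum braking at the 1.2-second mark would have left the SUV short of a full stop, consistent with the NTSB’s finding that the crash was by then unavoidable.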
North Mill Avenue is straight, with unobstructed sight lines and overhead street lighting. Even at night, such a collision should have been easy to avoid. A damning 2019 crash investigation report from the National Transportation Safety Board (NTSB) cited numerous failures by Uber, including the fact that engineers had disabled the car’s standard emergency braking system so it didn’t interfere with the self-driving software, which itself failed to register Herzberg as a pedestrian because she was crossing mid-block “and the system design did not include consideration for a jaywalking pedestrian.”
Despite lengthy criticism of what it called Uber’s “inadequate safety culture,” the NTSB assessed the probable cause of the crash as Vasquez’s distraction and failure to monitor the vehicle and driving environment. Vasquez was the only person ever criminally charged for Herzberg’s death. Although she was indicted in August 2020, her trial has been repeatedly postponed and is now set for May 2023. (Vasquez’s lawyer did not respond to a request for comment.) Uber separately settled claims with several of Herzberg’s relatives.
National Transportation Safety Board investigators examining the Uber self-driving SUV that struck Elaine Herzberg.
At the very least, Herzberg’s death should have been a signal to Congress and federal regulators to start paying attention to the autonomous vehicle industry; however, says NTSB chair Jennifer Homendy, “It was not the wake-up call that it really deserved to be.” (The NTSB investigates crashes and makes recommendations but does not make or enforce regulations.)
Arizona suspended Uber’s vehicle testing permit following the crash, and the company voluntarily paused testing elsewhere. Its ATG division was offloaded to Aurora Innovation in 2020 for just 55 percent of its previous year’s valuation. But now, almost five years later, little has changed on the regulatory front.
No federal rules govern public road testing of fully autonomous vehicles. Two bills in Congress that would address regulation at the federal level—the AV START Act and the SELF DRIVE Act—have been circulating since 2017 without legislative action.
Without federal laws (or dedicated funding for regulators such as NHTSA), oversight is largely left to a patchwork of state and local authorities, and the specifics vary widely. In California, companies that are actively testing autonomous vehicles under permit are required to submit collision reports to the state’s DMV within 10 days of any crash that involves property damage, personal injury, or death, identifying who was likely at fault and whether the test vehicle was operating in autonomous or human-driven mode at the time of the collision. In 2022, Waymo reported 71 crashes in the state and Cruise reported 33. (Because fleet sizes differ, these numbers may not reflect crash rates per mile driven; also, some of those reported crashes involved vehicles in human-driven mode.) A separate reporting system focuses on what are called “disengagements,” or times when a test vehicle’s self-driving system stops operating for any reason. Companies criticize disengagements as a garbage metric, and the data is available only in raw format, without analysis or context to help the public understand performance or trends. Still, together those reports represent the most transparent and stringent regulation in the U.S.
San Francisco is also a testing ground for Cruise’s self-driving cars.
In Arizona, oversight is considerably more lax: To get a permit, companies need only self-certify that test vehicles comply with Federal Motor Vehicle Safety Standards for passenger vehicles and that the software “is capable of complying” with local traffic laws. No reports are required.
Herzberg’s death is—so far—an outlier. And it’s tempting to think that autonomous vehicles could end those 94 percent of crashes caused by human error. But that widely cited figure refers simply to “the last event in the crash causal chain” and ignores other factors, such as road design, that play key roles in crashes. Believing that autonomous vehicles would eliminate all of those crashes requires assuming that they would not only end every human driving error but also avoid making any of their own. In fact, some experts contend that self-driving vehicles would struggle to get anywhere close: According to a 2020 study of 5,000 serious crashes by the Insurance Institute for Highway Safety (IIHS), autonomous vehicles would avoid only about a third of crashes.
Developers of autonomous vehicles rarely release detailed safety data on their own, but the last significant report from Waymo backs up the notion that autonomy’s proponents haven’t proved its safety case. The report, an October 2020 research paper focusing on Waymo’s Phoenix operations in 2019 and most of 2020, found that in Arizona, its vehicles were involved in 75 percent more crashes per million vehicle miles traveled than human-driven vehicles in the state, although the company did not note any serious injuries. Waymo researchers argue that the stats aren’t comparable, noting that the analysis included several “minor contact events” that it claims resulted in no damage, and also pointing to research showing that human-driven crashes—both injury-related events and those involving only property damage—are underreported. But even after adjusting reported crash data numbers with the highest underreporting estimates, the Waymo vehicles were still involved in crashes at the same or higher rates than human drivers, whether you use NHTSA crash report data from the same years or those from the Arizona Department of Transportation. Waymo says it plans to publish more safety data soon, but it did not respond to Bicycling’s request for comment on why adjusting crash data for underreporting bias doesn’t allow for even broad comparisons.
Zoom out more, and the data tells a similar story. Uber’s ATG test fleet had driven more than two million autonomous miles before Herzberg’s death. Waymo claims that it has surpassed 20 million miles total. Altogether, autonomous vehicles in California drove more than four million miles in 2021. That’s tens of millions of miles driven over years of testing, with one death. That may sound impressive, but the most recent fatality statistic for human driving in the U.S. is 1.33 per 100 million vehicle miles traveled. Autonomy literally has a long drive before it can show that it can match, let alone exceed, human safety performance, even such as it is.
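The arithmetic behind that comparison is simple enough to sketch. This is a rough, illustrative calculation using only the mileage and fatality figures cited above; the mileage totals overlap across companies and years, so the result is a back-of-envelope aggregate, not an official rate.

```python
# Illustrative comparison of AV testing mileage against the human
# fatality baseline. Inputs are the figures cited above; the sum is
# a rough aggregate, not an official statistic.

HUMAN_RATE = 1.33                 # U.S. deaths per 100 million vehicle miles

av_miles = 20e6 + 2e6 + 4e6       # Waymo total + Uber ATG pre-crash
                                  # + California fleets in 2021
av_deaths = 1                     # Elaine Herzberg

av_rate = av_deaths / av_miles * 100e6   # deaths per 100M miles

# Miles a fleet would need to log, with a single death, just to
# match the human rate:
miles_to_match = 100e6 / HUMAN_RATE      # ~75 million miles

print(f"AV rate so far: {av_rate:.1f} per 100M miles")
print(f"human rate:     {HUMAN_RATE} per 100M miles")
```

On these figures, one death in roughly 26 million miles works out to a rate several times the human baseline, which is why tens of millions of test miles still fall far short of a safety case.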
And outside of those sporadic data disclosures and California’s reporting system, there are few ways to monitor progress. Without federal regulation, there’s not even a widely accepted benchmark for how safe autonomous vehicles should be to use as a target. “I understand there’s a balance between innovation and regulation, but right now that oversight isn’t happening,” says Homendy, herself a cyclist. “It’s disappointing.”
Gila often encounters self-driving vehicles during his San Francisco commute.
The lack of regulation goes beyond the AV test fleets. While no autonomous vehicles are available to buy today, cars equipped with driver-assistance technologies are widely sold, sometimes marketed as capable of automation far beyond their true abilities, and operated on public roads by drivers who don’t understand the limits of the technology. Those features are largely unregulated as well.
In the auto industry, the clunky term Advanced Driver Assistance Systems, or ADAS, includes everything from Level 0 passive driver alerts (like the blind spot warning system in the Porsche that hit Barak Gila) to Level 2 features that can steer, accelerate, and brake in certain situations, such as highways. (Level 3 “conditional automation” systems exist at the boundary between ADAS and full autonomy, and are so new that the first one, Mercedes-Benz’s Drive Pilot, won’t debut in the U.S. until later this year, likely in Nevada first.) Roughly half of car models sold today have at least Level 1 ADAS—the ability to take control of either speed or steering in some situations—according to Consumer Reports.
Like full autonomy, the promise of these features is to make driving safer, but their efficacy ranges widely. The IIHS says automatic emergency braking, for instance, can reduce the number of rear-end injury crashes with another motor vehicle by 56 percent. But in extensive testing by the IIHS and another major automotive-related nonprofit, the American Automobile Association (AAA), some of those features don’t always perform as they should, especially around smaller vehicles like bicycles.
Last May, AAA tested ADAS systems from Tesla, Subaru, and Hyundai and found a wide range of performance in interactions with cyclists. In tests of a vehicle’s ability to avoid a crash with another vehicle traveling in the same direction, the systems easily avoided all crashes. But when overtaking cyclists, the systems were far less reliable. While no collisions were recorded, detection and response times varied widely and, in one test run, the Hyundai vehicle “avoided” hitting the cyclist test dummy with a separation distance of 0.0 feet. In tests with a cyclist dummy crossing perpendicular to the vehicle path, the Tesla barely avoided crashes in two of five runs, and the Subaru failed to even detect the cyclist in all five runs, hitting the dummy every time.
In a response to a request for comment, Subaru indicated that AAA had tested a previous version of its EyeSight system, which has since been improved with a wider-angle camera system better able to detect peripheral motion. Hyundai’s statement did not answer specific questions but noted that its system is not autonomous and the driver is responsible at all times. It said it’s reviewing the AAA report. Tesla, which disbanded its PR department in 2020, did not respond to several emailed requests for comment.
That inconsistent performance is partly because the systems are designed primarily to prevent low-speed vehicle-to-vehicle crashes; vulnerable road users like cyclists and pedestrians are smaller and harder to detect. AAA, which has been testing various ADAS systems for years, found in 2019 that pedestrian detection systems from four different carmakers failed to function more than half of the time, with especially high failure rates linked to three factors: speeds over 30 mph; small, child-size subjects; and low-light conditions. While car companies like Subaru are regularly updating their systems with better hardware and software, Toyota’s own information on its Safety Sense ADAS suite currently lists two pages of scenarios where its pre-collision feature might fail to detect certain cyclists and pedestrians, including short people (three feet or less), tall people (six foot six or more), people wearing white clothing in daytime, and cyclists riding small bikes or bicycles equipped with large bags, like most cargo models.
More concerning, as those tests and others show, the systems don’t always fail consistently. Ben Bauchwitz, a graduate research assistant at Duke University and a scientist at Charles River Analytics, has investigated Tesla’s ADAS systems extensively. In a trial of various capabilities of Tesla’s Autopilot technology, Bauchwitz found that the systems in three different cars reacted differently to the same test situation, which shouldn’t be the case if they all shared the same software. Worse, sometimes the same car behaved differently in subsequent test runs.
Some types of driver assistance, namely Level 2 hands-free highway systems that take over part of the driving task, may actually worsen driver performance. The problem is called automation complacency: The less of the driving we do, the more likely our attention is to wander. It’s an established phenomenon in disciplines as diverse as health care and astronautics. As Alexandra Mueller, an IIHS researcher who studies ADAS technology, says, “people are not good automation supervisors.”
Better training is not a reliable fix. In Driven, a history of the development of autonomous vehicles, author Alex Davies recounts how Chris Urmson, an AV pioneer and early executive at Waymo (then a Google division called Project Chauffeur), was horrified to find in 2013 that even trained Google employees testing the system grew so confident in its abilities that while it drove 65 mph on the freeway, they worked on their computers, played with their phones, and even, in one instance, fell asleep. (Urmson is now CEO at Aurora Innovation, the company that bought Uber’s ATG unit.) A 2021 study of driver behavior while operating Volvo’s Pilot Assist technology found that the longer drivers used the system, the more likely their attention was to drift from the driving task, including taking both hands off the steering wheel.
And once we’re distracted, it can be hard to reengage effectively. Two recent studies reveal that drivers may need as long as 20 seconds to “stabilize” their driving when a Level 3 ADAS, like the one Mercedes will soon sell, disengages and gives control back to the human.
Confidence in a system can create problems even if a driver is paying attention. It’s referred to as overtrust. “We’ve done research where even when the systems are struggling to handle the driving conditions, overtrust actually encourages people to not participate in a necessary takeover even when a collision is imminent,” says Mueller. In a 2018 study, more than a quarter of test subjects failed to prevent a crash when the system handed control of the vehicle back to the driver. Basically, right up to the point of the crash, drivers are still confident the system will prevent it.
Distraction and overtrust are expressions of what’s called the handover problem. Essentially, in any partially automated driving situation (even conditional automation like a Level 3 system), the vehicle may encounter a scenario that exceeds its abilities, and the driver must take control. Unless that driver is attentive and ready to respond, there is a period—which can last 5 to 10 seconds—when there is no one driving the vehicle.
All of this makes the IIHS and AAA decidedly ambivalent about ADAS: Both generally favor driver monitoring and emergency intervention features like automatic emergency braking but are more cautious about features that partly automate driving. But even as advocates call out the risks, carmakers are largely pushing ahead on partial automation. And the way it’s marketed confuses drivers about its capabilities. In 2020, a consortium including AAA, SAE International, and the National Safety Council created a standardized list of names for specific driver-assist features like forward collision warning or lane-keeping assistance. But most carmakers market those features as a suite, using names that invariably suggest competence, even expertise: Co-Pilot 360 (Ford), Drive Pilot (Mercedes-Benz), and Tesla’s long-running Autopilot system. (Ford did not respond to a request for comment, but a Mercedes spokesperson said the company believes Drive Pilot is an appropriate term for a Level 3 system that can perform all normal driving tasks within its operational design domain; driver education, a key component of safe use of ADAS systems, is done at the dealer and customer levels, and the system uses technology such as high-definition maps to prevent its use outside its intended domain.)
More than 160,000 Tesla drivers in North America are beta-testing Tesla’s Full Self-Driving feature on public roads.
Tesla is the most prominent example of this kind of marketing, exemplified by a 2016 video that misrepresented Autopilot’s capabilities, according to deposition testimony the company’s director of Autopilot software gave recently in a civil lawsuit. In addition to Autopilot, Tesla markets a second ADAS suite called Full Self-Driving, or FSD, even though Tesla itself has noted in regulatory filings that FSD is Level 2 driver assistance. Tesla’s systems have key differences from those of industry competitors, most prominently the early access to “beta,” or prerelease, versions of FSD that the company offers to some owners. In doing so, Tesla is essentially relying on untrained Tesla owners—more than 160,000 at last count—to product-test its software on public roads.
But because there is so little federal regulation of ADAS, Tesla isn’t breaking any vehicle safety laws by testing FSD beta in public. For years, the company has freely marketed and sold its ADAS packages in the U.S., at prices of up to $15,000, and Tesla’s legions of fans regularly post videos to social media of themselves using it in unsafe ways, like riding in the back seat while on the highway.
Confusion around ADAS’s real capabilities and limitations compounds the inattention and overtrust problems; a recent IIHS survey found that almost half of Tesla and GM drivers treat their vehicles as fully self-driving. And in 2020 research from AAA, study participants who were trained and tested on a vehicle with an ADAS system called “AutonoDrive” rated the car in survey responses as far more capable in scenarios like actively avoiding collisions than those who were told the same system was called DriveAssist. The AutonoDrive subjects said they would be two to three times more likely than DriveAssist subjects to engage in distracted-driving behavior like eating or having a handheld phone conversation while driving. Finally, in drive testing, 23 percent of subjects in the AutonoDrive group took longer than five seconds to retake vehicle control in an unexpected handoff, versus just six percent of the DriveAssist subjects.
“There are people who think these cars are fully self-driving, fully safe,” says NTSB chair Homendy. And it isn’t just the general public. Last year, Homendy told Bloomberg News that at a conference of state highway officials she was “stunned” to learn that many transportation officials thought autonomous vehicles were available for sale to the public today.
The real-world outcome is hundreds of crashes, some fatal, which have been blamed on distracted drivers who were relying too heavily on driver assistance. Thankfully, regulators are beginning to understand the scope of the problem and take some initial action. Tesla, for instance, is the target of several inquiries and investigations, including by the U.S. Department of Justice (according to Tesla’s own SEC filings), NHTSA, and the California Department of Motor Vehicles, that focus on Autopilot and FSD’s possible role in dozens of crashes, as well as Tesla’s marketing of it.
On February 16, NHTSA announced that Tesla had agreed to a voluntary recall of more than 360,000 cars equipped with FSD software. In a brief post titled “Full Self-Driving Software May Cause Crash,” the agency noted that FSD allows drivers to set it to exceed the speed limit, and FSD can “act unsafe around intersections,” such as failing to stop at stop signs. (Tesla is said to be fixing the vehicles with a software update, but what that involves isn’t yet known.) And, as of January 1, 2023, a new state law in California prohibits manufacturers—not just Tesla, but all carmakers—from advertising or marketing ADAS systems as self-driving.
Tesla is also the defendant in numerous civil lawsuits brought by families of people who have been injured or killed in crashes involving Tesla vehicles in which the systems were operational. The company has denied all allegations—even arguing that its software update for FSD shouldn’t be considered a recall—and is defending itself against both the civil suits and the various investigations.
Regardless of who manufactures it, ADAS technology is not subject to independent federal testing and approval; automakers don’t have to ask permission to sell it, although as the Tesla FSD recall shows, they may have to beg forgiveness at times. Just as bicycle helmet companies attest that their products meet safety requirements, automakers self-certify that their Level 1 and Level 2 ADAS suites meet the Federal Motor Vehicle Safety Standards—which are mostly silent on ADAS-specific technologies. A separate initiative, NHTSA’s New Car Assessment Program, recommends four technologies but mandates none. A substantial majority of automakers voluntarily agreed to include one ADAS technology—automatic emergency braking—on new cars as of September 2022.
One other small step in the right direction came in 2021 when NHTSA issued a first-of-its-kind order requiring carmakers and autonomous vehicle companies to submit incident reports when they learn of serious crashes involving a vehicle with Level 2 or higher technology that was activated for more than 30 seconds prior to the crash. Last June, NHTSA released data from the first year of collection and found that 367 crashes were reported in vehicles actively using advanced driver-assist systems (nearly three-quarters of which were Teslas). The agency cautioned that this was almost certainly an undercount; without in-car telemetry (which Tesla has) or an occupant complaint, companies—and therefore the agency—likely wouldn’t learn about these crashes.
Also of concern are the 130 crashes reported in Level 4 autonomous vehicles in testing, seven of which involved cyclists. In the absence of federal regulation, the public and safety watchdogs are left mostly with autonomous vehicle companies’ annual safety reports, but those (voluntary) documents mostly outline how companies say they perform testing. Waymo’s annual safety report, in contrast to its 2020 research paper, relies heavily on the phrase “rigorous testing,” but includes almost no data. Even the company’s most recent public figure for total autonomous miles driven is from 2021.
“There’s a huge amount of ‘trust us,’” says Ken McLeod, policy director for the League of American Bicyclists. McLeod talks regularly with companies in the AV industry, as well as researchers at key academic institutions like Carnegie Mellon University, where some of the industry’s most prominent engineers studied. But he says industry engagement on cycling varies widely. Last year, the League worked with Argo AI—a startup backed by Ford and Volkswagen—to help create a technical guide for proper AV behavior around cyclists, which Waymo also recently adopted. But some other companies, McLeod says, ignore the League or pay lip service to cyclist safety. “A lot of the voluntary safety self-assessments don’t talk about pedestrian or cyclist safety,” he says, or if they do, it’s in passing, like, “We have great sensors; they can detect cyclists, therefore we’re safe.”
Despite a decade of predictions of the imminent arrival of autonomy by proponents and some prominent industry leaders like Tesla CEO Elon Musk, America will not become a self-driving utopia soon, if ever. Any-road-any-time Level 5 autonomy may not happen for decades, and maybe never.
Now the industry is showing the strain of those outsized expectations. Ford abruptly shut down Argo AI in October. Aurora’s stock price has dropped about 75 percent over the past 12 months, and Cruise lost $500 million in the second quarter of 2022 alone. After six years of work in Phoenix, Waymo’s autonomous taxi service still operates only in parts of the city, and not in bad weather. Service in a limited area of San Francisco began only last November, and on February 27, co-CEO Dmitri Dolgov announced the company had started testing in Los Angeles.
Even if autonomy proves its safety argument, the obstacles of scale render its benefits unattainable in the near term. Right now, Level 4 autonomous driving exists only in small fleets; Waymo operates 300 to 400 vehicles in Phoenix, for instance, and just 700 total. Even if companies like Waymo and Cruise succeed with their autonomous taxi businesses, they may not account for a large enough slice of traffic—at least 20 percent, by one recent study—to create noticeable safety improvements. If, tomorrow, every new car sold were fully autonomous, then at the typical U.S. pace of roughly 15 million new-vehicle sales a year, it would still take almost four years before even 20 percent of the country’s 276 million registered vehicles were self-driving. Of course, that’s a totally unrealistic scenario; in reality, any safety benefits from autonomy are almost certainly a decade or more away.
Achieving that goal will also be costly. Upgrading and maintaining roads with better pavement, lane markings, and the cellular communication infrastructure needed by autonomous vehicles will require a massive, astronomically expensive overhaul. And a number of studies suggest that even if a fully autonomous vehicle fleet could be achieved, traffic and pollution would actually worsen because the technology would potentially spur more driving and longer commutes. Meanwhile, we’re missing the chance to implement proven, relatively affordable (if unflashy) tools we already have—congestion pricing, better public transit, protected bike lanes—that would deliver on safety and other goals.
“Every day we think autonomous cars will yield a future of safe car dependency, we divert essential attention and resources from the things that can actually help right now,” says Peter Norton, a technology historian in the University of Virginia’s department of engineering and society and the author of Autonorama: The Illusory Promise of High-Tech Driving.
Norton’s argument is that you can’t solve the systemic safety and livability issues endemic to cars—climate change, pollution, and traffic congestion—with better cars, even ones that drive themselves. “The problem should never have been, ‘How do we let drivers get where they want safely and without delay,’” he says. “The question should always have been, ‘How do we help people meet their daily mobility needs?’ When you frame the problem that way, suddenly you have a much bigger menu of tools you can choose from.”
A growing number of people agree—including Barak Gila, who criticizes the “unrealistic techno-optimist perspective” that self-driving cars will solve everything. For the past two years, e-bike sales have outpaced sales of four-wheel electric vehicles, and handily so. New York City’s Citi Bike bike share program notched three successive ridership records in 2022. Colorado recently announced a 10-year transportation funding plan that halts two planned highway expansions and will devote the $100 million saved to transit, bike, and walk networks. And in 2024, residents of Los Angeles will vote on a ballot measure to require the installation of bike and bus lanes on any major road project in the city. If bike lanes are built, they could spur an unprecedented jump in ridership. In a 2016 national survey of cycling attitudes by Portland State University researchers Jennifer Dill and Nathan McNeil, 51 percent of respondents were “interested, but concerned”—that is, they would ride more if they thought it was safe to do so. The goal of that multimodal investment, says Norton, is to give people a real choice in how they travel, something we largely lack now.
Could autonomous vehicles make cycling safer? Maybe we’re asking the wrong question. What if, in our search for ways to make cycling safer, we ask how to make cities cleaner, greener, and more pleasant places to live? Then the answer that emerges might not be to make better cars. Maybe the answer is to make fewer of them, and more of everything else.
As Gila found out, ADAS technology—which ranges from blind spot detection to systems that can handle all driving tasks in limited conditions—is far from foolproof. “The [driver] was relying on it, but it doesn’t work 100 percent of the time,” Gila says. “That false promise of safety can be almost worse than nothing.”