Despite the hype around the threat artificial intelligence may pose to humanity in the future, there is surprisingly little discussion of the threats it poses right now. When it comes to self-driving cars, we are already in dangerous waters.

Last March, a North Carolina teenager who had just stepped off a school bus was hit by a Tesla Model Y that was allegedly operating in Autopilot mode. Autopilot, Tesla’s driver-assistance software, has been involved in 736 crashes and 17 fatalities across the country since 2019.

In San Francisco alone, autonomous vehicles have disrupted firefighting duties 55 times this year, according to San Francisco Fire Chief Jeanine Nicholson.

The reason for these accidents is simple: there are no federal safety testing standards for the software that drives autonomous vehicles. That gap creates a loophole that companies like Elon Musk’s Tesla can exploit.

The lack of autonomous driving regulation

The National Highway Traffic Safety Administration (NHTSA) regulates the hardware components of vehicles sold in the United States. Meanwhile, individual states are responsible for licensing human drivers. The licensing process involves passing vision, written, and driving tests.

However, no such scrutiny is imposed on the artificial intelligence that controls self-driving cars. In California, companies can obtain a permit to operate driverless vehicles simply by declaring that their vehicles have been tested and deemed safe.

Missy Cummings, a professor and director of the Mason Autonomy and Robotics Center at George Mason University, raises an important question: Who has the authority to license a computer driver—NHTSA or the state?

Ironically, the focus has been on concerns that computers will become too intelligent and take control away from humans. But in reality, computers often lack the intelligence to prevent harm to humans.

The autonomous vehicle companies argue that, despite the well-publicized failures, their software is still superior to human drivers. While this may be true in certain aspects—for example, autonomous vehicles do not get tired, drive under the influence, or engage in distracted driving—we lack the necessary data to make a definitive judgment.

Autonomous cars make different kinds of mistakes than human drivers do, such as stopping in ways that obstruct ambulances or trap crash victims.

According to The Salt Lake Tribune, Representatives Nancy Pelosi and Kevin Mullin penned a letter to the NHTSA last month. In the letter, they urged the agency to gather more information on incidents involving autonomous vehicles.

Pelosi and Mullin asked the NHTSA to concentrate its data-collection efforts on incidents that impede emergency responders. The representatives also emphasized the need for data comparing accidents involving autonomous vehicles with those involving human-driven cars.

The bizarre malfunctions of autonomous vehicles

There is a problem, however, with relying solely on data collection: AI often makes bizarre errors that past data cannot predict.

For instance, one of GM’s Cruise cars collided with a bus last April after its software incorrectly predicted the bus’s movement. Cruise updated the software shortly after the incident.

Another accident occurred in 2022, when an autonomous car abruptly stopped during a left turn after mistakenly assuming that an oncoming car was going to turn right. The oncoming vehicle collided with the stationary driverless car, injuring occupants of both vehicles.

In a recent interview with The New York Times, Cummings highlighted the fragility of the computer vision systems in autonomous cars. She argued that this AI should face vision and performance tests on par with those that pilots and human drivers must pass.

While it may seem mundane, this approach is exactly what safety entails: experts conducting tests and creating checklists. It is imperative that we begin this work without delay.

Image Source: Tsla Chan, https://shorturl.at/bEV19