
Tesla posts exaggerate self-driving capabilities, safety regulators say

Beta of Tesla's FSD in a car.

The National Highway Traffic Safety Administration (NHTSA) is concerned that Tesla’s social media posts and website make false promises about the automaker’s Full Self-Driving (FSD) software.

The warning dates back to May, but was made public in an email to Tesla released on November 8.


The NHTSA opened an investigation in October into 2.4 million Tesla vehicles equipped with the FSD software, following three reported collisions and a fatal crash. The investigation centers on FSD’s ability to perform in “relatively common” reduced visibility conditions, such as sun glare, fog, and airborne dust.


In these instances, it appears that “the driver may not be aware that he or she is responsible” to make appropriate operational selections, or “fully understand” the nuances of the system, NHTSA said.

Meanwhile, “Tesla’s X (Twitter) account has reposted or endorsed postings that exhibit disengaged driver behavior,” Gregory Magno, the NHTSA’s vehicle defects chief investigator, wrote to Tesla in an email.

The postings, which included reposted YouTube videos, may encourage viewers to see FSD (Supervised) as a “Robotaxi” instead of a partially automated, driver-assist system that requires “persistent attention and intermittent intervention by the driver,” Magno said.

In one of several Tesla posts on X, the social media platform owned by Tesla CEO Elon Musk, a driver was seen using FSD to reach a hospital while suffering a heart attack. In another post, a driver said he had used FSD for a 50-minute ride home. Meanwhile, third-party comments on the posts promoted the advantages of using FSD while under the influence of alcohol or when tired, NHTSA said.

Tesla’s official website also promotes conflicting messaging on the capabilities of the FSD software, the regulator said.

NHTSA has requested that Tesla revisit its communications to ensure its messaging remains consistent with FSD’s approved instructions — namely, that the software is a driver-assist system requiring drivers to remain vigilant and ready to intervene at all times.

Tesla last month unveiled the Cybercab, an autonomous-driving EV with no steering wheel or pedals. The vehicle has been promoted as a robotaxi, a self-driving vehicle operated as part of a paid ride-hailing service, such as the one already offered by Alphabet-owned Waymo.

But Tesla’s self-driving technology has remained under the scrutiny of regulators. FSD relies on multiple onboard cameras to feed machine-learning models that, in turn, help the car make decisions based on what it sees.

Meanwhile, Waymo’s technology relies on premapped roads, sensors, cameras, radar, and lidar (laser-based ranging) — a costly combination, but one that has met the approval of safety regulators.

Nick Godt
Freelance reporter
Nick Godt has covered global business news on three continents for over 25 years.