What We’re Reading

June 1, 2022

  • The DMV said it would investigate Tesla over self-driving claims. Then, crickets (Los Angeles Times, May 26, 2022)

    Last year, the California DMV opened two investigations into Tesla’s Full Self-Driving feature: the first concerning Tesla’s marketing of the feature, and the second concerning the feature’s safety and whether it should be regulated as autonomous under California law.

    As described in this recent report, “one of the DMV investigations looks at whether Tesla is deceptively marketing the robot-car feature by using the term Full Self-Driving Capability on its online order form and other places on its website. State regulations bar automakers from using marketing language to imply a car is capable of autonomous driving when it is not.” The other investigation centers on other DMV regulations that require companies that test AV driving systems to use a backup driver; Tesla has been exempt from this requirement “as a matter of definitional parsing by the DMV,” which “has said Full Self-Driving is a driver assist system, not an autonomous system.”

    Now, State Transportation Committee Chair Lena Gonzalez is questioning the DMV’s lack of progress in its investigations, as other experts continue to call Tesla’s use of the term “Full Self-Driving” “misleading,” “given the unfinished ‘beta’ state of its technology.” The DMV has not responded to requests for updates on the status of its investigations.

  • UCI Researchers: Autonomous Vehicles Can Be Tricked Into Dangerous Driving Behavior (University of California, Irvine, May 26, 2022)

    Here, UCI computer science researchers examine and discuss the risk presented by an autonomous driving system’s inability to distinguish harmless roadside objects from genuine hazards. “A box, bicycle or traffic cone may be all that is necessary to scare a driverless vehicle into coming to a dangerous stop in the middle of the street or on a freeway off-ramp, creating a hazard for other motorists and pedestrians,” said Qi Alfred Chen, co-author of the paper.

    The researchers explain that “the vehicle’s planning module is designed with an abundance of caution, logically, because you don’t want driverless vehicles rolling around, out of control. . . . But our testing has found that the software can err on the side of being overly conservative, and this can lead to a car becoming a traffic obstruction, or worse.”

    This general concept saw real-life application this past week when a driverless GM Cruise vehicle in San Francisco stopped to yield to an oncoming firetruck but ended up blocking its path, delaying the fire department’s response to an active fire. As reported by Wired, this latest incident joins two others in San Francisco in which Cruise vehicles, seemingly operating as designed, nonetheless created potential traffic hazards.

  • Cambridge Mobile Telematics Research into Electric Vehicle Risk Unveils Key Insights into Changes in Road Safety in an EV Future (BusinessWire, May 24, 2022)

    Partner Mike Nelson attended the IIHS-HLDI Charging Into an Electrified Future conference last week, and was fortunate to hear the results of this new research firsthand. As reported here, Cambridge Mobile Telematics (CMT) has compiled data from “the millions of vehicles across the CMT DriveWell® Platform” in an effort to achieve “a better understanding of risk across vehicle platforms.” Among other things, CMT found that Tesla drivers “are nearly 50% less likely to crash while driving their Tesla than” their other vehicle(s), as compared to “Porsche drivers [who] are 55% more likely to crash while driving their Porsche” than their other vehicle(s).
