It’s a Zoo Out There: Autonomous Vehicles and Animal Detection

In July, Tesla announced it would be updating its advanced driver assistance system with Full Self-Driving (“FSD”) Beta version 10.13. A Tesla owner and FSD Beta tester, known on Twitter as @WholeMarsBlog, leaked the release notes accompanying the update. Most notably, the update promised improved handling of left turns, a feature that has previously caused significant setbacks for FSD, and “[i]mproved animal detection recall by 34% and decreased false positives by 8% by doubling the size of the auto-labeled training set.”

We were more curious about the animal detection updates.

First, why is animal detection important? The answer should be obvious, especially to anyone who has encountered a deer on the road. According to Pew, “An estimated 1 million to 2 million crashes between motor vehicles and large animals such as deer occur every year in the U.S., causing approximately 200 human deaths, 26,000 injuries, and at least $8 billion in property damage and other costs. In rural states such as Wyoming, wildlife-vehicle crashes represent almost 20% of reported collisions.” The issue is significant enough that Congress recently addressed it in the Infrastructure Investment and Jobs Act of 2021, which included hundreds of millions of dollars in funding for projects to reduce wildlife-vehicle collisions. See, e.g., H.R. 3684 § 11123. A quick caselaw survey also indicates that wildlife-vehicle collisions are not infrequently the subject of litigation. See, e.g., Curtis v. Klausler, 802 N.W.2d 790 (Minn. Ct. App. 2011) (considering whether the State’s wild animal immunity statute precluded a negligence action against the city where a city employee operating a city vehicle was knocked unconscious by a deer running into the car, causing a collision and injuries to another driver).

Second, how does animal detection work? This answer is less obvious.

For a vehicle to operate fully autonomously, object detection, including animal detection, will need near-perfect accuracy. A fully autonomous vehicle will need to be equipped with the right combination of sensors (cameras, radar, and/or lidar) to detect any animal the vehicle may come across, including the animal’s size, speed, and even three-dimensional shape. This has complicated the development of autonomous vehicles.

Some automakers have developed animal detection technologies with great success, such as Volvo’s Large Animal Detection system, which can accurately detect horses, deer, moose, and caribou. But even this system isn’t perfect—with its biggest obstacle being the kangaroo. While this may not be a big concern for any driver in North America, the kangaroo is one of the most frequent animals involved in vehicle-animal collisions in Australia, according to Australian insurance company NRMA.

Tesla detects and classifies objects differently from other automakers. Tesla announced last year that it would no longer use radar in its vehicles. Instead, its vehicles rely only on cameras and “neural net processing,” a system Tesla calls Tesla Vision. Tesla Vision takes a snapshot of an object it detects, such as a horse, and uploads it to a library database, the “auto-labeled training set.” Over time, as more Tesla vehicles are driven and more objects are detected, Teslas should become better at recognizing a wider range of objects.
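The auto-labeling idea can be sketched in a few lines of code. This is a hypothetical illustration of the general technique, not Tesla’s actual pipeline: a trained model labels new camera snapshots automatically, and only predictions above a confidence threshold are added to the training set. The detector, field names, and threshold below are all invented for the example.

```python
# Hypothetical sketch of an auto-labeling loop. The detector, data fields,
# and threshold are illustrative -- this is not Tesla's actual pipeline.

CONFIDENCE_THRESHOLD = 0.9  # keep only labels the model is confident about


def mock_detector(snapshot):
    """Stand-in for a trained vision model: returns (label, confidence)."""
    return snapshot["guess"], snapshot["score"]


def auto_label(snapshots, training_set):
    """Append high-confidence model predictions to the training set."""
    for snap in snapshots:
        label, confidence = mock_detector(snap)
        if confidence >= CONFIDENCE_THRESHOLD:
            training_set.append({"image": snap["image"], "label": label})
    return training_set


# Two fleet snapshots: only the confident "horse" detection is kept.
fleet_snapshots = [
    {"image": "frame_001.jpg", "guess": "horse", "score": 0.97},
    {"image": "frame_002.jpg", "guess": "deer", "score": 0.55},
]
labeled = auto_label(fleet_snapshots, training_set=[])
print(len(labeled))  # 1
```

The appeal of this approach is scale: every car on the road becomes a data collector, which is presumably how doubling the training set size was feasible.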

Many detractors believe that Tesla’s exclusive reliance on camera-based vision puts the system at a disadvantage. As explained in an NVIDIA blog post:

The three primary autonomous vehicle sensors are camera, radar and lidar. Working together, they provide the car visuals of its surroundings and help it detect the speed and distance of nearby objects, as well as their three-dimensional shape. . . Though they provide accurate visuals, cameras have their limitations. They can distinguish details of the surrounding environment, however, the distances of those objects needs to be calculated to know exactly where they are. It’s also difficult for camera-based sensors to detect objects in low visibility conditions, like fog, rain or nighttime.
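The distance problem described above can be illustrated with the classic pinhole-camera relationship: from a single image, distance can only be estimated by assuming the object’s real-world size. The numbers below are illustrative, not drawn from any actual vehicle sensor.

```python
# Pinhole-camera distance estimate. A single camera infers distance only by
# assuming the object's real-world size; all numbers here are illustrative.

def estimate_distance_m(focal_length_px, real_height_m, pixel_height_px):
    """distance = focal_length * real_height / apparent height in pixels."""
    return focal_length_px * real_height_m / pixel_height_px


# A deer roughly 1.0 m tall, appearing 50 px high through a lens with a
# 1000 px focal length, is estimated to be 20 m away.
print(estimate_distance_m(1000, 1.0, 50))  # 20.0
```

Note what this implies for misclassification: if the detector labels a kangaroo as a pedestrian, the assumed real-world size is wrong, and the distance estimate can be wrong with it.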

Without radar and lidar, Tesla Vision likely cannot reach the near-perfect accuracy it needs for fully autonomous operation.

Indeed, Teslas have a history of failing to detect and differentiate animals. Even Elon Musk acknowledged the flaws in Tesla’s animal detection in a tweet earlier this year.

For example, Electrek reported an instance in 2020 when a Tesla Model 3, operating in Autopilot, maneuvered to avoid a collision with a deer in the middle of the road—although it apparently classified the deer as a pedestrian. More recently, in a self-described “semi-science” experiment set up by CarWow, a Tesla completely failed to detect a Labrador-sized stuffed animal and a taxidermy cat in the road.

And what about kangaroos? While the Model 3 apparently has been able to detect a stuffed animal kangaroo, as in the 2020 deer scenario, it improperly classified the stuffed marsupial as a human pedestrian. Nor does the Model 3 seem very adept at detecting living, breathing, and hopping kangaroos—to see it in action, check out this Twitter thread between some Tesla drivers down under.

Copyright Nelson Niehaus LLC

The opinions expressed in this blog are those of the author(s) and do not necessarily reflect the views of the Firm, its clients, or any of its or their respective affiliates. This blog post is for general information purposes and is not intended to be and should not be taken as legal advice.
