In total, NHTSA investigated 956 crashes, starting in January 2018 and extending all the way until August 2023. Of those crashes, some of which involved other vehicles striking the Tesla vehicle, 29 people died. There were also 211 crashes in which “the frontal plane of the Tesla struck a vehicle or obstacle in its path.” These crashes, which were often the most severe, resulted in 14 deaths and 49 injuries.
The article does a good job breaking down the issues with Tesla's Autopilot, including the fact that the name itself is misleading and that the system has some pretty significant flaws that give people a false sense of confidence in its capabilities.
But raw crash statistics are absolutely meaningless to me without context. Are 956 crashes and 29 deaths more or less than you would expect from a similar number of cars with human drivers? What about in comparison to other brands' semi-autonomous driving systems?
Driving is an inherently unsafe activity, and journalists suck at conveying relative risk, probably because the average reader sucks at understanding statistical risk. But there needs to be a better process for comparing systems than just "29 people died."
You seem to be pretending that these numbers are an overestimate, but the article makes clear that this investigation is a gross underestimate. There are many, many more dangerous situations that "Tesla Autopilot" has been in.
These are 29 people who died while provably under Autopilot. This isn't a statistic; this was an investigation. Treating this number as a "statistic" shows that you're not fully understanding what NHTSA accomplished here.
What you're asking for, a statistical failure rate for Autopilot, is... well... depending on the test, as high as 100%.
https://www.youtube.com/watch?v=azdX_6L1SOA
100% of the time, Tesla Autopilot will fail this test. That's why Luminar Technologies used a Tesla for their live demonstration at CES in Las Vegas: Tesla failed this test so reliably that it was the best vehicle to pair with their LIDAR technology as a comparison point.
Tesla Autopilot is an automaton. When you put it in its failure conditions, it will fail 100% of the time. Like a machine.
I think they're looking for the number of accidents/fatalities per 100k miles, or something similar, to compare against human accidents/fatalities. That's a better baseline for judging how it performs comparatively.
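To make that concrete, here's a minimal sketch of the normalization I mean. The human-driver baseline of roughly 1.3 fatalities per 100 million vehicle miles traveled is the commonly cited US figure, and the Autopilot mileage below is a purely hypothetical placeholder, since that exposure number is exactly what's missing from the article:

    # Sketch of a per-mile fatality comparison (hypothetical inputs).
    HUMAN_FATALITIES_PER_100M_MILES = 1.3   # assumed US baseline, not from the article
    autopilot_fatalities = 29               # deaths counted by the NHTSA investigation
    autopilot_miles = 5_000_000_000         # HYPOTHETICAL total miles driven on Autopilot

    # Normalize to the same denominator so the two rates are comparable.
    autopilot_rate = autopilot_fatalities / (autopilot_miles / 100_000_000)
    print(f"Autopilot:      {autopilot_rate:.2f} fatalities per 100M miles")
    print(f"Human baseline: {HUMAN_FATALITIES_PER_100M_MILES:.2f} fatalities per 100M miles")

Even then it's not quite apples to apples, since Autopilot miles likely skew toward highway driving, but it's far more informative than a raw death count.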