Tesla — L2 (Steering) Systems are Dangerous

A report has come out examining how humans interact with semi-autonomous systems, especially when steering control is involved. The study demonstrates that people simply cannot reliably handle a system that intermittently takes control of steering, or even one that allows them to let go of the wheel for short periods of time. And that is without ANY distractions added to the test.

https://www.researchgate.net/publication/321058854_Is_partially_automated_driving_a_bad_idea_Observations_from_an_on-road_study?

This information reinforces many other studies (some from the same authors), as well as the determination by NASA, Waymo, Toyota, Chris Urmson and others that L2/L3 must be avoided, and that the industry should go from L1 straight to L4. This study went further by purposefully distracting the drivers to see how long it takes them to regain the situational awareness needed to make the right decision after being distracted. All of these studies run counter to NHTSA’s 2015 L2/L3 study.

NHTSA determined L2/L3 could be made safe because its test defined regaining control as the moment the driver touched the steering wheel. It never looked at the situational awareness issues, or at whether the monitoring and notification systems were effective enough. I have personally asked NHTSA and Virginia Tech, which assisted it, to revisit what they have done and to put a moratorium in place until they do. Neither has agreed to do so, nor explained why they should not. As such, I believe NHTSA and Virginia Tech’s conduct rises from negligence to gross negligence.

This issue, added to the fact that using public shadow driving for AI, engineering and test would take one trillion miles to complete, at a cost of over $300B, shows the practice will NEVER lead to an autonomous vehicle. Aerospace-level simulation is needed instead. (Tesla would pay less than $300B because they have been able to mislead their customers into paying for the honor of making themselves, their families and the public Guinea pigs.)
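To put those figures in perspective, here is a minimal back-of-the-envelope sketch. The per-mile cost is simply the ratio implied by the numbers above, and the average speed and fleet size are my own assumptions for illustration; none of these derived values come from the study.

```python
# Rough sanity check of the shadow-driving figures cited above.
# The mileage and total cost come from the article; everything derived
# from them (and the assumed average speed) is illustrative only.

miles_required = 1_000_000_000_000   # one trillion miles of public shadow driving
total_cost_usd = 300_000_000_000     # "over $300B"

cost_per_mile = total_cost_usd / miles_required
print(f"Implied cost per mile: ${cost_per_mile:.2f}")  # ~$0.30/mile

avg_speed_mph = 40                   # assumed fleet average speed (my assumption)
vehicle_hours = miles_required / avg_speed_mph
vehicle_years = vehicle_hours / (24 * 365)
print(f"Driving time required: {vehicle_hours:.2e} vehicle-hours "
      f"(~{vehicle_years:,.0f} vehicle-years)")
```

Even spread across a hypothetical fleet of 100,000 vehicles, that works out to roughly 28 years of continuous driving, which is the scale problem being pointed at here.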

Here is a key portion of the study:

“With this in mind the authors of this paper argue that a shift in attitude is required to ensure that the role of the driver within automated driving systems is protected. Tesla, along with other vehicle manufacturers, have designed vehicles that can essentially drive themselves most of the time but still require a human driver to monitor its performance and intervene when necessary. This design ethos has led to a situation in which humans are bound to fail and so “driver error” becomes an inevitable outcome (Stanton & Baber, 2002). Systems designers have created an impossible task (Stanton, 2015) — one that requires the driver to remain vigilant for extended periods. The literature openly reports that humans are poor at doing this (e.g. Molloy & Parasuraman, 1996). With this in mind, whilst strategies to help improve Level 2 systems could be explored, it seems more appropriate at this time to accept that the DD and DND roles are the only two viable options that can fully protect the role of the human within automated driving systems. This in turn means that either the human driver should remain in control of longitudinal and/or lateral aspects of control (i.e. one or the other) or they are removed entirely from the control-feedback loop (essentially moving straight to SAE 4).”

For more details on the other studies, and this issue in general, please see my article:

Letter to Congress — Handling of minimum standards for Autonomous industry

https://www.linkedin.com/pulse/letter-congress-handling-minimum-standards-industry-michael-dekort/
