NHTSA should impose an immediate “Autopilot” moratorium and report initial investigation findings in 30 days

Michael DeKort
Aug 23, 2021 · 9 min read


Moratorium

Tesla’s “Autopilot” and the associated Automatic Emergency Braking (AEB) system are so dangerous and so widely deployed that NHTSA should impose an immediate moratorium while it conducts the broader investigation.

Quick Investigation

The reason the investigation should be quick is that the design and development approach flaws are obvious and systemic. As such, it should not take more than a couple of weeks to conduct. Should it take longer than that, given previous investigations are now taking over a year, someone needs to ask NHTSA why. Is there a legitimate reason, or are they stonewalling? Or are they simply not competent enough to investigate autonomous systems?

Stationary/Crossing Object Issue/Crashes — Systemic Sensor System Design Flaw

The system routinely ignores stationary and crossing objects. The issue involves a combination of two areas: object classification/recognition and object location. Both are now handled by the camera system and deep learning. Prior to removing the radar, Tesla used the radar for object location, and it acknowledged that the system struggled in these areas because of that radar. (Tesla also refuses to use LiDAR. It should be noted, however, that most AV makers use LiDAR for determining position and object depth, not for specific object detection or classification. This is because LiDAR does not inherently create “tracks” the way radar does. Some AV makers are now moving in this direction.) The radar Tesla used had only a couple of transmitters and receivers. That forced the beam pattern to be wide. As distance increases, the beam envelopes the entire road, the areas to the sides, and every object within it. This leaves the system merging objects and unable to determine whether they are on the road, next to the road, etc. To avoid false braking, the system often ignores those objects. That leaves the cameras to do all the work. (I should note that most L1+ systems out there use these low-fidelity radars and have similar issues. However, they tend to be L1 ADAS systems where the drivers do not cede steering and control of the vehicle.)
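To make the beam-spread point concrete, here is a minimal back-of-envelope sketch. The beamwidth figures are assumptions chosen for illustration, not Tesla or supplier specifications; the point is only that a wide azimuth beam spans far more than a lane at highway detection ranges.

```python
import math

def beam_footprint_width(range_m: float, beamwidth_deg: float) -> float:
    """Approximate cross-range width of a radar beam at a given range."""
    half_angle = math.radians(beamwidth_deg / 2.0)
    return 2.0 * range_m * math.tan(half_angle)

# Illustrative (assumed) numbers: a wide ~20 degree azimuth beam, typical of
# low-channel-count automotive radars, vs. a narrow ~2 degree beam from a
# dense-array radar. Neither figure comes from Tesla documentation.
for label, bw in [("wide beam (assumed 20 deg)", 20.0), ("dense array (assumed 2 deg)", 2.0)]:
    for r in (50, 100, 150):
        width = beam_footprint_width(r, bw)
        print(f"{label}: at {r} m the beam spans ~{width:.1f} m across")
```

With the assumed 20-degree beam, the footprint is roughly 53 meters wide at 150 meters of range, which covers the entire road and its surroundings; the assumed 2-degree beam stays near lane width.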

Given the plethora of information out there regarding this issue, covering both the crashes that have occurred and the associated driver disengagements, before and after Tesla removed the radar input, it is clear the issue is systemic. If all the disengagement data were checked, NHTSA would likely see the issue is commonplace, meaning every car out there with AP/FSD would have similar crashes if the drivers did not disengage. All 500k+ of them. This alone should be enough to impose the “Autopilot” moratorium. (With respect to AEB, it needs to be determined whether there is an inherent flaw in AEB or whether the flaw only exists when AP is in use.) Beyond that is the ease with which this can be investigated. NHTSA need only acquire the Autopilot disengagement, design, and vehicle system data, as well as the AEB design and system performance data, to verify the issue is systemic. (This is in addition to Tesla admitting the problem in the press, in its manuals, and in NTSB crash report findings.) Worst case, the scenarios can easily be replicated on a test track. Even if the root cause is the deep learning system not recognizing some objects rather than the camera system misjudging their position, the moratorium and the ease of investigation still apply. (It should be noted that camera systems struggle with direct light, weather, complex object patterns, and 2D objects. This results in objects not being classified or detected, or their position being incorrect. Recently that included the moon being confused for a yellow traffic light, with the car braking as a result.)
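To show how simple a first pass over that disengagement data could be, here is a hypothetical sketch. The log fields (ego speed, lead-object speed, engagement and disengagement flags) are invented for illustration; NHTSA would work from Tesla’s actual telemetry schema, which is not public.

```python
import csv

# Hypothetical disengagement log columns (invented for illustration):
# timestamp, ego_speed_mph, lead_object_speed_mph, autopilot_engaged, driver_disengaged
def flag_stationary_object_events(path: str, speed_floor_mph: float = 45.0):
    """Yield rows where the driver took over while closing on a stationary or
    near-stationary object at speed -- the crash pattern discussed above."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if (row["autopilot_engaged"] == "1"
                    and row["driver_disengaged"] == "1"
                    and float(row["ego_speed_mph"]) >= speed_floor_mph
                    and abs(float(row["lead_object_speed_mph"])) < 2.0):
                yield row

# Counting how often this pattern occurs across the fleet would show whether the
# stationary-object failure is a rare anomaly or, as argued above, systemic.
```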

Note: At one point Tesla said it evaluated the dense-array Arbe radar and chose not to use it. That was a major mistake NHTSA needs to follow up on. Why wasn’t it used? I believe it is because Tesla’s main board has a major processing issue: it cannot ingest another sensor, no matter what it is. And keep in mind that radars produce tracks, versus the massive amounts of raw data points LiDAR produces, so radars create a low processor load. Another argument for this point is Tesla getting rid of the existing radar. While it did not have high fidelity, as I mentioned above, it did have capabilities cameras do not. Tesla said it got rid of the radar because it did not perform well (this after saying radar was crucial years ago). Why didn’t they just adjust the Kalman filter to de-weight the radar and minimize its issues? Again, I think this is because the main board can’t handle it.
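For readers unfamiliar with the Kalman filter point: a fusion filter weights each measurement by its assumed noise, so a low-fidelity radar can be turned down rather than thrown out entirely. The following is a minimal scalar sketch with made-up numbers; it is not Tesla’s filter or architecture, only an illustration of how raising the radar’s assumed noise reduces its influence.

```python
def kalman_update(x_est: float, p_est: float, z: float, r_meas: float):
    """One scalar Kalman measurement update: fuse estimate x_est (variance p_est)
    with measurement z (variance r_meas). Returns updated estimate and variance."""
    k = p_est / (p_est + r_meas)      # Kalman gain: how much to trust the measurement
    x_new = x_est + k * (z - x_est)   # move toward the measurement, weighted by the gain
    p_new = (1.0 - k) * p_est         # uncertainty shrinks after the update
    return x_new, p_new

# Illustrative (assumed) numbers: camera-based range estimate of 48 m with 9 m^2
# variance, and a radar return of 52 m. Raising the radar's assumed noise variance
# de-weights it instead of discarding it.
x, p = 48.0, 9.0
for radar_var in (1.0, 25.0, 100.0):
    fused, _ = kalman_update(x, p, 52.0, radar_var)
    print(f"radar variance {radar_var:5.1f} -> fused range {fused:.1f} m")
```

With a trusted radar (low variance) the fused estimate moves close to 52 m; with a heavily de-weighted radar it stays near 48 m. Either way, the radar still contributes without having to be removed.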

Needless Use of Human Guinea Pigs for Development and Testing

Aside from the fatal and systemic sensor system flaw is Tesla’s use of their customers, others in the cars, and the public around them as human guinea pigs. Tesla and most of the industry use machine learning to train the systems to handle scenarios. Machine learning “learns” by experiencing scenarios, trying to drive them correctly, failing, being corrected, and continuing that loop until complete. The repetition required could be hundreds if not thousands of times. And because the systems currently cannot infer or think, relying instead on scenario and object matching and associated execution recall, they require massive amounts of trial-and-error repetition to learn. In addition, especially when deep learning is involved, the systems scan objects from the inside out and hyper-classify them. They do this so they can learn the movement patterns of various object types or assign rules. Examples would be a person jogging and a sign. To apply those movement expectations or rules, they have to memorize enough detail about each item to classify them properly. This process has a nasty unintended side effect: it causes the system to get lost or confused when it sees very small differences in a new, or what it thinks is a new, object. Dirt or branches in front of a sign, or clothing patterns, for example. To handle this, the system needs to learn an insane number of objects and their variations.

This brings us to why this process is untenable from a time and money perspective. RAND estimated it would take 500 billion miles to reach L4, where the system drives 10X better than a human. Toyota’s Gil Pratt said it would be a trillion miles. My very conservative math, covering just vehicles, sensors, and drivers, and excluding engineering costs, which are the bulk of the expense, comes to $300B over 10 years.
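To show how a figure of that order falls out of the mileage estimate, here is a back-of-envelope sketch. The per-car utilization and cost inputs below are illustrative assumptions, not a reconstruction of my exact spreadsheet; they simply show one plausible way to arrive at a number in that range.

```python
# Back-of-envelope sketch of the real-world-only development cost problem.
# All inputs below are illustrative assumptions, not exact figures.
TARGET_MILES = 500e9            # RAND-scale mileage estimate cited above
YEARS = 10
MILES_PER_CAR_PER_YEAR = 100e3  # assumed: heavily utilized, multi-shift test vehicle

fleet_size = TARGET_MILES / (MILES_PER_CAR_PER_YEAR * YEARS)  # vehicles needed

COST_PER_CAR = 100e3            # assumed: vehicle plus sensor suite
DRIVER_COST_PER_YEAR = 50e3     # assumed: safety-driver cost per vehicle per year

total = fleet_size * (COST_PER_CAR + DRIVER_COST_PER_YEAR * YEARS)
print(f"fleet needed: {fleet_size:,.0f} vehicles")
print(f"rough {YEARS}-year cost (no engineering): ${total / 1e9:,.0f}B")
# -> roughly 500,000 vehicles and about $300B, before any engineering costs.
```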

Back to safety and the use of human test subjects. For many of the crash scenarios to be learned, the system has to experience them. While some scenarios cover others, current simulation technology helps, and the system can learn enough from the moment the human disengages to avoid the impact of a crash, many of the scenario threads will have to be experienced in the real world. That will cause the injuries and deaths of thousands of people over time, especially when the threads involve a progression of interactions and steering, braking, or acceleration actions. (Imagine complex scenarios in the snow when traction is lost.) The reason most of this is not accomplished in simulation is two-fold. It is believed there is no simulation system good enough to replace the real world, which I will get to next, and that you cannot make up or create enough scenarios artificially, especially edge cases; you must use the real world to stumble on them.

Let me address the latter belief first. The issue here is time and money again. In Tesla’s Industry Day a year ago, Elon Musk used the example of a tractor trailer tractor towing several other tractor trailer tractors as an edge case. While I do not believe that example is rare enough to be an edge case, let’s go with it. Because machine and deep learning require massive amounts of repetition to learn scenarios and objects, and because the variations of them must be learned as well, how many lifetimes do you think it would take to learn just Elon’s example? How many eons will go by before a car stumbles on that exact set of objects? Now how many more to stumble on the massive number of color, position, quantity, environmental, and other variations? Now multiply that by all the scenarios and their variations that need to be learned independently to get to L4. It’s insane. So . . . how do we handle this in simulation? First, we still use the real world, only much less of it. We still use shadow drivers (who maintain control of the vehicles) to learn the real world. We then take that data, along with the plethora of data we have on objects, locations, weather, road patterns, driving patterns, crash data, etc., and create scenarios and their variations using scenario generation and coverage tools and Monte Carlo methods. Keep in mind we do not need to learn every possible scenario, only enough to demonstrate due diligence and the statistical probability that the system is some factor better than a human at driving. Yes, that is a lot of work. But it is doable, where it is impossible in the real world.
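As a concrete illustration of what “scenario generation and Monte Carlo” means here, the sketch below randomizes a seed scenario into thousands of variations. The seed scenario and parameters are invented for illustration; real coverage tools also track which parameter combinations have been exercised and steer sampling toward the gaps.

```python
import random

# Minimal sketch (not any specific vendor tool) of generating scenario variations
# by Monte Carlo sampling over the parameters of a seed scenario recorded in the
# real world, e.g. the towed-trailers example above.
SEED_SCENARIO = {"type": "towed_trailers", "trailer_count": 3}

def sample_variation(rng: random.Random) -> dict:
    """Draw one randomized variation of the seed scenario."""
    return {
        **SEED_SCENARIO,
        "trailer_count": rng.randint(1, 5),            # quantity variation
        "color": rng.choice(["white", "red", "gray"]),  # appearance variation
        "lane_offset_m": rng.uniform(-0.5, 0.5),        # position variation
        "weather": rng.choice(["clear", "rain", "snow", "fog"]),
        "time_of_day": rng.choice(["day", "dusk", "night"]),
    }

rng = random.Random(42)
batch = [sample_variation(rng) for _ in range(10_000)]
# Each variation becomes a simulation run; coverage tooling then records which
# combinations have been tested and which still need to be generated.
print(batch[0])
```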

(It should be noted that progress can be made using the gaming-based systems. The problem occurs when the fidelity or system run time degrades to the point that development and testing become negative. Often that will be hidden and cause false confidence. If not discovered, it will lead to real-world tragedies, because the AV system will execute a plan that is not close enough to the real world. That will produce timing or veracity flaws in maneuvering, acceleration, or braking. And in the case of sensors, especially when there is complexity and interference, the result could be totally wrong.)

The other reason people do not think simulation can replicate enough of the real world to replace most of it (and by that I mean 99.9% or more) is that they believe there is no existing simulation technology that can handle the processing load or produce models with high enough physics fidelity. (It is well established that the visual-engine folks, like Unreal and Unity, can make realistic enough visual scenes and objects.) Given the technology being used, they are correct. But that is the rub. The technology being used now is based on gaming-engine architectures and modeling approaches. If aerospace/DoD/FAA-level simulation technology is used, this is all resolved. (More on this below. And please set aside the objection that those simulations model planes, or that air travel is not nearly as complex as the streets we drive on. What the model is called is irrelevant. And DoD deals with war games in urban environments that are more complex than what this domain needs, because they include electronic warfare.)

Finally, there is a small group within USDOT that gets all of this. It is called VOICES. They are trying to leverage DoD to create a simulation environment to help the industry effect the necessary development, testing, and simulation technology paradigm shift. The problem is they are being drowned out by the larger USDOT organization and NHTSA echo chambers.

More detail is in the articles below, including how to do this right.

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology

· https://medium.com/@imispgh/the-autonomous-vehicle-industry-can-be-saved-by-doing-the-opposite-of-what-is-being-done-now-b4e5c6ae9237

SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation (featuring Dactle)

· https://www.sae.org/news/2020/08/new-gen-av-simulation

How the failed Iranian hostage rescue in 1980 can save the Autonomous Vehicle industry

· https://imispgh.medium.com/how-the-failed-iranian-hostage-rescue-in-1980-can-save-the-autonomous-vehicle-industry-be76238dea36

USDOT introduces VOICES Proof of Concept for Autonomous Vehicle Industry-A Paradigm Shift?

· https://imispgh.medium.com/usdot-introduces-voices-proof-of-concept-for-autonomous-vehicle-industry-a-paradigm-shift-87a12aa1bc3a

Tesla “autopilot” development effort needs to be stopped and people held accountable

· https://medium.com/@imispgh/tesla-autopilot-development-effort-needs-to-be-stopped-and-people-arrested-f280229d2284

NHTSA Opens Probe on Tesla’s “Autopilot” Crashes with Parked Emergency Vehicles

· https://imispgh.medium.com/nhtsa-opens-probe-on-teslas-autopilot-crashes-with-parked-emergency-vehicles-fc4885a8e055

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, worked on the Aegis Weapon System, and worked on C4ISR for DHS.

Industry Participation — Air and Ground

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member UNECE WP.29 SG2 Virtual Testing

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Member Teleoperation Consortium

- Member CIVATAglobal — Civic Air Transport Association

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

- Presented with the IEEE Barus Ethics Award for Post-9/11 DoD/DHS Whistleblowing Efforts
