Letter to the NTSB Regarding Uber/Vasquez Safety Board Hearing

Michael DeKort
6 min read · Oct 19, 2019


To whom it may concern,

My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, served as the software engineering manager for all of NORAD, and was a program manager on the Aegis Weapon System. I would like to provide information I believe is crucial to this investigation and to how the government should handle the development and testing of autonomous vehicles.

Key Autonomous Vehicle Industry Participation

- Lead — SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- SME — DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) group to create sensor simulation specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS efforts

First, I want to make the point that Rafaela Vasquez and Elaine Herzberg are both victims. Neither they nor anyone else involved in the Uber or Tesla tragedies to date should have been part of these events, because the events should never have occurred. The reason is that most of the development and testing process used by most autonomous vehicle makers, public shadow and safety driving, should instead be done largely in (proper) simulation, with safety driving reduced to being virtually nonexistent.

There are several fundamental issues here. The process of public shadow and safety driving is untenable for viability, safety and liability reasons. And the simulation technology in this industry, compared to that in DoD and aerospace, is unable to create anything close to a digital twin. That gap leads to the mistaken belief that the vast majority of real-world development and test activities cannot shift to simulation. (Note: I have provided these concerns to the DoT Inspector General over the past year.)

Viability

The viability issues involve time and cost: the effort and funding needed to drive and redrive, or stumble and restumble, on the scenarios needed to reach a safety factor of 10X a human are not tenable. The RAND Corporation estimated it would take 500 billion miles to do this. Toyota estimated one trillion miles to create autonomous vehicles. A conservative cost estimate for those trillion miles over a ten-year period is $300B. (These estimates are per manufacturer. I would be glad to provide my formula for this.)
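To illustrate the scale these figures imply, here is a back-of-envelope sketch. The per-vehicle utilization below is an assumed, illustrative number, not my actual formula; the trillion-mile and $300B figures are those cited above.

```python
# Implied figures behind the $300B / one-trillion-mile estimate.
# MILES_PER_VEHICLE_YEAR is an assumption for illustration (~275 miles/day).
TOTAL_MILES = 1_000_000_000_000    # Toyota's one-trillion-mile figure
TOTAL_COST = 300_000_000_000       # conservative ten-year cost estimate, USD
YEARS = 10
MILES_PER_VEHICLE_YEAR = 100_000   # assumed per-vehicle test mileage

cost_per_mile = TOTAL_COST / TOTAL_MILES                  # $0.30 per mile
miles_per_year = TOTAL_MILES / YEARS                      # 100 billion miles/year
fleet_size = miles_per_year / MILES_PER_VEHICLE_YEAR      # 1,000,000 vehicles

print(f"${cost_per_mile:.2f}/mile, fleet of {fleet_size:,.0f} test vehicles")
```

Even at a modest thirty cents per mile, the implied fleet of roughly a million continuously operating test vehicles, per manufacturer, shows why the approach is not viable.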

Liability/Safety

The liability/safety issues create a situation where human lives are put at risk as the primary means of developing and testing these systems. In effect, due diligence is redefined as not merely tolerating but actually requiring harm to humans. These issues involve two primary areas of safety driving:

Handover/Fallback

As these systems are being developed, the human safety driver often must take over, specifically lateral (steering) control, with little or no warning when the system fails. While training, monitoring and alarm systems can make this process more effective and safer, no approach can resolve time-critical scenarios, and accident scenarios, especially those involving high speed or quick movement, fall squarely in this category. Another facet of this issue is complacency, up to and including the operator falling asleep, a risk that increases as the system becomes more effective. Longer periods in which the operator is not utilized permit the operator to become overconfident, distracted, and to lose situational awareness. Research at the Universities of Leeds and Southampton has shown that humans require 3–45 seconds to regain enough situational awareness to effect the right maneuver the right way. Many have cautioned against this approach, including NASA, Missy Cummings (the head of robotics at Duke University), and even several automakers and autonomous vehicle makers, including Ford, Volvo, Waymo and Aurora, despite their using the process. (The reasons for this will be explained below.)
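To put the 3–45 second recovery window in concrete terms, the sketch below computes how far a vehicle travels while the operator regains situational awareness. The highway speed is an assumed, illustrative value.

```python
# Distance covered during the 3-45 s situational-awareness recovery
# window reported by the Leeds/Southampton research.
MPH_TO_MPS = 0.44704               # exact miles-per-hour to meters-per-second

speed_mph = 65                     # assumed highway speed, illustrative
speed_mps = speed_mph * MPH_TO_MPS # ~29.1 m/s

for takeover_s in (3, 45):
    distance_m = speed_mps * takeover_s
    print(f"{takeover_s:>2} s handover at {speed_mph} mph -> {distance_m:,.0f} m traveled")
```

At 65 mph, even the best-case 3-second handover consumes roughly 87 meters of road; the worst case consumes well over a kilometer, far more than most accident scenarios allow.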

Accident Scenarios

For the systems to “learn” accident scenarios, they must experience them over and over, some thousands of times each. For the threads to be tested in their entirety, the operator must not disengage. This means operators must put their lives at risk to test the systems, which would result in thousands of unnecessary injuries and deaths.

Real world and Simulation Capabilities and Expectations

There are two main beliefs that dictate the vast majority of development and testing being accomplished in the real world.

Long tails, Corner or Edge cases can only be found in the Real World

The belief is that humans cannot think of enough scenarios, or variations of scenarios, to equal what can be found in the real world. The problem with this assumption is that it ignores time and cost. Setting aside that the possibilities are infinite, it is simply not possible to spend enough time and money to stumble on all the scenarios deemed necessary to assure measurable due diligence, let alone to find their variations and repeat each of them the hundreds or thousands of times required to train the system.

Simulation Capabilities and Expectations

The second belief is that it is not possible to adequately simulate or model enough facets of real-world development and design, that is, to create a complete “digital twin”, to replace the real world to any meaningful degree. Accompanying it is the belief that the simulation and modeling technology and approaches used in the autonomous vehicle industry are the most advanced in any industry. That assumption affirms the conclusion that replacing the real world is simply not possible, leaving public shadow and safety driving as the primary means to develop and test these systems. Given the significant real-time and model-fidelity gaps in the systems and products being used in the industry, this belief is, unfortunately, well founded. If AV makers were to try to use these systems for most of their development, especially in complex and accident scenarios, the performance gap between the simulation and the real world could be enough to cause planning errors. Those errors would produce false confidence and an AV that decelerates, accelerates or maneuvers improperly, which could cause an accident or make one worse than it need be.

Solution

The primary component of the resolution is to make the industry aware of, and have it utilize, DoD/aerospace simulation and modeling technology to build effective and complete digital twins, especially as they relate to physics. This technology remedies all of the real-time and model-fidelity issues described above.

Given this, it is now possible to invert and normalize the due diligence paradigm: risk to human life can be almost entirely mitigated. (In the rare cases where safety driving would still be required, it should be run as a structured event, not unlike a movie set.) Manufacturers could then be required to prove that human beings are needed as test subjects, regardless of whether the environment is a test track or the real world. Where simulation cannot be utilized, the developer would demonstrate the need for test track use; where test track use is not adequate, the need to use the public domain would be proven. This would align the industry with the approach many others use today, including aerospace, DoD and even automotive.

With respect to simulation being able to find long tails and edge or corner cases: while the number of scenarios is vast, most likely in the millions and possibly billions for perception testing, and the effort will clearly be significant, it is possible to reach a verifiable sigma level, or factor better than a human, with the right cross-domain approach and by utilizing data from a wide array of sources, including shadow driving, HD mapping, manufacturer data, independent testing, insurance companies, research and historical data from various transportation domains. Finally, the simulation and modeling performance, or level of fidelity, will have to be verified against its real-world master. The process involved would not be unlike what the FAA currently performs using Part 60 and DERs. I would be glad to provide proof that this technology can do as I have stated.
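The spirit of that Part 60-style verification can be sketched simply: compare a simulated response against recorded real-world data and require every sample to stay inside a tolerance band. The braking signal and tolerance below are invented purely for illustration, not taken from any actual qualification test guide.

```python
# Sketch of a fidelity check in the spirit of FAA Part 60 objective tests:
# the simulated response must track the real-world recording within a
# stated tolerance at every sample. All values here are illustrative.
real_braking_decel = [0.0, 2.1, 4.8, 6.9, 7.2, 7.1]   # m/s^2, recorded vehicle
sim_braking_decel  = [0.0, 2.0, 4.6, 7.1, 7.3, 7.0]   # m/s^2, simulated twin
TOLERANCE = 0.5                                        # m/s^2, assumed band

def within_tolerance(real, sim, tol):
    """True if every simulated sample is within tol of the recording."""
    return all(abs(r - s) <= tol for r, s in zip(real, sim))

print(within_tolerance(real_braking_decel, sim_braking_decel, TOLERANCE))  # True
```

In practice the tolerances, test maneuvers and sign-off would come from an independent authority, as the FAA does with its qualification test guides and designated representatives.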

Thank you very much for your time and consideration. Please let me know if I can be of any further assistance.

Michael DeKort

Note — Conflict of Interest — Aware of the conflict between my message and selling a solution, I originally tried to assist the simulation companies in this industry in adopting DoD/aerospace technology. When their responses were to wait until their customers figured out the systems were flawed, a realization that would likely not come until after real-world tragedies occurred, I decided to do this myself and accept the conflict. As a result, I created a company called Dactle. We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based development and testing system with an end-state scenario matrix to address all of the issues I mentioned. We intend to supply all of the scenarios, the scenario matrix tool, the data, the integrated simulation or any part of this system: a true all-model-type digital twin.
