How NHTSA and the NTSB can save themselves and the Driverless Vehicle Industry

Michael DeKort
Sep 8, 2019


First let’s dispel several myths

Government must stay out of the way of technology, or it will slow down or even stifle progress

· This is only true when the new technology has no major safety or security components. Simply look at the history of air travel, automobiles and present-day cybersecurity. If the features do not increase revenue, they will not be added.

When government is involved simply providing guidance or frameworks will do

· See the answer above

· And to that I will add that frameworks and guidance make things worse by giving companies legal cover to do the least possible.

The NTSB's determination in the Joshua Brown investigation that driverless vehicles should be able to handle scenarios germane to their location

· This one confuses me a bit. On one hand, NHTSA recently stopped EasyMile and Babcock Ranch, Florida, from using elementary school children as guinea pigs. Yet they allow the same children to get into a Tesla, Waymo, Uber, etc. If the answer is that their hands are tied to where the FMCSA rules/laws apply, they should be screaming this out loud and asking for more authority, or at least warning the world about the problem. But they haven't, which tells me they are still partially ignorant and lack courage. For the sake of discussion, though, let's assume they don't get it at all.

· The only way these systems can effectively operate in any given location or scenario is to conduct the development and testing of the machine learning needed to do so. Most of that is taking place not in proper simulation, where it should be, or even on test tracks, but in the public domain. Most of that public-domain testing requires humans as guinea pigs, both inside and around the vehicles. This includes the elderly, children, the handicapped, etc. To make matters worse, machine learning is extremely inefficient and can take thousands of tries to get things right. Now apply this massively repetitive learning process to avoiding accident scenarios, or best handling those that cannot be avoided. These accident scenarios must be experienced by the system, which means the safety driver must avoid disengaging and, in many cases, risk death. The Brown and Banner accidents are examples of this. And unless AV makers hard-code each specific scenario to fix it, which should have happened before the copycat Banner tragedy, expect hundreds if not thousands more such injuries and deaths, not counting those that occur while learning the thousands of other accident scenarios.

· As I said above, the only way to run this process to completion is for the safety driver to avoid disengaging and ride through the entire scenario thread. If the monitor and alarm systems were actually effective and kept drivers engaged, drivers would most likely avoid the accidents in order to avoid hurting themselves and others. Since the AV makers cannot have that, they set these systems to wait 4 seconds or longer before alarming; several seconds is plenty of time for the system to experience these scenarios. (To make matters worse, NHTSA conducted a negligent L2/L3 handover safety study in 2015. It determined that monitor and alarm systems could afford the driver time to regain total control after being distracted. The problem is that the study purposely chose not to examine situational awareness, or whether drivers had time to regain it and do the right thing the right way after taking "total" control.)
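To make the timing concrete, here is a rough back-of-envelope sketch (my own illustrative numbers, not NHTSA's): with a 4-second alarm threshold, a vehicle at highway speed covers well over 100 meters before a distracted driver is even alerted, let alone regains situational awareness.

```python
# Back-of-envelope: distance traveled before an inattentive safety driver
# is even alerted, given a 4-second monitor/alarm threshold.
# The 4 s threshold comes from the discussion above; the speeds chosen
# are illustrative assumptions.

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def distance_before_alert(speed_mph: float, alarm_delay_s: float = 4.0) -> float:
    """Meters traveled between the start of distraction and the alarm."""
    return speed_mph * MPH_TO_MPS * alarm_delay_s

for speed in (35, 55, 70):
    d = distance_before_alert(speed)
    print(f"{speed} mph: {d:.0f} m covered before the alarm even sounds")
```

And that distance is only up to the alarm; regaining awareness and acting correctly takes additional seconds on top of it.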

Public shadow and safety driving is tenable, will result in L4, will save lives, and the safety drivers who have been killed, along with the thousands who will be killed, are a necessary cost of completing the development and testing of this technology

· It is impossible to drive the one trillion miles, or spend over $300B, needed to stumble and re-stumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason. This occurs in two ways. The first is through handover or fallback: a process that cannot be made safe for most complex scenarios by any monitoring and notification system, because such systems cannot provide the time to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle accident scenarios. To do that, AV makers will have to run thousands of accident scenarios thousands of times, which will cause thousands of injuries and deaths.
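The scale problem can be illustrated with a standard zero-failure statistical argument (similar in spirit to RAND's "Driving to Safety" analysis; the exact figures here are my own illustrative assumptions): just to demonstrate, with 95% confidence, a fatality rate no worse than the human baseline of roughly 1.1 deaths per 100 million miles, a fleet would need hundreds of millions of failure-free miles, before even counting the rare scenarios that must be encountered repeatedly.

```python
import math

# Rough statistical sketch: miles of failure-free driving needed to claim,
# with a given confidence, that the true failure rate is below a target.
# We solve P(0 failures in n miles) = (1 - r)^n <= 1 - confidence for n.
# The human fatality rate (~1.1 per 100M miles) is a commonly cited figure;
# using it as the target is an illustrative assumption.

def miles_needed(target_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Failure-free miles required for a zero-failure demonstration."""
    return math.log(1.0 - confidence) / math.log(1.0 - target_rate_per_mile)

human_fatality_rate = 1.1e-8  # ~1.1 fatalities per 100 million miles
print(f"{miles_needed(human_fatality_rate):.3g} failure-free miles required")
```

That is only the demonstration burden for one metric; stumbling onto every rare scenario enough times for machine learning to converge is what drives the totals into the trillions of miles.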

What DoT, NHTSA and the NTSB should do to save themselves and this industry

· They need to experience a paradigm shift, get their heads out of the echo chamber, apply common sense and become fully aware of the myths I listed above.

· The solution is to switch 99.9% of the development and testing to proper simulation, and to create a process where any use of the public domain or of humans in testing must first be proven necessary. That means proving simulation cannot do the job. And when the public domain is necessary, because test tracks have been proven inadequate as well, the scenarios should be coordinated, not unlike a movie set. The wild west involving children, the elderly, the handicapped and the general public needs to end.

· NHTSA should also create a minimal set of tests that ensure proper due diligence is at least remotely close to being done. That would include area/street-pattern "happy path" and accident scenarios, as well as perception tests to ensure that any object, or degraded object, that could reasonably appear at a given location is tested. The reason is that some patterns and finishes cause problems for various sensors. This would include most fabric patterns from around the world.

· This paradigm shift would have several additional byproducts that would greatly help the industry, beyond helping them get to L4/5, save the relevant lives, stop taking lives needlessly, avoid civil or criminal litigation, and avoid bankruptcy. There would now be a level playing field, which would leave the companies far less anxious and far less inclined to hype and cut corners.

· Note on proper simulation — The systems the industry uses now have significant real-time and model-fidelity flaws that prohibit them from being used to create a digital twin or replace the real world. These gaming- or vehicle-manufacturing-based systems simply have too many timing and model-realism flaws. This will create false confidence and planning flaws, and result in real-world tragedies. What is needed is aerospace/DoD simulation technology and systems/safety engineering, all informed and validated by real-world data.
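The minimal test set described above can be sketched as a cross product of locations, objects and degradations, so that no reasonable combination is skipped. All the category names here are hypothetical placeholders of my own, not an actual NHTSA test set:

```python
from itertools import product

# Hypothetical sketch of a minimal perception-test matrix: every object and
# degradation that could reasonably appear at a location gets a test case.
# Category values below are illustrative placeholders, not a real standard.
locations    = ["urban_intersection", "suburban_street", "school_zone"]
objects      = ["pedestrian", "cyclist", "stroller", "wheelchair"]
degradations = ["none", "partially_occluded", "low_sun_glare", "heavy_rain"]

test_matrix = [
    {"location": loc, "object": obj, "degradation": deg}
    for loc, obj, deg in product(locations, objects, degradations)
]

print(f"{len(test_matrix)} perception test cases")  # 3 * 4 * 4 = 48
```

Even this toy matrix grows multiplicatively with each new axis (sensor type, fabric pattern, lighting), which is exactly why exhaustive coverage is only practical in simulation.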

Please find more on this below

NHTSA saved children from going to school in autonomous shuttles and leaves them in danger everywhere else

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

Autonomous shuttles are hurting people needlessly — It will get much worse

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine-End Public Shadow Driving

My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving (ORAD) Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the plan says will happen and what actually happens. If you would like to see a demo or discuss this further, please let me know.
