“A lot of regulations are written in blood” — A Bad Omen for the Driverless Vehicle Industry

Michael DeKort
5 min read · Sep 10, 2019


“A lot of regulatory and policy and safety initiatives are driven by tragic events such as this,” McAvoy told CNN. “The expression is that a lot of regulations are written in blood.”

This was said by retired US Coast Guard Capt. Kyle McAvoy, a marine safety expert with Robson Forensic who specializes in marine incident investigations. He was commenting on the dive boat tragedy in California.

Here is the article — https://edition.cnn.com/2019/09/08/us/california-boat-owner-search-warrants/index.html

Why am I posting this?

Because the autonomous vehicle industry is on the precipice of one of the largest engineering debacles in history, not unlike Theranos or the air tragedies of the 1950s. Those too were caused by hype, arrogance, ignorance, and poor engineering. With six needless safety-driver deaths to date, and with machine learning needing to learn thousands of accident scenarios by experiencing them thousands of times each, the writing is not only on the wall but in bright neon letters. Will the industry see this and do the right thing before the first child or family dies? Or will it take dozens, hundreds, or literally thousands of deaths before sanity prevails and overcomes the echo chamber?

The vast majority of autonomous vehicle makers use public shadow and safety driving to develop and test these systems. This process is untenable: it will never come close to producing an autonomous vehicle, and it puts people’s lives in danger for no reason. To train their machine learning systems to execute scenarios, whether through imitation learning or reinforcement learning, those systems have to experience the scenarios over and over, thousands of times each in many cases. That means, in order to avoid or best navigate accident scenarios, the systems must experience them. That means safety drivers would literally have to be kamikaze drivers. They would have to let the system run through the scenario threads so engineers can apply various methods to lower the error rates until a rate is so low the scenario has been “learned.” That activity involves having accidents, which will cause injuries and deaths. As I said, safety drivers would need to sacrifice their lives, in effect commit suicide, for this to work. And that of course involves not just the drivers but the passengers and the public around them: men, women, children, families, the elderly, the handicapped. It is absolutely insane.

Worst of all, not only does none of this have to happen, it is part of a process that can never result in a legitimate autonomous vehicle, and so can never save the lives that technology would save. The industry is doing the exact opposite of its stated mission: the tech cannot be created this way, the lives it would save will not be saved, and lives are being taken needlessly in the futile attempt. This is the classic “the last 10%, or even 2%, is the hardest” problem. If you cannot learn the relevant accident scenarios, the whole effort is useless. And by “learn” I mean become as good as, and then better than, a human, for all relevant scenarios. This is a quality issue, not a quantity one.
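
To make the “thousands of times each” point concrete, here is a minimal, hypothetical sketch, not any AV maker’s actual stack: tabular Q-learning on a toy “vehicle cuts in ahead and stops” scenario. The scenario, rewards, and parameters are all invented for illustration; the point is only how many repetitions, crashes included, it takes before the policy stops failing.

```python
# Toy Q-learning on an invented cut-in hazard scenario (illustrative only).
import random

ACTIONS = ["maintain", "brake_soft", "brake_hard"]

def step(gap, speed, action):
    """Advance the toy scenario one tick; returns (gap, speed, reward, done)."""
    if action == "brake_soft":
        speed = max(speed - 1, 0)
    elif action == "brake_hard":
        speed = max(speed - 3, 0)
    gap -= speed  # close on the stopped intruder at current speed
    if gap <= 0:
        return gap, speed, -100.0, True   # collision
    if speed == 0:
        return gap, speed, 10.0, True     # stopped safely
    reward = -1.0 if action == "brake_hard" else 0.0  # comfort penalty
    return gap, speed, reward, False

def run_training(episodes=5000, alpha=0.1, gamma=0.95, eps=0.2):
    q = {}        # (gap, speed) -> list of action values
    crashes = []
    for ep in range(episodes):
        gap, speed = random.randint(8, 15), random.randint(3, 6)  # cut-in variants
        done = False
        while not done:
            s = (gap, speed)
            q.setdefault(s, [0.0] * len(ACTIONS))
            if random.random() < eps:   # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            gap, speed, r, done = step(gap, speed, ACTIONS[a])
            s2 = (gap, speed)
            q.setdefault(s2, [0.0] * len(ACTIONS))
            target = r + gamma * max(q[s2]) * (not done)
            q[s][a] += alpha * (target - q[s][a])
        crashes.append(r <= -100.0)  # did this episode end in a collision?
        if (ep + 1) % 1000 == 0:
            recent = crashes[-1000:]
            print(f"episodes {ep+1:5d}: crash rate over last 1000 = "
                  f"{sum(recent)/len(recent):.1%}")

run_training()
```

Even after the crash rate falls, the exploration the learning depends on keeps producing crashes throughout training. In simulation those crashes are free; in public safety driving every one of those episodes is a real vehicle on a real road.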

Why is public shadow and safety driving untenable? — The process being used by most AV makers to develop these systems has killed six people to date for no reason and will kill thousands more as accident scenarios are “learned.” It is impossible to drive the one trillion miles, or spend over $300B, needed to stumble and re-stumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason, in two ways. The first is handover, or fallback: a process that cannot be made safe for most complex scenarios by any monitoring and notification system, because such systems cannot provide the time needed to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle accident scenarios, which I discussed above. The solution is to use DoD/aerospace simulation technology, informed and validated by real-world data, for most of the development and testing: prove you are worthy of going into the real world first. And it is for NHTSA and the NTSB to start leading instead of following.
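
As a sketch of what “prove you are worthy of going into the real world” could look like in practice, here is a hypothetical simulation gate: sweep a scenario matrix and clear a build for public roads only when every variant passes at a human-or-better rate. All names, parameters, and thresholds here are assumptions for illustration; the simulate() stub stands in for a full-fidelity simulator run.

```python
# Hypothetical scenario-matrix gate (illustrative; not any real program's spec).
import random
from itertools import product

# Assumed, illustrative parameters; a real matrix would be vastly larger
# and validated against real-world data.
SPEEDS_MPH = [25, 45, 65]
WEATHER = ["clear", "rain", "fog"]
HAZARDS = ["pedestrian_crossing", "vehicle_cut_in", "stalled_traffic"]
RUNS_PER_VARIANT = 1000
HUMAN_BASELINE_FAILURE_RATE = 0.002  # assumed "as good as a human" threshold

def simulate(speed, weather, hazard):
    """Stand-in for one full-fidelity simulator run; returns True on success.
    Replace with the real simulation back end."""
    difficulty = (speed / 65.0) * (1.5 if weather != "clear" else 1.0)
    return random.random() > 0.001 * difficulty  # dummy model, illustrative only

def cleared_for_public_roads():
    """Gate real-world testing on passing every variant of the scenario matrix."""
    for speed, weather, hazard in product(SPEEDS_MPH, WEATHER, HAZARDS):
        failures = sum(not simulate(speed, weather, hazard)
                       for _ in range(RUNS_PER_VARIANT))
        rate = failures / RUNS_PER_VARIANT
        if rate > HUMAN_BASELINE_FAILURE_RATE:
            print(f"FAIL: {hazard} at {speed} mph in {weather} ({rate:.2%})")
            return False
    return True

if __name__ == "__main__":
    print("cleared for public roads:", cleared_for_public_roads())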

More on my POV, including the solution, is in the articles below:

How NHTSA and the NTSB can save themselves and the Driverless Vehicle Industry

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine: End Public Shadow Driving

My name is Michael DeKort — I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in the articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures (issues Unity will confirm; we are now working together, and we are also working with UAV companies). If not remedied, these issues will lead to false confidence and to performance differences between what the plan predicts will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
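
As one commonly cited example of the real-time and fidelity issues mentioned above (a toy illustration, not Unity’s engine or our system): when physics is stepped at a variable, render-driven rate, the same scenario produces a different trajectory on every run, while a fixed-timestep loop is repeatable and therefore verifiable.

```python
# Variable-step vs. fixed-step integration of a braking vehicle (toy example).
import random

def step_physics(pos, vel, dt, drag=0.3):
    """Explicit Euler on a vehicle braking under quadratic drag; the numerical
    result depends on dt, which is why jittering step sizes hurt repeatability."""
    vel -= drag * vel * vel * dt
    pos += vel * dt
    return pos, vel

def variable_step_run(total_time=5.0):
    """Game-loop style: dt follows the render frame rate and jitters."""
    pos, vel, t = 0.0, 20.0, 0.0
    while t < total_time:
        dt = random.uniform(0.010, 0.040)  # 25-100 fps frame-time jitter
        pos, vel = step_physics(pos, vel, dt)
        t += dt
    return pos

def fixed_step_run(total_time=5.0, dt=0.010):
    """Simulation style: constant dt, so every run is repeatable."""
    pos, vel = 0.0, 20.0
    for _ in range(int(total_time / dt)):
        pos, vel = step_physics(pos, vel, dt)
    return pos

print("variable-step positions:", [round(variable_step_run(), 3) for _ in range(3)])
print("fixed-step positions:   ", [round(fixed_step_run(), 3) for _ in range(3)])
```

Run it and the variable-step positions differ from one another while the fixed-step positions are identical; that gap between runs is exactly the kind of false confidence the paragraph above warns about.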
