The Deadly and Avoidable Catch-22 of Autonomous Vehicle Development in the Public Domain

Michael DeKort
Apr 4, 2020 · 4 min read


A Catch-22 is a paradox: a situation one cannot escape because of contradictory rules or regulations. The Wikipedia page has an excellent quote:

On needing experience to get a job: “How can I get any experience until I get a job that gives me experience?” — Brantley Foster in The Secret of My Success.

In Joseph Heller’s Catch-22, Captain Yossarian is part of a WWII bomber squadron whose colonel keeps raising the number of bombing runs required before anyone can go home, in order to score points with his superiors. (The book and the films are an exposé of how the various individuals involved handle that situation.) After stating he cannot handle flying more bombing runs, Yossarian asks the squadron doctor to discharge him for being insane. The doctor tells him that statement proves he is not insane, since only an insane person would want to keep flying more dangerous missions. The Catch-22: the colonel would have no problem letting such a person fly as many missions as he liked.

(I recommend watching both adaptations. The original film is a single feature and as such is far more intense, making the characters’ angst easier to relate to. But it is also a bit confusing, since it leaps back and forth in time quite a lot. The mini-series remake is easier to follow.)

How does this apply to the development and testing of autonomous vehicles? In order to learn how to avoid accidents, or handle the ones that cannot be avoided, the machine learning systems need to experience the associated scenarios over and over until they are learned. Think about that for a second. It means the human guinea pig “safety drivers” need to avoid disengaging and let the accident scenarios occur, hundreds if not thousands of times each, becoming literal kamikaze drivers in many cases. If this step is skipped, that last “2%” or so will never be completed. That means no system can ever be truly autonomous, because it cannot handle the critical scenarios we need it to handle most.
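
To make the repetition math concrete, here is a minimal, hypothetical sketch in plain Python. The event rate and exposure count are made-up illustrative numbers, not figures from any real AV program; the point is the structure of the problem, not the specific values:

```python
# Toy numbers, purely illustrative; not from any real AV program.
CRASH_SCENARIO_RATE = 1e-4   # assumed crossing-truck events per mile of ordinary driving
EXPOSURES_NEEDED = 1_000     # assumed repetitions before the scenario is "learned"

def miles_to_learn_on_public_roads() -> float:
    """Expected miles of public driving needed to encounter the scenario often enough."""
    return EXPOSURES_NEEDED / CRASH_SCENARIO_RATE

def hours_to_learn_in_simulation(runs_per_hour: int = 3_600) -> float:
    """In simulation the same scenario is simply replayed, on demand."""
    return EXPOSURES_NEEDED / runs_per_hour

print(f"Public roads: ~{miles_to_learn_on_public_roads():,.0f} miles, "
      f"with each exposure risking a real crash")
print(f"Simulation:   ~{hours_to_learn_in_simulation():.1f} hours, zero risk")
```

The numbers are invented, but the shape of the problem is not: the scenarios that matter most are the rarest, so learning them on public roads requires both astronomical mileage and deliberately letting dangerous situations play out.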

All you folks who think Tesla Autopilot or any other autonomous vehicle maker should limit the ODD (Operational Design Domain) to ODDs where the system will be successful do not understand how this works. As I said above, machine learning requires experiencing what is not known, being corrected, and trying again to get better. Then doing that over and over and over.

If you limited these L2/L3 systems to working or complete ODDs, they would go NOWHERE, because a working ODD can only be arrived at through public human guinea pig “safety driver” repetitive trial and error. Which means lots of injuries and deaths.

Take the Joshua Brown/Jeremy Banner ODD (and every other one where a driver was injured or died): the scenario with the crossing truck, in which both drivers died in their Teslas. The NTSB has suggested Tesla not let the car engage Autopilot until that ODD works. IT CAN’T WORK IF THEY DON’T REPEAT THIS SCENARIO UNTIL THE ODD IS LEARNED WELL ENOUGH TO KNOW THE CRASH CAN BE AVOIDED!

PLEASE have the epiphany, along with the one where you realize gaming technology cannot create anything close to a legitimate digital twin, so we can use DoD simulation technology and fix all of this.

More in my articles here:

- Proposal for Successfully Creating an Autonomous Ground or Air Vehicle: https://medium.com/@imispgh/proposal-for-successfully-creating-an-autonomous-ground-or-air-vehicle-539bb10967b1

- Autonomous Vehicles Need to Have Accidents to Develop this Technology

- Using the Real World is better than Proper Simulation for AV Development — NONSENSE

- Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

- Why are Autonomous Vehicle makers using Deep Learning over Dynamic Sense and Avoid with Dynamic Collision Avoidance? Seems very inefficient and needlessly dangerous: https://medium.com/@imispgh/why-are-autonomous-vehicle-makers-using-deep-learning-over-dynamic-sense-and-avoid-with-dynamic-3e386b82495e

- The Hype of Geofencing for Autonomous Vehicles

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry that I mentioned in the articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
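
As a rough illustration of what an end-state scenario matrix means (the dimensions and values below are hypothetical placeholders, not Dactle’s actual design), the idea is to enumerate the combinations a system must handle so coverage is explicit, rather than left to whatever public roads happen to serve up:

```python
from itertools import product

# Hypothetical scenario dimensions; a real matrix would be far larger
# and derived from the ODD definition.
actors    = ["crossing_truck", "pedestrian", "cyclist", "stalled_car"]
weather   = ["clear", "rain", "fog", "snow"]
lighting  = ["day", "dusk", "night"]
road_type = ["divided_highway", "rural_2lane", "urban_intersection"]

scenario_matrix = list(product(actors, weather, lighting, road_type))
print(f"{len(scenario_matrix)} base scenarios before parameter sweeps")

# Each tuple becomes a simulation run, repeated with varied speeds,
# approach angles, and sensor noise until the system handles it reliably.
for scenario in scenario_matrix[:3]:
    print(scenario)
```

Even this toy grid yields 144 base scenarios before any parameter sweeps, which is the point: an explicit matrix tells you what remains untested, whereas public-road mileage never can.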


Written by Michael DeKort

Non-Tribal Truth Seeker. IEEE Barus Ethics Award / 9-11 Whistleblower. Aerospace/DoD Systems Engineer. Member, SAE Autonomy and eVTOL development V&V & Simulation.
