Will Kamikaze drivers do their part to create Driverless Vehicles?

Michael DeKort
Aug 17, 2019

Let’s review what “safety driving” is as it relates to creating autonomous vehicles. In order to test the full execution of the driverless vehicle’s machine-learning plan, the vehicle has to be allowed to execute the full thread of the associated plan. It has to be allowed to carry out the longitudinal (braking and acceleration) or lateral (steering) actions the planning system determined were required. (In shadow driving the safety driver maintains control; the system only logs the intent of the planning system.) In cases where accident scenarios are being trained, whether to avoid an accident or to handle it as well as possible, this means the safety driver cannot take over or disengage before these threads complete. If they do, the learning is cut short. This means the safety driver must literally risk their life and, in many cases, commit suicide to see this through. They must become Kamikaze drivers.

I assume you can see this is problematic, right? Especially given there are thousands of accident scenarios that must each be run thousands of times to be learned. How many of these folks will avoid taking any defensive action at all? How many will allow the system to do as it wishes? How many will allow themselves to be injured and killed? To kill others? To kill children and families? And what is the legal, ethical and moral impact here? Let’s assume shadow and safety driving is a viable method to create this technology and that the injuries and deaths that ensue are for the greater good. What is the right legal, ethical and moral answer? To sacrifice humans now to save more later? Or to always avoid any immediate injury and loss of life?
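
To make the distinction concrete, here is a minimal sketch of the two modes described above. All names (PlanStep, shadow_driving_step, safety_driving_step) are purely illustrative assumptions, not any AV maker’s actual software; the point is only that in shadow driving the planner’s output is logged but never actuated, while in safety driving it is actuated unless the driver disengages, which cuts the learning thread short.

```python
# Hypothetical sketch: shadow driving vs. safety driving control steps.
from dataclasses import dataclass

@dataclass
class PlanStep:
    steering: float      # lateral command (e.g., steering angle in radians)
    acceleration: float  # longitudinal command (m/s^2); negative = braking

def shadow_driving_step(planner_intent: PlanStep, log: list) -> None:
    """Shadow driving: the human drives; the system only records its intent."""
    log.append(("intent_only", planner_intent))   # logged, never actuated

def safety_driving_step(planner_intent: PlanStep,
                        driver_disengages: bool,
                        log: list) -> bool:
    """Safety driving: the system actuates unless the safety driver takes over."""
    if driver_disengages:
        log.append(("disengaged", planner_intent))  # thread cut short; scenario not fully learned
        return False
    # In a real vehicle this is where steering/throttle/brake would be commanded.
    log.append(("executed", planner_intent))
    return True

if __name__ == "__main__":
    log: list = []
    step = PlanStep(steering=-0.12, acceleration=-4.5)   # hard evasive brake-and-steer
    shadow_driving_step(step, log)                        # intent recorded only
    completed = safety_driving_step(step, driver_disengages=True, log=log)
    print(log, "thread completed:", completed)
```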

The whole thing is ridiculous and untenable, and given that six people have already died as safety drivers, it is negligent at best. The solution is to switch 99.9% of this to aerospace/DoD simulation technology and systems/safety engineering, all informed and validated by real-world data. (Not gaming-engine-based systems, as they have significant real-time and model-fidelity flaws in complex scenarios.)

(Public shadow and safety driving, the process being used by most AV makers to develop these systems, is untenable, has killed six people to date for no reason, and will kill thousands more when accident scenarios are learned. It is impossible to drive the one trillion miles, or spend over $300B, to stumble and re-stumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason, in two ways. The first is through handover or fallback, a process that cannot be made safe for most complex scenarios by any monitoring and notification system, because such systems cannot provide the time needed to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle accident scenarios. In order to do that, AV makers will have to run thousands of accident scenarios thousands of times, and that will cause thousands of injuries and deaths.)
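
As a rough illustration of the handover timing problem, the arithmetic below shows how far a vehicle travels while a safety driver is still regaining situational awareness. The takeover times are assumed values for illustration only, not measured data.

```python
# Illustrative arithmetic only: distance covered during a handover delay.
def distance_travelled(speed_mps: float, takeover_time_s: float) -> float:
    """Distance covered (meters) at constant speed during the takeover delay."""
    return speed_mps * takeover_time_s

highway_speed = 30.0  # m/s, roughly 108 km/h or 67 mph
for takeover_time in (2.0, 5.0, 10.0):  # assumed seconds to regain awareness
    d = distance_travelled(highway_speed, takeover_time)
    print(f"{takeover_time:>4.1f} s to take over -> {d:.0f} m travelled before effective control")
```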

More on my POV here:

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

First of thousands of accidents involving children test subjects in autonomous vehicle development has occurred

Autonomous shuttles are hurting people needlessly — It will get much worse

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine-End Public Shadow Driving

My name is Michael DeKort — I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, worked on the Aegis Weapon System, and worked on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the planning system believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.

