The first of thousands of accidents involving children as test subjects in autonomous vehicle development has occurred

Michael DeKort
4 min read · Aug 11, 2019

Article link — https://www.reuters.com/article/us-tesla-russia-fire/tesla-electric-car-catches-fire-after-hitting-a-tow-truck-in-moscow-idUSKCN1V10BB

Unfortunately, the first accident involving safety driving and children has occurred. Luckily, the children were not severely injured. That will not last long. As these AV makers train their systems to avoid and best handle accident scenarios, by running them thousands of times each, there will eventually be thousands of people, including children, injured and killed. All of this is not only needless, it is for naught. It is not remotely possible to create a legitimate autonomous vehicle using public shadow and safety driving for most of the development and testing. That means these companies are doing the exact opposite of what they say they want to do. They will never create a legitimate AV, which means the lives that technology would save will never be saved, and they are taking lives needlessly in their eternal and fruitless pursuit.

There is ZERO reason to use humans as guinea pigs, especially members of the public, in or around these systems. The use of these systems extends to those among us who are most vulnerable, including the elderly and even blind veterans in the UK. This is only being done to hype capabilities and inspire false confidence, so that these systems are "trusted". NHTSA found this practice so problematic that it stopped an EasyMile shuttle in Babcock Ranch, Florida. The problem is that NHTSA limited protecting children to school shuttles. Of course, the same children can get into an Uber, Waymo, Tesla, etc. (See my article on this below.)

It is impossible to drive the one trillion miles, or spend the over $300B it would cost, to stumble and restumble on all the scenarios necessary to complete the effort (see the rough arithmetic sketch below). In addition, the process harms people for no reason. This occurs in two ways. The first is through handover or fallback, a process that cannot be made safe for most complex scenarios by any monitoring and notification system, because such systems cannot provide the time needed to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle accident scenarios. In order to do that, AV makers will have to run thousands of accident scenarios thousands of times, which will cause thousands of injuries and deaths. The solution is aerospace/DoD simulation technology and systems/safety engineering, not gaming engine-based systems, as they have significant real-time and model fidelity flaws in complex scenarios.
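To make the scale of that claim concrete, here is a rough back-of-envelope sketch in Python. The one-trillion-mile and roughly $300B figures come from the argument above; the fleet size, average speed, utilization, and cost per mile are illustrative assumptions only, not figures from any actual AV program.

```python
# Back-of-envelope sketch of the public road-testing claim above.
# The 1 trillion miles and ~$300B figures are taken from this article;
# fleet size, speed, utilization, and cost per mile are assumptions
# chosen only to illustrate the order of magnitude.

TOTAL_MILES = 1e12        # miles claimed necessary to stumble on all scenarios
COST_PER_MILE = 0.30      # assumed fully loaded cost per test mile (USD)

FLEET_SIZE = 10_000       # assumed number of test vehicles on the road
AVG_SPEED_MPH = 30        # assumed average speed in mixed traffic
HOURS_PER_DAY = 12        # assumed daily utilization per vehicle

miles_per_vehicle_per_year = AVG_SPEED_MPH * HOURS_PER_DAY * 365
fleet_miles_per_year = miles_per_vehicle_per_year * FLEET_SIZE
years_needed = TOTAL_MILES / fleet_miles_per_year
total_cost = TOTAL_MILES * COST_PER_MILE

print(f"Fleet miles per year: {fleet_miles_per_year:,.0f}")
print(f"Years to reach 1 trillion miles: {years_needed:,.0f}")
print(f"Total cost at ${COST_PER_MILE:.2f}/mile: ${total_cost / 1e9:,.0f}B")
```

Even with these generous assumptions, a ten-thousand-vehicle fleet would need several centuries of continuous driving, and the spend lands in the hundreds of billions of dollars, which is the point of the argument above.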

Please find below my articles that address each of these myths, as well as a relevant bio.

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

NHTSA saved children from going to school in autonomous shuttles and leaves them in danger everywhere else

All the Autonomous Vehicle makers combined would not get remotely close to L4

SAE Autonomous Vehicle Engineering Magazine-End Public Shadow Driving

Common Misconceptions about Aerospace/DoD/FAA Simulation for Autonomous Vehicles

The Hype of Geofencing for Autonomous Vehicles

Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI (This article has links to most of the data my POV is derived from)

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle.

We are building an aerospace/DoD/FAA level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model fidelity, and loading/scaling issues caused by using gaming engines and other architectures (issues Unity will confirm; we are now working together, and we are also working with UAV companies). If not remedied, these issues will lead to false confidence and to performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.

