The EU proposes a driverless driver’s test that has significant issues but would still stop Waymo, Cruise, Gatik, etc.

Michael DeKort
Apr 20, 2022


Here is the link —

While we in the US have ZERO laws or standards for a driverless system to be properly certified or licensed, the EU just put out a pretty good draft. Not only do we have no standard, we allow the AV maker to determine what to test, how to test it, and to certify that they passed, while providing ZERO information on the test itself or detailed results. They simply self-certify that the fox built and tested the hen house properly.

I go into that more in my article here

The dangerous, undefined, undisclosed, self-certification and licensing of driverless vehicles


The EU, however, has done a pretty good job of creating that standard and regulation. While there are some issues I feel need to be addressed, it’s good enough to stop a significant amount of hype. Waymo, Cruise, Gatik, etc. would fail it. And not only fail it, but fail it so spectacularly that they could be sued and possibly face criminal charges for fraud and gross negligence. And they wouldn’t be able to hide that, due to the requirements around disclosure. US DOT, state DOTs and NHTSA should be embarrassed. The EU sees the fox and is on its tail. The US enables the fox to a grossly negligent degree.

Here is the link to the draft —

Here are the comments I posted to the site

This is very good. Clearly a lot of work went into the document set. And we in the US have absolutely nothing, instead letting the fox build and self-certify the hen house with zero disclosure of the test or results.

A couple of important items:

-Object classification and detection — the system needs to ensure there are no classification or detection issues, especially when those detections or images are not real, are degraded from the original, or cause confusion

-Remote operation in the public domain may be a last-resort option. But it should be noted that the latency and lack of motion cues make it very dangerous, especially when there is progressive loss of traction
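To make the latency point concrete, here is a minimal sketch (my own illustration, not from the draft; the speed and latency figures are assumed for the example) of how far a vehicle travels before a remote operator’s corrective input can possibly take effect:

```python
def blind_travel_distance(speed_mps: float, round_trip_latency_s: float) -> float:
    """Distance the vehicle covers between an event occurring and a
    remote operator's command, issued in reaction to it, taking effect.
    Assumes constant speed over the latency window."""
    return speed_mps * round_trip_latency_s

# At roughly highway speed (30 m/s, ~108 km/h) with an assumed 250 ms
# round-trip video-plus-command latency, the vehicle travels 7.5 m
# before any operator reaction can matter.
print(blind_travel_distance(30.0, 0.25))  # → 7.5
```

And that gap only accounts for latency; without motion cues, an operator may not even notice a progressive loss of traction until the vehicle is already well into a slide.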

-The scenarios provided seem to assume ML/GL/DL can infer. For all intents and purposes, they cannot. Therefore, a massive number of scenarios and objects, as well as their variations, needs to be tested. This is to ensure some requisite level of memorization (“learning”) has occurred, especially for crash and edge cases. Or is that covered here? “(iii) Situations within the ODD where a system may create unreasonable safety risks for the vehicle occupants and other road users due to operational disturbances (e.g. lack of or wrong comprehension of the vehicle environment, lack of understanding of the reaction from the operator/remote operator, vehicle occupants or other road users, inadequate control, challenging scenarios”

The Sensor/Perception and Planning systems should also be validated, for each scenario test case, to ensure the final result was not arrived at by chance or error.
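Since these systems cannot infer, coverage has to be enumerated rather than assumed. A minimal sketch of the idea (the variation axes and pass criteria here are hypothetical placeholders, not from the draft): build the full grid of scenario variations, and require that perception and planning were each validated per scenario, not just the end-to-end outcome.

```python
from itertools import product

# Hypothetical variation axes; a real test program would have far more.
OBJECTS = ["pedestrian", "cyclist", "stalled_vehicle"]
LIGHTING = ["day", "dusk", "night"]
WEATHER = ["clear", "rain", "fog"]

def scenario_grid():
    """Enumerate every combination of the variation axes."""
    return [
        {"object": o, "lighting": l, "weather": w}
        for o, l, w in product(OBJECTS, LIGHTING, WEATHER)
    ]

def scenario_passes(perception_ok: bool, planning_ok: bool, outcome_ok: bool) -> bool:
    """A scenario passes only if perception AND planning were each
    validated -- a good outcome alone may be chance or offsetting errors."""
    return perception_ok and planning_ok and outcome_ok

print(len(scenario_grid()))  # → 27 variations from just three 3-value axes
```

The point of the sketch is the combinatorics: even three toy axes yield 27 cases, which is why the absence of inference makes the required test volume massive.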

-I saw the sections on simulation validation. I am not sure they mention model fidelity and real-time accuracy.

I say this because right now there is not a single simulation system used in this industry that has the right capabilities. They all fall short here. For example, no one models exact sensors, nor models them remotely well enough. And no one can run at 16 ms real-time or faster, especially in complex scenarios, due to the wrong core architecture and inadequate federated modeling. My concern is that folks will justify less-than-acceptable capabilities by stating this is as good as it gets (by use of gaming systems). There will be times very detailed performance curves need to be verified to ensure fidelity, especially of sensors. For example: at what range does that boy in the black wool coat disappear? 150 m? 155 m? And the real-time performance must be validated to 16 ms, especially when scenarios push the gaming architectures (a combination of scenario density, complexity and ego/other model speeds).
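One way to validate the real-time claim is to instrument every simulation step against the 16 ms budget and record overruns, then push scenario density up until the architecture breaks. A minimal sketch (the step function below is a stand-in workload, not any real simulator’s API):

```python
import time

FRAME_BUDGET_S = 0.016  # 16 ms hard real-time frame budget

def find_budget_overruns(step_fn, n_frames: int) -> list:
    """Run n_frames simulation steps and return the indices of any
    frames whose wall-clock step time exceeded the 16 ms budget."""
    overruns = []
    for i in range(n_frames):
        start = time.perf_counter()
        step_fn(i)  # one simulation tick: sensors, models, physics
        if time.perf_counter() - start > FRAME_BUDGET_S:
            overruns.append(i)
    return overruns

# Stand-in trivial workload; a dense scenario would replace this and
# is exactly where gaming-engine architectures start missing frames.
print(len(find_budget_overruns(lambda i: sum(range(1000)), 100)))
```

The same harness, pointed at progressively denser scenarios, is how a regulator could verify the real-time claim rather than accept it on faith.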

More on my POV here

Cognitive Dissonance and the Driverless Vehicle Industry


The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now


How the failed Iranian hostage rescue in 1980 can save the Autonomous Vehicle industry

My name is Michael DeKort. I am a Navy veteran (ASW-C4ISR) and a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, a software project manager on an Aegis Weapon System baseline, and a C4ISR systems engineer for DoD/DHS and the US State Department (counter-terrorism). I was also a Senior Advisory Technical Project Manager for FTI to the Army AI Task Force at CMU NREC (National Robotics Engineering Center).

Autonomous Industry Participation — Air and Ground

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member UNECE WP.29 SG2 Virtual Testing

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-35, Modeling, Simulation, Training for Emerging AV Tech

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Member Teleoperation Consortium

- Member CIVATAglobal — Civic Air Transport Association

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

The SAE Autonomous Vehicle Engineering magazine editor called me “prescient” regarding my position on Tesla and the overall driverless vehicle industry’s untenable development and testing approach (Page 2)

Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS whistleblowing efforts


