The UK is asking for input on its driverless “safety ambition” — which includes a first release that is as good as a human, less our foibles

Michael DeKort
Aug 22, 2022


Link — https://www.gov.uk/government/consultations/self-driving-vehicles-new-safety-ambition

Important Words

“We believe that self-driving vehicles should be held to the same high standard of behaviour as that expected of human drivers. Current law expects human drivers to be competent and careful. Self-driving vehicles should, therefore, be expected to achieve an equivalent level of safety to a competent and careful human driver. This is safer than the average human driver.”

“We are asking for your views on the approach that self-driving vehicles should be expected to achieve an equivalent level of safety to that of a competent and careful human driver.”

My Response

First, when anyone uses words like ambition, framework, or guidance, I get really worried it’s a hedge meant to elicit false confidence. Setting that aside, as this appears to be as good as a human less our foibles, that makes it safer than a human. As such, I am good with the step as long as the capabilities are proven prior to fielding, the tests and results are made public, and these systems are never used for development or testing, including for their own enhancements, bug fixes, regression testing, etc. (The UK announcement doesn’t mention most of this, which is concerning.)

I believe that testing should be like the EU AV Type Cert, less the gaps I highlight below. (From an earlier article on the subject.) I would note that no system anywhere on the planet is currently capable of this.


My Comments on the EU AV Type Certification (draft)

The EU, however, has done a pretty good job of creating that standard and regulation. While there are some issues I feel need to be addressed, it’s good enough to stop a significant amount of hype. Waymo, Cruise, Gatik, etc. would fail it. And not only fail it, but fail it so spectacularly they could be sued and possibly face criminal charges for fraud and gross negligence. And they wouldn’t be able to hide that, due to the requirements around disclosure. US DOT, state DOTs, and NHTSA should be embarrassed. The EU sees the fox and is on its tail. The US enables the fox to a grossly negligent degree.

Here is the link to the draft — https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12152-Automated-cars-technical-specifications_en

Here are the comments I posted to the site

This is very good. Clearly a lot of work went into the document set. And we in the US have absolutely nothing, instead letting the fox build and self-certify the hen house with zero disclosure of the tests or results.

A couple of important items

- Object classification and detection — this needs to ensure there are no classification or detection issues, especially when the detections or images are not real, are degraded from the original, or cause confusion. (A robustness-check sketch follows this list.)

- Remote operation in the public domain may be a last option. But it should be noted that the latency and lack of motion cues make it very dangerous, especially when there is a progressive loss of traction.

- The scenarios provided seem to assume ML/GL/DL can infer. For all intents and purposes, they cannot. Therefore, a massive number of scenarios and objects need to be tested, as well as their variations. This is to ensure some requisite level of memorization (“learning”) has occurred, and to avoid false confidence, especially for crash and edge cases. (A scenario-coverage sketch follows this list.) Or is that covered here? “3.5.5.3 (iii) Situations within the ODD where a system may create unreasonable safety risks for the vehicle occupants and other road users due to operational disturbances (e.g. lack of or wrong comprehension of the vehicle environment, lack of understanding of the reaction from the operator/remote operator, vehicle occupants or other road users, inadequate control, challenging scenarios”

- The Sensor/Perception and Planning systems should also be validated, for each scenario test case, to ensure the final result was not reached by chance or error. (A per-scenario verdict sketch follows this list.)

- I saw the sections on simulation validation. I am not sure they mention model fidelity and real-time accuracy. I say this because right now there is not a single simulation system used in this industry that has the right capabilities. They all fall short here. For example, no one models exact sensors, nor remotely well enough, and no one can run 16 ms real-time or faster, especially in complex scenarios (due to the wrong core architecture and inadequate federated modeling). My concern is that folks justify less-than-acceptable capabilities by stating this is as good as it gets (by use of gaming systems). There will be times very detailed performance curves need to be verified to ensure fidelity, especially of sensors. For example, when does that boy in the black wool coat disappear? 150 m? 155 m? And the real-time performance must be validated to 16 ms, especially when scenarios push the gaming architectures (a combination of scenario density, complexity, and ego/other model speeds). (A fidelity-check sketch follows this list.)

More on my POV here

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now

· https://medium.com/@imispgh/the-autonomous-vehicle-industry-can-be-saved-by-doing-the-opposite-of-what-is-being-done-now-b4e5c6ae9237

How the failed Iranian hostage rescue in 1980 can save the Autonomous Vehicle industry

My name is Michael DeKort — I am a Navy veteran (ASW-C4ISR) and a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft/constructive DoD/aerospace/FAA simulation, was the software engineering manager for all of NORAD, a software project manager on an Aegis Weapon System baseline, and a C4ISR systems engineer for DoD/DHS and the US State Department (counterterrorism). I was also a Senior Advisory Technical Project Manager for FTI to the Army AI Task Force at CMU NREC (National Robotics Engineering Center).

Autonomous Industry Participation — Air and Ground

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member UNECE WP.29 SG2 Virtual Testing

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-35, Modeling, Simulation, Training for Emerging AV Tech

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Member Teleoperation Consortium

- Member CIVATAglobal — Civic Air Transport Association

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

The SAE Autonomous Vehicle Engineering magazine editor called me “prescient” regarding my position on Tesla and the overall driverless vehicle industry’s untenable development and testing approach — (page 2) https://assets.techbriefs.com/EML/2021/digital_editions/ave/AVE-202109.pdf

Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS whistleblowing efforts
