Without DoD Simulation Technology, Autonomous Vehicles Cannot Be Created, or Created Legally

Michael DeKort
7 min read · Oct 29, 2019


I would like to make the case that it is not only impossible to create an autonomous vehicle using public shadow and safety driving, because no one can spend the time and money, or survive the injuries and deaths of thousands of needless human guinea pigs, but that it is also illegal to try.

That legal case has two parts. First, a legally licensed and qualified driver is not present in time-critical handover or accident scenarios. Second, any deliberate decision by the safety driver not to disengage the system when an accident scenario is occurring, or one they believe is about to occur, is reckless endangerment.

Background

The current method utilized by many autonomous vehicle (AV) manufacturers for the majority of development and testing is public shadow and safety driving. Shadow driving is the process by which a human operator maintains lateral (steering) and longitudinal (braking/acceleration) control of the vehicle for the purposes of gathering data and testing the intentions of the planning and execution (control) systems. This is done by logging not only what the autonomous system intends to do at certain iterations, but also the states of the supporting perception and planning systems. (In some cases, the driver may cede longitudinal, that is braking and acceleration, control, but not lateral or steering control.) Safety driving is where the operator cedes all control of the vehicle, especially lateral or steering control, for purposes of testing the autonomous system's performance through scenario threads. Because machine learning (ML) is extremely inefficient and does not yet infer nearly as well as humans, a vast number of scenarios need to be run hundreds if not thousands of times each for the system to "learn," whether through imitation or reinforcement learning. Development and testing are currently conducted in three environments, the real world, test tracks and simulation, with the majority conducted in the real world.

Handover/Fallback — Safety Driving

As these systems are being developed, the human safety driver often must take over from the system, specifically lateral or steering control, with little or no warning when the system fails. While training and monitor-and-alarm systems can make this process more effective and safer, no approach can resolve time-critical scenarios, and accident scenarios, especially those involving high speed or quick movement, fall squarely in this category. Another facet of this issue is complacency and the operator falling asleep, a risk that increases as the system becomes more effective: longer periods where the operator is not engaged permit them to become overconfident, distracted, and to lose situational awareness. The Universities of Leeds and Southampton have shown that humans require 3 to 45 seconds to regain enough situational awareness to execute the right maneuver the right way. Many have cautioned against this approach, including NASA; Missy Cummings, the head of robotics at Duke University; and even several automakers and autonomous vehicle makers, including Ford, Volvo, Waymo and Aurora. (This despite their using the process; the reasons will be explained below.) Since it is impossible to know when a scenario will fall inside this critical time window, when it will occur, or whether it can be avoided, the overall process must assume it can happen at any time and cannot be properly avoided. Because this cannot be resolved while safety driving, a licensed and qualified driver is effectively not driving the vehicle in situations a normal human driver, were they not safety driving, would have no trouble handling.
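To make the time-criticality argument concrete, here is a minimal sketch, with illustrative numbers and function names of my own (not from any actual AV stack), comparing time-to-collision against the 3 to 45 second situational-awareness window cited above:

```python
def time_to_collision_s(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def handover_feasible(ttc_s: float, regain_awareness_s: float = 3.0) -> bool:
    """A takeover is only plausible if the driver can regain situational
    awareness (3-45 s per the Leeds/Southampton findings) before impact.
    3.0 s is the most charitable, best-case end of that range."""
    return ttc_s > regain_awareness_s

# An obstacle 40 m ahead, closing at 20 m/s (~45 mph differential):
ttc = time_to_collision_s(40, 20)
print(ttc)                      # 2.0 seconds to impact
print(handover_feasible(ttc))   # False: below even the best-case 3 s window
```

Even granting the driver the most optimistic 3-second figure, common highway closing speeds leave no usable handover window at all.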

Accident Scenarios

For the systems to "learn" accident scenarios, the scenarios must be experienced over and over, some thousands of times each. For the threads to be tested in their entirety, the operator must not disengage. This means operators must put their lives at risk to test the systems, which will result in thousands of injuries and casualties. To date at least seven people have died as "safety drivers": six in Teslas and one in an Uber. How will the autonomous vehicle community react to the same situation? Or when the first child or family is injured or killed? Or when hundreds or thousands are harmed? Since this process requires the safety driver to refrain from taking proper action in accident scenarios, actions a human in control would be able to take in many of those scenarios, a licensed or qualified driver is not driving the vehicle in these circumstances.

Illegal Process Overall

Since a legal and qualified human acting as a safety driver is either prevented from driving properly or must purposefully avoid doing so in order to test these systems, the vehicles are not being operated by a licensed and legal driver.

Simulation Capabilities and Expectations

There is a widespread belief that it is not possible to adequately simulate or model enough facets of real-world development and design to replace the real world to any meaningful degree, that is, to create a complete "digital twin." Accompanying this is the associated belief that the simulation and modeling technology and approaches used in the autonomous vehicle industry are the most advanced of any industry. Together these assumptions reinforce the conclusion that the real world simply cannot be replaced, leaving public shadow and safety driving as the primary means to develop and test these systems. Given the significant real-time and model-fidelity gaps in the systems and products currently used in the industry, this belief is, unfortunately, well founded. If AV makers were to rely on these systems for most of their development, especially in complex and accident scenarios, the performance gap between the simulation and the real world could be enough to cause planning errors. Those errors would produce false confidence and cause the AV to decelerate, accelerate or maneuver improperly, which could cause an accident or make one worse than it needed to be.

Solution

The primary component of the resolution is to make the industry aware of and utilize DoD/aerospace simulation and modelling technology to build effective and complete digital twins, especially as they relate to physics. This technology remedies all the real-time and model fidelity issues I described above.

Given this, it is now possible to invert and normalize the due-diligence paradigm. Risk to human life can now be almost entirely mitigated. (In the rare cases where safety driving would still be required, it should be run as a structured event, not unlike a movie set.) This would make it possible to require manufacturers to prove that human beings are required as test subjects, regardless of whether the environment is a test track or the real world. Where simulation cannot be utilized, the developer would demonstrate the need for test-track use; where the test track is not adequate, the need to utilize the public domain would have to be proven. This approach would align us with the same approach many industries use today, including aerospace, DoD and even automotive.
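The inverted due-diligence gate described above can be sketched as a simple escalation policy. This is purely illustrative (the function and its arguments are hypothetical, not any regulator's actual rule):

```python
def approved_environment(sim_covers: bool, track_covers: bool) -> str:
    """Inverted due-diligence gate: each scenario defaults to simulation;
    escalation to a riskier environment requires proving the safer one
    cannot cover the scenario."""
    if sim_covers:
        return "simulation"
    if track_covers:
        return "test track"
    # Only here may human test subjects be justified, as a structured event.
    return "public road (must prove humans are required as test subjects)"

print(approved_environment(True, True))    # simulation
print(approved_environment(False, True))   # test track
print(approved_environment(False, False))  # public road (...)
```

The burden of proof sits with the manufacturer at each escalation step, which is the inversion of today's default of starting in the public domain.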

With respect to simulation being able to find long tails and edge or corner cases: while the number of scenarios is vast, most likely in the millions and possibly billions for perception testing, and the effort will clearly be significant, it is possible to reach a verifiable sigma level or factor better than a human with the right cross-domain approach and by utilizing data from a wide array of sources. These include shadow driving, HD mapping, manufacturer data, independent testing, insurance companies, research, historical data from various transportation domains, etc. Finally, the simulation and modeling performance, or level of fidelity, will have to be verified against its real-world master, a process not unlike the one the FAA currently performs using Part 60 and DERs.
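As a sketch of what verifying a model against its "real-world master" might look like, here is a simplified per-sample tolerance check, loosely in the spirit of FAA Part 60 objective tests. The traces, tolerance value and function names are hypothetical examples of mine, not actual Part 60 figures:

```python
def within_tolerance(sim: list[float], real: list[float], tol: float) -> bool:
    """True if every simulated sample stays within +/- tol of the
    real-world reference trace (a simplified tolerance band)."""
    if len(sim) != len(real):
        return False
    return all(abs(s - r) <= tol for s, r in zip(sim, real))

# Braking deceleration trace (m/s^2): real-world test vs. simulation,
# with an illustrative tolerance of 0.3 m/s^2.
real_trace = [0.0, -2.1, -4.0, -6.2, -6.1]
sim_trace  = [0.0, -2.0, -4.2, -6.0, -6.3]
print(within_tolerance(sim_trace, real_trace, 0.3))  # True: model passes
```

Real qualification work uses per-parameter tolerance tables and signed-off engineering judgment (the DER role), but the core idea is the same: the model is only trusted where it has been shown to track real-world data.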

Supporting Information

My articles

Autonomous Vehicles Need to Have Accidents to Develop this Technology

Proposal for Successfully Creating an Autonomous Ground or Air Vehicle

· https://medium.com/@imispgh/proposal-for-successfully-creating-an-autonomous-ground-or-air-vehicle-539bb10967b1


Using the Real World is better than Proper Simulation for AV Development — NONSENSE

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine — End Public Shadow/Safety Driving

Relevant Biography

Former systems engineer, engineering manager and program manager at Lockheed Martin, including work in aircraft simulation and as the software engineering manager for all of NORAD and the Aegis Weapon System.

Key Autonomous Vehicle Industry Participation

- Lead — SAE On-Road Autonomous Driving (ORAD) Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- SME — DIN/SAE International Alliance for Mobility Testing & Standardization group to create sensor simulation specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

- Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS efforts

My company is Dactle. We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based development and testing system with an end-state scenario matrix to address all of these issues. We can supply all of the scenarios, the scenario matrix tool, the data, the integrated simulation or any part of this system: a true all-model-type digital twin. If you would like to see a demo or discuss this further, please let me know.


Written by Michael DeKort

Non-Tribal Truth Seeker-IEEE Barus Ethics Award/9–11 Whistleblower-Aerospace/DoD Systems Engineer/Member SAE Autonomy and eVTOL development V&V & Simulation
