Proposal for Successfully Creating an Autonomous Ground or Air Vehicle

Michael DeKort
Oct 6, 2019

1. Introduction

The goal of this proposal is to ensure autonomous vehicles (ground or air) are developed and tested in a manner that is as effective, efficient and safe as possible.

2. Background

The method currently used by many autonomous vehicle (AV) manufacturers for the majority of development and testing is Public Shadow and Safety Driving (or Flying). Shadow Driving is the process by which a human operator maintains lateral (steering) and longitudinal (braking/acceleration) control of the vehicle for the purposes of gathering data and testing the intentions of the planning and execution (control) systems. This is done by logging, at certain iterations, not only what the autonomous system intends to do but also the states of the supporting perception and planning systems. (In some cases the driver may cede longitudinal, that is, braking and acceleration, control, but never lateral or steering control.) Safety Driving is where the operator cedes all control of the vehicle, including lateral or steering control, for purposes of testing the autonomous system's performance through scenario threads. As machine learning (ML) is extremely inefficient and does not yet infer nearly as well as humans, a vast number of scenarios need to be run hundreds if not thousands of times each for the system to "learn", whether through imitation or reinforcement learning. Development and testing are currently conducted in three environments: the real world, test tracks and simulation, with the majority conducted in the real world.

3. Problem Statement

Overall, the issues with Public Shadow and Safety Driving culminate in an untenable and needlessly harmful approach: one that cannot result in anything close to a legitimate autonomous system, and one that takes lives needlessly. The result is that the industry will do the exact opposite of what it intends. A true autonomous system will never be created, the relevant lives will not be saved and, worst of all, thousands of people will be harmed needlessly as the effort perpetually fails.

Three key areas of concern:

· Viability

· Liability/Safety

· Real world and Simulation Capabilities and Expectations

3.1 Viability

The viability issues involve time and cost. The effort and funding needed to drive and redrive, fly and refly, or stumble and restumble on the scenarios needed to reach a safety factor of 10X a human are not tenable. The RAND Corporation estimated it would take 500 billion miles to do this; Toyota estimated one trillion miles to create autonomous vehicles. (I am not aware of any studies in the air domain.) A conservative cost estimate for those trillion miles over a ten-year period is $300B. (These estimates are per manufacturer. I would be glad to provide my formula for this.) To keep the estimate conservative, engineering costs were not considered, only estimated costs for vehicles, fuel and drivers. Costs from legal liability were also not included.
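The author offers to share his formula separately; as a stand-in, here is a minimal back-of-envelope sketch with my own assumed inputs (driver wage, fuel and vehicle amortization are all hypothetical), showing how a figure on the order of $300B for one trillion miles can arise:

```python
# Hedged back-of-envelope sketch. Every input below is my assumption,
# not the author's formula; it only illustrates the scale involved.
AVG_SPEED_MPH = 40.0          # assumed fleet-average speed
DRIVER_COST_PER_HR = 8.0      # assumed loaded hourly cost of a safety driver
FUEL_COST_PER_MILE = 0.08     # assumed $/mile for fuel/energy
VEHICLE_COST_PER_MILE = 0.02  # assumed amortized vehicle cost, $/mile

def cost_per_mile() -> float:
    """Driver time converted to $/mile, plus fuel and vehicle costs."""
    return DRIVER_COST_PER_HR / AVG_SPEED_MPH + FUEL_COST_PER_MILE + VEHICLE_COST_PER_MILE

def total_cost(miles: float) -> float:
    return miles * cost_per_mile()

print(f"${cost_per_mile():.2f}/mile")                  # $0.30/mile
print(f"${total_cost(1e12) / 1e9:.0f}B for 1T miles")  # $300B for 1T miles
```

Even with deliberately low per-mile inputs and no engineering or liability costs, the total lands in the hundreds of billions per manufacturer.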

A key issue not originally included in those estimates involves growing concerns about Deep Learning. The approach produces errors when it cannot recognize the data it is detecting, often patterns or shadows. In some cases systems have frozen or reacted improperly; accidents and deaths have occurred. Regarding viability, even attempting to resolve this would take a level of effort and funding that is impossible to attain.

The problem stems from these systems detecting objects inside out versus outside in, a micro detection versus a macro detection approach, the latter being what humans do. Humans play the odds, which are massively in our favor, then use our processing time to rescan for any assumption being invalidated. We do not focus on small parts of things and then get lost or jammed up by strange patterns or colors, as in the stop sign test, the creation of false lane markers with two pieces of white tape, or the freezing of AVs by T-shirts with odd patterns and colors on them.

Example: when we approach a city street, we assume people are people based on outlines and location, meaning objects with those general shapes in that location are very likely to be people. We then assume they will not run out in front of us, and we rescan to ensure those very high probability assumptions are not being invalidated. We do not focus on color of skin, clothing etc.

While there are clearly reasons to do some hyper detection, that should not be the rule. Doing things this way will keep you from ever finishing, and you will harm people for no reason. For example, do you plan to scan all fabric patterns on the planet in various lighting and weather conditions? And group them together to ensure that when grouped tightly they are not an issue? Can you avail yourself of all shadows in all locations to ensure there isn't a new shadow that causes problems?

Shouldn't Dynamic Sense and Avoid with Dynamic Collision Avoidance be the primary approach, with Deep Learning where needed?
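The "macro first, rescan to invalidate" behavior described above can be sketched as a simple control loop. This is my own illustrative structure, not any real perception stack; the class names, labels and threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Track:
    outline: str      # coarse shape class, e.g. "person-like"
    location: str     # coarse context, e.g. "sidewalk"
    deviation: float  # how far the latest observation strays from the prediction

def macro_classify(track: Track) -> str:
    # Macro/outside-in step: commit to the high-probability hypothesis
    # from outline + location alone, without inspecting fine detail.
    if track.outline == "person-like" and track.location == "sidewalk":
        return "pedestrian, assumed to stay on sidewalk"
    return "unknown, treat conservatively"

def needs_detailed_look(track: Track, threshold: float = 0.5) -> bool:
    # Spend the saved processing time checking whether the assumption
    # has been invalidated; only then escalate to hyper detection.
    return track.deviation > threshold

t = Track(outline="person-like", location="sidewalk", deviation=0.1)
print(macro_classify(t))        # pedestrian, assumed to stay on sidewalk
print(needs_detailed_look(t))   # False
```

The point of the sketch is the ordering: a cheap, high-odds classification first, with detailed analysis reserved for the rare case where the rescan shows the assumption breaking down.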

3.2 Liability/Safety

The liability/safety issues create a situation where human lives are put at risk as the primary means to develop and test these systems. In effect, due diligence is redefined as not merely tolerating but actually requiring harm to humans. These issues involve two primary areas of Safety Driving:

Handover/Fallback

o As these systems are being developed, the human Safety Driver often must take over the vehicle, specifically lateral or steering control, with little or no warning when the systems fail. While training and monitoring and alarm systems can make this process more effective and safer, no approach can resolve time-critical scenarios, and accident scenarios, especially those involving high speed or quick movement, fall squarely in this category. Another facet of this issue is complacency and the operator falling asleep, a risk that increases as the system becomes more effective. That is due to longer periods where the operator is not utilized and is permitted to become overly confident, distracted and lose situational awareness. The Universities of Leeds and Southampton have shown that humans require 3–45 seconds to regain enough situational awareness to effect the right maneuver the right way. Many have cautioned against this approach, including NASA, Missy Cummings of Duke University, and even several automakers and autonomous vehicle makers including Ford, Volvo, Waymo and Aurora. (This despite their using the process. The reasons for this will be explained below.)
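To put the cited 3–45 second window in perspective, a quick calculation shows how far a vehicle travels before the driver has regained situational awareness. The 70 mph highway speed is my assumption, not a figure from the studies:

```python
def takeover_distance_m(speed_mph: float, seconds: float) -> float:
    """Distance traveled, in meters, during a handover delay."""
    meters_per_second = speed_mph * 1609.344 / 3600.0
    return meters_per_second * seconds

# Assuming 70 mph highway travel (my assumption):
print(round(takeover_distance_m(70, 3)))   # 94  -> ~94 m in the 3 s best case
print(round(takeover_distance_m(70, 45)))  # 1408 -> ~1.4 km in the 45 s worst case
```

Even the best-case takeover consumes roughly a football field of roadway, which is why time-critical accident scenarios cannot be resolved by a human fallback.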

Note — Issues mentioned above regarding Deep Learning have caused major safety issues including fatalities. Errors can result in unsafe vehicle operation or handover conditions.

Accident Scenarios

o For the systems to "learn" accident scenarios, the scenarios must be experienced over and over, some thousands of times each. For the threads to be tested in their entirety, the operator must not disengage. This means operators must put their lives at risk to test the systems, which will result in thousands of injuries and casualties.

o To date at least seven people have died due to "Safety Driving": six in Teslas and one in an Uber. How will the autonomous communities react to the same situation? Or when the first child or family is injured or killed? Or when hundreds or thousands are harmed?

3.3 Real world and Simulation Capabilities and Expectations

There are two main beliefs that dictate that the vast majority of development and testing be accomplished in the real world.

Long tails, corner or edge cases can only be found in the real world

o The belief here is that humans cannot think of enough scenarios, or variations of scenarios, to equal what can be found in the real world. The problem with this assumption is that it ignores time and cost. Setting aside that the possibilities are infinite, it is simply not possible to spend enough time and money to stumble on all the scenarios deemed necessary to assure measurable due diligence, let alone to find their variations and repeat each of them even once, never mind the hundreds or thousands of times required to train the system.
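The scale argument can be made concrete with a toy combinatorial count. All the factor counts below are invented for illustration; the point is only that scenario variations multiply:

```python
# Hypothetical factor counts, chosen only to show how quickly
# scenario variations multiply; none come from the article.
base_scenarios = 5_000   # distinct road geometries / maneuvers
weather = 10             # rain, fog, snow, glare, ...
lighting = 6             # day, dusk, night, ...
actor_behaviors = 20     # pedestrian/vehicle behavior variants
repeats = 1_000          # training repetitions per variation (per the text)

variations = base_scenarios * weather * lighting * actor_behaviors
print(f"{variations:,} variations")            # 6,000,000 variations
print(f"{variations * repeats:,} total runs")  # 6,000,000,000 total runs
```

Stumbling upon each of those runs in live traffic, rather than enumerating and replaying them deliberately, is what makes the real-world-only approach untenable.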

Simulation Capabilities and Expectations

o The belief here is that it is not possible to adequately simulate or model enough facets of real-world development and design to replace the real world to any meaningful degree, that is, to create a complete "digital twin". Accompanying this is the associated belief that the simulation and modeling technology and approaches used in the autonomous vehicle industry are the most advanced of any industry. That assumption reinforces the belief that it is simply not possible to replace the real world, leaving Public Shadow and Safety Driving as the primary means to develop and test these systems. Given the significant real-time and model fidelity gaps in the systems and products being used in the industry, this belief is, unfortunately, well founded. If AV makers were to try to utilize these systems for most of their development, especially in complex and accident scenarios, the performance gap between the simulation and the real world could be enough to cause planning errors. Those errors would result in false confidence and in the AV decelerating, accelerating or maneuvering improperly, which could cause an accident or make one worse than it need be.


4. Solution

The primary component of the resolution is to make the industry aware of, and have it utilize, DoD/aerospace simulation and modeling technology to build effective and complete digital twins, especially as they relate to physics. This technology remedies all of the real-time and model fidelity issues described above.

Given this, it is now possible to invert and normalize the due diligence paradigm. Risk to human life can now be almost entirely mitigated. (In the rare cases where Safety Driving is required, it should be run as a structured event, not unlike a movie set.) This would make it possible to require manufacturers to prove that human beings are required as test subjects, regardless of whether the environment is a test track or the real world. Where simulation cannot be utilized, the developer would demonstrate the need for test track use. Where test track use is not adequate, the need to utilize the public domain would be proven. This approach would align us with the approach many industries use today, including aerospace, DoD and even automotive.

With respect to simulation being able to find long tails and edge or corner cases: while the number of scenarios is vast, most likely in the millions and possibly billions for perception testing, and the effort will clearly be significant, it is possible to get to a verifiable sigma level, or factor better than a human, with the right cross-domain approach and by utilizing data from a wide array of sources. Those include Shadow Driving, HD mapping, manufacturer data, independent testing, insurance companies, research and historical data from various transportation domains, etc. Additionally, the simulation and modeling performance, or level of fidelity, will have to be verified against its real-world master. The process involved would not be unlike what the FAA currently performs using Part 60 and DERs. Finally, there are the issues with Deep Learning described above. The right approach would seem to be Dynamic Sense and Avoid with Dynamic Collision Avoidance, with targeted Deep Learning.
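The "verifiable sigma level or factor better than a human" claim can be framed statistically. As a hedged sketch (the human fatality rate and confidence level are my assumed inputs, and this is a standard zero-failure exposure bound, not the author's method), one can compute the failure-free mileage needed to demonstrate a target failure rate:

```python
import math

def miles_to_demonstrate(target_rate_per_mile: float, confidence: float = 0.95) -> float:
    """Miles of failure-free exposure needed so that, under a Poisson model,
    the true failure rate is bounded below target_rate_per_mile at the
    given confidence level."""
    return -math.log(1.0 - confidence) / target_rate_per_mile

human_fatality_rate = 1 / 1e8        # assumed ~1 fatality per 100M miles
target = human_fatality_rate / 10    # the 10X-safer-than-human goal

miles = miles_to_demonstrate(target)
print(f"{miles / 1e9:.1f}B failure-free miles")  # ~3.0B failure-free miles
```

Demonstration alone, never mind training, requires billions of failure-free miles, which is exactly why it must be driven in verified simulation rather than stumbled upon in public.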

Note — While the paradigm shift is clearly in motion, industry participants, not only the manufacturers and developers but consultants as well, often push back on what has been stated here. This is due to an industry-wide lack of exposure and of associated domain, negative testing and systems engineering experience, as well as political and personal considerations. As witnessed several times over history, including the events that led to the formation of the FAA, people are reticent to admit their fundamental engineering approaches are flawed. My suggestion here is to have full and open discussions as well as to provide demonstrable proof of the approach I have stated here.

Supporting Information

My articles

Autonomous Vehicles Need to Have Accidents to Develop this Technology

Using the Real World is better than Proper Simulation for AV Development — NONSENSE

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine — End Public Shadow/Safety Driving

Relevant Biography

Former systems engineer, engineering manager and program manager for Lockheed Martin, with roles including aircraft simulation and software engineering manager for all of NORAD and the Aegis Weapon System.

Key Autonomous Vehicle Industry Participation

- Lead — SAE On-Road Autonomous Driving (ORAD) Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- SME — DIN/SAE International Alliance for Mobility Testing & Standardization group to create sensor simulation specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

- Presented the IEEE Barus Ethics Award for Post 9/11 DoD/DHS Efforts

My company is Dactle — We are building an aerospace/DoD/FAA level D, full L4/5 simulation-based development and testing system with an end-state scenario matrix to address all of these issues. We can supply all of the scenarios, the scenario matrix tool, the data, the integrated simulation or any part of this system. A true all model type digital twin. If someone would like to see a demo or discuss this further please let me know.
