The Dangers of Inferior Simulation for Autonomous Vehicles

Michael DeKort
Sep 7, 2017

First, it is stellar that folks like Waymo and Uber say they are now using more simulation, that the University of Michigan did a study saying simulation can be used instead of public shadow driving, and that MCity has said that using simulation with data and a Monte Carlo process can cull the work down by over 99%.

I have posted quite a lot on how necessary simulation is for autonomous vehicles. The fact of the matter is that autonomous Levels 4 and 5 will never be reached without aerospace-level simulation instead of public shadow driving for AI, and Level 2+ and L3 should not be used at all, because public shadow driving is dangerous and far too expensive in time and money to be viable. (Some also call it safety driving; since it is not safe, I don't use that term. For those unfamiliar with the terms, it means letting the vehicle drive while you monitor it, with control passing back and forth for major capabilities like steering.)

Thousands of accidents, injuries, and casualties will occur when these companies move from benign and easy scenarios to complex, dangerous, and accident scenarios. And the cost in time and funding is untenable: one trillion public shadow driving miles would need to be driven, at a cost of over $300B. The industry is under several very false beliefs regarding public shadow driving. Most believe it is the best or only way to get this done, that any injuries or deaths that occur are unavoidable and serve the longer-term greater good, and that simulation cannot do what is needed. All of that is very, very, very wrong.

Recently Waymo announced it moved to simulation, and then Uber said it is using more simulation. While this is outstanding news, it could actually be just the opposite. The reason I say that is that these folks say they are creating their own simulation. Odds are that is a huge mistake, for several reasons. They will either create something that seems great, or good enough, but really isn't, leading them to a false sense of security and real-world tragedy down the line; or they will spend needless amounts of time and money recreating wheels that exist, or at least major portions of wheels that exist.

This is happening because of the same root cause that made people think AI and public shadow driving were the best or only way to build this technology. While intelligent, hard-working, and dedicated, most of the people in this industry come from Twitter and the like: commercial IT, not aerospace. They have very little domain engineering experience, especially in sensors, electro-mechanical systems, actual real-time architectures, exception handling, or CMMI Level 5 aerospace systems engineering best practices. They and the public believe they make cool stuff and tons of money and will make quick work of it, especially with the help of AI. The problem, and the fact of the matter, is that they are in way over their heads and AI cannot cover for them enough. Instead of realizing this and seeking help from industries that have been there and done most of what they are trying to do, they figure they have to build it themselves because it doesn't exist or can't be as good as what they can do.

Let me give you several examples of this:

· X-Plane — X-Plane is the world's most popular and technically advanced flight-simulation software for consumers. Even pilots love it. Here's the thing though: it is nowhere near accurate and real-time enough to pass the FAA's Part 60 simulation testing to be rated for most pilot training. It can only secure certification at Level 5, which covers instrument familiarization only. It cannot support visual or motion systems, its architecture cannot pass the FAA's real-time system latency criteria, and its models of objects and the environment are not precise enough. Most people are unaware of this.

· Grand Theft Auto — Folks think so highly of this game that the University of Michigan used it in a study to prove simulation could replace public shadow driving. They didn't use any products from the AV industry, nor from aerospace; just Grand Theft Auto. In GTA's (Take-Two's) defense, it's a game. While it could be helpful on some levels, it comes up far too short in configurability, entity and environment precision, real-time performance, 360-degree awareness, integration capabilities, and support for most sensors to be used for most AI, testing, or engineering. Most people are unaware of this.

· Sensors and Fusion — DoD and aerospace have been fusing sensors for decades: various radars (including 3D radar, which is far superior to LiDAR), GPS, FLIR, video-based systems, and others. (Not LiDAR so much; that should be a clue.) They have long used Kalman filters and other priority and probability filters to ensure the best accuracy and redundancy. Guided missiles use video recognition. While this is not a one-to-one correlation, the AV folks are spending time and money recreating wheels and making unnecessary mistakes.
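To make the fusion point concrete, here is a minimal sketch of the idea behind Kalman-filter sensor fusion: each measurement is weighted by the inverse of its uncertainty, so a precise radar pulls the estimate harder than a noisy video-ranging system. The sensor noise figures and the tracking setup are my illustrative assumptions, not anything from a specific aerospace or AV system.

```python
import random

def kalman_update(est, var, meas, meas_var):
    """One scalar Kalman update: blend the current estimate with a
    measurement, weighting each by the inverse of its variance."""
    gain = var / (var + meas_var)          # Kalman gain in [0, 1]
    new_est = est + gain * (meas - est)    # corrected estimate
    new_var = (1.0 - gain) * var           # uncertainty always shrinks
    return new_est, new_var

# Fuse readings from two hypothetical sensors tracking range to an object.
random.seed(0)
true_range = 50.0                          # metres (ground truth)
est, var = 0.0, 1000.0                     # start from a vague prior
for _ in range(100):
    radar = random.gauss(true_range, 2.0)  # radar: 2 m std dev
    video = random.gauss(true_range, 5.0)  # video ranging: 5 m std dev
    est, var = kalman_update(est, var, radar, 2.0 ** 2)
    est, var = kalman_update(est, var, video, 5.0 ** 2)

print(round(est, 1))                       # converges near 50.0
```

The same inverse-variance weighting is why a fused estimate ends up more accurate than either sensor alone, which is the redundancy argument the paragraph above is making.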

· Public Shadow Driving/L3 — NASA has known, and has been trying to tell the AV folks, that it is not possible to reliably and consistently create alarm and notification systems that can nudge or force people to pay attention properly and consistently. Ford realized its professional drivers fall asleep and it couldn't stop it. Clemson and the University of Southampton determined that drivers need 7 to 24 seconds to regain enough situational awareness to be effective after being distracted; without that time, people overreact or make the wrong decisions. Air tragedies have occurred because of this. Beyond the safety issue is the cost in time and money. RAND and Toyota say it will take ONE TRILLION miles to get to L4; RAND stated the obvious, that this is impossible, and my very conservative math says it would cost over $300B. Now factor in the brick wall of doubt and regulation that the public, governments, lawyers, and the press will put up when they realize they were misled about the thousands of avoidable casualties coming when the AV folks move from exaggerated benign scenarios to complex, dangerous, and crash scenarios, and it is easy to see this approach cannot result in autonomy.
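The arithmetic behind the $300B figure is easy to reproduce, assuming a per-mile operating cost for a staffed test vehicle. The $0.30/mile rate is my illustrative assumption chosen to be consistent with the article's totals, not a figure from the original text.

```python
# Back-of-the-envelope check of the cost claim.
miles_needed = 1_000_000_000_000   # RAND/Toyota: ~one trillion miles to L4
cost_per_mile = 0.30               # assumed driver + vehicle + ops cost, $/mile
total = miles_needed * cost_per_mile

print(f"${total / 1e9:.0f}B")      # $300B
```

Even if the per-mile cost were several times lower, the total would still be in the tens of billions, which is the point being made about the economics of public shadow driving.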

As you can see, there are several key areas where the effort has been underestimated, and in total the underestimate is extremely significant. In spite of this, these folks keep doubling and tripling down, figuring they will just "innovate" away their last bad assumption.

Solution

My recommendation is for this industry to stop, take a breath, put ego aside, do its due diligence, fight the urge to stay silent, and do the right thing. Find out what has been done and what the options are to get where you want to go. Instead of assuming no one has done it, or that you can do it faster and better, flip it and assume the opposite until you know otherwise. Be your own devil's advocate and make an objective and educated make/buy decision; odds are it will be some combination of make and buy. My other suggestion is to look at how aerospace handles best engineering practices and massive, complex system designs. One thing I can tell you from working in both worlds is that commercial IT's practices and organizational and project structures are nowhere near up to what is needed here: no real systems engineers, stovepipes, Agile bottom-up approaches, no exception handling, use cases instead of diagrams, and so on. Your world is usually built for small, quick, and not very complex products. This is exactly the opposite.

Finally, there is what I call the Scenario Matrix. That is the most important part of this project: it is the scope, or requirements set. (If you think you can build up from the bottom using Agile, you are nuts. You will take far too long due to missing parallel efforts, and you will wind up with many time- and money-wasting design changes and retesting.) This is a massive effort that needs to be done in parallel, probably with an object-oriented approach: a huge WBS/PBS effort. The core scenarios and their perturbations are vast. You are NOT going to drive around and stumble on all, or even most, of them, and trying is dumb (one trillion miles and $300B; you would have to drive all over the world, in various conditions, see every environment, etc.). If a core scenario is not in the matrix, it will probably be missed and result in a problem later.
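To show how quickly the matrix grows, here is a hedged sketch of scenario perturbation as a cross-product of dimensions. The dimensions and their values are my illustrative assumptions, not the author's actual matrix; a real one would have far more of both.

```python
from itertools import product

# Hypothetical scenario dimensions for an AV test matrix.
dimensions = {
    "maneuver": ["unprotected left", "merge", "pedestrian crossing"],
    "weather":  ["clear", "rain", "snow", "fog"],
    "lighting": ["day", "dusk", "night"],
    "road":     ["urban", "rural", "highway"],
    "actor":    ["none", "cyclist", "jaywalker", "emergency vehicle"],
}

# Every combination is one cell of the Scenario Matrix.
scenarios = list(product(*dimensions.values()))

print(len(scenarios))  # 3 * 4 * 3 * 3 * 4 = 432 cells from five small dimensions
```

Five small dimensions already yield 432 distinct scenarios; add vehicle speeds, sensor-failure modes, and road-surface states and the count explodes, which is why enumerating the matrix up front beats hoping to stumble on the cells by driving around.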

For more detail on that, please see my articles:

Who will get to #Autonomous Level 5 First and Why

· https://www.linkedin.com/pulse/who-get-autonomous-level-5-first-why-michael-dekort

Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI

· https://www.linkedin.com/pulse/autonomous-levels-4-5-never-reached-without-michael-dekort

In closing, I want to explain my intervention-like approach. I understand the immediate reaction: who does that guy think he is? We're not idiots. Aerospace/DoD don't know it all; they make mistakes all the time. (Keep in mind it was NASA who saved SpaceX. When they first reviewed SpaceX's code, they failed it for being poorly tested and not handling exceptions well. They saved Elon from himself.) I worked in both worlds for over 10 years, and most of what I said is very easy to see if you have done the same. In addition, large groups of like-minded folks rarely change their minds unless forced to, and I would prefer that the first needlessly killed child or family not be what forces it. So I take the direct, head-on approach. I am trying to shake some shoulders. I have talked to enough people to know that most folks are starting to realize the things I am saying are true. (Look at Waymo and Chris Urmson changing their approaches. That is a HUGE sign. Do not dismiss it.) But it is very hard to shout above the echo chamber and not pay a price. My hope is to help take some of the burden off folks to speak up. (Tesla engineers have stated they left Tesla because they didn't want to kill people. They are the exception.) To whatever degree what I am saying is true, if folks want help, I am here. What is my selfish vested interest? First, Google me and Comey, Deepwater, or the IEEE Barus Award. Some people do the right thing because it is the right thing. Would I like to make a living at some point in this industry? Yes.

--

Michael DeKort

Non-Tribal Truth Seeker-IEEE Barus Ethics Award/9–11 Whistleblower-Aerospace/DoD Systems Engineer/Member SAE Autonomy and eVTOL development V&V & Simulation