Using the Real World Is Better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

Michael DeKort
Apr 22, 2019

I just watched Tesla's live Autonomy Investor Day. Elon stated that you must drive in the real world to get real-world data, especially actual edge cases and long tails. They also stated that you cannot get those in simulation. That is an extremely misleading answer on several fronts.

First, no one, including me, is suggesting that real-world driving stop. Of course it should be used. It is a great way to get data, especially odd scenarios. The issues are how much you rely on that driving, and separating system training through the use of safety drivers from data gathering. While real-world data gathering will find scenarios you would miss using expert imagination, you will get far more scenarios from experts using existing data, their imagination, and software. Far more combinations can be created than can ever be experienced. BUT the real answer is to do BOTH. Making this mutually exclusive is nonsense. (Simple examples: with sensors, especially cameras, object composition matters. Patterns, colors, and sizes in combination with location, weather, lighting, etc. Clothing catalogs can be scraped to create combinations to test scenarios you will never get close to stumbling on by driving around. You will only ever experience a small percentage of those variations, and now try factoring in different times of day, weather, etc. Here is another: take any long tail or edge case. How are you going to train and test on variations of that?)
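To make the combinatorics concrete, here is a minimal sketch. The parameter names and values are illustrative assumptions, not an actual scenario matrix; the point is how quickly even a handful of dimensions multiplies out:

```python
# Illustrative sketch of the combinatorial explosion described above.
# All parameter values here are hypothetical examples.
from itertools import islice, product

pedestrian_clothing = {
    "pattern": ["solid", "striped", "plaid", "camouflage", "high-vis"],
    "color":   ["black", "white", "red", "green", "blue", "gray"],
    "size":    ["child", "small adult", "large adult"],
}
environment = {
    "weather":  ["clear", "rain", "fog", "snow"],
    "lighting": ["dawn", "midday", "dusk", "night", "oncoming glare"],
}

dimensions = {**pedestrian_clothing, **environment}
total = 1
for values in dimensions.values():
    total *= len(values)
print(f"{total} variants from just {len(dimensions)} parameters")
# 5 * 6 * 3 * 4 * 5 = 1800 variants of a single pedestrian scenario.
# Add road type, vehicle behavior, sensor degradation, etc. and the
# count explodes far beyond what a fleet will stumble on by driving.

# Enumerate a few variants for inspection:
for combo in islice(product(*dimensions.values()), 3):
    print(dict(zip(dimensions.keys(), combo)))
```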

Regarding system training: you must experience millions of scenarios, including thousands of accident scenarios, thousands of times each to train and test the system. The vast majority of those scenarios will never be experienced in the real world, let alone repeated once, let alone thousands of times. When those thousands of accident scenarios are run thousands of times each by EACH driverless vehicle maker, they will kill thousands of people. Then there is the issue of handover. No matter what monitoring and notification system you use, you cannot provide enough time to regain the right level of situational awareness to do the right thing the right way in most complex and critical scenarios. Most of these are accident cases. That process will kill thousands of people. There is also the massive number of scenarios that must be run and rerun thousands of times. RAND says it takes the equivalent of 500 billion miles to get 10X better than a human. Toyota said a trillion miles. A very conservative calculation of the cost to reach a trillion miles is $300B. The only way to resolve this is to use simulation for 99.9% of it, or you will never come close to finishing, will go bankrupt, and will kill many people endlessly trying. (Tesla's fleet driving will never get within 10% of this.)
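A quick back-of-envelope sketch shows the scale. The per-mile cost and fleet figures below are illustrative assumptions chosen to match the order of magnitude, not precise inputs:

```python
# Back-of-envelope check of the trillion-mile figure above.
# The per-mile cost and fleet numbers are illustrative assumptions.
TARGET_MILES = 1_000_000_000_000   # Toyota's trillion-mile figure
COST_PER_MILE = 0.30               # assumed all-in cost per mile: vehicle,
                                   # safety driver, sensors, data handling
print(f"Total cost: ${TARGET_MILES * COST_PER_MILE / 1e9:,.0f}B")  # $300B

# Calendar time at fleet scale (assumed numbers):
FLEET_SIZE = 1_000_000             # vehicles gathering data
MILES_PER_VEHICLE_PER_YEAR = 15_000
years = TARGET_MILES / (FLEET_SIZE * MILES_PER_VEHICLE_PER_YEAR)
print(f"Years with a {FLEET_SIZE:,}-vehicle fleet: {years:.0f}")   # ~67 years
```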

Finally, the simulation being used must use aerospace/DoD simulation technology, not the gaming and non-deterministic technology being used by most simulation vendors now. Those systems have significant real-time and model fidelity issues (the models being vehicles, tires, roads, sensors, the environment, etc.). This comes into play when the planning system assumes the vehicles, tires, roads, sensors, etc. have performance capabilities outside of the models being used, most of which are generic. That will lead to false confidence and real-world tragedies, which in turn will lead to massive rework. I guarantee you that those people in this industry who say simulation cannot do this, or that aerospace/DoD simulation technology is not better than the gaming systems, have never seen a DoD simulated urban war game. The scenarios run there are more complex: they not only have to cover the same scenarios as the commercial AV makers, they have to deal with vehicles going off the road on purpose and entities shooting at each other. (The simulation would be informed and validated by real-world data, including from shadow driving.) More on this in my articles below.
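To illustrate the real-time determinism point, here is a minimal sketch contrasting a fixed-step physics loop with a variable, render-driven step of the kind common in game engines. The braking scenario and step sizes are illustrative assumptions:

```python
# Why determinism matters: a fixed-step loop reproduces the same trajectory
# on every run; a render-driven variable step does not. Values illustrative.
import random

def simulate(dt_steps):
    """Integrate braking from 30 m/s at -8 m/s^2; return distance traveled (m)."""
    v, x = 30.0, 0.0
    for dt in dt_steps:
        v = max(0.0, v - 8.0 * dt)   # decelerate, clamp at standstill
        x += v * dt
    return x

# Deterministic 1 kHz physics step: identical result on every run.
fixed = [0.001] * 5000

# Render-driven step (~30-120 fps): frame times vary, so the "same"
# scenario produces a slightly different trajectory each run.
variable = [random.uniform(0.008, 0.033) for _ in range(400)]

print(f"fixed-step stopping distance:    {simulate(fixed):.3f} m")
print(f"variable-step stopping distance: {simulate(variable):.3f} m")
```

Centimeter-level drift per run may look harmless, but it means a failure found in one run may not reproduce in the next, and a regression test cannot distinguish a planner change from engine noise.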

More information here

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

SAE Autonomous Vehicle Engineering Magazine: End Public Shadow Driving

The Hype of Geofencing for Autonomous Vehicles

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, as the software engineering manager for all of NORAD, on the Aegis Weapon System, and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the planning system believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.

