Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
It is gratifying to see the autonomous vehicle simulation industry moving toward what I have been saying passionately for several years. It is now becoming common to hear the message that proper simulation is the only way to mitigate the debilitating issues with public shadow/safety driving, so we can get to a legitimate L4 in our lifetime without going bankrupt or harming people for no reason. This simulation needs to be a “digital twin” of the real world, especially as it pertains to the physics. Some companies have excellent visuals or vehicle models, and others have some associated physics for tire/road interactions and other finite details. The problem is that not one company is offering anything close to a complete and proper system, and no simple combination of current commercial products gets you close. Worst of all, the automotive industry assumes the technology in place is the best that can currently be delivered. That, in turn, propagates the belief that creating a true real-world digital twin isn’t possible. This is simply not true IF you use the right technology.
What is the Best Approach?
Proper simulation (Software-in-the-Loop, or SIL) should and can replace 99.9% of public shadow and safety driving, because public shadow and safety driving is untenable and needlessly dangerous. Shadow driving does provide critical data and intention testing, as well as key data to inform and validate the simulation.
The problem with shadow driving is not a safety issue but one of time and cost. You simply cannot use it for most real-world development because you cannot drive and redrive, stumble and re-stumble on enough scenarios, miles, etc. to get close to finishing in a reasonable time frame. (As for the belief that not enough can be done in simulation, or that so much more can be done using the real world: this is a red herring. Perfection, or eternity, should not be the enemy of 6.7 sigma. The point is that with proper simulation and the right scenario set, we can reach a point where these systems are provably better than a human by some factor, X times a human or a sigma level. Again, the real world being “real” has limited value if you cannot spend the time or money to avail yourself of that eternal data set.)
Regarding safety driving, I believe it should be eliminated where it involves the public or non-protected, non-professional drivers. This means that when it is used, in the real world or on test tracks, it is because simulation has proven unable to do what needs to be done. And when shadow or safety driving is used, it is made safe, not unlike a movie set. This, by default, will eliminate the need for safety drivers, their passengers, and the public to be human guinea pigs and kamikaze drivers in learning accident scenarios that end in damage or death. The practice of experiencing thousands of accident scenarios, thousands of times over, and expecting the safety driver not to disengage but to have the accident, thereby causing thousands of injuries and deaths, is needless, unethical, and immoral. (For more on why public shadow driving is untenable, see my articles below.)
Why does using Proper Simulation Matter?
This is an issue of matching levels of fidelity to use cases or scenarios. When the right level of fidelity is not used, there will be development flaws. When complex scenarios are run, especially where the performance curves or attributes of any model are exceeded, the Planning system will have a flawed understanding of some aspect or aspects of the real world. The result will be a flawed plan. In far too many scenarios, the outcome will be real-world accidents that are created rather than avoided, or that are worse than they need to be. This will usually be caused by some combination of braking, acceleration, or maneuvering being improperly timed, or by the velocity being incorrect.
The simulation systems being utilized today are adequate if you are working on general or non-complex real-world development or testing. However, in order to run complex scenarios, the depth, breadth, and fidelity of the simulation are critical. The Autonomous Vehicle (AV) makers will need to keep track of every model’s capabilities for every scenario to make sure none is exceeded. If AV makers do not do this, especially in complex scenarios, the end result will be false confidence in the AV system. Keep in mind that machine learning does not infer well, not nearly as well as a human; we have a lifetime of learning, especially for object detection. Additionally, perception systems right now are far too prone to error. The famous stop-sign-tape test is an example. This means that development and testing must be extremely location-, environment-, and object-specific. You could test thousands of scenarios for a common road pattern in one place, encounter just a couple of object differences elsewhere, like clothing patterns, and wind up with errors if you assume you do not have to repeat most of that testing in most other locations. This raises the scenario variations into the millions.
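To illustrate the kind of bookkeeping this implies, here is a minimal sketch in Python of checking a scenario against a model’s validated performance envelope. The envelope fields, limits, and scenario values are hypothetical placeholders for illustration, not a prescription for how any particular tool does it:

```python
# Minimal sketch: flag scenarios that exceed a model's validated envelope.
# All names and limits here are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class ModelEnvelope:
    """Range of conditions over which a model's fidelity has been validated."""
    name: str
    max_speed_mps: float            # highest speed the model was validated at
    max_lateral_accel_mps2: float   # highest lateral acceleration validated
    min_friction: float             # lowest road friction coefficient validated

def envelope_violations(envelope: ModelEnvelope, scenario: dict) -> list:
    """Return the reasons this scenario exceeds the validated envelope."""
    reasons = []
    if scenario["speed_mps"] > envelope.max_speed_mps:
        reasons.append(f"{envelope.name}: speed exceeds validated range")
    if scenario["lateral_accel_mps2"] > envelope.max_lateral_accel_mps2:
        reasons.append(f"{envelope.name}: lateral acceleration exceeds validated range")
    if scenario["friction"] < envelope.min_friction:
        reasons.append(f"{envelope.name}: friction below validated range")
    return reasons

# Usage: run every scenario against every model before trusting its results.
tire_model = ModelEnvelope("tire_model_v3", max_speed_mps=40.0,
                           max_lateral_accel_mps2=6.0, min_friction=0.3)
scenario = {"speed_mps": 33.0, "lateral_accel_mps2": 7.5, "friction": 0.2}
for reason in envelope_violations(tire_model, scenario):
    print("WARNING:", reason)
```

Any scenario that triggers a warning is one where the simulation results should not be trusted without a higher-fidelity model.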
What is Proper Simulation?
Real Time — This is the ability of the system to process data and math models fast enough, and in the right order, so that no tasks of any significance are missed or late compared to the real world. This needs to extend to the most complex and math-intensive scenarios. And by math intensive, I mean every model must be mathematically precise as well as able to function properly; there could be thousands running at a time. This is where gaming architectures, which are used by many simulation companies, have significant flaws, even with the best of computers. The best way to architect these systems is to build a deterministic, time-based, and structured architecture where every task or model can be run at any rate and in any order. Most systems out there are non-deterministic and just let things run. Their makers will say modern computers are so fast that the structure I described is not needed. I believe this is wrong. (At an event I attended, Jose De Oliveira, Unity Technologies’ engineering manager for autonomy, spoke after me and confirmed my point of view.) The systems that are not deterministic run everything at one time at one specified rate. You can see the accommodations made for these issues in gaming: the play box is less than 60 square miles, and physics or math is avoided either by eliminating it, as when you can walk through trees, or by dumbing it down as much as possible.
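To make the architectural point concrete, here is a minimal sketch of a deterministic, time-stepped scheduler in which every model runs at its own fixed rate, in a fixed order, against a simulation clock rather than the wall clock. The model names and rates are hypothetical, and a real federated architecture is far more elaborate than this:

```python
# Minimal sketch of a deterministic, time-stepped scheduler: every model runs
# at its own fixed rate, in a fixed order, against a single simulation clock.
# Model names and rates are hypothetical, for illustration only.

class Model:
    def __init__(self, name, rate_hz):
        self.name = name
        self.period = 1.0 / rate_hz
        self.next_due = 0.0

    def step(self, sim_time):
        # A real model would advance its physics/math here.
        print(f"{sim_time:.4f}s  stepping {self.name}")

def run(models, base_step, end_time):
    """Advance simulation time in fixed increments; each model steps exactly
    when its period elapses, in the declared order, so every run of the same
    scenario produces the same sequence of updates (determinism)."""
    sim_time = 0.0
    while sim_time <= end_time:
        for m in models:                      # fixed, explicit ordering
            if sim_time + 1e-9 >= m.next_due:
                m.step(sim_time)
                m.next_due += m.period
        sim_time += base_step                 # simulation clock, not wall clock

# Example: tire model at 1000 Hz, radar at 100 Hz, camera at 30 Hz.
run([Model("tires", 1000), Model("radar", 100), Model("camera", 30)],
    base_step=0.001, end_time=0.01)
```

Because time advances by fixed increments and the model order is explicit, the sequence of updates is identical on every run and independent of how fast the host computer happens to be, which is exactly what a non-deterministic, run-as-fast-as-you-can loop cannot guarantee.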
Model Types — The critical model types are the ego-vehicle, tires, roads, sensors, fixed and other moving objects, and the environment. Each of these needs to be virtually exactly like the target it is simulating. (Geo-specific versus geo-typical, for example; the best technology can get this down to under five cm of positional accuracy.) This includes not just visual aspects, where applicable, but physical capabilities. You need to simulate or model the exact vehicle, sensor, object, etc., not something like it or a reasonable facsimile. As I said before, this is important because machine learning does not infer well, which means the same road patterns will have to be developed and tested in a wide array of locations, at different times of day, in different weather, with different signage, etc. All of this must be modeled in extreme detail, both visually and physically. That means modeling how an active sensor works in the real world, not simply showing a visual representation using ray tracing.
Take radar, for example. You must simulate not only the ego radar but how the world and other systems interact with it. Every other radar, or any system emitting RF that would cause clutter or interference, must be properly modeled, as must the way every radar’s signal is affected by its environment. The reason is that the ego model’s received signal must accurately reflect the combination of all of these factors at any physical location in the environment or scenario. This is also where I would like to address vehicle models. The Original Equipment Manufacturers (OEMs), simulation, and simulator companies have been creating detailed vehicle models for some time. However, I would caution against assuming they are precise enough in all scenarios, especially complex ones, as the simulation companies have likely not instrumented these vehicles in all the relevant scenarios required here to ensure the performance curves are accurate. And keep in mind this is not just a function of the vehicle design data or specs; the model structure itself, or the overall system being used, could be flawed.
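As a rough illustration of what “the received signal is the combination of all of these factors” means, here is a simplified sketch that uses only the free-space radar range equation for the ego radar’s own echo and a one-way Friis term for each interfering emitter. All powers, gains, ranges, and RCS values are illustrative placeholders; a real model adds multipath, antenna patterns, waveforms, and environmental effects:

```python
# Simplified sketch: the ego radar's received signal is the combination of its
# own reflected return plus one-way interference from every other emitter in
# the scene. Free-space equations only; all values are illustrative.

import math

C = 3e8  # speed of light, m/s

def own_return_w(p_t, gain, wavelength, rcs, range_m):
    """Two-way radar range equation for the ego radar's own echo (watts)."""
    return (p_t * gain**2 * wavelength**2 * rcs) / ((4 * math.pi)**3 * range_m**4)

def interference_w(p_t, g_tx, g_rx, wavelength, range_m):
    """One-way Friis term for another radar transmitting toward the ego receiver."""
    return (p_t * g_tx * g_rx * wavelength**2) / ((4 * math.pi)**2 * range_m**2)

wavelength = C / 77e9          # 77 GHz automotive radar band
echo = own_return_w(p_t=10.0, gain=100.0, wavelength=wavelength,
                    rcs=10.0, range_m=50.0)          # car-sized target at 50 m
clutter = sum(interference_w(p_t=10.0, g_tx=100.0, g_rx=100.0,
                             wavelength=wavelength, range_m=r)
              for r in (20.0, 35.0, 60.0))           # three nearby radars
print(f"own echo: {echo:.3e} W, interference: {clutter:.3e} W")
```

Even this toy version shows why every emitter matters: the one-way interference terms fall off with range squared while the ego echo falls off with range to the fourth power, so nearby radars can easily dominate what the ego receiver sees.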
Another example is friction coefficients as they are applied to the road surface and tires. Each tire will experience a different part of the road, and those parts can contain segmented friction values based on the surface composition. The surface could be dry or wet, be painted, have oil or gravel on it, or any combination thereof, and that combination can be in varying segments under the tread pattern. The models need to properly account for this. Good models can divide these areas into segments of less than a centimeter. (Puddles are an extension of this. As in the real world, a puddle can cause you to hydroplane, or it can increase friction so much that it pulls your vehicle toward the side where the puddle is.)
Examples
A Velodyne 128 LiDAR scanning an intersection in various degrees of rain. That LiDAR’s operation and interaction with exact objects, and parts of them, needs to be modeled dynamically, at real time and faster: that 0.23 degree beam, at a certain power level, progressively interacting with rain drops until it reaches the tire and a polygon returns the reflectivity value for rubber, only to then progressively interact with rain drops of varying density again on the return. (The beam may not survive any part of that rain.)
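A highly simplified sketch of the attenuation part of that example follows, using a Beer-Lambert style two-way loss and ignoring geometric spreading, beam divergence, and detector noise. The extinction coefficient and reflectivity values are placeholders, not measured data:

```python
# Minimal sketch: two-way attenuation of a LiDAR pulse through rain using a
# Beer-Lambert style model. Extinction and reflectivity values are illustrative.

import math

def returned_power(p_tx_w, range_m, extinction_per_m, target_reflectivity):
    """Power that makes it out to the target and back through the rain."""
    two_way_loss = math.exp(-2.0 * extinction_per_m * range_m)
    return p_tx_w * target_reflectivity * two_way_loss

p_clear = returned_power(p_tx_w=1.0, range_m=60.0,
                         extinction_per_m=0.0,  target_reflectivity=0.05)  # dry air, rubber tire
p_heavy = returned_power(p_tx_w=1.0, range_m=60.0,
                         extinction_per_m=0.02, target_reflectivity=0.05)  # heavy rain
print(f"clear: {p_clear:.4e} W, heavy rain: {p_heavy:.4e} W")
# If the rainy return falls below the receiver's detection threshold, the point
# is lost entirely, which is the "beam may not survive the rain" case above.
```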
Ten vehicles with Delphi ESR radars in an exact parking garage. Each radar, and the cumulative first, second, and third reflections bouncing off exact objects, modeled at real time and faster. Or extend that to a packed intersection in New York City with 100 of those radars. The radar returns from every object would include the associated RCS and reflectivity values.
Mixed friction coefficients under a tire at any point on the road. An example might be an even one-third split under the tread across dry asphalt, a painted line, and oil, at the moment the vehicle performs some aggressive maneuver.
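One very rough way to think about that example is an area-weighted effective friction coefficient over the contact patch, sketched below. The mu values are illustrative, and a proper tire model resolves this per sub-centimeter segment and couples it to slip, load, and temperature rather than taking a simple average:

```python
# Minimal sketch: an area-weighted effective friction coefficient for a contact
# patch split evenly across dry asphalt, a painted line, and oil. The mu values
# are illustrative placeholders only.

def effective_mu(segments):
    """segments: list of (area_fraction, mu) pairs covering the contact patch."""
    total = sum(frac for frac, _ in segments)
    assert abs(total - 1.0) < 1e-6, "area fractions must cover the whole patch"
    return sum(frac * mu for frac, mu in segments)

patch = [(1/3, 0.90),   # dry asphalt
         (1/3, 0.60),   # painted line
         (1/3, 0.25)]   # oil
print(f"effective mu under the tread: {effective_mu(patch):.2f}")
# An aggressive maneuver planned against dry-asphalt friction (~0.9) would
# demand far more grip than this patch can actually deliver.
```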
Full-motion Driver-in-the-Loop (DIL) Simulator — When the real-world vehicle is replaced by simulation, you must use one of these devices to properly develop and test the system, whether reinforcement or imitation learning is used. The reason is that in several classes of scenarios humans cannot drive properly, nor evaluate proper driving, without motion cues or pressure on their bodies and inner ears. The easiest to understand is loss of traction: it is simply not possible to drive or evaluate a system in snow without feeling what is going on. Other examples are complex maneuvering, steep grades, running over something, or even bumping or being bumped by something else. Since it is desirable to run simulations faster than real time, the use of this device will be minimal. However, the scenarios in which it should be used are critical.
(An example of proper real time where ground vehicles are concerned is never more than 16 msec of latency from a driver action to the visual, control, or motion system of a full-motion simulator.)
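As a trivial illustration of how that budget might be tracked, here is a sketch that sums hypothetical stage latencies from driver input to the output systems and flags a violation of the 16 msec target. The stage names and timings are placeholders, not measurements from any real simulator:

```python
# Minimal sketch: checking a driver-in-the-loop latency budget against the
# 16 msec end-to-end target mentioned above. Stage timings are placeholders.

BUDGET_MS = 16.0

# Hypothetical measured latencies from driver input to each output stage.
stages_ms = {
    "input sampling":   1.0,
    "vehicle dynamics": 4.0,
    "image generation": 7.0,
    "motion cueing":    5.0,
}

total = sum(stages_ms.values())
status = "OK" if total <= BUDGET_MS else "OVER BUDGET"
print(f"end-to-end latency: {total:.1f} ms ({status}, budget {BUDGET_MS} ms)")
for name, ms in stages_ms.items():
    print(f"  {name}: {ms:.1f} ms")
```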
Ask for Proof of Fidelity
Unfortunately, here is where the industry is at regarding simulation:
· The simulation companies do not know what capabilities are necessary or possible.
· The simulation companies are not utilizing the right or best technology, by choice.
Whether intentional or not, this results in misleading product information being conveyed, which in turn results in false confidence, flawed Planning, and real-world errors and tragedies. Given this, it is imperative that proof of model fidelity and real-time performance in a wide array of scenarios be provided, reviewed, and confirmed. I know of no simulation company that currently provides this data. (Which begs the question: why not, if the data is accurate and complete?) This information is critical both where you want or need to use a true digital twin and where you believe you do not have to, but want to ensure that decision has no negative impacts.
Some of the ways to validate the models include using source data such as instrumented vehicle performance data, technical data from vendors, High Definition (HD) mapping data, Hardware-in-the-Loop (HIL) testing, satellite data, and, most importantly, data gathered from shadow driving.
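As an example of what “proof of fidelity” could look like in practice, here is a minimal sketch that compares simulated braking distances against instrumented real-world measurements and applies an acceptance tolerance. The data points and the five percent tolerance are illustrative placeholders, not a standard:

```python
# Minimal sketch: validating a simulated performance curve against instrumented
# real-world data (braking distance vs. speed). Data and tolerance are illustrative.

def max_relative_error(simulated, measured):
    """Largest relative deviation between paired simulated and measured values."""
    return max(abs(s - m) / m for s, m in zip(simulated, measured))

speeds_kph  = [30,  50,   70,   90]
measured_m  = [5.0, 13.5, 26.0, 43.0]   # instrumented braking distances
simulated_m = [5.1, 13.2, 27.1, 44.9]   # same scenarios run in simulation

error = max_relative_error(simulated_m, measured_m)
TOLERANCE = 0.05
verdict = "model accepted" if error <= TOLERANCE else "model fidelity NOT proven"
print(f"max relative error: {error:.1%} -> {verdict}")
```

The same pattern applies to any model type: gather ground truth from the sources above, run the identical scenario in simulation, and publish the deviation rather than asking customers to take fidelity on faith.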
Cloud-Based Systems
Cloud-based systems can be treated like local instances with respect to the points I have made here, except where the DIL simulator is involved. Getting that latency down to 16 msec is probably not going to be possible, especially in complex and loaded scenarios.
DoD/Aerospace Technology is the Solution
First, let me address the usual immediate reaction upon hearing that DoD technology should be used: the belief that DoD does not have to deal with the same complexity as the commercial AV world. That belief is incorrect. The DoD autonomous ground vehicle folks not only have to deal with the same public domain and scenarios as the commercial side, but they must also deal with vehicles driving off the roads on purpose, aircraft, folks shooting at each other, and electronic warfare. (That is where the enemy tries to jam, spoof, or overload sensors.) Trust me, the military has it much tougher.
This brings me to the resolution. The fact is that DoD has had the technology to resolve all of these issues for well over two decades. And in most cases, like sensors, the target systems are far more complex than anything available in the AV domain today or probably ever will be. Proper and effective, not perfect, digital twins can be created for every model type needed here. And their real-time, federated model architectures can handle any scenario required, independent of complexity, model detail and math, or loading. Having said this, the effort here is clearly not easy and will take a lot of work; the technology and its data need to be tailored to meet the specific needs and targets of this industry. Keep in mind that what we are talking about here is the impossible versus the possible, the doable versus the undoable. The current development and testing approaches are not remotely doable in many lifetimes. This makes the value proposition of making the switch brutally obvious from a time, cost, and liability point of view.
(With regard to the computing power needed, the architecture being used is so efficient and performs so well that it does not require any special computing assets. In most cases, it will run on the gaming-type systems being used now. This includes the ability to run much faster than real time compared to systems that do not use the proper architecture.)
My Solution and Conflict of Interest
I have created a company, Dactle LLC, to utilize DoD/aerospace simulation technology and provide a complete solution. Dactle will provide all the data, scenarios, and associated models and simulation needed to get to a legitimate L4/5: a full, across-the-board, all-model-type digital twin. (With proof, of course.) We will not throw inadequate simulation or scenario tools over the wall and expect you to redirect huge quantities of your personnel and focus to set up and use these support tools. We will take care of the entire turn-key simulation and scenario solution so you can concentrate on the already insanely difficult task of building or validating an autonomous vehicle. If anyone would like to see a demo, please let me know.
Now, regarding the clear conflict of interest: it would seem that I am the only person in this industry who passionately pronounces what all the problems are and, by some miracle, has the only complete and accurate solution. Given that, pushback would seem warranted. However, allow me to give you a little history before you push back.
When I started this journey a couple of years ago, I understood this conflict could exist. I tried to avoid it by reaching out to the simulation companies in this industry to make them aware of the issues and how to remedy them. Unfortunately, I ran into two different responses. The IT/gaming/Silicon Valley folks ignored me. The vehicle manufacturing simulation folks paid attention but deferred, saying that when their customers figure out the flaws exist and pay for the fixes, they will redo their systems or make a new version. Keep in mind that customers will likely only know there is an issue when real-world tragedies occur, and it will probably take several tragedies before the pattern is recognized. As these responses were unacceptable, I decided to create a company, reach out to DoD/aerospace, find the right partner, and take this on myself.
(Why wouldn’t the simulation companies in this space upgrade their systems to use the best technology if they know their systems have significant capability gaps? Because they would have to tell their customers, stakeholders, and financial backers that their technology is significantly flawed and then conduct a major rewrite of their entire system, something they had no idea should be done or could be done, nor how to get it done. They would then have to replace all the systems out there, or sell a second version and maintain both perpetually.)
Please find more information on my POV in my articles below, including why the use of public shadow and safety driving is untenable.
Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE
Autonomous Vehicles Need to Have Accidents to Develop this Technology
The Hype of Geofencing for Autonomous Vehicles
· https://medium.com/@imispgh/the-hype-of-geofencing-for-autonomous-vehicles-bd964cb14d16
SAE Autonomous Vehicle Engineering Magazine-End Public Shadow Driving
· https://www.nxtbook.com/nxtbooks/sae/ave_201901/index.php
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Lead — SAE On-Road Autonomous Driving Model and Simulation Task
- Member SAE ORAD Verification and Validation Task Force
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for post-9/11 efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles below. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the Plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.