“Physics based”, “Digital Twin” and “Real-time” Simulation Terms can be Misleading

Michael DeKort
Jan 1, 2021


Ever since the Uber tragedy a couple of years ago, simulation companies and AV makers in this industry have been ramping up their use of the terms “Physics based”, “Digital Twin” and “Real-time”. If these systems had physics and real-time capabilities to match the incredible visual capabilities of the gaming engines, all would be well. But they do not. While some companies have genuinely increased their capabilities in this area, the vast majority are exaggerating them, many to the point of misleading people, creating false confidence and serious downstream problems. Few people are aware of those problems because they do not know there are gaps between the simulation or models and the real world, or that those gaps could be filled with different technology, technology that comes from DoD and aerospace.

Worst of all, many of these companies have come to realize these gaps and issues exist but refuse to acknowledge or fix them, because they do not want their customers to know the truth, question their actual understanding of the space, have to provide remedies on their own nickel, or be exposed to liability when real-world tragedies occur. When I have discussed this with simulation companies, several have told me they will fix the gaps when their customers figure out there are issues and pay for them to be fixed. The problem with this is that those issues will likely not be discovered until a succession of real-world tragedies forces a review of the various models’ performance curves against the real world.

When it comes to the use of the terms themselves, the truth is legitimately fuzzy. Depending on the use cases or scenarios, all may be fine. However, a point will come when a model, a series of models, or real-time performance differs significantly from its real-world counterpart.

Why does using Proper Simulation Matter?

This is an issue of matching levels of fidelity to use cases or scenarios. When the right level of fidelity is not used, there will be development flaws. When complex scenarios are run, especially where the performance curves or attributes of any model are exceeded, the Planning system will have a flawed understanding of some aspect or aspects of the real world. The result will be a flawed plan. In far too many scenarios, the outcome will be causing real-world accidents rather than avoiding them, or making them worse than they need to be. This will usually be caused by some combination of braking, acceleration, or maneuvering being improperly timed or applied with the wrong magnitude.

The simulation systems being utilized today are adequate if you are working on general or non-complex real-world development or testing. However, in order to run complex scenarios, the depth, breadth, and fidelity of the simulation is critical. Autonomous Vehicle (AV) makers will need to keep track of every model’s capabilities for every scenario to make sure none is exceeded. If AV makers do not do this, especially in complex scenarios, the end result will be false confidence in the AV system. Keep in mind that machine learning does not infer well, not nearly as well as a human, since we have a lifetime of learning, especially for object detection. Additionally, perception systems right now are far too prone to error; the famous stop-sign-tape test is an example. This means that development and testing must be extremely location-, environment-, and object-specific. You could test thousands of scenarios for a common road pattern in one place, then encounter just a couple of object differences, like clothing patterns, and wind up with errors if you assume you do not have to repeat most of that testing in most other locations. This raises the scenario variations into the millions.
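To make that bookkeeping concrete, here is a minimal sketch, with invented names and thresholds, of what tracking every model’s validated envelope against what a scenario will demand could look like. It is an illustration of the idea, not any vendor’s implementation.

```python
# Minimal sketch (hypothetical names): track each model's validated envelope per
# scenario so a run is flagged whenever any model is pushed past the conditions
# it was actually verified against.

from dataclasses import dataclass

@dataclass
class ValidatedEnvelope:
    """Conditions a model's performance curves were verified against."""
    max_speed_mps: float   # highest ego speed covered by validation data
    min_range_m: float     # closest object range covered
    weather: set           # e.g. {"clear", "rain"}

@dataclass
class ScenarioDemand:
    """What a given test scenario will actually ask of the models."""
    peak_speed_mps: float
    closest_range_m: float
    weather: str

def exceeded_models(envelopes: dict, demand: ScenarioDemand) -> list:
    """Return the names of models whose validated envelope the scenario exceeds."""
    flagged = []
    for name, env in envelopes.items():
        if (demand.peak_speed_mps > env.max_speed_mps
                or demand.closest_range_m < env.min_range_m
                or demand.weather not in env.weather):
            flagged.append(name)
    return flagged

# Example: a rainy scenario run against a tire model validated only in clear weather.
envelopes = {
    "tire_model":  ValidatedEnvelope(35.0, 0.5, {"clear"}),
    "radar_model": ValidatedEnvelope(45.0, 1.0, {"clear", "rain"}),
}
print(exceeded_models(envelopes, ScenarioDemand(30.0, 0.8, "rain")))  # -> ['tire_model']
```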

What is Proper Simulation?

Real Time — This is the ability of the system to process data and math models fast enough, and in the right order, so that no tasks of any significance are missed or late compared to the real world. This needs to extend to the most complex and math-intensive scenarios. And by math-intensive, I mean every model must be mathematically precise as well as function properly, and there could be thousands of them running at a time. This is where gaming architectures, which are used by many simulation companies, have significant flaws even with the best of computers. The best way to architect these systems is to build a deterministic, time-based, and structured architecture where every task or model can be run at any rate and in any order. Most systems out there are non-deterministic and just let things run. Their makers will say modern computers are so fast that the structure I described is not needed. I believe this is wrong. The systems that are not deterministic run everything at one time at one specified rate. You can see the accommodations made for these issues in gaming: the play box is less than 60 square miles, and physics or math are avoided either by eliminating them, as when you can walk through trees, or by dumbing them down as much as possible. (Where full-motion simulators are involved, which almost every AV maker should have but does not, the latency needs to be 16 msec or less. Many cannot get below 40 msec due to the issues mentioned.)
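Here is a minimal sketch of the kind of deterministic, time-stepped executive I am describing, where every model runs at its own fixed rate and in a fixed order, so results do not depend on host load the way a free-running loop does. The model names and rates are placeholders, and a production executive would obviously do far more (overrun detection, data exchange between models, and so on).

```python
# Minimal sketch, not any vendor's scheduler: a deterministic, fixed-step executive.
# Every model runs at its own rate and always in the same order, so two runs of the
# same scenario produce the same sequence of model steps regardless of machine load.

from collections import Counter

def run_deterministic(models, base_rate_hz, duration_s):
    """models: list of (name, rate_hz, step_fn), executed in list order each frame.
    Each model steps every (base_rate_hz / rate_hz) frames."""
    base_dt = 1.0 / base_rate_hz
    total_frames = int(duration_s * base_rate_hz)
    decimation = {name: max(1, round(base_rate_hz / rate)) for name, rate, _ in models}
    for frame in range(total_frames):
        t = frame * base_dt
        for name, rate, step_fn in models:
            if frame % decimation[name] == 0:
                step_fn(t)

# Hypothetical models at different rates; vehicle dynamics always steps before the sensors.
log = []
run_deterministic(
    models=[
        ("vehicle_dynamics", 1000, lambda t: log.append(("vehicle_dynamics", round(t, 3)))),
        ("radar",              50, lambda t: log.append(("radar", round(t, 3)))),
        ("camera",             25, lambda t: log.append(("camera", round(t, 3)))),
    ],
    base_rate_hz=1000,
    duration_s=0.1,
)
print(Counter(name for name, _ in log))
# -> Counter({'vehicle_dynamics': 100, 'radar': 5, 'camera': 3}), identical on every run
```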

Model Types — The critical model types are the ego vehicle, tires, roads, sensors, fixed and other moving objects, and the environment. Each of these needs to be virtually exactly like the target it is simulating. (Geo-specific vs geo-typical, for example. The best technology can get this down to under five cm of positional accuracy.) This includes not just the visual aspects, where they are applicable, but the physical capabilities. You need to simulate or model the exact vehicle, sensor, object, etc., not something like it or a reasonable facsimile. As I said before, this is important because machine learning does not infer well. It also means the same road patterns will have to be developed and tested in a wide array of locations, at different times of day, in different weather, with different signage, etc. All of this must be modeled in extreme detail, both visually and physically. That means modeling how an active sensor works in the real world, not simply showing a visual representation using ray tracing.
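As a small illustration of the asset bookkeeping this implies, here is a sketch, with invented field names, of declaring whether a model is geo-specific or geo-typical, what positional accuracy it was built to, and whether it is physically modeled, so a scenario that requires a true digital twin can reject anything looser than the roughly five-centimeter figure mentioned above.

```python
# Illustrative sketch only: metadata a simulation asset could carry so that
# geo-typical or visually-only models cannot silently stand in for a digital twin.

from dataclasses import dataclass

@dataclass
class EnvironmentModel:
    name: str
    geo_specific: bool            # built from survey data of the actual location
    positional_accuracy_m: float  # worst-case positional error of the model
    physically_modeled: bool      # sensor-relevant physics, not just a visual mesh

def usable_for_digital_twin(model: EnvironmentModel, max_error_m: float = 0.05) -> bool:
    """Accept only geo-specific, physically modeled assets within the accuracy budget."""
    return (model.geo_specific
            and model.positional_accuracy_m <= max_error_m
            and model.physically_modeled)

print(usable_for_digital_twin(EnvironmentModel("downtown_garage", True, 0.03, True)))   # True
print(usable_for_digital_twin(EnvironmentModel("generic_garage",  False, 0.30, False))) # False
```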

Take radar, for example. You must simulate not only the ego radar but how the world and other systems interact with it. Every other radar or system emitting RF that would cause clutter or interference must be properly modeled, as must the way every radar’s signal is affected by its environment. The reason is that the ego model’s received signal must be an accurate representation of the combination of all of these factors at any physical location in the environment or scenario. This is also where I would like to address vehicle models. The Original Equipment Manufacturers (OEMs), simulation, and simulator companies have been creating detailed vehicle models for some time. However, I would caution against assuming they are precise enough in all scenarios, especially complex ones, as the simulation companies have likely not instrumented these vehicles in all the relevant scenarios required here to ensure the performance curves are accurate. And keep in mind this is not just a function of the vehicle design data or specs; the model structure itself, or the overall system being used, could be flawed.
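To show why this matters, here is a heavily simplified sketch of the radar point: the power the ego radar receives is the sum of its own echoes, which fall off as 1/R⁴ per the radar range equation, plus direct interference from every other emitter in the scene, which only falls off as 1/R². All numbers are illustrative, and real automotive radars apply waveform processing that this free-space sketch ignores.

```python
# Heavily simplified, free-space-only sketch: the ego radar's received power is its own
# echo (monostatic radar range equation) plus direct interference from other emitters
# (Friis transmission equation). Parameter values are illustrative, not a sensor model.

import math

def echo_power_w(pt_w, gain, wavelength_m, rcs_m2, range_m):
    """Two-way return from a target with a given radar cross-section (RCS)."""
    return (pt_w * gain**2 * wavelength_m**2 * rcs_m2) / ((4 * math.pi)**3 * range_m**4)

def interference_power_w(pt_w, tx_gain, rx_gain, wavelength_m, range_m):
    """One-way power received directly from another emitter."""
    return (pt_w * tx_gain * rx_gain * wavelength_m**2) / ((4 * math.pi * range_m)**2)

wavelength = 0.0039  # roughly 77 GHz automotive radar

# Ego radar's own echo from a car-sized target (illustrative 10 m^2 RCS) at 40 m.
echo = echo_power_w(pt_w=0.01, gain=100, wavelength_m=wavelength, rcs_m2=10, range_m=40)

# Direct interference from two other radars of the same type at 15 m and 60 m.
clutter = sum(interference_power_w(0.01, 100, 100, wavelength, r) for r in (15, 60))

print(f"echo: {echo:.3e} W, interference: {clutter:.3e} W, echo/interference: {echo / clutter:.1e}")
# The 1/R^2 interference dwarfs the 1/R^4 echo, which is why every emitter must be modeled.
```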

Example

What if there are 10 vehicles with the Delphi ESR radar in a specific parking garage? Is each radar, and the cumulative 1st, 2nd, and 3rd reflections bouncing off the exact objects present, being modeled in and faster than real time, including material reflectivity values? What if the scenario involved a packed intersection in New York City with 100 of those radars? The radar returns from every object would have to include the associated RCS and reflectivity values.
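A back-of-the-envelope count shows why this is hard. Assuming a couple hundred reflective surfaces in the garage and that consecutive bounces hit different surfaces (both assumptions are mine, purely for illustration), the number of 1st- through 3rd-order paths per radar already runs into the millions per frame, before material reflectivity and RCS are even applied at each hit.

```python
# Back-of-the-envelope sketch of the parking-garage example: count the specular bounce
# paths each radar's signal can take. The surface count and the "distinct surface per
# consecutive bounce" rule are illustrative assumptions, not a real propagation solver.

def bounce_paths(num_surfaces: int, max_order: int) -> int:
    """Paths of order 1..max_order, assuming consecutive bounces hit different surfaces."""
    total = 0
    for order in range(1, max_order + 1):
        paths = num_surfaces                 # choice of the first surface hit
        for _ in range(order - 1):
            paths *= (num_surfaces - 1)      # each later bounce hits a different surface
        total += paths
    return total

surfaces = 200   # assumed reflective faces: walls, pillars, parked vehicles
radars = 10
per_radar = bounce_paths(surfaces, max_order=3)
print(f"{per_radar:,} paths per radar, {per_radar * radars:,} total per frame")
# -> 7,960,200 paths per radar, 79,602,000 total per frame; 100 radars at an
#    intersection multiplies the total by another factor of ten.
```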

Ask for Proof of Fidelity

It is imperative that proof of model fidelity and real-time performance in a wide array of scenarios be provided, reviewed, and confirmed. This information is critical both where you want or need to use a true digital twin and where you do not believe you have to but want to ensure that decision has no negative impacts. You need to review the performance curves of the models’ exact real-world counterpart(s), confirm whether the models actually behave like their real-world parents, and determine whether what you have is a simulation of a simulation, as is often the case when ray tracing is used. (While ray tracing can be somewhat effective for LiDAR, it will eventually tap out in complex scenarios because GPUs cannot be deterministic.)
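One concrete form that proof can take is overlaying the model’s performance curve on instrumented real-world data for the same maneuver and flagging any point that falls outside an agreed tolerance. The sketch below uses invented braking-distance numbers purely to show the shape of the check.

```python
# Minimal sketch of a fidelity check: compare a vendor model's performance curve to
# instrumented real-world data point by point and flag out-of-tolerance conditions.
# All data values below are invented for illustration.

def fidelity_gaps(model_curve, real_curve, tolerance_pct):
    """Each curve is a list of (condition, value); returns conditions out of tolerance."""
    gaps = []
    for (cond, model_val), (_, real_val) in zip(model_curve, real_curve):
        deviation_pct = abs(model_val - real_val) / abs(real_val) * 100.0
        if deviation_pct > tolerance_pct:
            gaps.append((cond, round(deviation_pct, 1)))
    return gaps

# Braking distance (m) vs initial speed (km/h): vendor model vs instrumented vehicle.
model = [(40, 9.1), (80, 36.0), (120, 92.0)]
real  = [(40, 9.3), (80, 37.1), (120, 81.5)]
print(fidelity_gaps(model, real, tolerance_pct=5.0))  # -> [(120, 12.9)]
```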

DoD/Aerospace Technology is the Solution

First, let me address what is usually the immediate reaction upon hearing that DoD technology should be used: the belief that DoD does not have to deal with the same complexity as the commercial AV world. That belief is incorrect. The DoD autonomous ground vehicle folks not only have to deal with the same public domain and scenarios as the commercial side, they must also deal with vehicles driving off the roads on purpose, aircraft, folks shooting at each other, and electronic warfare. (That is where the enemy tries to jam, spoof, or overload sensors.) Trust me, the military has it much tougher.

This brings me to the resolution. The fact is that DoD has had the technology to resolve all of these issues for well over two decades. And in most cases, like sensors, the target systems are far more complex than anything available in the AV domain today or probably ever will be. Proper and effective, not perfect, digital twins can be created for every model type needed here. And the associated real-time and federated model architectures can handle any scenario required, independent of complexity, model detail and math, or loading. Now, having said this, clearly the effort here is not easy and will take a lot of work. This technology and its data need to be tailored to meet the specific needs and targets of this industry. Keep in mind that what we are talking about here is the impossible vs the possible, the doable vs the undoable. The current development and testing approaches are not remotely doable in many lifetimes. This makes the value proposition of making the switch brutally obvious from a time, cost, and liability point of view.

(With regard to the computing power needed, the architecture being used is so efficient and performs so well that it does not require any special computing assets. In most cases, it will run on the gaming-type systems being used now. This includes the ability to run much faster than real time compared to systems that do not use the proper architecture. Now, if you run a massive non-federated model, like an Adams Car model, more CPUs will be needed.)
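Because a deterministic loop like the one sketched earlier is not tied to the wall clock, how much faster than real time it runs is something you measure rather than assume. A tiny sketch, with placeholder models:

```python
# Illustrative sketch: measure the real-time factor (simulated seconds advanced per
# wall-clock second) of a fixed-step loop. The model step functions are placeholders.

import time

def real_time_factor(step_fns, base_rate_hz, sim_seconds):
    """Run every step function each frame and return sim seconds per wall second."""
    frames = int(sim_seconds * base_rate_hz)
    start = time.perf_counter()
    for frame in range(frames):
        t = frame / base_rate_hz
        for fn in step_fns:
            fn(t)
    return sim_seconds / (time.perf_counter() - start)

light_models = [lambda t: None, lambda t: None]   # stand-ins for cheap federated models
print(f"{real_time_factor(light_models, 1000, 10.0):.0f}x real time")
```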

Much of what I have discussed here is highlighted in the SAE Autonomous Vehicle Engineering magazine article linked below. In it, Sebastien Loze, the head of simulation with Epic Unreal, supports the POV I have been expressing. If the modeling and real-time approaches were as good as Unreal is at the visual aspect, all would be well. (Full disclosure: we have recently received a MegaGrant from Epic Unreal. And yes, I have a conflict of interest here. I compete with the same companies I am critiquing. Before I created my own company to solve these issues, I tried to help many of them improve their systems. Unfortunately, that did not work.)

Simulation’s Next Generation — https://www.sae.org/news/2020/08/new-gen-av-simulation

Please find more information on my POV in my articles below.

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology

· https://medium.com/@imispgh/the-autonomous-vehicle-industry-can-be-saved-by-doing-the-opposite-of-what-is-being-done-now-b4e5c6ae9237

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

· https://medium.com/@imispgh/using-the-real-world-is-better-than-proper-simulation-for-autonomous-vehicle-development-nonsense-90cde4ccc0ce

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the Plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
