Covid-19 would barely impact Autonomous Vehicle makers if they used the right simulation the right way
Covid-19 is significantly accelerating the bankruptcy path this industry is on. The reason for the acceleration and the core problem are the same: reliance on public shadow and safety driving, augmented by gaming-based simulation, for most of the development and testing, instead of using DoD simulation technology. The current approach is untenable because it is not remotely possible to experience, let alone repeat, a fraction of the scenarios needed to train the machine learning systems. This path also requires human guinea pigs, or "safety drivers," to literally sacrifice their lives and the lives of those around them so critical scenarios can be learned. These companies do not replace this with simulation because the gaming industry has several core architecture issues, and few know how to properly model an active sensor. The current ramification of this is the layoff of safety drivers and those who support them. If over 99% of the development and testing were done in proper simulation, there would barely be an impact.
What is a legitimate digital twin?
Every physical object/area is modeled to match the exact, specific real object, both visually and in several areas involving physics: how objects operate, and static characteristics such as bending moment, tensile strength, and radar and LiDAR reflectivity. This applies everywhere in the environment, whether to the terrain, moving or fixed objects, rain drops, sensors, the sun or moon position, etc.
Without this level of fidelity, L4 cannot be reached. False confidence will be created, as well as avoidable tragedies, because various parts of the simulation will not match the real world closely enough, and the stack won't know that. It will create a flawed plan that will not match real-world scenarios and needs. Example: I need to perform a series of critical maneuvers (braking, acceleration, etc.) in bad weather, where the environment is very dense with other objects and sensors. My sensor, vehicle, tire and other models could be misleading the system into creating a bad plan.
Consider a Velodyne 128 LiDAR scanning an intersection in various degrees of rain. That LiDAR's operation and its interaction with exact objects, and parts of them, needs to be modeled dynamically, in real time and faster. That 0.23 degree beam at a certain power level progressively interacts with rain drops until it reaches the tire and a polygon returns the reflectivity value for rubber, only to then progressively interact with rain drops of varying density again. (The laser pulse may not survive any part of that rain.)
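A rough sketch of why this beam-level modeling matters. The snippet below uses a toy Beer-Lambert attenuation model for rain along a two-way LiDAR path; the extinction coefficient, reflectivity, and aperture values are illustrative assumptions, not Velodyne specifications:

```python
import math

def lidar_return_power(p_tx_w, range_m, rain_ext_per_m, reflectivity,
                       aperture_m2=1e-3):
    """Rough two-way LiDAR link budget through rain.

    Beer-Lambert attenuation out to the target and back, a diffuse
    (Lambertian) reflection at the target, and inverse-square spreading
    on the return leg. All coefficients are illustrative only.
    """
    one_way_atten = math.exp(-rain_ext_per_m * range_m)
    # Power reaching the target after rain attenuation on the outbound leg
    p_at_target = p_tx_w * one_way_atten
    # Diffuse return, attenuated again on the way back, captured by the aperture
    p_rx = (p_at_target * reflectivity * one_way_atten *
            aperture_m2 / (math.pi * range_m ** 2))
    return p_rx

# Return from a low-reflectivity rubber tire at 50 m, dry vs. heavy rain
dry = lidar_return_power(1.0, 50.0, rain_ext_per_m=0.0, reflectivity=0.05)
rain = lidar_return_power(1.0, 50.0, rain_ext_per_m=0.02, reflectivity=0.05)
print(dry, rain, rain / dry)
```

Even this crude model shows the return shrinking exponentially with rain density and range; a simulator that skips the rain interaction will report detections the real sensor would never make.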
Or take 10 vehicles with the Delphi ESR radar in an exact parking garage. Each radar, and the cumulative 1st, 2nd and 3rd reflections bouncing off exact objects, must be modeled in real time and faster. Or extend that to a packed intersection in New York City with 100 of those radars. The radar returns from every object would include the associated RCS and reflectivity values.
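To illustrate why those 2nd and 3rd reflections matter, here is a toy multi-bounce generalization of the bistatic radar equation. Every bounce multiplies in an RCS and every propagation leg adds inverse-square spreading. The gain, wavelength, and RCS numbers are hypothetical, not the Delphi ESR's actual signal model:

```python
import math

def multibounce_return(p_tx_w, gain, wavelength_m, legs_m, rcs_m2):
    """Toy multi-bounce radar link budget (illustrative only).

    legs_m:  range of each propagation leg (number of bounces + 1 legs).
    rcs_m2:  radar cross section of each reflecting object (one per bounce).
    With a single bounce this reduces to the standard bistatic radar equation.
    """
    assert len(legs_m) == len(rcs_m2) + 1
    num = p_tx_w * gain * gain * wavelength_m ** 2
    for sigma in rcs_m2:
        num *= sigma  # each bounce contributes its RCS
    den = (4.0 * math.pi) ** (len(legs_m) + 1)
    for r in legs_m:
        den *= r ** 2  # each leg contributes inverse-square spreading
    return num / den

wl = 0.0039  # roughly a 77 GHz automotive-band wavelength, in metres
# Direct return from a car (RCS ~10 m^2) at 20 m
direct = multibounce_return(1.0, 100.0, wl, [20.0, 20.0], [10.0])
# Second-order return: off a wall at 5 m, then the car, then back
second = multibounce_return(1.0, 100.0, wl, [20.0, 5.0, 25.0], [10.0, 10.0])
print(direct, second)
```

The higher-order returns are weaker but still present, and in a garage full of flat concrete and 10 transmitting radars they arrive from directions that can look like phantom objects, which is exactly why a simulator has to model them.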
Or mixed friction coefficients under a tire at any point on the road. An example might be an even 1/3 split under the tread across dry asphalt, a painted line and oil, at the moment a vehicle performs some aggressive maneuver.
More in my articles here:
Proposal for Successfully Creating an Autonomous Ground or Air Vehicle
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
- https://medium.com/@imispgh/simulation-can-create-a-complete-digital-twin-of-the-real-world-if-dod-aerospace-technology-is-used-c79a64551647
Gill Pratt, Toyota, Waymo and FiveAI confirmed the Collapse of the Autonomous Vehicle Industry — It can be Reversed with an Ego, Echo Chamber and Engineering Paradigm Shift
Why are Autonomous Vehicle makers using Deep Learning over Dynamic Sense and Avoid with Dynamic Collision Avoidance? Seems very inefficient and needlessly dangerous?
The Hype of Geofencing for Autonomous Vehicles
Remote Control for Autonomous Vehicles — A far worse idea than the use of Public “Safety” Driving
My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, worked on the Aegis Weapon System, and on C4ISR for DHS.
Key Industry Participation
- Lead — SAE On-Road Autonomous Driving Model and Simulation Task
- Member SAE ORAD Verification and Validation Task Force
- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for Post 9/11 Efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with significant real-time, model-fidelity and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.