How Driverless Vehicle Makers Should Prove their Technology Works
To prove this technology works and to earn the trust of the public and government, the industry must not only claim that it understands what is required to do this but demonstrate it. Because the industry has largely taken the wrong approach to date, it must also acknowledge that and take responsibility for the loss of life, injuries, and wasted time and money it has caused. If we require autonomous vehicles to be 10X better than a human driver before we trust them, then those bringing them to market, especially having started down the wrong path, must demonstrate the same 10X level of openness, trustworthiness and professionalism going forward.
The Right Process
Development and Testing should be done using 99.9% Simulation
· The reasons for this are two-fold: the vast quantity of scenarios involved and critical safety issues that must be addressed. Regarding the scenarios: the number of scenarios to be learned, tested and validated is immense, and Machine Learning (ML) needs a massive amount of data and repetition to learn. Most of those scenarios involve the sensor/perception system's ability to clearly discern the world, including object location, quantity, size, color, patterns, degradation, lighting intensity and direction, etc. (I have heard the number could be a billion.) Beyond that, the planning and execution system would have to drive the equivalent of 500 billion to one trillion miles to stumble and re-stumble on all the scenarios needed to properly learn its way to L4. This clearly cannot be done in many lifetimes, even if all the AV makers worked as one. The other issues involve safety. As part of that same scenario set, thousands of accident scenarios would have to be experienced thousands of times each to learn them. Do we really want the public to endure those accidents and their repercussions? The other safety issue involves handover. No driver monitoring and alarm system can provide enough time for the driver to regain the situational awareness needed to do the right thing the right way in critical and complex scenarios, especially accident scenarios.
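The impracticality of driving those miles in the real world can be seen with simple arithmetic. This is a back-of-the-envelope sketch only; the fleet size, average speed and utilization figures are illustrative assumptions, not figures from this article.

```python
# Back-of-the-envelope: how long would it take a large test fleet to
# accumulate one trillion real-world miles? All inputs below the miles
# target are assumptions chosen for illustration.

MILES_NEEDED = 1_000_000_000_000  # upper bound cited above: ~1 trillion

fleet_size = 10_000      # assumed number of test vehicles
avg_speed_mph = 30       # assumed average speed in mixed driving
hours_per_day = 20       # assumed near-continuous operation

miles_per_year = fleet_size * avg_speed_mph * hours_per_day * 365
years_needed = MILES_NEEDED / miles_per_year

print(f"Fleet miles per year: {miles_per_year:,}")
print(f"Years to reach one trillion miles: {years_needed:,.0f}")
```

Even with these generous assumptions the fleet logs roughly 2.2 billion miles per year, putting the trillion-mile mark several centuries away, which is the point of the "many lifetimes" claim above.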
Simulation should utilize aerospace, DoD and FAA simulation technology and be Informed, Validated and Augmented by Test Tracks and the Real-World
· The Simulation Technology should have the Highest Levels of Accuracy and Precision — The simulation currently used by the industry has significant performance issues. These are often caused by the use of gaming technology, non-deterministic architectures and, in some cases, developers not understanding the precision to which real-time operations and the various models need to perform. These technical deficiencies will result in the system forming the wrong driving plans, especially when the actual behavior of the vehicle, tires and road deviates from the models being used now, many of which are generic. This will cause the system to attempt to accelerate, brake or maneuver in a manner that does not match the real world, either because the vehicle, tires or road cannot inherently perform as expected or because the environment (road surface condition, bad weather, etc.) changes what can be expected of them.
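A minimal sketch of how a generic model produces the wrong plan, using the textbook straight-line braking formula d = v² / (2µg). The friction coefficients are illustrative assumptions (dry pavement vs. a wet road), not values from any particular simulator.

```python
# Illustration (not from the article): a planner using a generic
# dry-road tire model badly underestimates stopping distance on a wet
# road, so the plan it forms does not match the real world.

G = 9.81  # gravitational acceleration, m/s^2

def braking_distance_m(speed_mps: float, mu: float) -> float:
    """Idealized straight-line braking distance for friction coefficient mu."""
    return speed_mps ** 2 / (2 * mu * G)

speed = 27.0  # roughly 60 mph, in m/s
planned = braking_distance_m(speed, mu=0.9)  # generic dry-pavement model
actual = braking_distance_m(speed, mu=0.4)   # assumed wet-road friction

print(f"Planned stop: {planned:.1f} m, actual stop: {actual:.1f} m")
```

With these assumed numbers the real stopping distance is more than double what the generic model predicts, which is exactly the kind of model/world mismatch that produces an unsafe plan.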
· The Simulation should be Informed, Validated and Augmented by the Real-World — For the world to trust simulation, it must be shown proof that the simulation accurately reproduces the real world, and that it can do things that either cannot be done in the real world or would be far too inefficient or dangerous to do there. This would involve leveraging the process the FAA uses to validate its simulators. The test criteria include producing performance curves or data that clearly demonstrate the models match the real world, covering vehicle acceleration, braking, handling and various other performance characteristics. First the maneuvers used to gather the real-world data are validated. Then the simulated performance curves or data are compared side by side with the originals. (This includes vehicles, tires, roads, sensors, environmental conditions, and fixed and moving objects: people, animals, other vehicles, debris, etc.) Data describing the original entity or object would be gathered using various sources and methods, including real-world driving, test tracks, satellite and mapping data, internet data and image searches, third-party data sources, etc.
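The side-by-side comparison step described above can be sketched as a tolerance check between a measured curve and its simulated counterpart. This is a hedged illustration only: the sample curves and the 5% tolerance band are assumptions for the example, not actual FAA acceptance criteria.

```python
# Sketch of validating a simulated performance curve against real-world
# test data: every simulated sample must fall within a fractional
# tolerance of the measured sample. Tolerance value is an assumption.

def curves_match(real: list[float], sim: list[float], tol: float = 0.05) -> bool:
    """True if each simulated sample is within tol (fraction) of the real one."""
    return all(
        abs(s - r) <= tol * max(abs(r), 1e-9)
        for r, s in zip(real, sim)
    )

# Vehicle speed (m/s) sampled once per second during a hard-braking test.
real_world = [27.0, 19.5, 12.4, 5.8, 0.0]
simulated  = [27.0, 19.8, 12.1, 5.9, 0.0]

print("Model validated:", curves_match(real_world, simulated))
```

In practice the FAA-style comparison covers many maneuvers and many signals (acceleration, handling, etc.), but each reduces to this same question: does the simulated curve track the validated real-world curve within an agreed tolerance?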
· Simulation cannot do everything and will need to be augmented by Test Tracks and the Real-World — There will be scenarios that require test tracks, and even the real world, for development and validation. When determining when and how this should happen, the need must be justified: not just why a test track is needed instead of simulation, but why the real world is needed in place of both simulation and test tracks. When this is properly justified, the real-world activity needs to be controlled in order to be safe, especially if safety drivers are involved. That control should involve not just targeted locations but ensuring they are free of anyone or anything that can be damaged or harmed, not unlike a movie set.
A Full Motion Driver-in-the-Loop Simulator should be used to Replace the Vehicle for Core Scenarios
· To replace the vehicle itself for development, testing and validation, the simulators being used need a full motion system for a small but crucial set of scenarios. The reason is the need for the presence or absence of motion cues. In many cases a human cannot drive or evaluate a driving event properly if their inner ears and bodies do not feel the forces of movement. This includes loss-of-traction scenarios, steep grades, hitting an object or being hit, evaluating comfort or motion sickness, etc. If a full motion system is not used, the ML will learn improperly, just as with the imprecise real-time operation or models described above.
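Motion platforms have limited travel, so they typically reproduce only the onset of an acceleration and then "wash out" the sustained part, an idea known as classical washout. The sketch below shows the core of that idea as a first-order high-pass filter on acceleration; the time constant and input profile are illustrative assumptions, not parameters of any real simulator.

```python
# Minimal washout sketch: pass the onset of an acceleration to the
# platform, then fade (wash out) the steady-state part the platform
# cannot sustain within its travel limits. Parameters are assumptions.

def washout_highpass(accel: list[float], dt: float, tau: float = 1.0) -> list[float]:
    """First-order high-pass filter over an acceleration time series."""
    alpha = tau / (tau + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for a in accel:
        y = alpha * (prev_out + a - prev_in)  # passes changes, decays constants
        out.append(y)
        prev_in, prev_out = a, y
    return out

# A step to 3 m/s^2 held for 3 s: the cue peaks at the onset, then fades.
cue = washout_highpass([0.0] * 5 + [3.0] * 30, dt=0.1)
print(f"peak cue: {max(cue):.2f} m/s^2, final cue: {cue[-1]:.2f} m/s^2")
```

The inner ear is sensitive to exactly these onsets, which is why a fixed-base simulator, with no cue at all, can cause both humans and the ML observing them to mis-handle loss-of-traction and collision scenarios.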
Scenarios and Validation Results need to be made Public and Verified by an Independent Third Party
· Trust is gained through the light of day, not shade or darkness, especially when there has been a plethora of shade and darkness (hype) to date. Currently the only data being shared with the public is disengagement and miles-driven data. Without the associated scenario data, these metrics are often useless and misleading. The scenario matrix being used for development and testing, as well as the real-world entities and objects and their associated simulated performance curves, should be made public. Where legitimate IP issues are concerned, and as a general course of action, an independent third party should validate the entire process, data and results, not unlike the FAA does now. (While the Boeing 737 MAX issues are concerning, and the FAA itself could be found to be part of both the problem and the remedy, the overall quality of the FAA's processes for assessing aircraft, pilots and simulators has achieved 6.4-sigma safety, has been excellent for decades and is still the best and most appropriate model to leverage.) While the third party or parties may be commercial entities, they should be overseen and coordinated by NHTSA, which would itself use the FAA as a guide for its own transformation. (That includes rectifying NHTSA's 2015 L3 safety study, which determined that driver monitoring and alarm systems can make handover safe in all scenarios. That study chose to ignore situational awareness. Either it should be redone or rescinded, with data from NASA and the Universities of Leeds and Southampton used in its place.)
Note — there are many other areas that should be part of this process beyond the performance of the driverless vehicle in driving scenarios, including reliability, cybersecurity, privacy and connectivity.
For additional information please see my other articles:
The Autonomous Vehicle Podcast — Featured Guest https://www.autonomousvehiclespodcast.com/
SAE Autonomous Vehicle Engineering Magazine — End Public Shadow Driving
Common Misconceptions about Aerospace/DoD/FAA Simulation for Autonomous Vehicles
The Hype of Geofencing for Autonomous Vehicles