Tesla hits Police Car — How much writing on the wall does NHTSA need?
A Tesla in “Autopilot” just hit a stopped police car and the car it had pulled over. Over the years the system has hit many other motionless objects, in some cases killing people. Those objects include firetrucks, tow trucks, street sweepers, barriers, cones and now a police car. This occurs because Tesla and Elon Musk choose not to use LiDAR, and their existing sensor system does not make up for that. Their camera system is not stereo, and their radar cannot discern lateral positioning without movement. But no matter. Elon says the system will be full L4 any day now. That is both to meet his ridiculous and reckless promise and to realize over $500M in “autopilot” funding held in limbo.
The question here is how many deaths NHTSA needs before it performs its due diligence. Does it need to include the first child or family killed? (A little while back a Tesla hit a tow truck in Russia. Two children were in the car but were only slightly injured.) The crux of the issue is the myth that public shadow and safety driving are the best way to develop these systems, that the process can be successful, and that the deaths it causes are necessary and for the greater good. The next issue is the belief that simulation cannot create a proper visual and physics-based digital twin to replace most of this. While it is correct that the simulation technology being used in this industry cannot come anywhere close, DoD technology can easily do what is needed here. The final issue is the belief that safety standards cannot or should not be created until the technology is worked out. That is not just wrong but reckless.
Public Shadow and Safety Driving
The process being used by most AV makers to develop these systems, public shadow and safety driving, is untenable. It has killed seven people to date for no reason and will kill thousands more when accident scenarios are learned. It is impossible to drive the one trillion miles, or spend over $300B, to stumble and restumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason. This occurs in two ways. The first is through handover or fallback, a process that cannot be made safe for most complex scenarios by any monitoring and notification system, because such systems cannot provide the time to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle accident scenarios. In order to do that, AV makers will have to run thousands of accident scenarios thousands of times. That will cause thousands of injuries and deaths. The solution is to switch 99.9% of this to DoD simulation technology, all informed and validated by real-world data. But more on this to follow. (Currently, simulation is used for far less than this in development and testing, and it relies on gaming-engine-based systems that have significant real-time and model fidelity flaws in complex scenarios.)
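To put rough numbers on why this is untenable, here is a back-of-envelope sketch. Every figure in it is an illustrative assumption of mine, not a sourced statistic, though the totals line up with the trillion-mile and $300B figures above:

```python
# Back-of-envelope sketch of why driving out the long tail is untenable.
# All input figures below are illustrative assumptions, not sourced data.

MILES_NEEDED = 1_000_000_000_000   # ~1 trillion miles, as cited in the text
COST_PER_MILE = 0.30               # assumed all-in cost per mile: safety driver, vehicle, data ($)
FLEET_SIZE = 50_000                # assumed size of the test fleet
MILES_PER_CAR_PER_YEAR = 50_000    # assumed annual utilization per vehicle

total_cost = MILES_NEEDED * COST_PER_MILE
years = MILES_NEEDED / (FLEET_SIZE * MILES_PER_CAR_PER_YEAR)

print(f"Estimated cost: ${total_cost / 1e9:.0f}B")   # $300B at these assumptions
print(f"Estimated time: {years:.0f} years")          # 400 years at these assumptions
```

Even if the per-mile cost or fleet size is off by an order of magnitude, the conclusion does not change: the miles cannot be driven in any commercially or humanly reasonable timeframe.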
(I assume NHTSA understands some of this or they would not have stopped EasyMile from using young children as guinea pigs in the school shuttle at the Babcock Ranch development in Florida. However, as I mention in my article below, they appear to want to save children only in these shuttles, not anywhere else, including in Tesla, Uber and Waymo vehicles.)
Proper Simulation and its Use for Development and Testing
The issue here involves the belief that it is not possible to adequately simulate or model enough facets of real-world development and design to replace the real world to any meaningful degree, that is, to create a complete “digital twin.” Accompanying this is the associated belief that the simulation and modeling technology and approaches used in the autonomous vehicle industry are the most advanced in any industry. That assumption reinforces the belief that it is simply not possible to replace the real world, leaving public shadow and safety driving as the primary means to develop and test these systems. Given the significant real-time and model fidelity gaps in the systems and products being used in the industry, this belief is, unfortunately, well founded. If AV makers were to try to utilize these systems for most of their development, especially in complex and accident scenarios, the performance gap between the simulation and the real world could be enough to cause planning errors. These errors would result in false confidence and in the AV decelerating, accelerating or maneuvering improperly. That could cause an accident or make one worse than it needed to be.
Implementing Safety Regulations while the Technology is Evolving
The industry mantra, repeated by Dr. Owens, is that no standards should be implemented until the technology settles. This is reckless, counterproductive and misleading. What a system should or should not do has little bearing on how it is done, especially with regard to safety. All these folks are doing is taking advantage of the average person’s assumption that they are experts, that they have the public’s best interests at heart, and that the technology is so far above anyone’s head that no one can possibly disagree. Let’s look at some of the Tesla and Uber accident scenarios.
· Going under the trailer of a semi-trailer truck. (This has happened twice in a Tesla, killing Jeremy Banner and Joshua Brown.)
· Hitting barriers. (This killed Walter Huang in a Tesla.)
· Hitting large stopped vehicles like firetrucks, tow trucks and street sweepers. (The latter killed Gao Yaning in a Tesla.)
Now tell me what the technology being used has to do with creating the following extremely high-level safety standards. (Yes, I get that there is the “trolley problem.” But it is exceedingly rare.)
· All known accident scenarios, from studies, DoT/NHTSA, insurance companies and the Tesla and Uber tragedies, are learned
· Properly detect humans
· Properly detect all objects that can harm humans
· Do not hit humans
· Properly detect large objects in the lane
· Do not drive into oncoming lanes
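To illustrate that standards like these are testable regardless of which sensors or algorithms sit underneath them, here is a hypothetical sketch of how a couple could be encoded as machine-checkable scenario requirements. The scenario names, fields and outcomes are all invented for illustration, not drawn from any real standard:

```python
# Hypothetical sketch: high-level safety standards as testable scenario checks.
# Scenario names, fields and outcomes are illustrative inventions.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    object_in_path: str      # what the vehicle must detect
    detected: bool           # did perception flag the object?
    braked_in_time: bool     # did planning avoid the collision?

def passes_minimum_standard(s: Scenario) -> bool:
    # "Properly detect large objects in the lane" and "do not hit" both required.
    return s.detected and s.braked_in_time

matrix = [
    Scenario("stopped firetruck, lane center", "firetruck", True, True),
    Scenario("crossing semi-trailer, low sun", "trailer", False, False),
]

results = {s.name: passes_minimum_standard(s) for s in matrix}
print(results)
```

The point of the sketch is that a regulator can specify the scenario and the required outcome without specifying, or even knowing, how the detection is implemented.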
The primary component of the resolution is to make the industry aware of and utilize DoD/aerospace simulation and modelling technology to build effective and complete digital twins, especially as they relate to physics. This technology remedies all the real-time and model fidelity issues I described above.
Given this, it is now possible to invert and normalize the due diligence paradigm. Risk to human life can now be almost entirely mitigated. (In the rare cases where safety driving would be required, it should be run as a structured event, not unlike a movie set.) This would make it possible to require manufacturers to prove that human beings are needed as test subjects, regardless of whether the environment is a test track or the real world. Where simulation cannot be utilized, the developer would demonstrate the need for test track use. Where test track use is not adequate, the need to utilize the public domain would be proven. This approach would align us with the approach many industries use today, including aerospace, DoD and even automotive.
The next component would be to create minimum testable safety standards. These would include object detection and relevant accident scenarios. (Since deep learning is so prone to being fooled by patterns and shadows, some particular to specific locations, the testing would need to include a massive number of objects, locations, environmental conditions and variations of those areas and more.)
(With respect to using simulation to find long tails and edge or corner cases: while the number of scenarios is vast, most likely in the millions, possibly billions for perception testing, and the effort will clearly be significant, it is possible to get to a verifiable sigma level, or a factor better than a human, with the right cross-domain approach and by utilizing data from a wide array of sources. Those include shadow driving, HD mapping, manufacturer data, independent testing, insurance companies, research, and historical data from various transportation domains. Finally, the simulation and modelling performance, or level of fidelity, will have to be verified against its real-world master. The process involved here would not be unlike what the FAA currently performs using Part 60 and DERs.)
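As one hedged illustration of the “verifiable sigma level” idea, a zero-failure exponential bound gives the failure-free miles needed to demonstrate a target rate at a given confidence. The baseline rate, confidence level and improvement factor below are my assumptions, chosen only to show the shape of the calculation:

```python
import math

# Hedged sketch: failure-free miles needed to demonstrate, at a given
# confidence, a failure rate better than the human baseline.
# Zero-failure exponential bound; all input figures are assumptions.

HUMAN_FATALITY_RATE = 1 / 100_000_000   # assumed: ~1 fatality per 100M miles
CONFIDENCE = 0.95
IMPROVEMENT_FACTOR = 10                 # assumed target: 10x better than human

def miles_to_demonstrate(target_rate: float, confidence: float) -> float:
    """Failure-free miles such that, if no failure occurs, the true rate
    is below target_rate at the given confidence (exponential model)."""
    return -math.log(1 - confidence) / target_rate

miles = miles_to_demonstrate(HUMAN_FATALITY_RATE / IMPROVEMENT_FACTOR, CONFIDENCE)
print(f"{miles / 1e9:.1f}B failure-free miles")  # ~3.0B at these assumptions
```

Billions of failure-free miles are clearly impractical to drive on public roads, which is exactly why a validated simulation, itself verified against real-world data, has to carry most of this load.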
Please find more in my articles here
Proposal for Successfully Creating an Autonomous Ground or Air Vehicle
Autonomous Vehicles Need to Have Accidents to Develop this Technology
Using the Real World is better than Proper Simulation for AV Development — NONSENSE
- https://medium.com/@imispgh/using-the-real-world-is-better-than-proper-simulation-for-autonomous-vehicle-development-nonsense-90cde4ccc0ce
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
- https://medium.com/@imispgh/simulation-can-create-a-complete-digital-twin-of-the-real-world-if-dod-aerospace-technology-is-used-c79a64551647
How NHTSA and the NTSB can save themselves and the Driverless Vehicle Industry
NHTSA saved children from going to school in autonomous shuttles and leaves them in danger everywhere else
The Hype of Geofencing for Autonomous Vehicles
My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Lead — SAE On-Road Autonomous Driving SAE Model and Simulation Task
- Member SAE ORAD Verification and Validation Task Force
- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for post-9/11 efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry that I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model fidelity and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.