RAND’s “Safe Enough” Driverless Vehicle Industry Report Makes Things Worse
Link to the report — https://www.rand.org/pubs/research_reports/RRA569-1.html
Overall, this report is very disappointing and a missed opportunity. It clearly shows the wild west that is this industry. That would all be fine if the industry were not using human Guinea pigs, in the vehicles and around them, to test these systems while it figures things out.
RAND is treating folks with very little relevant systems, test, and safety engineering experience like experts, as if most of this were new and had never been done before. While this is a massive and hard task, that is no excuse for the bar being so low. RAND should get out of this echo chamber if it wants to actually do some good.
Scenario and threshold testing assume the outcome was driven by the ML system’s Perception, Planning, and Execution subsystems acting properly. That is not necessarily true. The outcome could be complete luck, random and disassociated from internal system performance. Each of these subsystems’ performance should be evaluated, whether checkers are involved or not.
The “positive trust argument” assumes one can never gather enough data to determine if the system is safer than a human. (Elsewhere the report mentions there is not enough scenario-by-scenario, apples-to-apples data on human driving.) That is ridiculous, intellectually lazy, and a cop-out. Yes, it’s really hard to do. But if it cannot be done, then the folks working on these systems should never be able to put them in the public domain. This is likely the thinking of IT folks who have little safety, what-if, exception-handling, negative-testing, or systems engineering experience. While a precise sigma value or “X times a human” may not be achievable, a broader range with a minimum value should be. If these folks can’t figure out how to prove the system is safe, how are they qualified to build and test it?
- These statements demonstrate this. How are people who make these statements qualified to do the job? Maybe they should go back to making Twitter.
  - “We can’t evaluate the technologies, so we can only evaluate and trust the companies.”
  - “Safe enough” is not a number — it must be more of a safety-culture thing, all this stuff tied together. If you hit a number, [it should mean that] you didn’t get lucky but used an intentional process; if you guessed wrong, you will fix it.
  - “Until there are definitions of metrics, the only thing we can do is reveal the people behind the companies doing the work. This is our communication of safety.”
  - “[There is] no consensus on metrics. Your previous work served as our bible for the last year and a half. At this point — it would be great if there were consensus, but, in its absence, we would entertain different methods.”
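For what it’s worth, the “can’t gather enough data” claim is not unanswerable: there is a standard rare-event calculation for how many failure-free miles demonstrate, at a chosen confidence level, that a system’s failure rate is at or below a benchmark. The sketch below is illustrative only — it assumes failures follow a Poisson process, and the human fatality rate used (roughly one per 100 million miles) is a commonly cited ballpark figure, not a number from the RAND report:

```python
import math

def miles_required(benchmark_rate_per_mile: float, confidence: float) -> float:
    """Failure-free miles needed to show, with the given confidence,
    that a system's failure rate is at or below the benchmark.

    Under a Poisson failure model, zero failures in n miles has
    likelihood exp(-rate * n), so the benchmark rate is excluded at
    the chosen confidence once exp(-benchmark * n) <= 1 - confidence,
    i.e. n >= -ln(1 - confidence) / benchmark.
    """
    return -math.log(1.0 - confidence) / benchmark_rate_per_mile

# Illustrative benchmark: ~1 fatality per 100 million human-driven miles
human_fatality_rate = 1e-8
print(f"{miles_required(human_fatality_rate, 0.95):.3g} miles")  # ~3e+08 miles
```

The point is not the exact figure; it is that a minimum bound is computable, so “we can’t evaluate the technologies” is a choice, not a mathematical impossibility.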
The argument that government-instituted, measurable minimum safety standards impede technology advances is a red herring. Why do I care how the sensor fusion works if the fusion and downstream systems perform correctly? This is the same nonsense argument aerospace used over 50 years ago. It failed. The fact of the matter is that when the government steps in and forces net-value-added, testable safety standards, competition increases and tragedies go down. Why? The playing field is leveled, eliminating most of the hype and reckless chance-taking.
This statement shows the nightmare the IT industry brought to AV: “The expectation should not be for zero bugs but to find and fix bugs quickly.” This is what folks with little systems engineering experience say. Too many bugs is often indicative of poor engineering due diligence. Winging it vs. systems engineering.
The report refers to aviation as an industry that has been through a lot of this. Maybe RAND should have talked to these folks as well as NASA and INCOSE? It’s like witnessing folks building covered wagons, thinking they are great innovators and no one else has done a thing, when right down the street is a Ford dealer.
This entire report assumes that public shadow and safety driving for development and testing is viable, necessary, and the best or only way to develop and test these systems, and that the vast majority of it cannot be replaced by simulation. None of that is true.
This is an insanely reckless statement — “It is difficult to define a uniform standard for the industry because it is still premature. There are different systems, different use cases, different environments, different ODDs where vehicles are being tested. There is some distance to go before there is enough data in all of those different categories that are then cross-referenced to generate the kind of uniformity needed for “safe enough.” A premature imposition of operational standards could be counterproductive or ineffective. . . . It is better to not establish performance measures up front — it is better if that happens through a sort of evolutionary process.” That “evolutionary process” would be fine on a test track or in simulation — NOT when we use human Guinea pigs while developing on public roads. This trial and error has harmed, and will continue to harm, people needlessly.
In the end, I don’t think RAND helped here at all. They acted like most reporters in this industry do: they don’t know any more, nor do they try to know any more, than the folks they are talking to or the echo chamber’s “conventional wisdom” warrants. This is evident when they mention aerospace but never talk to anyone there. All this does is perpetuate the unsafe practices in this industry and its current slalom ride to failure and bankruptcy. An “expert” wagon maker evaluating other wagon makers, while knowing the automobile engineer is right down the street, helps no one. It is intellectually, ethically, and professionally lazy and counterproductive.
More in my articles here:
SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now
Autonomous Vehicle Industry’s Self-Inflicted and Avoidable Collapse — Ongoing Update
Proposal for Successfully Creating an Autonomous Ground or Air Vehicle
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
Simulation Photorealism is almost Irrelevant for Autonomous Vehicle Development and Testing
Autonomous Vehicles Need to Have Accidents to Develop this Technology
Using the Real World is better than Proper Simulation for AV Development — NONSENSE
The Hype of Geofencing for Autonomous Vehicles
SAE Autonomous Vehicle Engineering Magazine — End Public Shadow/Safety Driving
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, as the software engineering manager for all of NORAD, on the Aegis Weapon System, and on C4ISR for DHS.
Key Industry Participation
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for Post-9/11 Efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry that I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the planning system believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.