The NTSB frets over human guinea pigs, then chastises and punts to the even more reckless NHTSA
The article and embedded NTSB letter for reference
First, let me summarize where the NTSB gets things wrong
1. The NTSB thinks the public domain should be the primary environment for safety driving
2. Safety driving is necessary
3. Tesla just needs to make the process safer
4. The right simulation cannot replace most of the real-world
5. There is a solution to the paradox they set up. The NTSB thinks AVs should not go into AV mode unless the relevant ODD scenarios are covered. The problem is that these are systems in development. They cannot reach the capabilities the NTSB wants unless they fail over and over in those ODDs, so that the system learns to handle those scenarios
Let’s look at some of the things the NTSB says to NHTSA in the letter
“Section II of the ANPRM describes at length NHTSA’s perception of how prototype ADSs are being tested on public roads. The discussion illustrates NHTSA’s belief that before public road testing is conducted, companies undertake a rigorous engineering and safety analysis, with mitigation strategies in place to address potential risks. However, the NTSB has found that NHTSA’s perception of the safety of ADS testing is probably unrealistic. In the Las Vegas investigation, the NTSB learned that as part of its declaration for importing a vehicle without traditional driving controls (such as steering wheels), the shuttle operator (Keolis North America) stated to NHTSA that drivers (attendants) who had been trained in all aspects of the vehicle’s operation would be in the vehicle whenever it was operating and that they would be positioned where they could take control if necessary.10 The company also reported that the vehicle was fully equipped for manual operation. Nevertheless, the NTSB determined that the shuttle attendant did not have easy access to the manual controller, which limited his ability to take control of the vehicle before the crash.”
And this gem
“The NTSB remains concerned about NHTSA’s continued failure to recognize the importance of ensuring that acceptable safeguards are in place so that vehicles do not operate outside their ODDs and beyond the capabilities of their system designs. As manufacturers advance the development of automated control systems, it is evident that there is a fluid progression of capabilities and that the SAE levels of automation may not adequately reflect how control systems are actually used. Because NHTSA has put in place no requirements, manufacturers can operate and test vehicles virtually anywhere, even if the location exceeds the AV control system’s limitations. For example, Tesla recently released a beta version of its Level 2 Autopilot system, described as having full self-driving capability. By releasing the system, Tesla is testing on public roads a highly automated AV technology but with limited oversight or reporting requirements. Although Tesla includes a disclaimer that “currently enabled features require active driver supervision and do not make the vehicle autonomous,” NHTSA’s hands-off approach to oversight of AV testing poses a potential risk to motorists and other road users.
NHTSA refuses to take action for vehicles termed as having partial, or lower level, automation, and continues to wait for higher levels of automation before requiring that AV systems meet minimum national standards. As a result of its Mountain View crash investigation, the NTSB concluded that NHTSA’s failure to ensure that vehicle manufacturers of SAE Level 2 driving automation systems incorporate appropriate system safeguards to limit operation of these systems to the ODD compromises safety. Policy direction needs to apply seamlessly as AV development proceeds. NHTSA must take regulatory action now to minimize the risks associated with the ODD of all levels of vehicle automation.”
The NTSB said this in their Joshua Brown and Jeremy Banner Tesla crash and fatality investigations as well: they believe the systems should not go into "Autopilot" if they cannot handle the scenarios applicable to the Operational Design Domain (ODD). This is a ridiculous comment to make. It shows either the acute incompetence of the NTSB or their acute lack of courage. How can the system handle anything until it learns to do so? And do you know how it learns to do so? By experiencing scenarios over and over and over, learning them through trial, error, and neural network adjustment. That is how the human guinea pig, kamikaze, or hara-kiri process works. So. . . that means the Brown and Banner deaths, as well as the recent crash in Detroit in a similar scenario, are NECESSARY for the process to get to the point the NTSB thinks it should already be at. If the human disengages, or the L1/ADAS system stops the crash threads from occurring, many of them will never be learned. So. . . the NTSB asking NHTSA to make safety assessments and the like mandatory means absolutely nothing. If you stop the crash thread from happening, it cannot be learned. So. . . it cannot be avoided in the future. DO YOU SEE THE UNTENABLE INSANITY HERE?

Beyond this there is the larger, associated myth that the thousands of people who will be injured and killed in this process are necessary sacrifices for the greater good: attaining L4/5 so that these crashes are avoided. It is a myth because the trillion miles needed by each AV maker to get there require time, money, injuries, and deaths they cannot sustain.
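To make the learning-by-failure point concrete, here is a minimal, illustrative sketch (my own toy example with made-up scenario names, not any AV maker's actual pipeline): a tabular trial-and-error learner can only improve its response to scenarios it actually experiences, so a scenario that the safety driver always disengages out of never gets learned.

```python
import random

# Toy trial-and-error learner: a lookup table from scenario -> learned response.
# Scenario names are hypothetical; "crossing_truck" stands in for the
# Brown/Banner crash scenario class.

SCENARIOS = ["lane_keep", "cut_in", "crossing_truck"]

def train(experienced_scenarios, episodes=1000, seed=0):
    """Learn a response only for scenarios the system actually experiences."""
    rng = random.Random(seed)
    policy = {s: "unhandled" for s in SCENARIOS}  # default: system fails here
    for _ in range(episodes):
        s = rng.choice(experienced_scenarios)
        # Trial and error: each exposure (including real-world failures and
        # crashes) lets the system adjust until the scenario is handled.
        policy[s] = "handled"
    return policy

# If the human safety driver always disengages before "crossing_truck"
# occurs, that scenario is never experienced -- and never learned.
policy = train(["lane_keep", "cut_in"])
print(policy["crossing_truck"])  # still "unhandled"
```

The point of the sketch is structural, not algorithmic: whatever the learning method, a crash thread that is always interrupted never enters the training experience.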
The next doozy
“The traditional division of oversight, in which NHTSA regulates vehicle safety and the states monitor drivers, may not apply to a developmental ADS. It might not be immediately apparent who controls the vehicle, or whether vehicle control and supervision are shared between the computer (the vehicle) and the human operator. A lack of appropriate policy from NHTSA and the states leaves the public vulnerable to potentially unsafe testing practices.
To ensure that testing of AVs on public roads is conducted with minimal risk, meaningful action from both NHTSA and the states is critical. Additionally, manufacturers must ensure that the design, development, verification, and validation of safety-related underlying electronics and software are reliable and safe for the conditions a vehicle is designed to encounter.”
Folks, when a development process requires mistakes to be made so they can gradually be corrected, and humans have to allow this to happen, IT CANNOT BE SAFE BY DESIGN. You cannot both experience the crash AND AVOID IT. Therefore, the problem is developing in the public domain at all, NOT how that development is done.
Finally, this coup de grâce, where the NTSB chastises NHTSA for waiting to take action until after injuries or deaths have occurred
“NHTSA has informed the NTSB that it plans to ensure the safety of lower levels of driving automation systems through its enforcement authority and a surveillance program aimed at identifying safety-related trends in design or performance defects, and not through regulations.19 This approach is misguided because it relies on waiting for problems to occur rather than addressing safety issues proactively. For an acceptable level of safety to be achieved, a robust surveillance program must be in place, so that safety-related vehicle defects can be identified in a timely manner.”
In the end, the NTSB is a frustrating, multi-faceted mess: helpful, complicit, and hypocritical all at once. But they are on higher ground here, since NHTSA is far worse off. NHTSA can't even manage consternation over sacrificing humans needlessly. That is something NHTSA under Dr. Owens is all too glad to do, under the ridiculous guise that it does not want to slow down technological advancement by imposing safety rules. Of course, this can be remedied with the right simulation. You know who gets this? Not the NTSB or NHTSA, but a very small group in USDOT who created "VOICES". (The only part they miss is that the simulation technology needed already exists. No development is needed.)
VOICES information — https://usdot-voices.atlassian.net/wiki/spaces/VP/overview
More details here
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now
SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation (featuring Dactle)
USDOT introduces VOICES Proof of Concept for Autonomous Vehicle Industry-A Paradigm Shift?
NHTSA’s Framework for Automated Driving System Safety is a Massive Missed Opportunity
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Industry Participation — Air and Ground
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Member UNECE WP.29 SG2 Virtual Testing
- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)
- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation
- Member CIVATAglobal — Civic Air Transport Association
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee
- Presented with the IEEE Barus Ethics Award for Post-9/11 DoD/DHS Whistleblowing Efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles below. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the plan predicts will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
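As a rough illustration of what an end-state scenario matrix enables (a simplified sketch of my own, with hypothetical ODD dimensions, not Dactle's actual system): simulation lets you enumerate the ODD scenario space up front and report exactly which scenarios remain untested, instead of waiting for public-road exposure to reveal the gaps.

```python
from itertools import product

# Hypothetical ODD dimensions; a real scenario matrix would have far more
# axes (actors, kinematics, sensor conditions, degradations, etc.).
WEATHER = ["clear", "rain", "snow"]
LIGHTING = ["day", "night"]
ROAD = ["highway", "urban", "rural"]

def scenario_matrix():
    """Enumerate the full ODD scenario space as (weather, lighting, road) tuples."""
    return set(product(WEATHER, LIGHTING, ROAD))

def coverage_report(tested):
    """Return the coverage fraction and the scenarios not yet exercised."""
    matrix = scenario_matrix()
    untested = matrix - set(tested)
    return len(matrix - untested) / len(matrix), sorted(untested)

tested = [("clear", "day", "highway"), ("rain", "night", "urban")]
fraction, gaps = coverage_report(tested)
print(f"coverage: {fraction:.0%}, {len(gaps)} scenarios untested")
```

Knowing the untested set explicitly is the opposite of the surveillance-after-the-fact approach criticized above: the gaps are visible before anyone is put at risk.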