Are the Autonomous Vehicle Industry and NHTSA on higher ground than Boeing and the FAA?

Michael DeKort
12 min read · Jan 12, 2020


I am sure most people are well aware of the debacle that is the 737 MAX. Just today there are new articles out highlighting internal emails that have finally been provided by Boeing. They show the incredibly selfish, cowardly, unethical, immoral, negligent and grossly negligent rot that is now Boeing. (Juxtapose this with articles on the recently fired CEO Dennis Muilenburg keeping $81.7M in compensation. Keep in mind that Boeing offered only $100M to the families of the 346 people who died. Do the math on that. Could Boeing and its board be any more tone-deaf or corrupt? Clearly, we need laws limiting the protection individuals and corporations have from civil and criminal prosecution.)

As someone who has been championing the FAA and the incredible work it had done up until the 737 MAX debacle, I find this all especially disappointing. I used to routinely mention the 6.4-sigma safety rating, and that 2017 was the safest year on record in the US, with no deaths from an air tragedy. I also trumpeted the FAA as the example of what NHTSA should follow to assure autonomous vehicles are designed and tested safely. I would often cite the flight simulator certification process and the associated QTG (Qualification Test Guide) procedures as a model for how I believe that development and test effort should be conducted, as well as for qualifying the simulations and simulators that should be used.

Before I ask you to contemplate who is on higher ground and give you my determination on the matter, I would like to lay out some key facts and thoughts on the information I am using to make that determination. I chose ignorant, negligent and grossly negligent as my descriptors because human life is involved. Within the context of the latter two descriptors, please include unprofessional, unethical and immoral by default. (I have done my best to get the facts right regarding the 737 MAX. If I have something in error or not represented well, please tell me what it is and provide a citation I can use to validate the information, so I can remedy my mistake. I also want to note that, unlike some, I am not including the pilots as part of the root cause. Under the circumstances I find that to be a completely unfounded and insulting determination.)

Boeing and the FAA — 737 MAX Summary

· The core problem with the 737 MAX stemmed from Boeing wanting to improve the fuel economy of the jet while minimizing the cost and time to do so. To accomplish this Boeing added a larger engine to the plane. Due to its size the engine had to be mounted further forward and higher to make room for it under the wing. That changed the flight characteristics of the aircraft by creating a tendency for the nose to pitch up in flight, which could lead to a stall. Many people say this design choice should never have happened and a new aircraft should have been built. Rather than build a new aircraft, Boeing chose to mitigate the flight attitude issue by creating an automatic and automated process to counter the stalls, called MCAS (Maneuvering Characteristics Augmentation System). As this system would engage without the pilot's initiation, control or knowledge, it is considered a form of automation, not unlike lane keeping for ground vehicles at a very high level. To make matters worse, Boeing declared the engine and MCAS modification a minor change. This was done to avoid the cost and time involved in following rigorous engineering, testing and training processes, including updating flight simulator training, training manuals and checklists. It also minimized the information Boeing had to provide to the FAA. Along the way the engineers neglected to use redundant sensors for the MCAS system. This is the flaw that broke the camel's back. Despite the engine mod, which should not have occurred, I believe Boeing's engineers would have handled the engineering and testing of the MCAS solution properly had their arms not been tied behind their backs. I could be wrong about this. The two tragedies were caused by that lone angle of attack sensor failing. The faulty sensor reported a nose-up attitude that did not exist, so MCAS forced the nose down to compensate, overreacting to a stall condition that was not actually there. This was compounded by the aircraft's real tendency to pitch up due to the engine mod. Because the pilots had no idea the MCAS system existed, they were surprised by the plane forcing the nose down without their input. (Had a second or redundant sensor been in place this would likely not have happened; a minimal sketch of the redundancy point follows this list.) This caused the pilot and aircraft to fight each other, making the plane undulate up and down until, eventually, the planes crashed.

· With regard to the FAA: they were clearly misled and not fully informed by Boeing. Making matters worse, the FAA has to rely on industry to volunteer information and, in many cases, to self-police and self-certify, because there is not enough funding and manpower for the FAA to be more involved. However, it also appears they were not proactive enough regarding the oversight that should have been done here. Many have said they have become too cozy with the industry. Finally, there is the FAA's position regarding Part 107 waivers to develop and test autonomous UAV/UAS in the public domain. Unfortunately, the FAA and the aerospace industry have fallen under the spell of the IT, gaming and Silicon Valley folks working on autonomous ground vehicles. You can see more on this in my articles below.
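
To make the single-sensor flaw concrete, here is a minimal, purely illustrative Python sketch. This is not Boeing's actual MCAS logic, and the threshold and stall angle are invented for the example. It only shows how a cross-check between two angle of attack (AoA) sensors changes the failure behavior, from commanding nose-down trim on one bad reading to disabling the function and alerting the crew.

```python
# Illustrative sketch only -- not Boeing's actual MCAS logic.
# With one sensor, a bad reading drives the automation directly;
# with two, a disagreement can disable the function instead.

DISAGREEMENT_LIMIT_DEG = 5.0   # hypothetical cross-check threshold
STALL_AOA_DEG = 14.0           # hypothetical stall-onset angle of attack

def command_single_sensor(aoa_deg: float) -> str:
    """Single-sensor design: a stuck-high reading commands nose down."""
    return "TRIM_NOSE_DOWN" if aoa_deg > STALL_AOA_DEG else "NO_ACTION"

def command_redundant_sensors(aoa_left_deg: float, aoa_right_deg: float) -> str:
    """Two-sensor design: disagreement fails safe and alerts the crew."""
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREEMENT_LIMIT_DEG:
        return "DISABLE_AND_ALERT"  # hand control back to the pilots
    mean_aoa = (aoa_left_deg + aoa_right_deg) / 2.0
    return "TRIM_NOSE_DOWN" if mean_aoa > STALL_AOA_DEG else "NO_ACTION"

# A left sensor stuck at 22 degrees while the right reads a normal 5 degrees:
print(command_single_sensor(22.0))            # TRIM_NOSE_DOWN (erroneous)
print(command_redundant_sensors(22.0, 5.0))   # DISABLE_AND_ALERT
```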

Boeing and the FAA — My POV regarding their performance — 737 MAX and Autonomy Issues

Boeing

· I believe Boeing as a company and several of its personnel are grossly unprofessional, unethical, immoral and criminally negligent.

· I do not believe they set out to hurt anyone. I believe this is evidenced by their belief that the odds of a tragedy were low. What made a tragedy much more likely was not designing in the redundant sensor. Having said this, what may have started as negligence grew into gross negligence, and given the emails, that shift appears to have occurred before the first crash.

· The accidents that occurred were of no benefit at all to Boeing. The reason I mention this will become clear later, in the autonomous vehicle section.

· I believe that none of the engineering involved was beyond the capability of Boeing's engineers. I believe their competence and best practices were largely neutered by management's directive that the change be designated as minor.

FAA

· Overall, I believe the FAA was duped and kept in the dark by Boeing. However, the FAA made that far too easy. This makes them negligent. Whether or not it crosses over into gross negligence I am not sure; I would need to know whether they became aware of the problems prior to the crashes.

· With regard to their new Part 107 waiver process and how they are handling the development and testing of autonomous systems, they appear to be headed down the same bad road NHTSA is on. They are allowing, if not enabling, public shadow and safety flying versus the use of proper simulation, meaning DoD/aerospace-based simulation technology. That process is untenable and harms people as human test subjects for no reason. The part that is especially troubling here is that these folks should know better, given they have used that technology for decades. Having reached out to some of these folks, it appears the people involved come from the same IT, gaming and Silicon Valley world as their ground vehicle counterparts. They probably have no idea what the other side can do. Or they are being misled by the gaming-based simulation folks, something I think is quite possible given the misleading use of words like “digital twin” and “physics” by many of them. Given all of this, I find their current behavior to be negligent and unethical. If they are aware of what is right and wrong, though, they would be grossly negligent. I want to end here by saying I believe this issue has the potential to be far worse than the 737 MAX tragedies. Not only could more people be dead, but each one of them would be a procedural human guinea pig. (For more on these issues and how I believe they should be resolved, please see my first article below.)

The Autonomous Vehicle Industry and NHTSA — Autonomous Vehicle Design and Testing Summary

· The core issues here revolve around the myth that public shadow and safety driving can create a legitimate autonomous vehicle, and that the lives the process takes are necessary and for the greater good. It is impossible to drive the trillion miles, or spend the $300B, required to stumble and re-stumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason. The first issue is handover. The process cannot be made safe for most complex scenarios because the time needed to regain proper situational awareness and do the right thing, especially in time-critical scenarios, cannot be provided. (The sketch after this bullet illustrates why.) Another dangerous area is learning accident scenarios. AV makers will have to run thousands of accident scenarios thousands of times to accomplish this. That will cause thousands of injuries and deaths. The next issue is deep learning, which is often fooled by patterns and shadows. The final issue is that the gaming-based simulation being used by the industry now cannot replace the real world, because it has significant technical limitations that keep it from being a legitimate physics-based, all-model-type digital twin. (Please see more on this in my first article below.)
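
As a back-of-the-envelope illustration of the handover problem, here is a small Python sketch of the distance a vehicle covers while a safety driver regains situational awareness. The speeds and takeover times below are assumptions chosen for illustration; published L3 handover studies report takeover times ranging from a few seconds to tens of seconds.

```python
# Back-of-the-envelope sketch: distance traveled while a safety driver
# regains situational awareness after an automation handover.
# All speeds and takeover times are assumptions for illustration.

MPH_TO_FPS = 5280.0 / 3600.0  # feet per second per mph

def distance_traveled_ft(speed_mph: float, takeover_s: float) -> float:
    """Feet covered at constant speed during the takeover interval."""
    return speed_mph * MPH_TO_FPS * takeover_s

for speed in (25, 45, 65):          # city street, arterial, highway
    for takeover in (3.0, 10.0):    # alert driver vs. distracted driver
        d = distance_traveled_ft(speed, takeover)
        print(f"{speed} mph, {takeover:4.1f} s takeover -> {d:6.0f} ft")

# At 65 mph even a 3 s takeover consumes ~286 ft -- roughly a football
# field traveled before the driver has even begun to act on the situation.
```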

The Autonomous Vehicle Industry and NHTSA — My POV regarding their performance — Autonomy

The Industry

· I believe that at this point most of the industry is somewhere between ignorant, negligent and grossly negligent. Unlike Boeing's engineers, most of the folks working on these systems, while clearly intelligent, are not experts in most of the technical areas or required design approaches involved. Most of this is new to them from a domain and engineering approach POV. This is because much of what is being done here is new and extremely hard. No one knows how to do most of it well yet. The other issue is that most of them come from IT, not aerospace and defense, for example. While those industries would find this extremely challenging as well, they would have several advantages. IT uses an Agile or bottom-up approach; these folks do not have much top-down systems engineering experience, especially on complex systems. They also do not have much experience with exception handling or “what-if” engineering. And finally, most of the vehicle and sensor systems are new to them. Of course, they make it far, far, far tougher by choosing to rely on public shadow and safety driving, deep learning and poor (gaming-based) simulation. Which brings me to the negligent and grossly negligent part. The whole process relies on making humans in and outside the vehicles test subjects or guinea pigs. The vast majority of development and testing is done not in simulation or on test tracks but in the public domain. (By most I do not mean miles driven. I mean where the core or critical development and testing is done.) These activities rely on the driver “safety driving,” which means they cede steering or lateral control to let the system drive, monitoring it so they can take over if needed. In addition, accident scenarios, especially those that cannot be avoided, must be experienced by the system over and over so the machine learning can learn to handle them. Both processes literally require that accidents occur. That means humans will have to be injured or fatally injured as a result. (The sketch after this bullet shows the scale of that arithmetic.) Elon Musk and the former head of NHTSA (now the Chief Safety Innovation Officer for Zoox) have admitted there will be injuries and deaths. They and most of the industry believe lives lost now will save more later. They believe this is the best or only way to develop and test these systems. In many cases that literally means safety drivers must sacrifice their lives, by not disengaging and taking over from a flawed system, so the machine learning can experience the threat. They believe that due diligence is often messy but must be done, not unlike when someone is harmed in drug trials. The problem, of course, is that none of this is true. (More on this in my first article below.) The kicker here is that the industry thinks this path is unavoidable because simulation cannot create enough of a physics-based, all-model-type digital twin. While this is true of the gaming-based systems currently being used, it is not true of DoD/aerospace simulation technology. This brings me to my determination on whether the industry and the people working in it are ignorant, negligent or grossly negligent. To the degree they understand or even suspect what I just stated and go along with it, they are somewhere on that continuum. My feeling is we are probably evenly distributed at this point but clearly moving to the right, unfortunately. Having said this, I believe there are hundreds of emails inside the industry, some at every AV maker, that resemble what we are seeing from Boeing. It sure would be nice if someone brought those to light now.
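
To show the scale implied by the accident-scenario learning claim, here is a small Python sketch. Every number in it is an assumption invented for illustration, not measured data; the point is only that repeating crash scenarios in the public domain multiplies human harm, while in simulation the same scenario matrix costs compute time.

```python
# Illustrative arithmetic for the accident-scenario learning claim.
# Every count below is an assumption for illustration, not measured data.

scenario_types = 5_000         # assumed distinct crash/near-crash scenarios
runs_per_scenario = 1_000      # assumed repetitions for the ML to converge
injury_rate_per_run = 0.01     # assumed fraction of real-world runs causing injury

total_runs = scenario_types * runs_per_scenario
expected_injuries = total_runs * injury_rate_per_run

print(f"{total_runs:,} crash-scenario runs required")        # 5,000,000
print(f"~{expected_injuries:,.0f} expected injuries if run in the public domain")
# In a legitimate simulation, the same matrix costs compute hours
# rather than injuries and deaths.
```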

NHTSA

· NHTSA largely mirrors my description of the industry above, though they are probably lagging, since they unfortunately take their cues from industry. There are a couple of events or areas I would like to mention specifically. (For each of these I will include links that explain them further.)

The 2015 L3 Handover Study (Mark Rosekind)

o NHTSA saved children from going to school in autonomous shuttles and leaves them in danger everywhere else — https://medium.com/@imispgh/nhtsa-saved-children-from-going-to-school-in-autonomous-shuttles-and-leaves-them-in-danger-4d77e0db731

Falsifying data to make Tesla’s “autopilot” performance look better

o http://www.safetyresearch.net/Library/NHTSA_Autosteer_Safety_Claim.pdf

o https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-safety-study-unmasked

Not establishing any safety standards or testing — Turning a blind eye to the deaths that have occurred and enabling more

o NHTSA is Enabling the Crash of the Driverless Vehicle Industry and More Needless Human Test Subject Deaths — https://medium.com/@imispgh/nhtsa-is-enabling-the-crash-of-the-driverless-vehicle-industry-and-more-needless-human-test-28a6e7becefc

More information can be found here

This is the key article I referred to several times above — Proposal for Successfully Creating an Autonomous Ground or Air Vehicle

· https://medium.com/@imispgh/proposal-for-successfully-creating-an-autonomous-ground-or-air-vehicle-539bb10967b1

Autonomous Vehicles Need to Have Accidents to Develop this Technology

· https://medium.com/@imispgh/autonomous-vehicles-need-to-have-accidents-to-develop-this-technology-2cc034abac9b

FAA making a grave error in granting autonomous drone and aircraft waivers to develop in the public domain

· https://medium.com/@imispgh/faa-making-a-grave-error-in-granting-autonomous-drone-waivers-to-develop-in-the-public-domain-9e4292fbbcba

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

· https://medium.com/@imispgh/simulation-can-create-a-complete-digital-twin-of-the-real-world-if-dod-aerospace-technology-is-used-c79a64551647

Why are Autonomous Vehicle makers using Deep Learning over Dynamic Sense and Avoid with Dynamic Collision Avoidance? Seems very inefficient and needlessly dangerous?

· https://medium.com/@imispgh/why-are-autonomous-vehicle-makers-using-deep-learning-over-dynamic-sense-and-avoid-with-dynamic-3e386b82495e

Tesla hits Police Car — How much writing on the wall does NHTSA need?

· https://medium.com/@imispgh/tesla-hits-police-car-how-much-writing-on-the-wall-does-nhtsa-need-8e81e9ab3b9

NHTSA Uber Determinations are a Tragedy for Everyone

· https://medium.com/@imispgh/nhtsa-uber-determinations-are-a-tragedy-for-everyone-717dd302d930

My name is Michael DeKort — I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 whistleblowing efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with significant real-time, model fidelity and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and performance differences between what the plan says will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
