Milgram could have just watched NHTSA, the FAA, Boeing and the Autonomous Vehicle industry

Michael DeKort
4 min read · Dec 29, 2019


In the past year the Boeing 737 MAX and Autonomous Vehicle debacles have exposed these industries, every level of government oversight, most of the press and even the public to a Milgram Experiment on a grand scale. For those who are not aware of it, the Milgram Experiment was a series of studies conducted by Stanley Milgram in the early 1960s to see how easily a person can be convinced to set aside their morality and ethics and harm another person when they perceive that someone in charge thinks it is necessary. And by necessary, I mean not necessarily for any important purpose. As each of these debacles unfolds over time, we see how much easier the process is when the subjects are exposed to people around them, whether at home, at work, on social media, in the news, etc. It is easier still when the outcome was supposed to be the exact opposite: to protect or save lives, not take them. (In fairness, the experiments were set up to make it clear someone had a high chance of dying. In the Boeing and AV cases the likelihood is statistically far lower, though I would suggest that is not enough of a differentiator to invalidate my point.)

In the case of Boeing we see every couple of weeks how pathetic Boeing's leadership was and still is. People were alerted to the issue years ago and did nothing. And now we see the FAA, Congress and the DoJ still playing patty-cake. Neither Boeing nor any individual has been charged or fined by any governmental body. And the FAA Inspector General remains silent. Lots of strong language, but no real courage or conviction among any of them to hold people accountable.

In the Autonomous Vehicle industry, the situation is far worse because the problem is not yet widely accepted, and the situation has, unfortunately, not become dire enough for most people to grasp what is really happening, let alone drive change. While seven people have died needlessly as human guinea pigs, none has yet been a child or a family. (This will change. There will come a time when companies can no longer allow disengagements when accidents are about to occur; if accident scenarios are not experienced over and over, they cannot be learned.) All of this results in most people buying into the myth that public shadow and safety driving can create a legitimate autonomous vehicle, and that the lives the process takes are necessary and for the greater good. In reality, it is impossible to drive the trillion miles, or spend $300B, needed to stumble and restumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason. The first issue is handover. The process cannot be made safe for most complex scenarios because the time needed to regain proper situational awareness and do the right thing, especially in time-critical scenarios, cannot be provided. Another dangerous area is learning accident scenarios: AV makers would have to run thousands of accident scenarios thousands of times each to accomplish this, which would cause thousands of injuries and deaths. The final issue is Deep Learning, which is often fooled by patterns and shadows.
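As a rough illustration of why the mileage numbers get so large, here is the standard zero-failure statistics argument in a few lines of Python. The specific rate and confidence level below are my own illustrative assumptions, not figures from this article; they show that even demonstrating parity with human drivers on fatalities alone requires hundreds of millions of miles, before accounting for rarer scenarios or repeated exposure.

```python
import math

# Assumption for illustration: roughly the US average human fatality rate,
# about 1.09 deaths per 100 million vehicle miles.
human_fatality_rate = 1.09 / 1e8  # deaths per mile
confidence = 0.95

# Zero-failure (Poisson) bound: if an AV fleet drives n miles with zero
# fatalities, we can claim its rate is below the human rate at the given
# confidence when exp(-rate * n) <= 1 - confidence.
miles_needed = -math.log(1 - confidence) / human_fatality_rate
print(f"{miles_needed / 1e6:.0f} million fatality-free miles")
```

This yields on the order of 275 million fatality-free miles just to match the human baseline with 95% confidence; demonstrating a meaningfully better rate, or covering rare scenarios that occur far less often than once per hundred million miles, pushes the total into the billions and beyond.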

Like the 737 MAX issue, there is a technical resolution to the issues in the AV industry. Once the echo chamber is broken, all folks need to do is leverage DoD simulation technology to replace most public shadow and safety driving, as well as the poor gaming-based simulation in use across the industry, and rely more on dynamic sense and avoid than deep learning. The only question is when, and how many people will have to be injured or killed to get there. Will it need to be more than the 346 killed by Boeing?

More in my articles here:

Proposal for Successfully Creating an Autonomous Ground or Air Vehicle

Autonomous Vehicles Need to Have Accidents to Develop this Technology

FAA making a grave error in granting autonomous drone and aircraft waivers to develop in the public domain

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

Why are Autonomous Vehicle makers using Deep Learning over Dynamic Sense and Avoid with Dynamic Collision Avoidance? Seems very inefficient and needlessly dangerous?

Using the Real World is better than Proper Simulation for AV Development — NONSENSE

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine — End Public Shadow/Safety Driving

Relevant Biography

Former systems engineer, engineering manager and program manager for Lockheed Martin, including work on aircraft simulation, and software engineering manager for all of NORAD and the Aegis Weapon System.

Key Autonomous Vehicle Industry Participation

- Lead — SAE On-Road Autonomous Driving (ORAD) Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- SME — DIN/SAE International Alliance for Mobility Testing & Standardization group to create sensor simulation specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

- Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS efforts

My company is Dactle. We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based development and testing system with an end-state scenario matrix to address all of these issues. We can supply all of the scenarios, the scenario matrix tool, the data, the integrated simulation, or any part of this system: a true digital twin covering all model types. If you would like to see a demo or discuss this further, please let me know.


Written by Michael DeKort

Non-Tribal Truth Seeker-IEEE Barus Ethics Award/9–11 Whistleblower-Aerospace/DoD Systems Engineer/Member SAE Autonomy and eVTOL development V&V & Simulation
