Michael DeKort
Aug 26, 2017


Mr. Marakby

My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, the Aegis Weapon System, NORAD and on C4ISR for the US Coast Guard and DHS. I have also worked in Commercial IT, including cybersecurity, and I received the IEEE Barus Ethics Award for whistleblowing regarding the DHS/USCG Deepwater program post 9/11.

I am contacting you because I believe there are issues, which you may not be aware of, regarding how autonomous vehicles are being engineered and how the industry is handling the creation of best practices. These issues will literally preclude ever reaching autonomous levels 4 or 5, and they will lead to thousands upon thousands of avoidable accidents, injuries and casualties in the future. That is in addition to the unnecessary expense of over $300B that each AV maker will have to spend in an effort to shadow drive one trillion miles to get to L4/L5. I am fully aware that what I will be communicating here is well outside “conventional wisdom”. I assure you, however, that I am objectively correct; I not only provide references for each item I discuss but can connect you to industry and academic experts who are in agreement. (I came to this position because of the time I have spent in aerospace and Commercial IT, especially in aircraft simulation. The two worlds are very different. Not a lot of people are exposed to both and, as a result, aware of this.)

Issues with Public Shadow Driving AI that will make achieving Autonomous Levels 4 and 5 Impossible

1. Miles and Cost — One Trillion Miles and $300B

a. Toyota and RAND have stated that in order to get to levels 4 and 5, one trillion miles will have to be driven. This is to accommodate the uncontrollable nature of driving in the real world: literally stumbling onto scenarios, and then having to re-stumble onto them, to train the AI. To accomplish this in 10 years would cost over $300B. That extremely conservative figure is the cost of 684k drivers driving 228k vehicles 24/7. This expense in time and money is per company and per vehicle. (RAND has actually stated it cannot be done.)
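A rough back-of-envelope check shows how these numbers hang together. The average fleet speed of 50 mph is my assumption, not a figure from Toyota or RAND; everything else comes from the estimate above:

```python
# Sanity check of the trillion-mile / $300B estimate.
# ASSUMPTION (mine): the fleet averages 50 mph, running 24/7 for 10 years.
AVG_SPEED_MPH = 50
HOURS_PER_YEAR = 24 * 365
YEARS = 10
VEHICLES = 228_000
DRIVERS = 684_000          # three 8-hour shifts per vehicle (684k / 228k = 3)
TOTAL_COST = 300e9         # $300B, per company

miles = VEHICLES * AVG_SPEED_MPH * HOURS_PER_YEAR * YEARS
print(f"{miles / 1e12:.2f} trillion miles")            # ~1.00 trillion

cost_per_driver_year = TOTAL_COST / (DRIVERS * YEARS)
print(f"${cost_per_driver_year:,.0f} per driver-year")  # ~$43,860
```

Under that speed assumption the 228k-vehicle fleet covers almost exactly one trillion miles in a decade, and $300B works out to roughly $44k per driver per year, which is why the figure is called extremely conservative.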

2. Injuries/Casualties of Public Shadow Driving

a. Data from NASA, Clemson University, Waymo, Chris Urmson (Aurora) and the UK have shown that the situation awareness and reaction times of monitoring drivers are very poor. Between 7 and 24 seconds are needed to properly acclimate and react. This delay results in drivers not being able to function properly, especially in critical scenarios; they often make the wrong decision or overreact. Many, including Waymo, Volvo, Ford and Chris Urmson (Aurora), have called for L3 to be skipped due to these issues. The fact of the matter is that if L3 is dangerous, then so is using public shadow driving for L4 and L5. (The Netherlands uses simulation and test tracks as opposed to public shadow driving.)

b. Here is a video of a Tesla driver having to take over for the vehicle. Keep in mind this was on a clear night, no vehicles were to his left, the roads were not slippery, and it was not a case where the driver was trying to force the vehicle to learn the accident case — http://www.carbuzz.com/news/2017/7/28/Human-Saves-An-Autopilot-Driven-Tesla-Model-S-From-Disastrous-Crash-7740342/

3. Injuries and Casualties caused in Complex, Dangerous and Accident Scenarios

a. In order for AI to learn how to handle complex, dangerous and actual accident scenarios, it has to run them over and over, and the runs have to precisely or closely match the original scenario. To date this is not being done, which is why there have not been many accidents, injuries or casualties. When that time comes, the shadow drivers will have to drive and re-drive scenarios of progressively higher complexity, involving many other vehicles or entities, bad weather, bad road conditions, system errors etc. Many of those scenarios will be known accident scenarios. Learning these situations will literally mean billions of miles driven and possibly millions of iterations of these scenarios run to get this data. That will result in accidents, injuries and even casualties in the majority of these cases.

b. To date no children or families have been harmed by this process. (There have, however, been injuries and casualties involving drivers.) That is largely because only benign scenarios are being run: the public shadow driving being utilized now occurs on well-marked, well-lit, low-complexity, well-mapped roads in good environmental conditions. Given that every company bringing this technology to market would have to drive that trillion miles and learn from progressively more dangerous scenarios, casualties are inevitable. I suggest that when this becomes known, or when that first mass tragedy or death of a child occurs, the public, litigators and governments will react strongly. That will halt progress for a very long time, far more than self-realization and self-policing would.

c. This is where George Hotz's comma.ai and PolySync have to be mentioned. These companies are selling autonomous kits and giving away the software source code. This means anyone can modify those systems, including people who want to weaponize vehicles. Beyond that, it allows any person who can change software to modify the system and then test it in the public domain. This is the worst kind of shadow driving. These developers are not experts of any kind; they have no idea what the system-level ramifications of their changes are. This practice should be outlawed.

4. AI — Machine Learning — Neural Networks have Inherent Flaws.

a. MIT has stated that these processes miss corner or edge cases, which results in spontaneous and unexpected errors, and that the engineers using the practice do not entirely know how it works.

If you look at these areas individually, let alone in combination, you can see that for legal, moral, ethical and financial reasons public shadow driving is untenable.

As for simulation being the solution:

I believe the answer is to create an international Simulation Trade Study/Exhibit and Association. The purpose being to:

· Make the industry aware of what simulation can do. Especially in other industries such as aerospace. (Where the FAA has had detailed testing to assess simulation and simulator fidelity levels for decades.)

· Make the industry aware of the MCity approach to finding the most efficient set of scenarios, bringing that one trillion miles down by 99.9%.

· Make the industry aware of who all the simulation and simulator organizations are.

· Evaluate the available products to determine their current capabilities.

· Determine how close the industry and any individual product is to filling all the capabilities required to eliminate public shadow driving. Where there are gaps determine a way forward to improving products or possibly creating a consortium. This may involve utilizing expertise from other industries. (The University of Michigan just released a study showing simulation can be used. They make matters a bit worse by using ONLY Grand Theft Auto for the study vs aerospace simulation products. But it’s a start. I will reach out to them. Link — https://spectrum.ieee.org/cars-that-think/transportation/self-driving/how-much-can-autonomous-cars-learn-from-virtual-worlds).

· Note — Most companies use simulation; the issue is to what degree they use it. Most of the individuals and companies in this space are unaware of where aerospace simulation is, and that its technology can be used to improve autonomous-industry simulation and almost eliminate public shadow driving.

Key Item — 8–23–2017

Waymo has now switched from shadow driving to simulation and is using MCity’s process to create core scenarios. Just a short time ago, Waymo announced Level 3 should be skipped. Waymo is doing this for a reason. Others should pay attention.

There is also one more issue that should be understood, and that is the root cause. There is a perfect storm involved here. The fact of the matter is that the vast majority of engineers developing these systems, however hard-working, dedicated and intelligent they are, actually have very little experience in the engineering areas involved. The vast majority come from Commercial IT: developers who literally make Twitter, Google Search, Uber etc. They do not have backgrounds in complex systems engineering, exception handling, traffic engineering etc. It is assumed they are up to this because they make great apps, games etc. The fact of the matter is they are no more qualified to build autonomous cars than they are to design an aircraft. (Yes, Elon Musk has SpaceX. But if you look at the history of that company, NASA rejected his first code delivery for being poorly tested and not handling nearly enough exceptions or cases where things do not go as planned. He went to the aerospace community, hired engineers with the right background and fixed it. There is no NASA-like organization filling the same role here.) The reason they are making the progress they are is two-fold: AI does have value and is learning for them, masking their experience gaps; and because they are mostly driving the easiest scenarios at this time, that progress is grossly exaggerated, to the point of misleading the public and government officials into a significant level of false confidence. Government personnel look to industry because they do not have the requisite experience either. Hence the perfect storm.

Beyond this, there are several areas where I believe the federal government needs to ensure that minimum safety standards, or best practices, are met. The belief that industry will determine them is not well founded. Look at the history of the FAA: it took many tragedies to get to the point where the FAA is today, a point where it sets minimum standards. That is a process NHTSA and DoT should model themselves after. (This is especially important regarding the FAA's handling of simulation quality levels and certification.) Right now the autonomous industry is not coordinated enough, nor is it actually trying to set best practices. The reasons for this are common: companies do not want to give away IP; they do not want others to know where they are in the process; and most will not volunteer to spend money in areas where they are not mandated to, because they do not want to suffer a financial and time-to-market disadvantage. Often this process results in the least that could be done, not best practices.

Areas relating to Common Federal Standards

1. Scenarios — Cover all the core normal and hazardous scenarios. A minimal set of scenarios these systems are expected to handle, and how they are to be handled, should be created and tested against. Use of public shadow driving will not create that matrix unless every possible scenario, especially every road-design, road-condition and weather scenario, is experienced by every AV maker. The only way to know whether those are complete is to create a baseline inventory or matrix.

2. Sensor Quality and Redundancy in all conditions — Right now most companies are focusing on one or two main sensors, in some cases camera- and LIDAR-based. Neither of those systems is effective in bad weather. As in the aerospace industry, there should be double if not triple verification of sensor data quality and availability for every core scenario. To meet this, aerospace systems are usually based on several radar types and navigation aids, then visual and other systems.

3. Systems Operational Commonality — AV systems need to be operated the same from vehicle to vehicle, like cruise control is now. It will be very problematic if engagement and disengagement of the systems differ from vehicle to vehicle.

4. Scenario Handling Commonality — Key scenarios have to be handled the same in every vehicle, or people will either take over or fail to take over from the systems based on an expectation they may have from another vehicle.

5. Data Storage and Retrieval — Black Box — Right now companies are not transmitting data often enough, nor are they building crash- and fireproof black boxes to ensure critical data is captured. As with the NTSB and airplanes, this data also needs to be received by the NTSB first. Finally, when these vehicles are privately owned, the data should be available to the vehicle owners, if not owned by them outright. (Today owners can access CAN bus data.)

For much more detail on what I believe are the keys to running a program that gets to L4, please see the link to my LinkedIn article below.

Thank you for your time.

Michael DeKort

Who will get to Autonomous Level 5 First and Why.

https://www.linkedin.com/pulse/who-get-autonomous-level-5-first-why-michael-dekort
