Letter to Congress — Handling of Minimum Standards for the Autonomous Vehicle Industry

My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked on aircraft simulation, the Aegis Weapon System, NORAD and C4ISR for the US Coast Guard and DHS. I also worked in commercial IT, including cybersecurity. I received the IEEE Barus Ethics Award for whistleblowing regarding the DHS/USCG Deepwater program post 9/11.

I am also the founder of Professionals for Responsible Mobility.

I am contacting you because I believe there are issues regarding how autonomous vehicles are being engineered that you may not be aware of. These issues will preclude ever reaching autonomous levels 4 or 5, and they will lead to thousands upon thousands of avoidable accidents, injuries and casualties in the future. That is in addition to the unnecessary expense of over $300B that would have to be spent by each AV maker in an effort to shadow drive one trillion miles each to get to L4/L5. I am fully aware that what I will be communicating here is well outside “conventional wisdom”. I assure you, however, that I have done my homework. As a point of confirmation, Waymo has released a new Safety Report that confirms all of my key issues are correct.

· Waymo Safety Report — Confirms what I have been saying about Public Shadow Driving etc — https://www.linkedin.com/pulse/intel-mobileeye-creates-fake-autonomous-vehicle-ad-lebron-dekort/

The NTSB just found that Tesla bears some responsibility for the Joshua Brown accident. Unlike NHTSA, which let Tesla completely off the hook, the NTSB found that Tesla’s expectations of drivers were too high and that if the system cannot function well on certain types of roads, it should not engage. They also said that only sensing the driver touching the wheel is not good enough to determine whether the driver is paying attention. Of course the next step is to show the NTSB that there is no reliable method to keep drivers paying attention properly, per NASA, Clemson, the University of Southampton, Ford, Waymo and Chris Urmson. (We also need NHTSA to redo the 2015 L2/L3 study in which they determined L3 can be made safe and in which control is defined as simply grabbing the wheel, not the actions taken after that.) This makes Waymo’s recent decision to shadow drive remotely extremely dangerous, as the lag to take over is now greater due to the remote control system lag. Since the NTSB wants to limit L3, or public shadow driving, to areas where the vehicle has no capability deficiencies, using it for AI, engineering and test should be eliminated, since those are exactly the areas still being learned.

The second concern is that I do not want to see the first child or family lose their lives needlessly in one of these vehicles. When the public, press, lawyers and governments realize that this has occurred and was avoidable, I believe they will doubt the competence and ethical fortitude of this industry, put up a brick wall and impose more regulation than is needed.

I want to be clear that I want this technology to be successful. But it has to be developed in a manner that actually results in the technology coming to market, and the process used to get there has to be as safe as possible.

Key Issues

The use of shadow driving for AI, engineering and testing, as well as L3, is dangerous and unnecessary.

As automakers sell cars with L3 capability, and as the autonomous companies who use this practice for AI, engineering and testing move from driving in benign or easy conditions to complex and dangerous ones, the accident, injury and death rate will increase. This is both due to issues with situational awareness and because it literally has to happen: in order for the AI to learn how to handle every core scenario, including dangerous, complex and actual crash scenarios, those scenarios have to be driven. That means thousands of dangerous scenarios driven thousands of times each, which means thousands of avoidable injuries and deaths. The reason this is deemed acceptable is that this process is believed to be the best or only way to attain autonomy, and that the tragedies along the way are a necessary evil. That belief is wrong.

a. NASA, Clemson and the University of Southampton have research and real-world data, including air tragedies, showing that shadow driving and L3 cannot be made reliably or repeatably safe. Studies show that no matter what alerting or enticement process you use, it takes 5 to 40 seconds to properly regain situational awareness and then make the right decision and properly execute it. Several companies and individuals agree and have said L3 has to be skipped: Waymo, Ford and Chris Urmson. Ford has said their professional drivers were falling asleep and it could not be stopped. (Chris Urmson has also stated it will take 30 years to get this technology right.)

b. Waymo recently released their Safety Report to NHTSA in order to field geofenced L4 vehicles in Phoenix. The report states Waymo will not use L3 or public shadow driving. It goes on to say they are using far more simulation now, making it their dominant AI, engineering and test platform. This confirms my concerns regarding public shadow driving. (The report also omits any reference to Waymo using remote control. It mentions several layers of system and personnel support in the field should a problem arise. Hopefully they have dropped the use of remote control, as it would make the public shadow driving issue worse due to the added system delay. In addition, the people who operate the system remotely may need to be in a full-motion simulator in order to feel the road so they can drive the vehicles properly.)

c. Here is a video of a Tesla driver having to take over for the vehicle. Keep in mind this was on a clear night, no vehicles were to his left, the roads were not slippery and there were no children aboard. Now imagine those conditions occurring, and imagine people experiencing that scenario and worse ones thousands of times to train the AI.

d. If L3 is too dangerous to use in driving, then it follows that it is too dangerous to use for the AI, engineering and testing needed to create fully autonomous vehicles.

e. Recent articles have shown that Tesla’s own engineers believe the system puts lives at risk.

f. This is where George Hotz of comma.ai, and PolySync, have to be mentioned. These companies are selling autonomous kits and giving away the software source code. This means anyone can modify those systems, including people who want to weaponize vehicles. Beyond that, it allows any person who can change software to modify the system and then test it in the public domain. This is the worst kind of shadow driving. These developers are not experts of any kind; they have little idea what the system ramifications of their changes are. This practice should be outlawed.

g. NHTSA Negligence? — NHTSA released a report in 2015 saying L3 could be made safe. This report leads those reading it (policy makers, the public, the press) into believing that ALL you have to do to safely let people shadow/safety drive (L2/L3) is give them warnings to take control, and that control is defined ONLY as physically engaging with the steering wheel, brakes etc. Other than unplanned lane-keeping issues, their testing did not measure how effective that control is in handling scenarios after the wheel is grabbed, especially urgent ones. Meaning their study did not perform the testing NASA, Clemson and the University of Southampton conducted. Why did they stop there? I believe the study and its conclusions were negligent.

Is it a coincidence that the companies fielding L3 vehicles, like GM, Mercedes and Google, were part of the test? Is this an effort to mislead those downstream into supporting or approving this activity, or into buying the vehicles? What role did this play in how Tesla, NHTSA and the NTSB handled the Joshua Brown tragedy?


  • I have filed with the DoT IG, provided my information to the NTSB and am pursuing other routes.

Shadow Driving can never lead to autonomous vehicles

1. Toyota and RAND have stated that in order to get to levels 4 and 5, one trillion miles will have to be driven. (RAND stated it is not even possible.) This is to accommodate the uncontrollable nature of driving in the real world: literally stumbling, and then having to re-stumble, on scenarios to train the AI. To accomplish this in 10 years would cost over $300B. That extremely conservative figure is the cost of 684k drivers driving 228k vehicles 24/7. This expense in time and money is per company and per vehicle.
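The figures above can be sanity-checked with simple arithmetic. This sketch uses only the numbers stated in this letter (one trillion miles, 10 years, 228k vehicles, 684k drivers, $300B); the implied per-driver cost is derived from them, not an independent figure.

```python
# Back-of-envelope check of the one-trillion-mile shadow-driving estimate.
# All inputs are this letter's own figures.

TOTAL_MILES = 1_000_000_000_000  # Toyota/RAND estimate
YEARS = 10
VEHICLES = 228_000
DRIVERS = 684_000                # three shifts per vehicle, 24/7
TOTAL_COST = 300e9               # the $300B figure
HOURS_PER_YEAR = 24 * 365

miles_per_vehicle_per_year = TOTAL_MILES / VEHICLES / YEARS
implied_avg_speed_mph = miles_per_vehicle_per_year / HOURS_PER_YEAR
cost_per_driver_per_year = TOTAL_COST / DRIVERS / YEARS

print(f"Miles per vehicle per year: {miles_per_vehicle_per_year:,.0f}")
print(f"Implied average speed, 24/7: {implied_avg_speed_mph:.0f} mph")
print(f"Implied cost per driver per year: ${cost_per_driver_per_year:,.0f}")
```

Each vehicle would have to average roughly 50 mph around the clock for a decade, and the $300B works out to about $44k per driver per year, which is why the figure is conservative.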


The solution to all of these issues is to use simulation. Something it appears Waymo realized recently, after determining several months ago that L3 should be skipped.

Issues Summary

The practices of shadow driving and L3 are dangerous. Their use in public, or by companies to perform the AI, engineering and testing to create autonomous vehicles, will create thousands of needless injuries and casualties. NHTSA has made this situation worse by producing a flawed report that enables this behavior, a report based on incomplete and misleading testing. In addition, the practice of shadow driving to create autonomous vehicles will require so many miles to be driven, at such massive expense (one trillion miles and over $300B), that the process will never result in a fully autonomous vehicle. Finally, virtually all of this is avoidable if aerospace-level simulation is used.

Use of Simulation

Note — Most AV makers use simulation now. The issue is to what degree. Aerospace level simulation should replace most of public shadow driving for AI, engineering and test.

1. While most of the AV makers use simulation, they do not use nearly as much as they should or could.

2. The reason for this is that they believe it is not capable of doing what is needed. Waymo and Uber have recently stated they are using much more of it, but will not say it is replacing most public shadow driving.

3. That belief exists because this industry is unaware of the technology that exists in aerospace, believing, for example, that Grand Theft Auto is the pinnacle of simulation.

a. MCity recently determined simulation could be used to replace shadow driving, especially when combined with their process to cull the driving down by 99.9% and create a core set of scenarios. That is an outstanding development. However, they used GTA, not products from their own industry or from aerospace. While GTA has some value, its use could lead to low adoption rates, continued use of shadow driving, or flawed results, because GTA has too many flaws to be utilized for most of the scenarios needed. All of this signifies that even the “experts” advising this industry do not know what is really possible in simulation.

4. To make matters worse, it appears none of the simulation products in this industry can do all of what is needed. In addition, it appears those companies are not aware of aerospace technology or have not tried to acquire it. This is based on my review of the published capabilities of the 30 or so companies in this industry and on talking with about 10 of them, including several leaders in the space.

5. As for simulation being the solution, I believe the answer is to:

a. Make the industry aware of what simulation can do. Especially in other industries such as aerospace. (Where the FAA has had detailed testing to assess simulation and simulator fidelity levels for decades.)

b. Make the industry aware of the MCity approach to finding the most efficient set of scenarios, bringing that one trillion miles down by 99.9%.

c. Make the industry aware of who all the simulation and simulator organizations are.

d. Evaluate the available products to determine their current capabilities.

e. Determine how close the industry and any individual product is to filling all the capabilities required to eliminate public shadow driving. Where there are gaps determine a way forward to improving products or possibly creating a consortium. This may involve utilizing expertise from other industries.

Root Cause

1. There is a perfect storm involved here. The fact of the matter is that the vast majority of engineers developing these systems, however hard working, dedicated and intelligent they are, have very little experience in the engineering areas involved. The vast majority come from commercial IT, from companies like Twitter, Google Search, Uber etc. They do not have backgrounds in complex systems engineering, exception handling, traffic engineering etc. It is assumed they are up to this because they make great apps, games etc. The fact of the matter is they are no more qualified to build autonomous cars than they are to design and build an aircraft. Yet in spite of this, government agencies rely on their expertise and “best practices”. (Yes, Elon Musk has SpaceX. If you look at its history, NASA rejected his first code delivery for being poorly tested and not handling nearly enough exceptions or cases where things do not go as planned. He went to the aerospace community, hired engineers with the right background and fixed it. There is no NASA-like organization filling the same role here.) The reason these developers are making the progress they are is that AI does have value and is learning for them, and that progress is being exaggerated. That process is masking their experience gaps.

Toyota and Waymo Get It

Toyota Gets It — It appears Toyota understands the risk of public shadow driving and how important simulation is. They may be the first to L4.

Waymo recently released their Safety Report to NHTSA in order to field geofenced L4 vehicles in Phoenix. The report states Waymo will not use L3 or public shadow driving. It goes on to confirm they are using simulation. This confirms my concerns regarding public shadow driving.

Other Areas the Industry and Government focus on

1. False Advertising — Misrepresenting Capabilities — The industry is stuck in a very dangerous loop concerning the misrepresentation of AV capabilities. This provides the public a false sense of security, misleading not just the public but insurance companies, oversight groups and agencies, and those partnering with these companies. In order to secure funding and satisfy egos, these companies are constantly trying to one-up each other by misrepresenting the capabilities they have and when they will have a fully autonomous vehicle. Chris Urmson has stated it will take 30 years to reach full autonomy. Waymo just released a report stating that after 8 years they can only release an L4 vehicle in a tightly geofenced environment. (Phoenix, with only light rain. Phoenix, being a modern city, has a very well-marked grid road system. It is far easier to navigate and learn than most locations.) Recently Intel was caught by a reporter faking a commercial. This is a key reason why it is important for the industry to create and utilize a minimal-standards scenario matrix.

2. Scenarios — Scenario Matrix — Covering all the core normal and hazardous scenarios. A minimal set of scenarios these systems are expected to handle and how they are handled should be created and tested to. Use of public shadow driving will not create that matrix unless every possible scenario, especially road design, road condition and weather scenario is experienced by every AV maker. The only way to know if those are complete is to create a baseline inventory or matrix.

3. Sensor Quality and Redundancy in All Conditions — Right now most companies are focusing on one or two main sensors, in some cases camera- and LIDAR-based. Neither of those systems is effective in bad weather. As in the aerospace industry, there should be double if not triple verification of sensor data quality and availability for every core scenario. To meet this, aerospace systems are usually based on several radar types and navigation aids, then visual and other systems.

4. Systems Operational Commonality — AV systems need to operate the same from vehicle to vehicle, like cruise control does now. It will be very problematic if engagement and disengagement of the systems differ from vehicle to vehicle.

5. Scenario Handling Commonality — Key scenarios have to be handled the same in every vehicle, or people will either take over or fail to take over from the systems based on expectations they formed in another vehicle.

6. V2X Operability — The update rate most discussed currently is 10 Hz. That is not often enough. Two vehicles coming at each other in opposing lanes, with no median, at 75 mph each would require 60 Hz to deal with last-moment issues. If first-message reliability is not 99% and a second message is needed, the rate moves to 120 Hz. There are other scenarios which would raise it further. This community needs to look at the most complex of threads in the worst of conditions and set the update rate accordingly. This will, of course, magnify the data volume significantly.
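The closing-speed arithmetic behind this concern can be sketched directly. The 75 mph head-on case is from this letter; the message rates compared are the ones discussed above, and the sketch shows how much ground the two vehicles close between messages at each rate (it does not by itself establish the 60 Hz threshold, which also depends on reaction and processing time).

```python
# Distance closed between V2X messages for two vehicles approaching
# head-on at 75 mph each (this letter's scenario), at several rates.

closing_speed_mph = 75 + 75                    # opposing lanes, no median
closing_fps = closing_speed_mph * 5280 / 3600  # 220 feet per second

for rate_hz in (10, 60, 120):
    gap_ft = closing_fps / rate_hz  # ground closed between messages
    print(f"{rate_hz:>3} Hz -> {gap_ft:5.1f} ft closed per update")
```

At 10 Hz the vehicles close 22 feet, more than a car length, between messages; at 60 Hz that drops to under 4 feet, which is why the higher rate matters for last-moment maneuvers.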

7. Hardware Reliability — Building a self-driving car that meets reliability requirements equal to those of our current system is one of the most challenging technological developments ever attempted. It requires building a very sophisticated, reliable electromechanical control system with artificial intelligence software that must achieve an unprecedented level of reliability at a cost the consumer can afford. Boeing claims a 99.7% reliability figure for its 737, which is equivalent to about 3,000 failures per million opportunities. A modern German Bosch engine control module achieves a reliability of about 10 failures per million units, which is about 6 times worse, for a single component, than our current system of flawed drivers. This means that to meet the reliability requirements, all systems and sensors that control any aspect of the car’s movement must be doubly redundant, which really means building nearly two cars in one body! The hardware cannot fail. This level of quality may be extremely hard to produce in volume and to make cost competitive.
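The conversion between the two ways of stating reliability in the paragraph above (a percentage versus failures per million) is simple, and confirms that the 99.7% and 3,000-per-million figures for the 737 are the same number. This is a sketch using only the letter's own figures.

```python
# Converts a reliability fraction into the equivalent failures-per-million
# figure, so the paragraph's two 737 numbers can be checked against each
# other.

def failures_per_million(reliability_fraction):
    """E.g. 0.997 reliability -> 3,000 failures per million opportunities."""
    return (1.0 - reliability_fraction) * 1_000_000

print(round(failures_per_million(0.997)))  # 3000, matching the 737 figure
```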

8. Common Mapping Versions — Map versions have to be common for every user in any given area. We cannot have different services providing different maps with crucial differences in data, an example being construction changes. That will cause system and V2X confusion and errors. A solution would be a common version and/or distribution point.

Links to other data references

AI Issues — Misses Corner-Cases or Outliers — Causes Errors — Users do not entirely know how it works

Shadow driving puts drivers and the public at risk

Chris Urmson stating L3 will be skipped due to public shadow driving issues — This confirms L4/5 cannot be attained using it either

Video — Tesla AP being saved by its driver — Tip of the Shadow Driving Issue Iceberg

One Trillion Miles need to be driven to reach Level 5

MCity 99.9% Scenario Reduction Simulation Approach — Study on Simulation for AI

University of Michigan using Grand Theft Auto to prove Simulation can be effective (Aerospace level simulation is far better)

Systems Engineer, Engineering/Program Management -- DoD/Aerospace/IT - Autonomous Systems Air & Ground, FAA Simulation, UAM, V2X, C4ISR, Cybersecurity