Machine learning expert confirms use of the public domain for Autonomous Vehicle development will result in vast numbers of accidents and casualties

Michael DeKort
4 min read · Aug 31, 2019

Interestingly enough, Yann LeCun, the subject of this article, was discussing the major issues with using the real world to develop driverless vehicles on Lex Fridman's podcast. Lex is a huge supporter of Tesla, of public shadow and safety driving, and of the use of humans as guinea pigs. He also believes, and has stated, that safety driving can resolve all handover scenarios — a deeply flawed and dangerous position I cover below.

Here is the video link — https://www.youtube.com/watch?v=SGSOCuByo24

In the section starting at 53:50, Yann is responding to Lex's question on the use of machine learning to create autonomous vehicles. Because these systems have no knowledge of the world beyond what they are made aware of while driving, they will make very dangerous mistakes, and those mistakes will lead to accidents, injuries and deaths. His point is that a human, unlike these systems, has a vast store of data, a "world model" of how the world works, especially its physics, and can therefore learn to drive in about 30 hours of instruction by leveraging that knowledge. Examples of this world knowledge include gravity and what happens when you hit things. Without this knowledge, serious repercussions will result.

This is exactly why proper simulation must be used to train these systems. In simulation, accidents can be allowed to happen without injury, so the systems can learn what to do and what not to do despite lacking the benefit of a human's world model. Factor in that scenarios, especially accident scenarios, must be experienced thousands of times each to be learned, and you can see why using the real world to train these systems is reckless and extremely counter-productive. Regarding the quality of the simulation being used: all of the models need to reflect the real world, especially when their performance curves are pushed. If the models and the real-time operation are not precise enough, there will be a gap between the models' performance and the real world. That gap will likely translate into planning issues around the timing or degree to which braking, acceleration and maneuvering are applied, and those performance gaps will result in real-world tragedies.
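As a back-of-the-envelope illustration of why that fidelity gap matters (the numbers below are assumptions for illustration, not measurements of any particular vehicle or simulator), consider how a small error in just the simulated braking model translates into extra real-world stopping distance:

```python
# Hypothetical sketch: a small error in the simulated braking model
# translates into extra real-world stopping distance.
# All numbers are illustrative assumptions, not measurements.

def stopping_distance(speed_mps: float, decel_mps2: float) -> float:
    """Distance covered while braking from speed to zero at constant deceleration."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

speed = 31.3        # roughly 70 mph, in m/s
sim_decel = 7.5     # deceleration the simulator's vehicle model assumes (m/s^2)
real_decel = 7.0    # deceleration the real vehicle actually achieves (m/s^2)

planned = stopping_distance(speed, sim_decel)   # what the planner was trained to expect
actual = stopping_distance(speed, real_decel)   # what happens on the road

print(f"planned stop: {planned:.1f} m, actual stop: {actual:.1f} m, "
      f"overshoot: {actual - planned:.1f} m")
# A ~7% fidelity gap in one model yields several meters of overshoot,
# easily the difference between stopping short of an obstacle and hitting it.
```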

(Why is public shadow and safety driving untenable? The process being used by most AV makers to develop these systems, public shadow and safety driving, is untenable: it has killed six people to date for no reason and will kill thousands more when accident scenarios are learned. It is impossible to drive the one trillion miles, or spend over $300B, needed to stumble and re-stumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason in two ways. The first is through handover or fallback, a process that cannot be made safe for most complex scenarios by any monitoring and notification system, because no such system can provide the time needed to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle the accident scenarios I discussed above.)
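To put rough numbers on the handover problem (an illustrative sketch; the re-engagement times below are assumptions, and real takeover times vary widely), consider how far the vehicle travels while a disengaged safety driver regains situational awareness:

```python
# Hypothetical illustration of the handover problem: distance traveled while
# a disengaged safety driver regains situational awareness and re-engages.
# The awareness times are assumptions chosen for illustration only.

def distance_traveled(speed_kph: float, seconds: float) -> float:
    """Meters covered at a constant speed over the given time."""
    return speed_kph / 3.6 * seconds

speed_kph = 100.0  # highway speed
for awareness_s in (3.0, 5.0, 10.0):
    d = distance_traveled(speed_kph, awareness_s)
    print(f"{awareness_s:>4.0f} s to re-engage -> {d:.0f} m traveled")
# At 100 km/h, even a 3-second handover consumes roughly 83 m; many
# time-critical scenarios (a cut-in, a pedestrian stepping out) are
# over well before that.
```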

More on my POV, including the solution, in the articles below:

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

Autonomous Vehicles need to Have Accidents to Develop this Technology

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine-End Public Shadow Driving

My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry that I mention in the articles above, including replacing 99.9% of public shadow and safety driving. It also deals with the significant real-time, model-fidelity and loading/scaling issues caused by using gaming engines and other architectures (issues Unity will confirm; we are now working together, and we are also working with UAV companies). If not remedied, these issues will lead to false confidence and to performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
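As a minimal sketch of one class of real-time issue mentioned above (a deliberately simple braking model with assumed numbers, not a claim about any specific engine), here is how an uneven, frame-rate-driven timestep integrates the same maneuver differently than a fixed, deterministic step:

```python
# Minimal sketch: integrating the same simple longitudinal braking model
# with a fixed physics timestep versus an uneven, frame-rate-driven one.
# The model and timestep values are assumptions for illustration only.

def integrate_stop(v0: float, decel: float, steps) -> float:
    """Integrate distance traveled until the vehicle stops, given a sequence of dt values."""
    x, v = 0.0, v0
    for dt in steps:
        if v <= 0.0:
            break
        v = max(0.0, v - decel * dt)
        x += v * dt
    return x

v0, decel = 30.0, 7.0                  # m/s, m/s^2
fixed = [0.01] * 1000                  # deterministic 100 Hz physics step
uneven = [0.016, 0.033, 0.05] * 300    # frame times wandering between ~20 and ~60 fps

print(f"fixed-step stop:  {integrate_stop(v0, decel, fixed):.2f} m")
print(f"uneven-step stop: {integrate_stop(v0, decel, uneven):.2f} m")
# The divergence per step is small but systematic; across thousands of
# scenario runs it shows up as trajectory and timing differences that can
# masquerade as, or mask, planner behavior.
```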
