Elon Musk admits he needs accidents to happen to develop “Autopilot”

Michael DeKort
Sep 12, 2019

Here is the Twitter thread — https://twitter.com/elonmusk/status/1172235837792145408?s=20

As I have said, Tesla learns to avoid and best handle accidents by having them. Thousands of accident scenarios must be run thousands of times each, because AI/ML is inefficient. That will cause thousands of injuries and deaths. In effect, the process requires drivers to commit suicide. Why? Because the entire accident thread has to be experienced to be learned; if the driver disengages, the test is cut short. Look at the Brown and Banner crashes. That has to happen many more times.

Is this what we want? Soon the first of many children and families will be killed for no reason, people both inside and outside these vehicles. There is a solution: move 99.9% of the development to DoD/aerospace simulation technology.

Note: it is not just Elon and Tesla. Virtually every AV maker does this. Tesla is the most egregious and has the most deaths to date because it has far more vehicles on the road than everyone else combined, because Musk is the most aggressive, now using the whole US in order to get to L4 this year, and because Tesla does not use LiDAR. At some point all the other AV makers will hit most of these same issues as they stumble and re-stumble on the same accident scenarios to learn them. The worst part is that no one needs to die if this is done right, and the process of public shadow and safety driving will never get close to L4 anyway.

Why is public shadow and safety driving untenable? The process being used by most AV makers to develop these systems, public shadow and safety driving, has killed six people to date for no reason and will kill thousands more as accident scenarios are learned. It is impossible to drive the one trillion miles, or spend the over $300B, needed to stumble and re-stumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason in two ways. The first is handover, or fallback: no monitoring and notification system can make handover safe for most complex scenarios, because none can provide the time to regain proper situational awareness and do the right thing the right way, especially in time-critical scenarios. The other dangerous area is training the systems to handle accident scenarios, which I discussed above. As I said, the solution is to use DoD/aerospace simulation technology, informed and validated by real-world data, for most of the development and testing.
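To see why the trillion-mile figure is fatal to the approach, here is a back-of-envelope sketch. The fleet size, per-vehicle mileage, and per-mile cost below are my own illustrative assumptions, not reported figures from any AV maker; the point is the order of magnitude, not the exact numbers.

```python
# Back-of-envelope: how long would a test fleet take to drive one
# trillion miles? All fleet numbers are illustrative assumptions,
# not reported figures from any AV maker.

MILES_NEEDED = 1_000_000_000_000   # ~1 trillion miles
FLEET_SIZE = 500_000               # assumed test fleet (generous)
MILES_PER_VEHICLE_PER_DAY = 300    # assumed near-continuous operation

miles_per_year = FLEET_SIZE * MILES_PER_VEHICLE_PER_DAY * 365
years = MILES_NEEDED / miles_per_year
print(f"Fleet accumulates {miles_per_year:,.0f} miles/year")
print(f"Years to reach one trillion miles: {years:,.1f}")

# At an assumed all-in cost of $0.30 per test mile:
cost = MILES_NEEDED * 0.30
print(f"Cost at $0.30/mile: ${cost / 1e9:,.0f}B")  # ~$300B
```

Even under these generous assumptions the fleet needs roughly 18 years, and the cost lands near the $300B figure cited above; more realistic assumptions make it far worse.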

More on my POV, including the solution, here:

Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine: End Public Shadow Driving

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Key Industry Participation

- Lead — SAE On-Road Autonomous Driving Model and Simulation Task

- Member SAE ORAD Verification and Validation Task Force

- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)

- Presented with the IEEE Barus Ethics Award for post-9/11 efforts

My company is Dactle

We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system, with an end-state scenario matrix, to address several of the critical issues in the AV/OEM industry mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to differences between the performance the plan predicts and what actually happens. If someone would like to see a demo or discuss this further, please let me know.
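To illustrate what an end-state scenario matrix means in practice, here is a minimal sketch. The parameters and values are hypothetical placeholders, not our actual matrix; the point is that scenario variations are enumerated exhaustively and rerun deterministically in simulation, rather than waiting to stumble on them on public roads.

```python
# Minimal sketch of an end-state scenario matrix. The parameters and
# values below are hypothetical placeholders, not Dactle's actual
# matrix; the point is exhaustive, repeatable enumeration.
from itertools import product

SCENARIO_PARAMETERS = {
    "maneuver": ["unprotected_left", "cut_in", "crossing_truck"],
    "weather": ["clear", "rain", "fog"],
    "lighting": ["day", "dusk", "night"],
    "road": ["urban", "divided_highway"],
    "actor_speed_mph": [25, 45, 65],
}

def scenario_matrix(params):
    """Yield every combination of parameter values as one test case."""
    keys = list(params)
    for values in product(*(params[k] for k in keys)):
        yield dict(zip(keys, values))

cases = list(scenario_matrix(SCENARIO_PARAMETERS))
print(f"{len(cases)} scenario variations to run, rerun, and regress")
print(cases[0])  # first case: unprotected left, clear, day, urban, 25 mph
```

Even this toy matrix yields 162 variations; a realistic matrix with dozens of parameters quickly reaches millions of cases, which is exactly why repeatable, deterministic simulation, not public roads, is the only tractable place to run them.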
