Tesla “autopilot” development effort needs to be stopped and people held accountable

Michael DeKort
9 min read · Jun 20, 2020


Yes, I am quite aware of what that title says. Allow me to double down on it. Elon Musk, Andrej Karpathy and the engineering team at Tesla are knowingly and willfully causing the needless injury and death of people. And they are doing so for incredibly selfish reasons, including ego and the avoidance of financial, career and legal hardship. These are grossly negligent, unethical, immoral, incompetent and horrible people. The only way I see to stop them is to force their efforts to a halt through a court order or by charging them with gross negligence or worse.

What brought me to this point? First, I had missed that the intersection testing they started in April includes STOPPING AT GREEN LIGHTS. (Which, it turns out, is worse than it sounds, because the cars are stopping randomly. See my article below for a video showing this.) The other revelation is in the article just below this section. Andrej Karpathy shows no desire whatsoever to modify his current development and testing approach or design. Instead he states it is necessary to do this right. It is clear these folks are ethically, morally and professionally bankrupt, and incorrigible.

Tesla admits its approach to self-driving is harder but might be only way to scale

· https://electrek.co/2020/06/18/tesla-approach-self-driving-harder-only-way-to-scale/

The Charges — (Except for the last two, these are explained in detail in my articles below. I will cover those two in more detail below.)

· Willfully sacrificing human life for an untenable and needless development and testing approach

· Implementing a hardware/sensor/detection system that cannot properly detect stationary or crossing objects

· Choosing not to fix Automatic Emergency Braking or Stationary/Crossing Object design flaws

· The system under test stops at green lights without a human driving (Let that sink in. Worse yet, the car stops randomly)

· Ignoring proper simulation and needlessly using human guinea pigs in the public domain

· Elon Musk routinely videoing himself breaking Tesla’s requirement that hands be kept on the wheel

· Calling the system “autopilot” when it is nothing of the sort

· Utilizing a driver monitoring and alarm system that is designed to enable accidents

· Refusing to augment the camera centric design approach in a safe manner

Two Issues I want to cover in more detail

Utilizing a driver monitoring and alarm system that is designed to enable accidents

· Tesla uses a torque system rather than eye-gaze monitoring to determine whether the driver is properly holding the wheel. It is easily defeated with various objects like an orange or a water bottle. The alarm does not trigger for 8 seconds. Do the math at various speeds to see how far the car travels before the alarm goes off (see the sketch just after this list).

· Why are these utilized? — Tesla needs accidents to happen to learn how to avoid and best handle many of them. (This is true for every AV maker.) If the human guinea pig "safety drivers" disengage, this cannot occur. Having a poor monitoring and alarm system keeps disengagements from happening. Elon's hands-off videos send his human test subject customers the message that they should let go of the wheel. (A privilege they pay Tesla for, at a cost that keeps rising from the original $3,000.)
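To make that 8-second figure concrete, here is a minimal sketch of the math. The delay value comes from the description above; everything else is plain unit conversion:

```python
# Distance traveled before the torque-based alarm triggers, using the
# ~8-second delay described above. Speeds are illustrative.
ALARM_DELAY_S = 8.0

for speed_mph in (25, 45, 65, 85):
    speed_ms = speed_mph * 0.44704           # mph -> m/s
    distance_m = speed_ms * ALARM_DELAY_S    # ground covered before the alarm
    print(f"{speed_mph} mph: {distance_m:.0f} m "
          f"({distance_m * 3.281:.0f} ft) before the alarm sounds")
```

At 65 mph that is roughly 230 meters, about two and a half football fields, covered with no confirmation that anyone is holding the wheel.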

Refusing to augment the camera centric design approach in a safe manner

· This involves Tesla's belief that the only way these systems can scale (be manufactured cost-effectively and operate without relying on external data sources) is to rely on camera sensors. They believe LiDAR is too expensive and that dependency on HD maps for world ground truth can be misleading if that data is incorrect for a variety of reasons.

· First, let me say that I agree that, at least right now, LiDAR is still too expensive. But costs are coming down. As for HD maps, I agree they should not be relied on for the reason stated. Beyond this, I believe these systems should not rely on any external data source, for similar reasons. This includes V2X and GPS. I would also like to add that I am sensor agnostic. I don't care what the final solution is as long as it's onboard, competent and redundant.

· Having said this, camera systems are nowhere near able to do what is needed here. And Tesla's cameras are not even set up in a true stereo fashion. Setting that aside, cameras struggle with object depth determination, especially when objects appear to be, or are, 2D, or when there are no objects around, especially on either side, to help judge depth. And they struggle in direct light, low light and bad weather. (See the depth sketch just after this list.)

· What about radar? Tesla's Bosch radar does not scan. As such it cannot tell where an object is laterally. Hence the reckless stationary and crossing object detection, AEB and AP failures. Several of these have killed people, and the objects hit include police cars, fire trucks, a street sweeper, a tow truck, passenger cars, trailers and barriers. (With regard to ultrasonics: that system is used for very close-in operations, something not useful at speed. Regarding the successful detection of crossing pedestrians: my assumption is they either spent a lot of time training on people, or they added a hard-coded solution, where that has not occurred for other objects. If this is wrong, I would love to know why.) (See my correction in the update below on the radar types used and the fact that they do have limited scanning. Overall, though, my points stand.)

· Where does this go off the rails? As I stated before, the camera systems are nowhere near ready for prime time. And Tesla uses a non-scanning radar and no LiDAR. This approach causes massive issues detecting stationary and crossing objects, and has led to the deaths of several people, which at some point will include families and children. (Tesla clearly states this issue exists in its response to the NTSB's Banner report.) Given there seems to be no indication of any major design or approach change, there will be many more. Something Tesla appears to have no issue living with for eternity. This leads me to ask why Tesla will not add a scanning radar or even LiDAR as a temporary backup system to cover these issues. As I stated above, I believe this is not being done because it would demonstrate capitulation, which would result in ego, financial and career issues. (I also wonder why Mobileye doesn't do this in all their cars. Instead they split them up, with one line being camera only. Same reasons as Tesla?)
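To make the depth problem concrete, here is a minimal sketch of standard stereo triangulation. The focal length and baseline are illustrative assumptions, not Tesla's actual camera geometry; the point is how fast depth error grows with distance even for a true stereo pair:

```python
# Depth from stereo disparity: Z = f * B / d, where f is the focal length
# in pixels, B the baseline between the two cameras, and d the disparity.
# Illustrative values only -- not Tesla's actual camera parameters.
f_px = 1000.0   # assumed focal length, pixels
B_m = 0.1       # assumed 10 cm baseline

for d_px in (10.0, 5.0, 2.0, 1.0):
    Z_m = f_px * B_m / d_px
    # For a fixed one-pixel disparity error, depth uncertainty grows with
    # the square of distance: dZ ~= Z^2 / (f * B)
    dZ_m = Z_m ** 2 / (f_px * B_m)
    print(f"disparity {d_px:4.0f} px -> depth {Z_m:6.1f} m, "
          f"~{dZ_m:5.1f} m of error per pixel of mismatch")
```

At 100 meters, a single pixel of disparity error swamps the estimate entirely. Cameras that are not arranged as a true stereo pair do not even get this far and must infer depth monocularly.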

Where are NHTSA and the NTSB?

NHTSA is grossly incompetent and negligent. They are enabling the exact issues they exist to eliminate. Administrator Owens has stated over and over that no safety standards should be put in place until the tech is sorted out, and that standards stifle competition. Why do I care what tech is used to meet a safety standard that says — don't hit massive stationary objects like fire trucks, street sweepers, tow trucks, police cars, passenger cars and trailers? With regard to competition: objective and testable safety standards increase competition by leveling the playing field. They minimize hype and the race to be first. (Which becomes a race to the ethical and moral bottom.)

The NTSB is almost as bad. They have stated over and over that the systems in development on the roads need to be aware of where they are, so that autonomous modes cannot engage in areas where they cannot handle the relevant scenarios. Folks, they can't get to that point until the machine learning is trained. How do they train? Try stuff, fail, receive small corrections and try again. And this happens hundreds if not thousands of times for each scenario. (Which is made far worse by the precision deep learning requires.) This means we need the Brown/Banner, Yaning, Huang and Umeda scenarios to happen over and over and over, until we get to where the NTSB says they already should be. These folks have zero clue how inefficient machine learning is right now. It is like playing Simon Says with an idiot who has a memory problem. (A toy sketch of that loop follows.)
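To illustrate the try-fail-correct loop just described, here is a deliberately crude sketch. Every name and number is a hypothetical placeholder, not any AV maker's actual stack; the point is only that small per-failure corrections force the same crash scenario to recur many times:

```python
import random

def run_scenario(policy, scenario):
    """Toy stand-in: the policy handles the scenario with some probability."""
    return random.random() < policy[scenario]

def train(scenarios, episodes_per_scenario=2000):
    # The system starts out nearly useless at each scenario.
    policy = {s: 0.01 for s in scenarios}
    failures = {s: 0 for s in scenarios}
    for s in scenarios:
        for _ in range(episodes_per_scenario):
            if not run_scenario(policy, s):
                failures[s] += 1
                # Each failure yields only a small correction -- which is
                # why the same crash scenario must happen over and over.
                policy[s] = min(1.0, policy[s] + 0.001)
    return failures

print(train(["stationary_fire_truck", "crossing_tractor_trailer"]))
```

Run it and each scenario logs hundreds of failures over 2,000 tries, and the toy policy still is not fully reliable. In the real world, each of those "failures" is a near miss or a crash unless it happens in simulation.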

The "Tesla is currently safer than a human" nonsense — NHTSA and Tesla "Safety Data" hoax etc.

· No, AP/FSD is not currently safer than a human. Yes, it has saved lives. However, the data from NHTSA is wrong — https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-safety-study-unmasked

· Tesla does not release disengagement data or the associated root-cause data. This means we cannot see all the times the human saves themselves, others and the car. I guarantee it is far more often than the reverse, and always will be on their current development and test path.

· It is impossible to have the massive stationary and crossing object flaw and be a better driver than a human.

The Right Path

The proper way for Tesla, and every AV maker, to do this right, minimize shadow driving and virtually eliminate safety driving and the use of human guinea pigs, is to switch most of the development and testing over to proper simulation. That proper simulation would facilitate the building of a legitimate digital twin. This cannot be done using the systems and approaches this industry uses now, because those are gaming-based: they have problematic real-time architectures and do not employ proper active sensor models. DoD/aerospace simulation technology and approaches resolve all of this. For more on this, please see my articles below. (Yes, I know Musk and Karpathy think you have to use the real world because of edge cases. That is wrong and covered below. A rough sketch of the real-time architecture point follows.)
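As a rough illustration of the real-time architecture point, here is a minimal sketch of a fixed-timestep, deterministic simulation loop of the kind aerospace simulators use. Game engines typically tie updates to a variable render frame rate, which makes runs non-repeatable. All names and values here are illustrative, not any vendor's actual design:

```python
# Minimal sketch of a fixed-timestep simulation loop. The physics and the
# sensor models advance on the same fixed, deterministic clock, so every
# run of a scenario is repeatable. Illustrative names only.
DT_S = 0.005  # 5 ms step (200 Hz), identical every run

def step_vehicle(state, dt):
    # Deterministic physics update: constant speed along one axis.
    state["x_m"] += state["v_ms"] * dt
    return state

def step_sensor(state):
    # Placeholder active-sensor model: range to a target at 100 m.
    return {"radar_range_m": 100.0 - state["x_m"]}

def run(duration_s=2.0):
    state = {"x_m": 0.0, "v_ms": 30.0}
    for _ in range(int(duration_s / DT_S)):  # fixed step count, no frame-rate drift
        state = step_vehicle(state, DT_S)
        reading = step_sensor(state)
    return state, reading

print(run())
```

A variable-timestep loop would feed `dt` from the last frame's render time, so the same scenario diverges from run to run, which is exactly what you cannot tolerate when verifying against a scenario library.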

Update 6–29–2020 — First, Tesla is now using a LiDAR on a development vehicle. This appears to be capitulation on Tesla's part. The issue now is whether they retrofit all existing vehicles. In addition, I recently received data on the radars being used. They are the Bosch MRR and now the Continental ARS4-A. Both are scanning radars. However, they do not have enough transmitters to detect a crossing object (not enough Doppler fidelity laterally), and they cannot detect stationary objects because they are not pulse radars. (CW radars rely on movement toward or away from them to vary the continuous wave frequency. The Doppler effect.) The transmitters they have help determine that moving objects traveling toward or away from them are somewhere in the lane ahead. But not much more. Given all of this, my original point stands. (A short sketch of the Doppler math follows.)
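To make the Doppler point concrete, here is a minimal sketch of the CW radar relationship, f_d = 2 · v_closing / wavelength. A 77 GHz automotive band is assumed, and the speeds are illustrative; the takeaway is that a stopped object returns exactly the same shift as the stationary background:

```python
# Doppler shift seen by a CW radar: f_d = 2 * v_closing / wavelength.
# 77 GHz is a common automotive radar band (assumed); speeds illustrative.
C_MS = 3.0e8
WAVELENGTH_M = C_MS / 77e9           # ~3.9 mm

ego_speed_ms = 30.0                  # own car at roughly 67 mph

# (label, target ground speed along the radar's line of sight)
targets = [
    ("oncoming car at 30 m/s", -30.0),
    ("lead car also at 30 m/s", 30.0),
    ("stopped fire truck", 0.0),
    ("guardrail / bridge (clutter)", 0.0),
]
for label, v_target in targets:
    v_closing = ego_speed_ms - v_target      # relative radial speed
    f_d = 2.0 * v_closing / WAVELENGTH_M
    print(f"{label:30s}: Doppler {f_d / 1e3:8.1f} kHz")
```

The stopped fire truck returns the same shift as every guardrail and bridge, so stationary returns get discarded as clutter rather than braking for every overpass; a crossing object likewise has almost no radial component until far too late.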

Update 10–23–2020 — With better radar Elon admits Fatal “Autopilot” Design Flaw and Cameras as Primary Sensor is a Joke

https://medium.com/@imispgh/with-better-radar-elon-admits-fatal-autopilot-design-flaw-and-cameras-as-primary-sensor-is-a-joke-cb2819e034fb

Update 12–19–2020 — A Tesla on AP kills a man attending to his vehicle on the side of the road in Norway. This makes 7 confirmed deaths in Teslas on "autopilot", deaths that likely occurred due to the flawed sensor system. Here is the article link (use Google Translate to read it):

https://motor.no/autopilot-nyheter-tesla/tesla-pa-auto-styring-da-mann-ble-meid-ned/188623

More information

Tesla “autopilot” development includes Stopping at Green Lights

· https://medium.com/@imispgh/tesla-autopilot-development-includes-stopping-at-green-lights-a25f45072029

Forget Tesla’s “autopilot” their Automatic Emergency Braking is a Debacle

· https://medium.com/@imispgh/forget-teslas-autopilot-their-automatic-emergency-braking-is-a-debacle-c027e5a0fe6c

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology

· https://medium.com/@imispgh/the-autonomous-vehicle-industry-can-be-saved-by-doing-the-opposite-of-what-is-being-done-now-b4e5c6ae9237

Proposal for Successfully Creating an Autonomous Ground or Air Vehicle

· https://medium.com/@imispgh/proposal-for-successfully-creating-an-autonomous-ground-or-air-vehicle-539bb10967b1

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

Autonomous Vehicles Need to Have Accidents to Develop this Technology

Using the Real World is better than Proper Simulation for AV Development — NONSENSE

NTSB’s Tragically Incompetent Tesla-Banner Investigation Report

· https://medium.com/@imispgh/ntsbs-tragically-incompetent-tesla-banner-investigation-report-4628648dd287

NHTSA is Enabling the Crash of the Driverless Vehicle Industry and More Needless Human Test Subject Deaths

· https://medium.com/@imispgh/nhtsa-is-enabling-the-crash-of-the-driverless-vehicle-industry-and-more-needless-human-test-28a6e7becefc

My name is Michael DeKort. I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, was a software project manager on an Aegis Weapon System baseline, and worked on C4ISR for DoD/DHS.

Industry Participation — Air and Ground

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member UNECE WP.29 SG2 Virtual Testing

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Member Teleoperation Consortium

- Member CIVATAglobal — Civic Air Transport Association

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

The editor of SAE's Autonomous Vehicle Engineering magazine called me "prescient" regarding my position on Tesla and the overall driverless vehicle industry's untenable development and testing approach (page 2): https://assets.techbriefs.com/EML/2021/digital_editions/ave/AVE-202109.pdf

Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS whistleblowing efforts
