Tesla “autopilot” development effort needs to be stopped and people arrested
Update 6–29–2020 — First, Tesla is now using a LiDAR on a development vehicle. This appears to be capitulation on Tesla’s part. The question now is whether they will retrofit all existing vehicles. In addition, I recently received data on the radars being used. They are the Bosch MRR and now the Continental ARS4-A. Both are scanning radars. However, they do not have enough transmitters to detect a crossing object (not enough Doppler fidelity laterally), and they cannot detect stationary objects because they are not pulse radars. (CW radars rely on movement toward or away from the sensor to shift the frequency of the continuous wave. The Doppler effect.) The transmitters they have help determine that moving objects traveling toward or away from them are somewhere in the lane ahead. But not much more. Given all of this, my original point stands.
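The Doppler limitation described in the update can be sketched in a few lines of Python. This is a simplified illustration, not the signal processing of any actual Bosch or Continental unit: a CW radar only sees the radial component of a target's velocity, so an object crossing perpendicular to the beam produces essentially no frequency shift at all.

```python
import math

# Simplified CW-radar Doppler sketch. Illustrative only; real automotive
# radars (e.g. the Bosch MRR or Continental ARS4-A mentioned above) use
# FMCW waveforms and far more complex processing.

C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(speed_mps: float, angle_deg: float, carrier_hz: float = 77e9) -> float:
    """Doppler shift for a target moving at speed_mps along a path that
    makes angle_deg with the radar boresight. Only the radial component
    (speed * cos(angle)) shifts the carrier. 77 GHz is a common
    automotive radar band."""
    radial = speed_mps * math.cos(math.radians(angle_deg))
    return 2.0 * radial * carrier_hz / C

head_on = doppler_shift_hz(30.0, 0.0)    # approaching directly: large shift
crossing = doppler_shift_hz(30.0, 90.0)  # crossing the lane: ~zero shift
```

A car approaching at 30 m/s produces a shift of about 15 kHz; the same car crossing perpendicular to the beam produces effectively none, which is why Doppler-based filtering struggles with crossing and stationary objects.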
Yes, I am quite aware of what that title says. Allow me to double down on it. Elon Musk, Andrej Karpathy and the engineering team at Tesla are knowingly and willfully causing the injury and death of people needlessly. And they are doing so for incredibly selfish reasons, including ego and the avoidance of financial, career and legal hardship. These are grossly negligent, unethical, immoral, incompetent and horrible people. The only way I see to stop them is to force their efforts to stop through a court order or to arrest them for gross negligence or worse.
What brought me to this point? First, I missed that the intersection testing they started in April included STOPPING AT GREEN LIGHTS. (Which, it turns out, is worse than that, because the cars are stopping randomly. See my article below for a video showing this.) The other revelation is in the article just below this section. Andrej Karpathy shows no desire whatsoever to modify his current development and testing approach or design. Instead he states it is necessary to do this right. It is clear these folks are ethically, morally and professionally bankrupt and incorrigible.
Tesla admits its approach to self-driving is harder but might be only way to scale
The Charges — (Except for the last two, these are explained in detail in my articles below. I will cover the last two in more detail below.)
· Willfully sacrificing human life for an untenable and needless development and testing approach
· Implementing a hardware/sensor/detection system that cannot properly detect stationary or crossing objects
· Choosing not to fix Automatic Emergency Braking or Stationary/Crossing Object design flaws
· The system under test, without a human driving, stops at green lights (Let that sink in. Worse yet, the car stops randomly.)
· Ignoring proper simulation and using human Guinea pigs in the public domain needlessly
· Elon Musk routinely videoing himself breaking Tesla’s requirement that hands be kept on the wheel
· Calling the system “autopilot” when it is nothing of the sort
· Utilizing a driver monitoring and alarm system that is designed to enable accidents
· Refusing to augment the camera centric design approach in a safe manner
Two Issues I want to cover in more detail
Utilizing a driver monitoring and alarm system that is designed to enable accidents
· Tesla uses a torque system rather than eye-gaze monitoring to determine if the driver is properly holding the wheel. It is easily defeated with various objects like an orange or a water bottle. The alarm system does not trigger for 8 seconds. Do the math at various speeds to see how far the car travels before the alarm goes off
· Why are these utilized? Tesla needs accidents to happen to learn how to avoid and best handle many of them. (This is true for every AV maker.) If the human Guinea pig “safety drivers” disengage, this cannot occur. Having a poor monitoring and alarm system keeps disengagements from happening. Elon’s hands-off videos send his human test subject customers the message that they should let go of the wheel. (A privilege they pay Tesla for, at a cost that keeps going up from the original $3000.)
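The “do the math” above works out as follows. A quick illustrative calculation; the 8-second figure is the one cited in this article, not an official Tesla specification, and the speeds are arbitrary examples:

```python
# Distance traveled during the 8-second alarm delay cited above.
# Illustrative arithmetic only; speeds are examples, not Tesla data.

ALARM_DELAY_S = 8.0
MPH_TO_MPS = 0.44704  # metres per second per mile-per-hour

def distance_before_alarm_m(speed_mph: float) -> float:
    """Metres covered before the hands-on-wheel alarm triggers."""
    return speed_mph * MPH_TO_MPS * ALARM_DELAY_S

for mph in (25, 45, 65, 85):
    print(f"{mph} mph -> {distance_before_alarm_m(mph):.0f} m before the alarm")
```

At 65 mph the car covers roughly 230 metres, well over two football fields, before the alarm even sounds.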
Refusing to augment the camera centric design approach in a safe manner
· This involves Tesla’s belief that the only way these systems can scale (be manufactured cost effectively and operate without relying on external data sources) is to rely on camera sensors. They believe LiDAR is too expensive and dependency on HD Maps for world ground truth can be misleading if that data is incorrect for a variety of reasons.
· First, let me say that I agree that, at least right now, LiDAR is still too expensive. But costs are coming down. As for HD Maps, I agree they should not be relied on for the reason stated. Beyond this, I believe these systems should not rely on any external data source, for similar reasons. This includes V2X and GPS. I would also like to add that I am sensor agnostic. I don’t care what the final solution is as long as it’s onboard, competent and redundant.
· Having said this, camera systems are nowhere near able to do what is needed here. And Tesla’s cameras are not even set up in a true stereo fashion. Setting that aside, cameras struggle with object depth determination, especially when objects appear to be, or actually are, 2D. Or when there are no objects around, especially on either side, to help judge depth. And they struggle in direct light, low light and bad weather.
· What about radar? Tesla’s Bosch radar does not scan. As such, it cannot tell where an object is laterally. Hence the reckless stationary and crossing object detection, AEB and AP failures. Several of these have killed people, and various objects have been hit, including police cars, fire trucks, a street sweeper, a tow truck, passenger cars, trailers and barriers. (With regard to ultrasonic: that system is used for very close-in operations, something not useful at speed. Regarding the successful detection of crossing pedestrians: my assumption is they either spent a lot of time training on people, or they added a hard-coded solution, where that has not occurred for other objects. If this is wrong, I would love to know why.) (See my correction above on the radar types used and that they do have limited scanning. Overall, though, my points stand.)
· Where does this go off the rails? As I stated before, the camera systems are nowhere near ready for prime time. And Tesla uses a non-scanning radar and no LiDAR. This approach causes massive issues detecting stationary and crossing objects, and it has led to the deaths of several people. At some point the victims will include families and children. (Tesla clearly states this issue exists in their response to the NTSB’s Banner report.) Given there seems to be no indication of any major design or approach change, there will be many more. Something Tesla appears to have no issue living with for eternity. This leads me to ask why Tesla will not add a scanning radar or even LiDAR as a temporary backup system to cover these issues. As I stated above, I believe this is not being done because it would demonstrate capitulation, which would result in ego, financial and career issues. (I also wonder why Mobileye doesn’t do this in all their cars. Instead they split them up, with one being camera only. Same reasons as Tesla?)
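The camera depth-determination problem raised above can be made concrete with the textbook stereo formula. This is a simplified pinhole-camera sketch; the focal length and baseline numbers are invented for illustration and have nothing to do with Tesla's actual camera geometry. Depth is inversely proportional to pixel disparity, so at long range a tiny disparity error produces a huge depth error, and a layout without a true stereo baseline has almost no disparity to work with in the first place.

```python
# Textbook stereo depth: Z = f * B / d, with focal length f (pixels),
# baseline B (metres) and disparity d (pixels). All numbers below are
# illustrative assumptions, not Tesla camera parameters.

FOCAL_PX = 1000.0  # assumed focal length in pixels
BASELINE_M = 0.3   # assumed distance between the two cameras

def depth_m(disparity_px: float) -> float:
    """Estimated distance to a point seen with the given pixel disparity."""
    return FOCAL_PX * BASELINE_M / disparity_px

# A one-pixel disparity error matters far more the farther away the object is:
far_true = depth_m(3.0)          # 100 m
far_off_by_one = depth_m(2.0)    # 150 m: a 50% error from a 1 px mistake
near_true = depth_m(30.0)        # 10 m
near_off_by_one = depth_m(29.0)  # ~10.3 m: only a ~3% error
```

The asymmetry is the point: exactly where high-speed driving needs depth most, at range, a camera-only system is least able to provide it reliably.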
Where are NHTSA and the NTSB?
NHTSA is grossly incompetent and negligent. They are enabling the exact issues they exist to eliminate. Administrator Owens has stated over and over that no safety standards should be put in place until the tech is sorted out, and that standards stifle competition. Why do I care what tech is used to meet a safety standard that says: don’t hit massive stationary objects like firetrucks, street sweepers, tow trucks, police cars, passenger cars and trailers? With regard to competition: objective and testable safety standards increase competition by leveling the playing field. They minimize hype and the race to be first. (Which becomes a race to the ethical and moral bottom.)
The NTSB is almost as bad. They have stated over and over that the systems in development on the roads need to be aware of where they are, so they can keep autonomous modes from engaging in areas where they cannot handle the relevant scenarios. Folks, they can’t get to that point until the machine learning is trained. How do they train? Try stuff, fail, receive small corrections and try again. And this happens hundreds if not thousands of times for each scenario. (Which is made far worse by the preciseness required by deep learning.) This means we need the Brown/Banner, Yaning, Huang and Umeda scenarios to happen over and over and over, until we get to where the NTSB says they already should be. These folks have zero clue how inefficient machine learning is right now. It is like playing Simon Says with an idiot who has a memory problem.
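The “try, fail, receive small corrections, try again” loop described above is, at bottom, gradient descent. A toy sketch (purely illustrative; this is not any AV maker’s training code, and the numbers are made up) shows why so many repetitions are needed: each failure only nudges the parameters a little.

```python
# Toy gradient-descent loop illustrating "try, fail, small correction,
# try again". Purely illustrative; not anyone's actual AV training code.

def loss(w: float) -> float:
    """Error of the current behaviour; 0 means the scenario is handled."""
    return (w - 4.0) ** 2  # pretend w = 4.0 is the 'correct' behaviour

def gradient(w: float) -> float:
    return 2.0 * (w - 4.0)

w = 0.0               # initial, badly wrong behaviour
LEARNING_RATE = 0.01  # corrections are deliberately small
steps = 0
while loss(w) > 1e-4:                 # repeat until the scenario is (nearly) handled
    w -= LEARNING_RATE * gradient(w)  # one small correction per failure
    steps += 1

print(f"took {steps} tries to get close to correct")
```

Even this one-parameter toy needs hundreds of correction steps to converge; a real perception system has millions of parameters and thousands of scenarios, each of which must be failed at, repeatedly, to be learned.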
The “Tesla is currently safer than a human” nonsense — the NHTSA and Tesla “Safety Data” hoax etc
· No, AP/FSD is not currently safer than a human. Yes, it has saved lives. However, the data from NHTSA is wrong — https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-safety-study-unmasked
· Tesla does not release disengagement data and the associated root-cause data. This means we cannot see all the times the human saves themselves, others and the car. I guarantee that happens far more often than the reverse, and it always will on their current development and test path.
· It is impossible to have the massive stationary and crossing object flaw and be a better driver than a human.
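The flaw in “safer than a human” claims usually comes down to exposure data. A minimal sketch of the arithmetic, with all numbers invented for illustration (they are not NHTSA, Tesla, or any real fleet’s figures): if some of the miles credited to the system were actually driven with it off, the per-mile crash rate is understated.

```python
# Why per-mile exposure matters when claiming "safer than a human".
# All numbers below are invented for illustration; they are not NHTSA,
# Tesla, or any real fleet's data.

def crash_rate_per_million_miles(crashes: int, miles: float) -> float:
    return crashes / (miles / 1e6)

# Naive comparison using the miles as reported:
ap_rate = crash_rate_per_million_miles(crashes=10, miles=50e6)

# If a chunk of the reported miles were actually driven with the system
# off (missing or ambiguous exposure data), the true rate is higher:
ap_rate_corrected = crash_rate_per_million_miles(crashes=10, miles=30e6)

print(f"reported: {ap_rate:.2f}, corrected: {ap_rate_corrected:.2f} crashes per million miles")
```

This is exactly the kind of exposure-mileage problem the linked critique of the NHTSA Autopilot study documents: without audited miles and disengagement data, the headline rate is meaningless.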
The Right Path
The proper way for Tesla, and every AV maker, to do this right, minimize shadow driving and virtually eliminate safety driving and the use of human Guinea pigs, is to switch most of the development and testing over to proper simulation. That proper simulation would facilitate the building of a legitimate digital twin. This cannot be done using the current systems and approaches this industry uses now. This is due to their use of gaming-based systems. These have problematic real-time architectures and do not employ proper active sensor models. DoD/aerospace simulation technology and approaches resolve all of this. For more on this please see my articles below. (Yes, I know Musk and Karpathy think you have to use the real world because of edge cases. That is wrong and covered below.)
Tesla “autopilot” development includes Stopping at Green Lights
Forget Tesla’s “autopilot” their Automatic Emergency Braking is a Debacle
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology
Proposal for Successfully Creating an Autonomous Ground or Air Vehicle
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
- https://medium.com/@imispgh/simulation-can-create-a-complete-digital-twin-of-the-real-world-if-dod-aerospace-technology-is-used-c79a64551647
Autonomous Vehicles Need to Have Accidents to Develop this Technology
Using the Real World is better than Proper Simulation for AV Development — NONSENSE
- https://medium.com/@imispgh/using-the-real-world-is-better-than-proper-simulation-for-autonomous-vehicle-development-nonsense-90cde4ccc0ce
NTSB’s Tragically Incompetent Tesla-Banner Investigation Report
NHTSA is Enabling the Crash of the Driverless Vehicle Industry and More Needless Human Test Subject Deaths
My name is Michael DeKort — I am a former systems engineer, engineering manager and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for post-9/11 efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles below. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with significant real-time, model-fidelity and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm. We are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.