Tesla ditching radar and Elon’s explanation show us how bad and how deadly this system is
I will begin and end with the same statement: Tesla's "Autopilot" and "Full Self-Driving" are grossly negligent engineering debacles that will harm and fatally injure many, many, many more people. And this is all avoidable with the right company leadership, autonomous system development approach, and sensor system designs.
This article will focus on two areas. The first is whether there is any set of circumstances in which Tesla ditching the radar adds value. The second is an evaluation of Elon's explanation defending it.
When I first saw Elon state they are removing the radar, I assumed it was a lie, not unlike other examples where showboating trumped humanity: using humans as needless guinea pigs to develop the system, not producing any ventilators, sending a rigid tube to rescue boys trapped in a tight, winding cave, taking an anti-vaccine, anti-mask, Covid-19-denying position, and creating tunnels almost devoid of necessary safety systems. Upon more reflection, I think he may not only be telling the statistical (versus empirical) truth, but in doing so signaling the incredibly dire straits this system, and likely Tesla itself, are in.
I believe this statement from Elon tells us exactly what is going on here:
"When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion. Sensors are a bitstream and cameras have several orders of magnitude more bits/sec than radar (or lidar). Radar must meaningfully increase signal/noise of bitstream to be worth complexity of integrating it. As vision processing gets better, it just leaves radar far behind," Musk explained.
What if Tesla's "Autopilot" and "Full Self-Driving" system designs are so poor, and/or the cost to fix them so exorbitant, that Elon is simply using statistical "truth" to keep his bait-and-switch efforts alive a bit longer? (Which will likely include another price increase, this time to above $10k.) What if the radar unit and the system that uses it are so poor, especially given the massive stationary- and crossing-object issue, and the fix so expensive, that in MOST ODDs and scenarios they are statistically correct to go to a camera-only system? What if, as Elon stated, the system requires cameras and radar to agree, and that integration is so troubled and so expensive to fix in hardware and/or software that removing the radar really does improve system operability statistically? Meaning, what if the pathetic and dangerous system is a little less pathetic and dangerous, by ODD and scenario quantity, without the radar in their implementation?
Regarding Elon's ridiculous and misleading statement defending ditching the radar: bit quantity is not analogous to bit quality, nor to the ability to process those bits well and extract all the critical information needed. Yes, if the data is value added and you can process it properly, more of it is generally good. But how many quality bits do I get in dense fog? In direct light? How are cameras creating tracks and discerning the speed of objects? Or even exact position, especially when the objects are or appear 2D, or are complex, like a car carrier? And how is the system going to discern a photograph of an object from the object itself? (It has already confused pictures on trucks for objects.) This all comes down to cameras being a passive sensor. They infer information that active sensors like radar measure directly. Active sensors reach out and touch the world; cameras do not. (Yes, active sensors can produce flawed results because of this.)
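To see why raw bit volume misses the point, here is a back-of-the-envelope comparison using illustrative figures (these are NOT confirmed Tesla or radar-vendor specs, just plausible orders of magnitude chosen for the example):

```python
# Back-of-the-envelope raw-bit comparison. All figures below are
# illustrative assumptions, not real hardware specs: a camera stream
# really does carry orders of magnitude more raw bits than a radar
# stream (Musk's point), but raw bits are not direct information about
# range or closing speed (the point he skips).

camera_bits_per_sec = 1280 * 960 * 3 * 8 * 36   # one ~1.2 MP RGB camera at 36 fps
radar_bits_per_sec = 1000 * 64 * 20             # e.g. 1000 detections/frame, 64 bits each, 20 Hz

print(f"camera: {camera_bits_per_sec:,} bits/s")
print(f"radar:  {radar_bits_per_sec:,} bits/s")
print(f"ratio:  ~{camera_bits_per_sec // radar_bits_per_sec}x")

# Yet each radar detection directly encodes range and Doppler velocity,
# while every camera bit is an intensity sample from which range and
# speed must be inferred, often poorly in fog, glare, or darkness.
```

The ratio comes out in the hundreds, which superficially supports the "orders of magnitude more bits" claim while illustrating exactly why it is misleading: the sparse radar stream is dense in the quantities that matter for collision avoidance.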
As the technology to create a competent sensor and fusion system exists, using the right sensors, in the right design, with the right Kalman filtering, etc., I believe the root cause here is likely ego and cost. I bet the Arbe 48X48 radar resolves the sensor/perception issues. (Not the overall development approach, the over-reliance on deep learning, or the refusal to use HD maps, even as a backup. But that is another story I cover in my articles below.)
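The answer to "when radar and vision disagree, which one do you believe?" is neither: proper fusion weights each sensor by its uncertainty. A minimal one-dimensional sketch of a Kalman-style update, with invented noise figures purely for illustration:

```python
# Minimal 1-D illustration of measurement fusion: when two sensors
# disagree, a Kalman-style update blends them in proportion to their
# (assumed) measurement variances rather than discarding one outright.
# All noise figures here are invented for illustration.

def fuse(est, var_est, meas, var_meas):
    """One Kalman update step: blend a prior estimate with a new measurement."""
    k = var_est / (var_est + var_meas)   # Kalman gain: trust the lower-variance source more
    new_est = est + k * (meas - est)     # variance-weighted blend
    new_var = (1.0 - k) * var_est        # fused uncertainty shrinks
    return new_est, new_var

camera_meas, camera_var = 55.0, 25.0     # camera-inferred range (sigma = 5 m, assumed)
radar_meas, radar_var = 50.4, 0.25       # radar-measured range (sigma = 0.5 m, assumed)

# Start from the camera estimate, then fuse in the radar return.
est, var = fuse(camera_meas, camera_var, radar_meas, radar_var)
print(f"fused estimate: {est:.2f} m, variance: {var:.3f}")
# The fused variance is always <= the smaller input variance, so with
# correct noise models fusion cannot be worse than the better single
# sensor. That is the answer to "which one do you believe?"
```

The design point: disagreement is not a reason to pick a winner; it is exactly the situation the filter's gain term exists to arbitrate.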
Of course, the whole thing is a grossly negligent canard. Elon and Tesla have even less of a shot now at getting to L4. But what they may have done is create a situation where the abysmal performance of the system in common scenarios and ODDs looks better, thereby enabling them to use that scant performance jump to convince people the system is better than it is. Unfortunately, that will work with the cult followers and buy them some time. While this approach may result in fewer accidents by quantity, it will not only leave many in the pipeline but also likely make them more catastrophic. Why? Because of where even a poor radar system would offset the cameras' massive Achilles heels.
As I said in the beginning, Tesla's "Autopilot" and "Full Self-Driving" are grossly negligent engineering debacles that will harm and fatally injure many, many, many more people. And this is all avoidable with the right company leadership, autonomous system development approach, and sensor system designs. More on this below.
Note: I used to be of the opinion LiDAR was necessary. While it still may be, I believe a dense-array radar, combined with cameras, hits the sweet spot between "point cloud" fidelity and handling the issues that severely hamper cameras and LiDAR, particularly since folks are still working on LiDAR tech that can classify objects and determine speed/tracks. Having said all of this, I am surely not against adding LiDAR to the fusion mix.
More detail here:
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology
SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation (featuring Dactle)
Tesla “autopilot” development effort needs to be stopped and people held accountable
Forget Tesla’s “autopilot” their Automatic Emergency Braking is a Debacle
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
- https://medium.com/@imispgh/simulation-can-create-a-complete-digital-twin-of-the-real-world-if-dod-aerospace-technology-is-used-c79a64551647
Using the Real World is better than Proper Simulation for AV Development — NONSENSE
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for Post-9/11 Efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles below. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to differences between the performance the plan predicts and what actually happens. If someone would like to see a demo or discuss this further, please let me know.