After doing the right thing and failing Tesla’s “Autopilot” and “Full Self-Driving” in earlier coverage, Consumer Reports has now forfeited any ethical and professional standing it had in understanding and reviewing autonomous vehicle technology. Because of who they are and their associated reputation, this report will lead directly to false consumer confidence, injuries, and deaths.
Consumer Reports Tesla article — Tesla’s ‘Full Self-Driving Capability’ Falls Short of Its Name — https://www.consumerreports.org/autonomous-driving/tesla-full-self-driving-capability-review-falls-short-of-its-name/
Consumer Reports commits several tragic errors in its multi-OEM review.
· They don’t weight autonomous system performance highly enough — When a system allows people to let go of the steering wheel and lose situational awareness, which is unavoidable regardless of monitoring, that should be weighted far higher than the other categories.
· They don’t test the performance properly — 1) Only one system, Cadillac’s, restricts its use to the proper Operational Design Domain (ODD). This means all the others can be engaged where they are not supposed to be. This restriction should have been mandatory. 2) AEB was not tested in any of the modes. 3) Neither handover nor handover alerts were tested.
· They don’t differentiate production systems from those still in development that use their drivers and the public as human guinea pigs — The Tesla and comma.ai systems are in development, using the drivers, their families, and the public as guinea pigs for L4. They should not be included in testing intended for L2/L3 systems.
· They don’t realize handover cannot be made safe in time-critical scenarios — Consumer Reports assumes handover to the driver can be made safe in time-critical scenarios. It cannot.
· They assume the development process is tenable and the risk to humans is necessary — It is a myth that public shadow and safety driving can create a legitimate autonomous vehicle, and that the lives the process takes are necessary and for the greater good. It is impossible to drive the trillion miles, or spend the $300B, required to stumble and restumble on all the scenarios necessary to complete the effort. The process also harms people for no reason. The first safety issue is handover: the time to regain proper situational awareness and do the right thing, especially in time-critical scenarios, cannot be provided. Another dangerous area is learning accident scenarios. AV makers will have to run thousands of accident scenarios thousands of times to accomplish this. That will cause thousands of injuries and deaths.
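To make the handover concern concrete, here is a rough sketch. The takeover time is an illustrative assumption (not a measured value from the article): even a few seconds to regain situational awareness translates into a long distance traveled without effective driver control at highway speed.

```python
# Rough illustration of the handover problem.
# The 5-second takeover time is an assumed, illustrative figure.
MPH_TO_MPS = 0.44704  # exact conversion: miles per hour -> meters per second

def distance_during_handover(speed_mph: float, takeover_seconds: float) -> float:
    """Distance (meters) traveled while the driver regains situational awareness."""
    return speed_mph * MPH_TO_MPS * takeover_seconds

# At 70 mph with an assumed 5-second takeover:
print(round(distance_during_handover(70, 5)))  # prints 156 (meters)
```

In a time-critical scenario, the hazard will typically be far closer than that, which is the core of the argument above.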
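The trillion-mile and $300B figures above are mutually consistent under a simple per-mile cost assumption. A back-of-envelope check (the cost per mile is my illustrative assumption, not a figure from the article):

```python
# Back-of-envelope check of the scale argument.
# COST_PER_MILE is an assumed, illustrative all-in cost of safety driving (USD).
MILES_NEEDED = 1_000_000_000_000   # one trillion miles, per the article
COST_PER_MILE = 0.30               # assumed USD per mile

total_cost = MILES_NEEDED * COST_PER_MILE
print(f"${total_cost / 1e9:.0f}B")  # prints $300B
```

Whatever the exact per-mile cost, multiplying any plausible figure by a trillion miles lands in the hundreds of billions, which is the point of the scale argument.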
More in my articles here:
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now
Be Wary of Waymo’s New Safety Record and Brad Templeton’s Declaration the System is Superhuman and should be Deployed Today
SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation (featuring Dactle)
Autonomous Vehicle Industry’s Self-Inflicted and Avoidable Collapse — Ongoing Update
Proposal for Successfully Creating an Autonomous Ground or Air Vehicle
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
Simulation Photorealism is almost Irrelevant for Autonomous Vehicle Development and Testing
Autonomous Vehicles Need to Have Accidents to Develop this Technology
Using the Real World is better than Proper Simulation for AV Development — NONSENSE
The Hype of Geofencing for Autonomous Vehicles
SAE Autonomous Vehicle Engineering Magazine — End Public Shadow/Safety Driving
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for Post-9/11 Efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the plan says will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.