Well, here we go again. The NTSB can't seem to do its engineering and ethical due diligence and rise above the echo chamber.
(I chose the very strong wording in the title because of the egregious nature of the NTSB's performance here. There are simply too many single and combined events to label this as a mere flaw, again.)
They are still captive to the myth that public shadow and safety driving can create a legitimate autonomous vehicle, and that the lives the process takes are necessary and for the greater good. The fact is, it is impossible to drive the trillion miles, or spend the $300B, needed to stumble and re-stumble on all the scenarios necessary to complete the effort. In addition, the process harms people for no reason. The first issue is handover: the process cannot be made safe for most complex scenarios because the time needed to regain proper situational awareness and do the right thing, especially in time-critical scenarios, cannot be provided. Another dangerous area is learning accident scenarios. AV makers will have to run thousands of accident scenarios thousands of times to accomplish this. That will cause thousands of injuries and deaths.
Driver of the Truck
Failure to yield to the Tesla
Failing to determine a process to verify the operability of safeguards for partially automated vehicles. NHTSA responded that it would not do this because it sees no problematic trends.
System use not limited to conditions it is designed for
· This highlights the NTSB's incompetence. First, machine and deep learning are used for development and testing in Teslas. That means the system doesn't have any "designed conditions" until it experiences them over and over to learn them. Which means it gets them wrong over and over until it understands them. See the reckless irony here? They have to have accidents and injure or kill people over and over until they don't anymore, possibly thousands of times. The last time this scenario occurred was three years ago, when Joshua Brown was killed. Look at the bright side: maybe only 998 more to go? On that point, can someone tell me why this cannot be done on a track or in proper simulation?
Driver permitted to not touch the steering wheel for almost 8 seconds. Tesla said 8 seconds was too short. (The driver had actually engaged the system only 10 seconds prior to the crash.)
· I will help you with the math. Even if the driver was doing the 55 mph speed limit, the car would travel about 215 yards (roughly 645 feet) in 8 seconds; at the actual 69 mph, closer to 270 yards. As I have said before, this fact, and the fact that Elon Musk is shown over and over violating his own hands-on-the-wheel standards, show that he and Tesla want drivers to avoid disengaging so the system can experience the accident threads and eventually, after many accidents and deaths, learn them.
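For the record, that distance is simple constant-speed kinematics. A minimal Python sketch, using only the speeds from the report and standard unit conversions:

```python
def distance_traveled_yd(speed_mph: float, seconds: float) -> float:
    """Yards covered at a constant speed over a given interval."""
    feet_per_second = speed_mph * 5280 / 3600  # miles/hour -> feet/second
    return feet_per_second * seconds / 3       # feet -> yards

# 8 seconds hands-off at the 55 mph speed limit, and at the actual 69 mph
print(round(distance_traveled_yd(55, 8)))  # 215 yards
print(round(distance_traveled_yd(69, 8)))  # 270 yards
```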
Neither forward collision warning nor emergency braking was activated. The system did not detect the truck at all. Tesla actually responded that the Model 3 could not detect crossing vehicles or avoid accidents at high speeds (69 mph in a 55 mph zone). Tesla also stated that the camera system and radar have to agree on the detection, and at no time did that occur.
· This should be no surprise, for two reasons. The first, as I stated above, is that these systems know nothing until they learn it through trial and many, many errors. The other is that Teslas cannot properly detect stationary and crossing objects. Not just that; they can't tell where those objects are laterally or what size they are. This is because Tesla chose not to use LiDAR, and its camera and radar systems cannot make up for that massive, negligent shortfall. Let me make this clear: this is NOT a machine learning problem to be overcome by massive trial, error, and deaths. The hardware to do this does not exist and never will in that configuration. So they will go on forever hitting the crossing trailers, trucks, and other objects they have already hit, killing people while doing so: police cars, fire trucks, tow trucks, street sweepers, other cars, and barriers.
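Tesla's stated requirement that camera and radar agree before acting is, in effect, an AND gate on detections. A minimal sketch of that policy (hypothetical flags and function name, not Tesla's actual code) shows why it fails whenever either sensor misses:

```python
def fused_detection(camera_sees: bool, radar_sees: bool) -> bool:
    """AND-style fusion: a threat is confirmed only when both sensors agree.

    Hypothetical illustration of the policy Tesla described in its response
    to the NTSB; this is not their implementation.
    """
    return camera_sees and radar_sees

# A crossing trailer that radar filters out as stationary clutter is never
# confirmed, even if the camera sees it, so no warning or braking occurs:
print(fused_detection(camera_sees=True, radar_sees=False))  # False
```

The design choice trades false positives (phantom braking) for false negatives; in a crossing-trailer scenario the false negative is the fatal one.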
More information can be found in the articles below, including how to resolve all of this.
Are the Autonomous Vehicle Industry and NHTSA on higher ground than Boeing and the FAA?
Proposal for Successfully Creating an Autonomous Ground or Air Vehicle
Autonomous Vehicles Need to Have Accidents to Develop this Technology
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
Why are Autonomous Vehicle makers using Deep Learning over Dynamic Sense and Avoid with Dynamic Collision Avoidance? Seems very inefficient and needlessly dangerous?
Tesla hits Police Car — How much writing on the wall does NHTSA need?
NHTSA Uber Determinations are a Tragedy for Everyone
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Lead — SAE On-Road Autonomous Driving SAE Model and Simulation Task
- Member SAE ORAD Verification and Validation Task Force
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for post-9/11 whistleblowing efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in the articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (These are issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and to performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.