Waymo is Hyping and Hiding where Tesla Cannot, but Tesla is Doing its Best
Waymo just announced it is “fully autonomous” in areas of Phoenix. I asked them if there was a local or remote “safety driver” and they said no. (As I understand it, they do have people available to support.) The issue here is that zero proof has been provided. How do we know engineering due diligence has been done? How were the accident scenarios that cannot be avoided, and so need to be handled as well as possible, learned? How were all the relevant accident scenarios that can be avoided learned? What about rain? Loss-of-traction scenarios? It would be nice if Waymo released proof of due diligence: tested scenario, disengagement, and root cause data. Heck, it would be great to see Waymo let Consumer Reports run the same benign scenario tests Tesla recently failed. (Several people have reminded me about the trove of data Waymo has been releasing. That is sensor data, not learned scenario, disengagement, and root cause data. It does not even mean perception is right. All of that sensor data could be correct, and every scenario could still fail due to the rest of the perception system, planning, or even execution.)
In the end, neither Waymo, nor Tesla, nor any other AV maker, nor all of them combined, can get close to a legitimate L4 on public roads by relying on public shadow and “safety driving” augmented by gaming-based simulation. As we see, they can get far enough along to cherry-pick enough video to make it look like they can. But they will not have learned a fraction of the scenarios they should, especially complex and accident scenarios. It is a myth that public shadow and safety driving can create a legitimate autonomous vehicle, and that the lives the process takes are necessary and for the greater good. It is impossible to drive the trillion miles, or spend the $300B, needed to stumble and restumble on all the scenarios necessary to complete the effort. The process also harms people for no reason. The first safety issue is handover: the time to regain proper situational awareness and do the right thing, especially in time-critical scenarios, cannot be provided. Another dangerous area is learning accident scenarios. AV makers will have to run thousands of accident scenarios thousands of times to accomplish this. That will cause thousands of injuries and deaths.
While “geofencing” to a location does minimize the workload, it is being severely hyped. Many of the accident scenarios, objects, and patterns (clothing etc.) that deep learning must learn exist both inside and outside the geofence. All you really cut out is learning the scenarios or objects relevant only to other locations. (Highway geofencing helps a bit more, since many urban or city road patterns do not exist there, so the associated scenarios would not have to be learned. However, many of the objects and their clothing variations still would, to the degree deep learning is used.)
Regarding Tesla’s performance and latest hype: they just said they are about to release a beta that is “capable of zero intervention”. This is on top of Elon’s insane proclamation that they would get to L5, not L4, this year. First, that statement means literally NOTHING. All that has to happen for it to be true is that the car drives for two inches requiring no handover. The bigger problem for Tesla here is that they chose to use customers as guinea pigs, where virtually every other AV maker uses employees. That means they cannot hide all their data or hype. There are several Tesla owners out there providing that information routinely. Then there is the stationary/crossing object issue that has caused several accidents and killed at least six people.
As for which company is further along: first, it is totally irrelevant. An F-35 can fly much higher than a paper airplane. However, when flying to the moon is the goal, does it matter? Beyond that, Waymo has made the far better development approach choice by going with employee “safety drivers” and a location geofence, for the reasons stated above. Tesla, in addition to being unable to hide what Waymo can hide and having about 774,000 more vehicles than Waymo out there, made the extremely unwise choice of not using LiDAR. (More on that below.)
This brings me to the “autonowashing” folks, the PAVE Campaign, and industry folks who have been saying no L4 autonomous systems exist. What are they going to do now? Are they going to call out Waymo, the largest AV maker and a member of PAVE? Ask for proof? Ignore it and continue to hypocritically and recklessly call out just Tesla? (Tesla does deserve to be called out, as it is the most egregious by far. They have over 750K cars out there needlessly using human guinea pigs, where the rest of the industry in the US has fewer than 2,000. That is a 375-to-1 ratio. And Tesla has a fatally flawed stationary/crossing object AEB design. More on that below.)
Finally, my usual mantra: this is all avoidable if the right development approach is used. That switch will happen; it is unavoidable. Unfortunately, it will likely take the unnecessary and tragic deaths of a child or family, probably in a Tesla, to force it. (More on that remedy below as well.)
More in my articles here
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology
SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation (featuring Dactle)
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
- https://medium.com/@imispgh/simulation-can-create-a-complete-digital-twin-of-the-real-world-if-dod-aerospace-technology-is-used-c79a64551647
Using the Real World is better than Proper Simulation for AV Development — NONSENSE
Tesla “autopilot” development effort needs to be stopped and people arrested
Elon Musk backed himself into a L5 corner (case)
Autonomous Vehicle Industry’s Self-Inflicted and Avoidable Collapse — Ongoing Update
Autonomous Vehicle makers should take the Consumer Reports Challenge
The “Autonowashing” Town Criers are Autonowashing
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for Post-9/11 Efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures. (Issues Unity will confirm; we are now working together. We are also working with UAV companies.) If not remedied, these issues will lead to false confidence and performance differences between what the plan believes will happen and what actually happens. If someone would like to see a demo or discuss this further, please let me know.