Waymo and Cruise should prove their systems are legitimately L4
Recently Waymo and Cruise have been stating their systems are L4. The hype cycle is picking up steam. There is no proof of these boasts. One would think that, given the huge distrust of this technology (admittedly mostly due to Tesla), these companies would want to prove their systems perform as advertised. The reason they don't is that they can't. (Explained in much more detail below.) Not only do these companies fail to provide proof their systems work as advertised, they go out of their way to keep us from seeing the data needed to make the determination ourselves. There is no list of scenarios learned, the disengagement data provided does not delineate crashes avoided by the "safety driver", the test events are controlled marketing releases rather than livestreams, there has been no independent evaluation, Waymo forces riders to sign NDAs, and no one has determined what constitutes L4.

I guarantee you that neither Waymo nor Cruise performs as well as a teenager on a learner's permit. How can I say that? Edge, corner, and crash cases. L4 is only as good as the degree to which those are learned, not the benign, less complex, or safer scenarios leading up to them. Given the industry's over-reliance on the real world for development and testing, and its use of gaming-based simulation technology, the ONLY way Waymo, Cruise and the rest can learn all these scenarios is for their systems to experience many of them, over and over and over. (Waymo has come out and stated they FINALLY figured out that gaming sim tech is inadequate. The problem is it does not appear they understand what the right tech is, especially regarding real-time performance, physics fidelity and the need for many full-motion simulators. More on this below as well.)
The Most Dangerous, Deceitful and Deadly days in the Autonomous Vehicle Industry are upon us
We are entering a phase in the hype and deception cycle where the industry and general population believe these systems are much further along than they are. This false confidence will lead to a temporary rise in trust, confidence, and funding, which is the point of the orchestrated con. It is facilitated by the industry's tech evolving to a point where it appears these systems drive as well as a human in everyday conditions, some with what appears to be a fair amount of complexity, and in some cases avoiding crashes. The fake-it-till-you-make-it process has been successful and hit an inflection point where the average person thinks these systems are already safer than a human and very close to L4. Videos from Cruise, Zoox and others are pushed to the public routinely, with folks telling us there were no disengagements and so on. Waymo says they are already L4, and Cruise says they have fully autonomous Fridays. Chris Urmson of Aurora has even changed his tune again, going from estimates of L4 coming and going, to decades in the future, to providing no more dates, to now being just a couple of years away. Argo AI has also increased its hype. (All timed around SPACs or IPOs. Quite the coincidence.)

The mechanics here involve some real technical prowess, but even more message and information control and manipulation. These companies only let you see what they want you to see. Where are the livestreams? All the disengagement data? The scenarios attempted and learned? It's a ruse. Beyond invoking common sense, the proof of this is Tesla. Tesla uses members of the public as human Guinea pigs, not employees, so it cannot control most of the information. Do you see how poorly Tesla performs? Yes, there are reasons Tesla performs far worse than the rest. Some of those reasons justify things looking much worse than their counterparts and some do not. (More on that below.)
But don't think for a second that the others found some secret sauce and will NEVER harm or kill a human Guinea pig, or that they will get anywhere near L4. An F-35 can fly much higher than a paper airplane. But when the moon is the desired destination, does it matter? At some point soon Tesla's competitors will not be able to avoid crashes if they want to train their systems further. Yes, working incrementally through progressively more complex and dangerous ODDs helps avoid crashes. But not all of them. Some will still have to be experienced to be learned. And for many, the "safety driver" will have to wait so long to punch out, in order to capture that first-instance crash moment, that they will not be able to avoid the crash. Scenarios here include being acted upon by another system, tread separations, loss of traction, bad weather, and cases where crashes must be best handled rather than avoided. The reason the industry is gunning for Tesla right now is to keep Tesla from poisoning the well and killing people before they do. A consortium of companies (needlessly) experimenting on humans is not going to sit by and let a sloppy and overly active renegade spoil the pot. Of course, all of this is avoidable if the industry switches most of its development and testing to proper simulation.
Why does Tesla perform worse than the rest? The justifiable part of the explanation involves ODD. Tesla is not limiting its ODD to good weather and lower-complexity areas it has learned the hell out of; the ODD is basically the entire US in almost any condition. The other aspect is, as I mentioned above, that Tesla cannot control much of what we see regarding performance, with all the customer Guinea pigs providing footage of negative results. On the unjustifiable side is Tesla's sensor system. The camera-only system has way too many flaws, especially around determining object locations and handling objects that are stationary or crossing in front of the vehicle. Most of Tesla's competitors have far more competent sensor systems involving LiDAR and radar. (Having said this, many have issues similar to Tesla's, though not to the same degree, and they hide or avoid them. This is because they use low-fidelity radar and/or do not use LiDAR data to identify objects and their locations. That has been changing over time, and the new imaging radars coming out should help a great deal.)
(All you "woke" folks who thought Tesla was doing it right and was years ahead, you are just as wrong now as before. Do not let "more competent" or "better" fool you. Do you really need the others to kill people as well, providing you more incremental epiphanies, before common sense sinks in? Tesla is a wolf in wolf's clothing. The rest are sheep. Don't be one of them. Have the courage to think critically, put your ego to the side and break free from the echo chamber.)
Below are a couple of articles that explain my POV in more detail, including why the industry would rather harm people and go down with the ship than change.
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology
How the failed Iranian hostage rescue in 1980 can save the Autonomous Vehicle industry
SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Industry Participation — Air and Ground
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Member UNECE WP.29 SG2 Virtual Testing
- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)
- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation
- Member CIVATAglobal — Civic Air Transport Association
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee
- Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS whistleblowing efforts