Waymo admits it is nowhere near creating a legitimate autonomous vehicle
“We can only drive in places that we have already built a map,” said Andrew Chatham, the software engineer who heads mapping efforts at Waymo, the self-driving technology company.
This quote is from the New York Times article — Driverless Cars Are Taking Longer Than We Expected. Here’s Why.
- https://www.nytimes.com/2019/07/14/us/driverless-cars.html
Let’s unpack and translate that.
Waymo is saying it cannot properly navigate scenarios with its own sensors. Without a pre-made, detailed 3D map of the world, their on-board sensors and perception systems are not competent enough to reach L4. This brings me to a rare moment: agreement with Elon Musk. In his rebellion against LiDAR, which I do not agree with, he makes the point that AVs must be able to navigate with their core sensors and not rely on outside data for basic operation: not mapping, V2X, GPS, etc. Let's take this a step further. What if that detailed map changes? Examples include new construction, a detour, a police or fire blockade, a parade or protest, and so on.
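To make that dependency concrete, here is a minimal, hypothetical sketch in Python. It is not Waymo's architecture; the types, names, and threshold are my own illustration of the choice a map-reliant system faces the moment the pre-built map and live perception disagree. If perception alone is not trustworthy enough on its own, the only safe option left is a minimal-risk maneuver, which is exactly the gap the quote above admits.

```python
# Hypothetical sketch (not any company's real stack): the decision an AV system
# has to make when its prior HD map disagrees with what the sensors see now.
from dataclasses import dataclass
from enum import Enum, auto


class DriveMode(Enum):
    MAP_PRIOR = auto()        # trust the pre-built 3D map
    PERCEPTION_ONLY = auto()  # trust only on-board sensors
    MINIMAL_RISK = auto()     # slow down, pull over, or hand off


@dataclass
class LaneObservation:
    map_says_lane_open: bool       # from the pre-built map
    sensors_say_lane_open: bool    # from live perception
    perception_confidence: float   # 0.0 .. 1.0


def choose_mode(obs: LaneObservation, min_conf: float = 0.9) -> DriveMode:
    """Pick a drive mode when a stale map and live sensors disagree."""
    if obs.map_says_lane_open == obs.sensors_say_lane_open:
        return DriveMode.MAP_PRIOR
    # Map is stale (construction, detour, blockade, parade...).
    if obs.perception_confidence >= min_conf:
        return DriveMode.PERCEPTION_ONLY
    # Neither source can be trusted on its own.
    return DriveMode.MINIMAL_RISK


if __name__ == "__main__":
    # A detour the map has never seen: lane closed, perception only 70% sure.
    print(choose_mode(LaneObservation(True, False, 0.7)))  # DriveMode.MINIMAL_RISK
```

The point of the sketch is that whenever the map changes, the system is thrown back onto the very perception capability the quote admits is not yet sufficient.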
Another area that I believe is actually a much bigger problem is object detection in public areas. This is where the much-hyped practice of geofencing comes in. Even if you literally choose to operate at only one intersection on the planet, you have the obligation to ensure you can properly detect any object that could possibly be there. If there is a jacket made only in France that gives any sensor trouble, and that issue cannot be covered by training on another object, you must train for that jacket. And you must train for combinations of it with other objects, at various times of day, in various weather, and so on. Basically, this means you have to train for the majority of the world's objects and their movements as soon as you decide to use the public domain.
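To show why that obligation is so heavy, here is a back-of-the-envelope sketch. Every count in it is an illustrative assumption, not a measured inventory, yet even these modest numbers put the combination space into the quadrillions.

```python
# Illustrative arithmetic only; the category counts are placeholders, not real
# inventories. The point is how fast the scenario space explodes once you
# operate in the public domain.
from math import comb

object_types = 10_000       # pedestrians, vehicles, animals, debris, that French jacket...
behaviors_per_object = 20   # stationary, crossing, erratic, occluded, ...
weather_conditions = 8      # clear, rain, snow, fog, glare, ...
lighting_conditions = 4     # day, dusk, night, low sun
objects_in_scene = 3        # even a small intersection holds several at once

single_object_cases = (
    object_types * behaviors_per_object * weather_conditions * lighting_conditions
)
multi_object_cases = (
    comb(object_types, objects_in_scene)
    * behaviors_per_object ** objects_in_scene
    * weather_conditions
    * lighting_conditions
)

print(f"single-object cases: {single_object_cases:,}")
print(f"{objects_in_scene}-object scenes:   {multi_object_cases:,}")
```

The exact counts do not matter. What matters is that the space grows combinatorially, which is why a deliberate, simulation-driven scenario matrix is the only tractable way to cover it, rather than waiting to encounter those combinations on public roads.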
Let me make sure I don’t leave you with the impression that I think Waymo is the only one. Every single AV maker using public shadow driving, rather than proper simulation, for the majority of its development and testing has this problem, even without the crutch of external map data. As a matter of fact, you could combine every one of them and they would still never get close to L4.
More in my articles here:
- Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE
- All the Autonomous Vehicle makers combined would not get remotely close to L4
- Common Misconceptions about Aerospace/DoD/FAA Simulation for Autonomous Vehicles
- SAE Autonomous Vehicle Engineering Magazine-End Public Shadow Driving
My name is Michael DeKort — I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Industry Participation
- Lead — SAE On-Road Autonomous Driving Model and Simulation Task
- Member SAE ORAD Verification and Validation Task Force
- Member DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) Sensor Simulation Specs
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for post-9/11 efforts
My company is Dactle
We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based testing and AI system with an end-state scenario matrix to address several of the critical issues in the AV/OEM industry I mentioned in my articles above. This includes replacing 99.9% of public shadow and safety driving, as well as dealing with the significant real-time, model-fidelity, and loading/scaling issues caused by using gaming engines and other architectures (issues Unity will confirm; we are now working together, and we are also working with UAV companies). If not remedied, these issues will lead to false confidence and to differences between the performance the plan expects and what actually happens. If someone would like to see a demo or discuss this further, please let me know.