First — I want this technology to come to market ASAP, so please don’t respond by citing accident statistics. My point in this article is that the path the industry is on is self-destructive: it is extremely inefficient and far more dangerous than it needs to be. In many cases AV makers will never reach a truly autonomous vehicle, and they will take lives needlessly trying to get there. Making matters far worse is the lack of due diligence and the irrational exuberance of the public, the press, governments, and transportation and mobility “experts”.
RAND just released a study concluding that any incremental safety benefit autonomous vehicles provide is worth having, and that they should therefore be released to the public incrementally and ASAP. I agree. However, that approach rests on a massive assumption about the systems currently available: that they perform as well as or better than a human. That is the crux of the issue. Most do not. As a matter of fact they are far worse, their capabilities are being grossly exaggerated, and there is actually no way to tell just how deficient those capabilities are. In many cases we are being led down a road of false confidence. These vehicles are helping with several high-occurrence accident types, such as front-end collisions. However, their performance in many common scenarios is poor, and they are nowhere near handling most complex scenarios. Said differently, they perform better than humans in some areas but worse, even far worse, in many others.

If that situation were controlled by tight geofencing, rigorous testing, and accurate press releases and advertisements, that would be fine. The problem is that is not happening. Beyond that, the practice most companies are using to develop, engineer, and test their AI (public shadow driving rather than aerospace-level simulation) will prohibit these AV makers from ever finishing their efforts, while creating thousands of avoidable casualties in the futile pursuit of saving people through technology they will never finish creating.
Root Causes — The Perfect Storm
· Most of the developers creating these systems have little subject-matter expertise or aerospace-level systems engineering experience. They have never worked on anything this complex or this large, or on anything that involved this much exception handling. They are learning as they go, and using AI, which has major deficiencies of its own, to cover that lack of experience and learn for them.
· The public, governments, insurers, etc. think these folks are more than up to the task and can do this in their sleep. After all, look at how cool the apps are and how much money they make.
· There is a massive competition to be first. That leads people to exaggerate capabilities they already overstate because of their lack of experience in the domain. Coupled with pride, ego, and the desire to hit that payday before the rest, this creates an accelerating vicious loop.
· Everyone is under the very false impression that the only way this technology comes to fruition ASAP is with zero regulation. While that may very well apply to how the technology is accomplished, it does not apply to what it has to be able to do, especially with respect to minimum capabilities. With governments punting and the industry in a competitive race, we wind up with zero due diligence regarding minimum capabilities.
· Far too many believe the practice of handover, or L2/L3, is safe. It is not. Every competent study I have seen, including real-world data from NASA, says it cannot be made reliably safe. The reason is that no monitoring or notification system can adequately mitigate the 2–45 seconds a driver needs to regain situational awareness, especially in critical, complex, and fast-paced scenarios. It is for this reason that NASA, Toyota, and recently Waymo and Chris Urmson have said these levels need to be skipped. When this practice is used to create the technology, it will result in thousands of avoidable accidents and casualties once developers start running thousands of complex and dangerous scenarios thousands of times over. Worse yet, it can take thousands of runs to train the system. Couple that with the millions of scenarios involved, and the fact that you cannot control the real world, and it would take one trillion miles, at an expense of over $300B, to complete this effort. That is impossible. (Before someone cites the NHTSA 2015 L2/L3 study that seems to counter this, take a look at its test method. It ignored the time it takes to regain proper situational awareness, and as a result missed the fact that monitoring and notification systems cannot mitigate it adequately.)
· V2X is too slow. The 10 Hz update rate discussed most often cannot accommodate a plethora of high-speed scenarios: not platooning trucks, nor vehicles passing in opposite lanes with no median, for example. If transmission quality is not over 99% and retransmission of data is needed, there are scenarios under which 120 Hz is required. Beyond that, using 10 Hz will make those situations far worse and can even create accidents, because every vehicle in the chain would be receiving late data it believes to be correct, causing a domino effect of wrong actions.
· No current sensor technology, nor any combination of them, can handle extreme weather. As a matter of fact, it appears to me that cameras and LiDAR will never work well in a blizzard or driving rain, for example. The technology I believe would have to be used, 3D radar, may not scale, especially in cost.
· None of the hardware systems is anywhere near reliable enough; they fall far below the critical and redundant systems in vehicles now. While this is somewhat understandable during engineering, these vehicles are being used in the real world.
· These systems are susceptible to hacking and weaponization, especially products from companies like comma.ai, which provides its code to customers.
· There are no testable criteria for ascertaining the minimal capabilities of these systems, especially as they relate to being as good as a human, let alone 2X or 10X better. No one has created or implemented a Scenario Matrix or Taxonomy, especially for the geofenced applications already being fielded.
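To make the V2X update-rate concern above concrete, here is a back-of-envelope sketch. The 120 mph closing speed is an illustrative assumption of mine (e.g., two vehicles approaching head-on at 60 mph each on an undivided road); the 10 Hz and 120 Hz rates are the ones discussed in the text.

```python
# How far does the gap between two vehicles close between consecutive
# V2X broadcasts? Assumed closing speed: 120 mph (illustrative).

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def gap_closed_between_updates(closing_speed_mph: float, rate_hz: float) -> float:
    """Meters the gap closes during one V2X update interval."""
    closing_speed_mps = closing_speed_mph * MPH_TO_MPS
    return closing_speed_mps / rate_hz

gap_10hz = gap_closed_between_updates(120, 10)    # position data up to ~5.4 m stale
gap_120hz = gap_closed_between_updates(120, 120)  # under half a meter stale

print(f"10 Hz:  {gap_10hz:.2f} m per update interval")
print(f"120 Hz: {gap_120hz:.2f} m per update interval")
```

At 10 Hz, every vehicle in the chain is acting on position data that can be several meters out of date, which is the staleness that drives the domino effect described above.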
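The shadow-driving mileage claim above can also be sanity-checked with simple arithmetic. The one-trillion-mile and $300B figures are the article's estimates; the fleet size and average speed below are illustrative assumptions of mine.

```python
# Rough feasibility check on public shadow driving as a development method.
TOTAL_MILES = 1e12       # miles estimated as required (per the article)
TOTAL_COST = 300e9       # dollars (per the article's estimate)
FLEET_SIZE = 10_000      # assumed test vehicles running around the clock
AVG_SPEED_MPH = 40       # assumed average speed
HOURS_PER_YEAR = 24 * 365

cost_per_mile = TOTAL_COST / TOTAL_MILES
miles_per_vehicle_year = AVG_SPEED_MPH * HOURS_PER_YEAR
years_needed = TOTAL_MILES / (FLEET_SIZE * miles_per_vehicle_year)

print(f"Implied cost per mile: ${cost_per_mile:.2f}")
print(f"Years for a {FLEET_SIZE:,}-vehicle fleet driving 24/7: {years_needed:,.0f}")
```

Even a 10,000-vehicle fleet driving nonstop would need roughly three centuries to log the miles, which is the sense in which the effort is impossible on calendar grounds alone.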
If the AV community does not remedy these issues, you will, in very short order, create a far, far worse situation than the one you are trying to avoid. You will take lives and never save the lives you intended to save. That reckoning may begin when the first child or family is lost. The world will then figure out all the things I have listed here, which will lead to mistrust, a feeling of betrayal, and the belief that the engineers doing this work are not competent. That, in turn, will lead to far more delay and regulation than if the industry had self-policed. While Waymo’s recent paradigm shift and its calling out of folks like Tesla is a huge step, it is not nearly enough. A moratorium on handover/L2/L3 and on most public shadow driving for AI has to be put in place. Next, the industry has to create those minimal standards and use as much aerospace-level simulation and test-track work as possible. It also needs to improve the reliability of these systems, fix the V2X update rate, and make these systems hacking- and weaponization-proof.
The question now is . . . who is going to step up and make this happen?
For more details on the issues and solutions, please see my articles here:
Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI
Who will get to Autonomous Level 5 First and Why