Corner or Edge Cases Are Not the Most Complex or Accident Scenarios
I am seeing a very disturbing trend. Folks in the autonomous vehicle industry are using the terms “Corner or Edge Cases” to include complex, dangerous and accident scenarios.
Here is the Wikipedia definition of a corner case — "In engineering, a corner case (or pathological case) involves a problem or situation that occurs only outside of normal operating parameters — specifically one that manifests itself when multiple environmental variables or conditions are simultaneously at extreme levels, even though each parameter is within the specified range for that parameter."
What folks are calling edge or corner cases are the core complex or dangerous scenarios that must be learned in the primary path. Call them exception handling cases or negative testing, but they are NOT edge or corner cases. Edge or corner cases would be cases outside the bounds of those normal operating cases. I say normal because these are scenarios that have to be learned because they will or can happen. Whether the scenarios are benign, complex or dangerous, they all have to be learned, and they all belong in the core scenario set. There is far, far, far more work to do to determine the "what ifs" than anything else. (Aerospace, DoD and NASA get this — for them it's normal ops. Commercial IT does not, since they do very little of any of this, except some for operating systems.)

If we allow corner or edge cases to be defined as some are starting to do, we will miss many, many scenarios. That practice will lead to false confidence, accidents and probably fatalities. That in turn will lead to lawsuits, the press, governments and the public doubting the technology and the competence of those involved. That will lead to bankruptcies, a massive delay in the tech coming to market, and probably an excessive amount of legislation.
There is a parallel issue I would like to address. I saw an AV maker say they had run a "myriad" of tests. Myriad is defined as "countless or an extremely great number". That number is not hundreds or thousands. It may be in the millions. Saying or thinking there are this few scenarios to learn and test will also lead to false confidence, to never getting close to L4, and to tragedy.
So how do we remedy the situation?
- First, we define these terms properly and don't use them as excuses not to do our work. (And yes, it is a large amount of work.) What we must not do is continue to act like the Wild West, where the industry isn't coordinated and governments think they can't create detailed tests until the tech is sorted out. That is all wrong and extremely counter-productive.
- The next thing we need to do is switch from public shadow driving for AI and testing to the primary use of aerospace-level simulation.
- The final thing that needs to happen is the creation of a top-down scenario matrix. (Scenarios found driving, not shadow driving, would clearly be very important.) Yes, that is a massive effort. Every object, and every degraded version of every object, would have to be accounted for. That includes moving objects, fixed objects, environmental conditions, laws and social cues, etc. Then you have to create the scenarios and the plethora of variations of those scenarios. Are the possibilities endless? Yes. Do we have the time or funds to find them all? No. What we can do is take the effort to a Six Sigma level, which is 99.99966%. Keep in mind that air travel in the US is at 6.4 sigma. (I believe there has never been a software-driven space tragedy.) If the goal for AVs is to be 10X better than a human, that is one accident every 1.6M miles in the US. That would bring the accident rate down from 40k per year to 4k per year. If you compare this to air travel, AVs at 10X would have 150 deaths per 10B miles, while air travel is still much safer at .2 deaths per 10B miles. While no one can be perfect, and no one is expected to be clairvoyant, we are expected as a world-wide community of experts to ensure everyone has expended every professional and ethical effort possible to know what is knowable. Anything less is unprofessional, unethical and untenable if you ever want to get close to L4 and save lives.
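To make the Six Sigma arithmetic above concrete, here is a minimal sketch in Python. The 99.99966% yield figure and the 40k-per-year and 10X figures are from the text above; the function names are my own, and the conversion to defects per million opportunities (DPMO) is standard Six Sigma arithmetic:

```python
# Sketch of the Six Sigma and 10X-improvement arithmetic discussed above.
# The 99.99966% yield and 40k/year figures come from the article;
# function names are illustrative, not from any particular library.

SIX_SIGMA_YIELD = 0.9999966  # 99.99966% success rate

def defects_per_million(yield_rate: float) -> float:
    """Convert a success rate into defects per million opportunities (DPMO)."""
    return (1.0 - yield_rate) * 1_000_000

def annual_count_at_improvement(annual_count: float, factor: float) -> float:
    """Project an annual accident count if AVs are `factor` times safer."""
    return annual_count / factor

print(f"Six Sigma = {defects_per_million(SIX_SIGMA_YIELD):.1f} defects per million")
print(f"40k/year at 10X safer = {annual_count_at_improvement(40_000, 10):,.0f}/year")
```

This is why "a myriad of tests" in the thousands is nowhere near enough: holding a scenario set to a 3.4-per-million defect rate only means something if the scenario set itself approaches the millions.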
For more detail on these issues, please see some of my other articles:
Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI
Autonomous Vehicle Testing — Where is the Due Diligence?