Disengagements and Miles Driven Mean Almost Nothing

Disengagements and miles driven mean almost nothing without scenario and root-cause data; they are often misleading. What actually matters is which scenarios have been learned, how many remain, and what the complete set looks like. Have you ever seen that published? No one has, and there is a reason.

1,000 disengagements could be 100 distinct failures occurring 10 times each, or 10 failures occurring 100 times each. And what are the root causes? How easy or difficult are the fixes? A single disengagement could force a major architecture change and/or relearning a vast number of scenarios.
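To make that concrete, here is a minimal sketch (the cause IDs and counts are invented for illustration) showing how an aggregate disengagement count hides the distribution of root causes:

```python
from collections import Counter

# Hypothetical logs: each disengagement is tagged with a root-cause ID.
# Both fleets report the same headline number of 1,000 disengagements.
fleet_a = [f"cause_{i}" for i in range(100) for _ in range(10)]  # 100 causes x 10
fleet_b = [f"cause_{i}" for i in range(10) for _ in range(100)]  # 10 causes x 100

for name, log in [("A", fleet_a), ("B", fleet_b)]:
    causes = Counter(log)
    print(f"Fleet {name}: {len(log)} disengagements, "
          f"{len(causes)} distinct root causes")
```

Without the per-cause breakdown the two fleets look identical; with it, Fleet A clearly has far more unresolved scenarios than Fleet B.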

Even if miles were a good indicator, with roughly one trillion to drive you would have to create a sim-miles-to-road-miles equivalence metric. Sim miles are clearly far more productive than real miles. Given that, do the ratio and the current productivity projection allow all the scenarios to be run in a couple of years?
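That question is simple arithmetic. The sketch below is a back-of-envelope illustration only: the function name, parameters, and every figure are assumptions, not anyone's published model.

```python
# Hypothetical back-of-envelope using the oft-cited ~1 trillion miles.
TARGET_MILES = 1_000_000_000_000

def years_to_finish(road_miles_per_year, sim_speedup, sim_share):
    """Effective miles per year when sim_share of the work runs at
    sim_speedup x road productivity and the rest is driven on the road."""
    effective = road_miles_per_year * ((1 - sim_share) + sim_share * sim_speedup)
    return TARGET_MILES / effective

# e.g. 10M road-equivalent miles/year of capacity, sim 10,000x more
# productive than the road, and 99% of the work done in simulation:
years = years_to_finish(10_000_000, 10_000, 0.99)
print(f"{years:.1f} years")
```

Even with these generous invented numbers the answer works out to roughly a decade, not a couple of years, which is exactly why the equivalence metric has to be made explicit.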

While it is also true that non-DIL (driver-in-the-loop) systems can be run faster than real time, what about the scenarios for AI training and test that mandate full-motion DIL? If those were the only scenarios left to learn and test, what is the productivity projection? (The ratio of simulation "miles" or scenarios to on-road "miles" or scenarios needs to exceed 99%, or it will be difficult to get to L4.)
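Because full-motion DIL runs cannot be accelerated past wall-clock time, the throughput of the remaining DIL-only scenarios is bounded by rig count and operating hours. A hypothetical sizing sketch (every number below is invented for illustration):

```python
# Hypothetical: full-motion DIL runs in real time, so throughput = rigs x hours.
def dil_years(scenarios, minutes_per_run, runs_per_scenario, rigs,
              hours_per_rig_per_day=20):
    """Calendar years to clear a DIL-only scenario backlog."""
    total_hours = scenarios * runs_per_scenario * minutes_per_run / 60
    return total_hours / (rigs * hours_per_rig_per_day * 365)

# e.g. 1M DIL-only scenarios, 10 minutes each, 20 variations, 20 rigs:
print(round(dil_years(1_000_000, 10, 20, 20), 1))
```

With these invented figures the backlog takes over two decades to clear, which is why the real-time DIL slice dominates any honest productivity projection.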

Finally, is there a list of target scenarios you need to learn and test to reach full L4? Is that list complete? Does it prove you did your due diligence?

Beyond this, most AV makers will never get close to L4. (Waymo's recent paradigm shift demonstrates this.)

Issues

  • Most AV makers are using public shadow driving rather than proper aerospace-level simulation for most AI training and testing. That path will not get close to L4.
  • L2+/L3 cannot be made consistently safe.
  • Most AV makers and OEMs use inferior simulation. You have to integrate the AV sensors and full-motion DIL in actual real time for many core scenarios.
  • You have to create a scenario matrix, with a major part of that effort coming from the top down. You cannot drive around and stumble on most of the scenarios.
  • Accident scenarios are not corner/edge cases. All plausible scenarios, regardless of outcome, need to be learned and tested.

More details on these topics

Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI

https://www.linkedin.com/pulse/autonomous-levels-4-5-never-reached-without-michael-dekort

Autonomous Vehicle Testing — Where is the Due Diligence?

https://www.linkedin.com/pulse/autonomous-vehicle-testing-where-due-diligence-michael-dekort/

Corner or Edge Cases are not Most Complex or Accident Scenarios

https://www.linkedin.com/pulse/corner-edge-cases-most-complex-accident-scenarios-michael-dekort/
