Why are Autonomous Vehicle makers relying on Deep Learning over Dynamic Sense and Avoid with Dynamic Collision Avoidance? It seems very inefficient and needlessly dangerous.
I would like to understand why Deep Learning folks are hyper-detecting most objects. Humans use as little information as possible to determine an object's identity and assumed movement. We play the odds, which are massively in our favor, then use our processing time to rescan for any assumption being invalidated. We do not focus on small parts of things and then get lost or jammed up by strange patterns or colors. Consider the stop sign test, the false lane markers created with two pieces of white tape, or the AVs frozen by T-shirts with odd patterns and colors on them.
Example: when we approach a city street, we assume people are people based on outlines and location, meaning objects with those general shapes at that location are very likely to be people. We then assume they will not run out in front of us, and we rescan to ensure those very high probability assumptions are not being invalidated. We do not focus on the color of skin, clothing, etc.
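The assume-then-rescan strategy described above can be sketched in code. This is a minimal, hypothetical illustration (all names and thresholds are my own, not from any real AV stack): commit to a cheap, high-probability assumption based on coarse outline and location, then spend the saved compute checking whether that assumption has been invalidated.

```python
# Hypothetical sketch of "assume, then rescan to invalidate".
# A "frame" maps track IDs to (outline, location, moving_into_road) tuples
# produced by a coarse sensor front end; no fine-grained classification.

def looks_like_person(outline, location):
    """Coarse check only: a general person-like shape at a plausible spot."""
    return outline == "person_shaped" and location == "sidewalk"

def rescan_loop(frames):
    """Yield an alert only when an earlier high-probability assumption fails."""
    assumptions = {}  # track_id -> assumed behavior
    for frame in frames:
        for track_id, (outline, location, moving_into_road) in frame.items():
            # One cheap decision on first sight; no fabric patterns, no colors.
            if track_id not in assumptions and looks_like_person(outline, location):
                assumptions[track_id] = "will_stay_off_road"
            # Rescan: has the assumption been invalidated?
            if assumptions.get(track_id) == "will_stay_off_road" and moving_into_road:
                yield (track_id, "assumption_invalidated")
```

The point of the sketch is the asymmetry: the expensive work happens once, and every subsequent frame only has to answer the much cheaper question "is the assumption still holding?"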
While there would clearly be reasons to do some hyper-detection and classification for specific objects, that should not be the rule. Doing things this way will keep you from ever finishing, and you will harm people for no reason. For example, do you plan to scan all fabric patterns on the planet in various lighting and weather conditions? And group them together to ensure that when grouped tightly they are not an issue? Deep learning requires massive processing and can be fooled by patterns.
Shouldn’t folks be using Dynamic Sense and Avoid with Dynamic Collision Avoidance, augmented with Deep Learning where needed? AV mining vehicles and some slow-speed shuttles detect any object, don’t classify it at all, and apply the brakes. In other use cases, determining whether the object is a human, with no further classification, is all that is needed. In some cases deep learning is required, such as learning specific signs and vehicles like an ambulance. The answer should be to do as little work as possible, to avoid overworking the system and producing false positives or negatives.
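The tiered approach above (detect-only braking, coarse human-or-not, and full classification only for special objects) can be sketched as follows. This is a simplified illustration under my own assumptions; the tier names, the time-to-collision threshold, and the `Track` fields are hypothetical, not taken from any deployed system.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    DETECT_ONLY = 1     # e.g., mining vehicles, slow-speed shuttles
    HUMAN_OR_NOT = 2    # coarse outline check, nothing more
    FULL_CLASSIFY = 3   # special objects: signs, ambulances, etc.

@dataclass
class Track:
    distance_m: float        # range to the object
    closing_speed_mps: float # positive when the gap is shrinking
    outline: str             # coarse shape label from the sensor front end

def time_to_collision(track: Track) -> float:
    if track.closing_speed_mps <= 0:
        return float("inf")  # not closing: no conflict
    return track.distance_m / track.closing_speed_mps

def respond(track: Track, tier: Tier, brake_threshold_s: float = 3.0) -> str:
    """Do the least work necessary: detect, maybe coarsely classify, act."""
    if time_to_collision(track) > brake_threshold_s:
        return "continue"              # no conflict; keep rescanning
    if tier is Tier.DETECT_ONLY:
        return "brake"                 # any object in the path: stop
    if tier is Tier.HUMAN_OR_NOT:
        # outline only; no fabric patterns, no colors
        return "yield" if track.outline == "person" else "brake"
    # Tier.FULL_CLASSIFY: only here is a deep-learning classifier invoked
    return "classify_then_act"
```

Note that the expensive deep-learning path is reached only for the small set of objects whose identity actually changes the response; everything else is handled by detection geometry alone.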
You can find more in my articles here:
Proposal for Successfully Creating an Autonomous Ground or Air Vehicle
Without DoD simulation technology Autonomous Vehicles cannot be created or created Legally
Using the Real World is better than Proper Simulation for AV Development — NONSENSE
- https://medium.com/@imispgh/using-the-real-world-is-better-than-proper-simulation-for-autonomous-vehicle-development-nonsense-90cde4ccc0ce
Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used
- https://medium.com/@imispgh/simulation-can-create-a-complete-digital-twin-of-the-real-world-if-dod-aerospace-technology-is-used-c79a64551647
The Hype of Geofencing for Autonomous Vehicles
SAE Autonomous Vehicle Engineering Magazine — End Public Shadow/Safety Driving
Former systems engineer, engineering manager, and program manager for Lockheed Martin, including work on aircraft simulation and serving as the software engineering manager for all of NORAD and the Aegis Weapon System.
Key Autonomous Vehicle Industry Participation
- Lead — SAE On-Road Autonomous Driving (ORAD) Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- SME — DIN/SAE International Alliance for Mobility Testing & Standardization group to create sensor simulation specs
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee
- Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS efforts
My company is Dactle. We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based development and testing system with an end-state scenario matrix to address all of these issues. We can supply all of the scenarios, the scenario matrix tool, the data, the integrated simulation, or any part of this system: a true all-model-type digital twin. If someone would like to see a demo or discuss this further, please let me know.