Why are autonomous vehicle makers using hyper-detailed deep learning object detection? It seems very inefficient and needlessly dangerous.

I would like to understand why deep learning practitioners hyper-detect most objects. Humans use as little information as possible to determine an object's identity and likely movement. We play the odds, which are massively in our favor, then use our remaining processing time to rescan for any assumption being invalidated. We do not focus on small parts of objects and then get lost or jammed up by strange patterns or colors. Examples of that failure mode include the stop-sign sticker tests, false lane markers created with two pieces of white tape, and AVs frozen by T-shirts with odd patterns and colors.

For example, when we approach a city street, we assume people are people based on outlines and location, meaning objects with those general shapes in those locations are very likely to be people. We then assume they will not run out in front of us, and we rescan to ensure those very-high-probability assumptions are not being invalidated. We do not focus on skin color, clothing, etc.
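The assume-and-rescan strategy described above can be sketched in code. This is a minimal, illustrative sketch only, not anyone's production perception stack: all class names, context labels, prior values, and thresholds here are assumptions chosen to show the idea of classifying coarsely from outline and location, attaching a prior, and spending later cycles only checking whether the assumption broke.

```python
# Illustrative sketch of "assume, then rescan": classify from coarse
# silhouette and context, assign a high-probability identity, then run a
# cheap revalidation pass instead of re-analyzing fine texture or color.
# All labels and probabilities below are hypothetical.

from dataclasses import dataclass

@dataclass
class Track:
    shape: str          # coarse silhouette class, e.g. "upright-biped"
    location: str       # coarse context, e.g. "sidewalk", "roadway"
    p_person: float     # prior probability that this is a person
    assumed_safe: bool  # assumption: will not enter our path

# Hypothetical priors keyed on outline + context only
PRIORS = {
    ("upright-biped", "sidewalk"): 0.98,
    ("upright-biped", "roadway"): 0.95,
    ("box", "roadway"): 0.10,
}

def coarse_classify(shape: str, location: str) -> Track:
    """Assign a likely identity from outline and context alone."""
    p = PRIORS.get((shape, location), 0.5)
    return Track(shape, location, p, assumed_safe=True)

def rescan(track: Track, moving_toward_path: bool) -> Track:
    """Cheap revalidation: only check whether the assumption broke."""
    if moving_toward_path:
        track.assumed_safe = False  # assumption invalidated; escalate
    return track

ped = coarse_classify("upright-biped", "sidewalk")
ped = rescan(ped, moving_toward_path=False)  # assumption still holds
```

The design point is that the expensive work (fine-grained recognition) is skipped entirely unless the cheap rescan invalidates the initial high-probability assumption.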

While there are clearly reasons to do some hyper-detection, it should not be the rule. Doing things this way will keep you from ever finishing, and you will harm people for no reason. For example, do you plan to scan every fabric pattern on the planet under various lighting and weather conditions, and group them together to ensure that, when grouped tightly, they are not an issue?

You can find more in my articles here:

Proposal for Successfully Creating an Autonomous Ground or Air Vehicle

· https://medium.com/@imispgh/proposal-for-successfully-creating-an-autonomous-ground-or-air-vehicle-539bb10967b1

Without DoD simulation technology Autonomous Vehicles cannot be created or created Legally

· https://medium.com/@imispgh/without-dod-simulation-technology-autonomous-vehicles-cannot-be-created-or-created-legally-50ba95fbf4e6

Using the Real World is better than Proper Simulation for AV Development — NONSENSE

Simulation can create a Complete Digital Twin of the Real World if DoD/Aerospace Technology is used

The Hype of Geofencing for Autonomous Vehicles

SAE Autonomous Vehicle Engineering Magazine — End Public Shadow/Safety Driving

Relevant Biography

Former systems engineer, engineering manager, and program manager at Lockheed Martin, including work in aircraft simulation and as the software engineering manager for all of NORAD and the Aegis Weapon System.

Key Autonomous Vehicle Industry Participation

- Lead — SAE On-Road Autonomous Driving (ORAD) Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- SME — DIN/SAE International Alliance for Mobility Testing & Standardization group to create sensor simulation specs

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

- Presented with the IEEE Barus Ethics Award for Post-9/11 DoD/DHS Efforts

My company is Dactle. We are building an aerospace/DoD/FAA Level D, full L4/L5 simulation-based development and testing system with an end-state scenario matrix to address all of these issues. We can supply all of the scenarios, the scenario matrix tool, the data, the integrated simulation, or any part of this system, a true all-model-type digital twin. If you would like to see a demo or discuss this further, please let me know.

Systems Engineer, Engineering/Program Management -- DoD/Aerospace/IT - Autonomous Systems Air & Ground, FAA Simulation, UAM, V2X, C4ISR, Cybersecurity