Lex Fridman interviews George Hotz — Both are brilliant and severely misguided
Video link — https://www.youtube.com/watch?v=iwcYp-XT7UI
I found this video extremely interesting. I believe these gentlemen express the thinking of most of the industry at this time, which is very unfortunate. I also think they both mean well. The problem is they can't see the forest for the trees. They either don't understand, or choose to ignore, that public shadow and safety driving is untenable from a time, cost, and safety point of view. They did not mention accident scenarios a single time. Why? Because they don't grasp that this breaks their approach, or to avoid this video being used in discovery later? I assume it is the former, because over and over they state that humans can regain situational awareness in time during a handover event in all scenarios, if the handover is done right. That is absolute nonsense in complex and time-critical scenarios. It's as if they think humans have premonition and will avoid those situations.
Finally, they also have no idea what proper simulation can do, and that, while it will not be simple, it is doable. Through its use you can actually reach a legitimate L4/L5 and quantify it. (Hotz appears to understand you cannot use gaming engines?) I would be glad to provide proof of this if these gentlemen would like to reach out to me. Given that the process they espouse requires thousands of kamikaze drivers, and will take or ruin the lives of thousands of people needlessly, the least they could do is take an hour to go through it.
The Interview
The autonomous vehicle section starts at 26:45. Some of my notes, with timestamps, are below.
Right off the bat Hotz says Tesla AP is worse than a human except for lane keeping and ACC. (Which is ridiculous, because tragedy after tragedy and video after video show Teslas struggling with lane keeping, especially when everything is not pristine.)
41:00 — Handover discussion
- They never address that no driver monitoring system can give the driver enough time to regain proper situational awareness and do the right thing the right way in complex and time-critical scenarios (see the worked numbers after this list).
- They never discuss how they will get people to sacrifice themselves to train accident scenarios.
- Elon will not get L5 this year
- Elon’s plan to not have driver monitoring is dumb
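To make the handover point concrete, here is a back-of-the-envelope sketch in Python. The speed and takeover times are my own illustrative assumptions (published takeover studies report figures from roughly 3 to over 20 seconds), not numbers from the interview:

```python
# Back-of-the-envelope: distance covered while a disengaged driver regains
# situational awareness after a handover alert. Illustrative numbers only.

MPH_TO_MPS = 0.44704  # miles per hour -> meters per second

def distance_traveled_m(speed_mph: float, takeover_secs: float) -> float:
    """Meters covered between the handover alert and the driver acting."""
    return speed_mph * MPH_TO_MPS * takeover_secs

# Takeover studies report roughly 3 to 20+ seconds to regain proper
# situational awareness; at 70 mph the car covers ~31 m every second.
for secs in (3, 10, 20):
    print(f"{secs:>2} s at 70 mph -> {distance_traveled_m(70, secs):5.0f} m")
# ->  3 s ->  94 m, 10 s -> 313 m, 20 s -> 626 m: far more road than most
# complex, time-critical scenarios allow.
```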
50:00 — They discuss how radar cannot tell a stopped car from a light pole when static.
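To illustrate why (a minimal sketch; the names and threshold are invented, not any vendor's pipeline): a stopped car and a light pole both produce returns with zero ground speed, so the clutter gate that removes poles and signs removes the stopped car too:

```python
# Why a radar tracker drops both a stopped car and a light pole: each return
# has zero ground speed, and near-stationary returns are gated out as clutter.
# Names and threshold are illustrative, not any vendor's actual pipeline.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float         # distance to the reflector
    range_rate_mps: float  # closing speed (negative = approaching)
    ego_speed_mps: float   # ego vehicle speed at measurement time

    @property
    def ground_speed_mps(self) -> float:
        # A return closing at exactly ego speed is stationary in the world frame.
        return self.ego_speed_mps + self.range_rate_mps

def keep_for_tracking(ret: RadarReturn, min_ground_speed: float = 0.5) -> bool:
    """The clutter gate: drop near-stationary returns. This is what also
    drops a stopped car, since its signature matches a light pole's."""
    return abs(ret.ground_speed_mps) > min_ground_speed

stopped_car = RadarReturn(range_m=80.0, range_rate_mps=-27.0, ego_speed_mps=27.0)
light_pole = RadarReturn(range_m=80.0, range_rate_mps=-27.0, ego_speed_mps=27.0)
print(keep_for_tracking(stopped_car), keep_for_tracking(light_pole))  # False False
```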
51:00 — Hotz discusses his simulation being "NOT UNITY based — can load in real state. Simulate what system would have done on historical data." (That is, simulators that can replay real data vs. those that cannot.) They also talk about occluded objects or areas.
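A minimal sketch of that log-replay idea, assuming a hypothetical log format of one JSON frame per line (field and function names are mine): recorded sensor frames are fed back through the current planner and its commands are compared to what the shipped system actually did:

```python
# Log replay ("resimulation") in miniature: run recorded sensor frames through
# the current planner and flag where it diverges from the commands the shipped
# system issued at drive time. File format and field names are hypothetical.

import json
from typing import Callable

def resimulate(log_path: str, planner: Callable[[dict], dict],
               steer_tol: float = 0.05) -> list[dict]:
    """Replay one JSON frame per line; collect divergences from the drive."""
    divergences = []
    with open(log_path) as log:
        for line in log:
            frame = json.loads(line)
            new_cmd = planner(frame["sensors"])  # what the new code would do
            old_cmd = frame["command"]           # what actually happened
            if abs(new_cmd["steer"] - old_cmd["steer"]) > steer_tol:
                divergences.append({"t": frame["t"], "new": new_cmd, "old": old_cmd})
    return divergences
```

Note this replay is open-loop: once the new plan diverges from the recorded drive, the logged sensor data no longer matches the world the new plan would have created. That is exactly the "counterfactual" problem they raise at 1:26.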
55:00 — Very interesting discussion from Hotz on perception tied to planning
57:00 — Simulators miss too much, per Fridman
1:00 — How hard is this?
1:07 — Hotz says promising L5 in the short run is wrong. (After he says L4 should be skipped, which is ironic given he doesn't skip L2 or L3.)
1:09 — Hotz says Waymo can do L4 in Phoenix. They overlook that objects from all over the world will show up there, especially clothing. And they NEVER mention handling accident scenarios.
1:17 — "Formal verify" is 10 million miles — absolute nonsense. That is nowhere near enough, AND their data is based on safety drivers bailing out their systems.
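To see why, apply the statistical "rule of three": with zero fatalities observed over N miles, the 95% upper confidence bound on the fatality rate is roughly 3/N. A quick sketch (rates are approximate; RAND's "Driving to Safety" analysis reached similar conclusions):

```python
# The statistical "rule of three": if zero fatalities are observed over N miles,
# the 95% upper confidence bound on the fatality rate is roughly 3/N.
# Rates are approximate, for illustration.

HUMAN_FATALITY_RATE = 1 / 100_000_000  # approx. US rate per vehicle mile

def rate_upper_bound(miles: float) -> float:
    """95% upper bound on fatality rate after `miles` with zero fatalities."""
    return 3.0 / miles

for miles in (10_000_000, 100_000_000, 300_000_000):
    bound = rate_upper_bound(miles)
    print(f"{miles:>11,} miles -> {bound:.1e}/mile "
          f"({bound / HUMAN_FATALITY_RATE:.0f}x the human rate)")
# 10M fatality-free miles only bounds the system at ~30x the human fatality
# rate; demonstrating better-than-human takes 300M+ miles, and the clock
# restarts with every significant software change.
```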
1:20 — LiDAR is a crutch, used for localization more than perception. The thing is, no one wants to prove whether LiDAR would have helped Tesla avoid six deaths.
1:26 — Any room for leapfrogging? They ask if there is a revolution coming in simulation. Hotz mentions there are three areas: static (needs maps/LiDAR), dynamic (moving objects), and the "counterfactual" — where the ego model changes the dynamics.
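One way to picture those three areas (my own structuring, for illustration only): the counterfactual is the hard one, because once the ego model behaves differently, the other agents must react rather than replay their logged trajectories:

```python
# Hotz's three simulation areas sketched as data structures. This is my own
# framing for illustration, not comma.ai's design.

from dataclasses import dataclass, field

@dataclass
class StaticLayer:
    """Road geometry and fixed objects, typically built from maps/LiDAR."""
    lanes: list = field(default_factory=list)
    fixed_objects: list = field(default_factory=list)

@dataclass
class DynamicLayer:
    """Other road users, replayed from logs or scripted."""
    agents: list = field(default_factory=list)

@dataclass
class CounterfactualScenario:
    """Static + dynamic, with agents that react to the ego's new behavior.
    Pure log replay breaks here: the recording only shows how agents reacted
    to the original drive, not to whatever the changed ego model does."""
    static: StaticLayer
    dynamic: DynamicLayer
    agents_react_to_ego: bool = True
```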
1:32 — Fridman hates safety engineers. Which would be fine in simulation, not in the real world with safety drivers. Then Hotz says the safety driver saving the car has no consequences. He never mentions the accident scenarios in which the safety driver will not be able to save it.
1:35 — Hotz says there is an L2 "safety model." Ridiculous. It relies on safety drivers. He says that is always safe, which is ridiculous; it assumes drivers can gather proper situational awareness in all scenarios. They also don't account for the fact that drivers will bail out to save their own lives and will never commit suicide for the system, so it will never learn beyond that point.
1:46 — Because they are L2, they pass the ethical dilemma to the human by disconnecting. Of course, that again assumes the driver can successfully take over in all scenarios.
1:47 — Critical ops should be handled by local data/sensors — I agree.
Please find more on my POV and some bio bits below:
Using the Real World is better than Proper Simulation for Autonomous Vehicle Development — NONSENSE
The Hype of Geofencing for Autonomous Vehicles
SAE Autonomous Vehicle Engineering Magazine — End Public Shadow Driving
All the Autonomous Vehicle makers combined would not get remotely close to L4
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.
Key Autonomous Vehicle Industry Participation
- Lead — SAE On-Road Autonomous Driving (ORAD) Model and Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Expert — DIN/SAE International Alliance for Mobility Testing & Standardization (IAMTS) group to create sensor simulation specs
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee (AI&ASPC)
- Presented with the IEEE Barus Ethics Award for post-9/11 efforts
My company is Dactle