Fridman is a hack.
Lex Fridman, MIT Deep Learning Research Scientist, Is Misleading His Students and Putting Them at Risk
Your safety data has been debunked, and it does not account for the cases where the needless human guinea pigs disengage and save the system, themselves, and others. That happens far more often than the reverse.
https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-safety-study-unmasked
Yes, some scenarios are learned at disengagement, but not threats. And most drivers punch out well before that point. That still leaves plenty of scenarios to kill people needlessly.