Michael DeKort
Aug 22, 2021


Fridman is a hack.

Lex Fridman, MIT Deep Learning Research Scientist, is Misleading his Students and putting them at Risk

https://medium.com/@imispgh/lex-fridman-mit-deep-learning-research-scientist-is-misleading-his-students-and-putting-them-at-b600f203c425

Your safety data has been debunked, and it does not include the cases where the needless human guinea pigs disengage and save the system, themselves, and others. That happens far more often than the reverse.

https://www.thedrive.com/tech/26455/nhtsas-flawed-autopilot-safety-study-unmasked

Yes, some scenarios are learned at disengagement, but not threads. And most drivers punch out before that point, so there are still plenty of scenarios left to kill people needlessly.


Written by Michael DeKort

Non-Tribal Truth Seeker-IEEE Barus Ethics Award/9–11 Whistleblower-Aerospace/DoD Systems Engineer/Member SAE Autonomy and eVTOL development V&V & Simulation
