Lex Fridman, MIT Deep Learning Research Scientist, Is Misleading His Students and Putting Them at Risk

Michael DeKort
Apr 1, 2019


I recently watched this episode of Lex Fridman’s series — MIT Self-Driving Cars: State of the Art (2019) — https://www.youtube.com/watch?time_continue=2019&v=sRxaMDDMWQQ

I find much of what Mr. Fridman says to be extremely concerning, not just because it is factually incorrect but because it is misleading to the point of negligence. I believe Mr. Fridman has so much bias toward Tesla and toward the use of public shadow and safety driving that what he says is not the objective, informed information he is supposed to present as an educator. It is unethical and dangerous: it misleads his students and audience, gives them false confidence in these systems, and as such puts them in danger.

Some quotes from the video, with my responses

“(Tesla) fatalities is not a large number”

  • To those families it is a very large number

“In order to design successful autonomous vehicles those vehicles have to take risks . . . and when the risks don’t pan out the public doesn’t understand the general problem we are tackling . . .”

  • The vehicles aren’t risking a thing. The human guinea pigs in and around them are.
  • I believe the families of those who were killed taking that risk, and for no reason, might say things went far worse than not “panning out.”

MIT has a “fully autonomous vehicle” that operates in a “particular location” and is “severely constrained.”

  • How is that an actual L4 autonomous vehicle? This is hype.

“Who will be first to deploy 10,000 L4 AVs?” . . . “Curmudgeons and the engineers say no one in the next 50 years will do it.” He goes on to say, “70 years of research showing humans are not able to maintain vigilance in monitoring a system. They tune out . . . they over-trust, they misinterpret, and they lack vigilance. It very well could be true, but what if it is not? We have to consider if it is not.”

  • NASA, Missy Cummings of Duke, a plethora of studies, and even several AV makers and OEMs have stated that safety driving, or the use of handover, is dangerous. The former have added that these systems cannot be made safe by any monitoring or alarm system in critical scenarios, because enough time cannot be provided to regain enough situational awareness to do the right thing the right way. Beyond this, it is impossible to drive the one trillion miles, or spend over $300B, to stumble and re-stumble on all the scenarios necessary to complete the effort, many of which are accident scenarios no one will want you to run once, let alone thousands of times. (A rough back-of-envelope sketch of the mileage math follows below.)
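To make the scale of that trillion-mile figure concrete, here is a minimal back-of-envelope sketch in Python. Only the one-trillion-mile target comes from the argument above; the fleet size and the per-vehicle annual mileage are hypothetical assumptions chosen purely for illustration, not figures from any study.

```python
# Back-of-envelope sketch: how long would a test fleet need to
# accumulate one trillion real-world miles?
# The target comes from the argument above; the fleet size and
# annual mileage per vehicle are hypothetical assumptions.

TARGET_MILES = 1_000_000_000_000          # one trillion miles

fleet_size = 1_000                        # assumed number of test vehicles
miles_per_vehicle_per_year = 100_000      # assumed annual mileage per vehicle

fleet_miles_per_year = fleet_size * miles_per_vehicle_per_year
years_needed = TARGET_MILES / fleet_miles_per_year

print(f"Fleet miles per year: {fleet_miles_per_year:,}")
print(f"Years to reach one trillion miles: {years_needed:,.0f}")
```

Under these illustrative assumptions the fleet logs 100 million miles per year and would need roughly 10,000 years to reach one trillion miles; even far more generous assumptions leave the total out of reach, which is the point of the argument above.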

Regarding his slides on LiDAR vs Tesla’s camera approach

  • His statements on the LiDAR-based approach are misleading. Those systems usually have cameras as well, AND they use deep learning too. This seems to be an effort to mislead people into thinking Tesla’s solution is much better than it actually is.

In closing, I believe Mr. Fridman’s conduct is extremely concerning. In addition to misleading people, providing them false confidence, and contributing to the deaths caused by this untenable process, I believe his conduct is so far over the line that it should be reviewed by the proper authorities at MIT.

Please find more, as well as my suggestions for resolving these issues with aerospace/DoD simulation technology and systems engineering, in my articles here:

SAE Autonomous Vehicle Engineering Magazine — End Public Shadow Driving

The Hype of Geofencing for Autonomous Vehicles

Common Misconceptions about Aerospace/DoD/FAA Simulation for Autonomous Vehicles

Remote Control for Autonomous Vehicles — A far worse idea than the use of Public Shadow “Safety” Driving

Written by Michael DeKort

Non-Tribal Truth Seeker-IEEE Barus Ethics Award/9–11 Whistleblower-Aerospace/DoD Systems Engineer/Member SAE Autonomy and eVTOL development V&V & Simulation