The Theranos echo chamber is much bigger in the Driverless Vehicle industry

Michael DeKort
Apr 1, 2022


For reference, please watch the series The Dropout or read John Carreyrou’s book “Bad Blood”.

Some time back I wrote an article on the similarities between Theranos and the driverless vehicle industry. (That article, with the list of similarities, is linked below.) What I would like to focus on here is the echo chamber: how much larger it is in the driverless vehicle industry, and how and why it was so easy to create compared to the Theranos debacle.

Unlike Theranos, the driverless vehicle industry has created some useful and genuinely impressive technology. The problem, of course, is that the technology works just well enough to make people believe its capabilities are far better than they are. (This is analogous to Theranos quietly using Siemens machines to process blood samples while presenting the results as its own.) Take Waymo, Cruise and Gatik, for example. They all say they now have “fully driverless” systems. Like Theranos, they tried an incremental development approach: instead of choosing easier diseases to detect, they chose easier Operational Design Domains (ODDs) to learn. However, unlike Theranos, which failed at pretty much everything, these AV systems handle a small subset of the required scenarios and objects quite well.

And that is the problem. They handle them well enough to play the crash odds, which lets them mislead the public by implying they have mastered all the relevant scenarios, including crashes and edge cases, to a full SAE L4. Why does this work? The average person in the US gets in a crash every 160,000 miles or so. Keep in mind that figure is spread across the entire US, in all conditions, not some small, well-mapped area with only a thousand or so cars driving around. That means it will be a long time before a crash occurs. Since it takes about 10 million miles per person to produce a death from a crash, that will take even longer.

Combine that with how cool this technology is, with social media, and with the dynamics of peer pressure in groups, and it is easy to see how this became the largest and most incorrect echo chamber in history. Getting the second billion people to believe is far easier than persuading the first dozen. Once Maslow’s hierarchy of needs is involved, especially around ego, belonging and money, it is quite easy to see how we got to where we are.
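The “playing the crash odds” arithmetic above can be sketched in a few lines. The per-mile figures (one crash per roughly 160,000 miles, one crash death per roughly 10 million miles) come from the paragraph above; the 30,000 miles per vehicle per year is my own illustrative assumption, not a reported number.

```python
# Back-of-the-envelope sketch of the crash-odds argument.
# Per-mile rates are the article's figures; annual mileage is an
# illustrative assumption.

MILES_PER_CRASH = 160_000      # avg. miles between crashes (article figure)
MILES_PER_DEATH = 10_000_000   # avg. miles per crash death (article figure)

miles_per_car_per_year = 30_000  # ASSUMED annual mileage for one robotaxi

# At human-driver rates, how long does one vehicle drive between events?
years_per_crash = MILES_PER_CRASH / miles_per_car_per_year
years_per_death = MILES_PER_DEATH / miles_per_car_per_year

print(f"Expected years between crashes for one car: {years_per_crash:.1f}")
print(f"Expected years between deaths for one car:  {years_per_death:.0f}")
```

Even under these rough assumptions, a single vehicle can drive for years before a crash is statistically expected, and for centuries before a fatality, which is why a small fleet can look flawless for a long time without proving anything.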

Of course, this should be easy to resolve. There is no reason to take anyone’s word for it; let’s just look at the “driver’s test” results. The problem is that there aren’t any. None of these companies supplies the data required to know how qualified these systems really are. Waymo actually sued, and won, to avoid providing even the limited data that was asked of it, which is far less than should be required. That leaves what seems to me an unlicensed, arguably illegal driver on our streets. (It is mind-boggling that the federal and state DOTs don’t get this.)

So what will actually resolve this issue? Some combination of time, to get through more of those miles I mentioned above, and Tesla. Tesla’s camera-only system design is far less competent, there are far more of the vehicles, there is no ODD, and the driver monitoring system alarm has a minimum delay of 20 seconds. That perfect storm will soon lead to the first of many deaths of a small child or family. That will lead to changes. But which ones? My fear is that Tesla is banned and nothing changes for anyone else. That very same echo chamber will continue to believe Tesla does something fundamentally different from the rest, as opposed to the truth, which is that Tesla is just far worse at the same untenable development and design approach. Tesla is only the most egregious, reckless and incompetent offender.

If, however, the echo chamber has an epiphany, maybe they ban Tesla and impose a driver’s test? The key question is what that test should comprise. It should include: a list of the scenarios and objects learned, including the relevant crash and edge cases, with all descriptors; a list of all disengagements, with each actual and possible crash identified, including in simulation; and proof that each of the models used in simulation has the right fidelity level compared to its real-world complement. (The learned-scenario testing data must also cover the sensor, perception and planning systems. The reason is that machine and deep learning can do the right thing for the wrong reason, or by luck. Unlike people, these systems do not yet infer well; current machine and deep learning processes are not up to that yet.)

This is where one might ask why Waymo, Cruise, Gatik and the rest of the industry don’t supply this data. Why sue to avoid producing it? Why not even suggest that some form of driver’s test is needed, even if they game it and produce a bad one? After all, wouldn’t a company that is telling the truth, and that did the actual due diligence, want to prove it and build trust?
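One way to picture the disclosure regime proposed above is as a set of structured records a regulator could require. This is a hypothetical sketch of mine; the field names are illustrative and do not come from any existing standard or filing.

```python
# HYPOTHETICAL sketch of the "driver's test" disclosure records proposed
# in the text. All names and fields are illustrative, not from a standard.
from dataclasses import dataclass, field

@dataclass
class ScenarioRecord:
    """A learned scenario or object, with all descriptors."""
    scenario_id: str
    description: str
    odd: str                                          # ODD it applies to
    descriptors: dict = field(default_factory=dict)   # weather, actors, speeds...
    crash_or_edge_case: bool = False
    tested_subsystems: list = field(default_factory=list)  # sensor/perception/planning

@dataclass
class DisengagementRecord:
    """A disengagement, with actual/possible crashes identified (incl. simulation)."""
    event_id: str
    in_simulation: bool
    actual_crash: bool
    possible_crash: bool
    cause: str

@dataclass
class SimModelFidelityRecord:
    """Evidence that a simulation model matches its real-world complement."""
    model_name: str                 # e.g. sensor, tire, vehicle-dynamics model
    fidelity_level: str
    validation_evidence: str        # how real-world correlation was shown
```

A filing under such a regime would simply be lists of these records, which is exactly the data the article argues none of these companies currently produces.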

For more detail, please see my other articles, including on how to develop and test these systems properly.

The Driverless Vehicle Industry is in Danger of becoming Worse than Theranos


The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now to create this technology


How the failed Iranian hostage rescue in 1980 can save the Autonomous Vehicle industry


My name is Michael DeKort. I am a Navy veteran (ASW C4ISR) and a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, a software project manager on an Aegis Weapon System baseline, and a C4ISR systems engineer for DoD/DHS and the US State Department (counterterrorism). I was also a Senior Advisory Technical Project Manager for FTI to the Army AI Task Force at CMU NREC (National Robotics Engineering Center).

Autonomous Industry Participation — Air and Ground

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member UNECE WP.29 SG2 Virtual Testing

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-35, Modeling, Simulation, Training for Emerging AV Tech

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Member Teleoperation Consortium

- Member CIVATAglobal — Civic Air Transport Association

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

The editor of SAE’s Autonomous Vehicle Engineering magazine called me “prescient” regarding my position on Tesla and the overall driverless vehicle industry’s untenable development and testing approach (Page 2)

Presented with the IEEE Barus Ethics Award for Post-9/11 DoD/DHS Whistleblowing Efforts


