The Messenger Cancellation Failed: LinkedIn Reverses Request to Remove My Aurora Post, and Aurora's Telling Response to USDOT VOICES

On 6-10-2021, I received a notice from LinkedIn that the following post had been removed from Nat Beuse's Aurora thread because it violated LinkedIn policy.

“Aurora is trying to buy credibility and trust vs earn it. And it is doing so by misleading and manipulating the public. The way to do this right is to do exactly the opposite. Use the right development approach and show the public scenarios learned and disengagements. Data vs hype.

Aurora’s “Unshakable Safety Culture” relies on Aerospace Safety Experts it Forbid to Talk to Me. Then Heidi King Makes It Worse"

(Link to that thread —

Here is what I sent to LinkedIn challenging the removal of my post:

I realize you cannot look into the details of both sides of most issues. I must respectfully request that you either do so here or not allow Aurora to post that its autonomous vehicle development process is safe. It literally uses humans as needless test subjects. (Not unlike Tesla, only Tesla is far more egregious for several reasons.) What I stated in the post is factually accurate in that regard and regarding David Carbaugh. I will be glad to show you the emails. Having said this, if there is different wording I can use that doesn't change the meaning, I am open to it.

I earned the IEEE Barus Ethics Award, presented to me in Congress by Rep. Cummings. I am featured in several books on ethics and in a documentary called War on Whistleblowers. I am protecting the public here as well.

Here is an article of mine explaining the basic issue and my relevant background:

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aerospace/DoD/FAA simulation, as a senior program manager and then the software engineering manager for all of NORAD, as a program manager on the Aegis Weapon System, as a C4ISR systems engineer for the DHS Deepwater program, and as the lead C4ISR engineer for the counter-terrorism team at the US State Department. I am now CEO/CTO at Dactle.

Industry Participation — Air and Ground
— Founder SAE On-Road Autonomous Driving Simulation Task Force
— Member SAE ORAD Verification and Validation Task Force
— Member UNECE WP.29 SG2 Virtual Testing
— Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)
— Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation
— Member CIVATAglobal — Civic Air Transport Association
— Stakeholder for UL4600 — Creating AV Safety Guidelines
— Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee
— Recipient of the IEEE Barus Ethics Award for post-9/11 DoD/DHS efforts

LinkedIn’s Response

We received your request to take a second look. If we find your content doesn’t go against our Professional Community Policies, we’ll put it back on LinkedIn.

Messenger Cancellation Failed

First, good for LinkedIn. I have no idea which way LinkedIn ultimately went regarding my request, but it doesn’t matter: LinkedIn allowed for a level playing field. In addition to my pedigree information above, there is one other important piece. In 2006 I became the first person to use YouTube as a whistleblower. How do I know I was the first? All the major news outlets, except Fox, ran stories on it. I appeared in GQ’s Man of the Year edition, and I visited YouTube a couple of months later, where we discussed it. All of this might mean I am the first documented internet “troll.” That, of course, brings me to the definition of “troll.” All too often it is used as a cancellation copout by those who cannot objectively or successfully respond to challenges. I believe that is what happened here. I assure you that if I had been as energetic in agreeing with Aurora, I would have received a “like.” The post-9/11 whistleblowing video I posted to YouTube in 2006 and the “trolling” I do now show one important thing: the internet can level the playing field between those who have money and power and those who do not.

Way Forward

Nat Beuse and Aurora, please respond to my objective point of view and push back. Also, please provide proof of safety: justify the “safety drivers” you use and produce information on scenarios learned and all disengagements. Better yet, let’s all get on a live podcast: Nat, Chris, Sterling, and your new safety team members, especially the two from aerospace. I suppose you could block me, but I would fight that with LinkedIn, and probably legally and in the press. Also, I am actually trying to help you here. You are headed toward bankruptcy and harming people for no reason. I realize my approach can be off-putting, and egos are involved here, but let’s get beyond that. After all, if my approach were the real issue, you would have already made the development paradigm shift and then simply stated that my approach need not have been so direct. (I have said before that I find my approach unfortunate. I use it because it is the only thing that has a shot at working. Echo chambers and egos don’t flip course 180 degrees and admit they made a mistake because someone makes suggestions or says please. Look at human history: it usually takes an increasing severity of tragedies, press coverage, and laws. About a dozen people have already died needlessly for an untenable development method. Do we really need the first child or family to die? While Tesla is way off the rails, all the rest using the same method have to harm people by design. Many use a better sensor system and a more responsible Agile ODD approach, so they will harm fewer people, but they will still harm many needlessly.)

Nat Beuse/Aurora’s Response to VOICES

As I write this a little birdie sent me a Bloomberg article that is beyond apropos. (Full disclosure, I have been talking with the journalist Gabrielle Coppola.)

Hyperdrive Daily: AV Safety Is Whatever (Insert Self-Driving Company Name) Says It Is —

In the article, Nat Beuse/Aurora and Gabrielle make the following statements about government involvement in standards and verification of performance, as well as VOICES’ approach to it. (An approach I not only support but am assisting with as a stakeholder.)

“I spoke to Aurora’s head of safety, Nat Beuse, who also happens to be a former NHTSA official, for his take: Where are all these safety papers headed? Will we converge on one standard some day?

“If everybody’s looking for the one standard, we’re going to be in really big trouble,” he said. “We have metrics we’re looking at, but we need to solve for proving to ourselves that they actually result in meaningful safety.”

In other words: We don’t yet know what meaningful safety looks like for an AV — we’re still figuring out how to measure and prove it. So please don’t write rules that box us in, let us do the work first.

NHTSA is basically doing that, but safety advocates argue AVs are already on the road, and the government needs to demand more disclosure and start to build the data and capacity to oversee it.

To that end, there is a wonky little proposal that was floated by the Department of Transportation last November called VOICES. Borrowed from the Department of Defense, it’s basically a software platform that lets federal contractors making a plane, for instance, upload performance data so that it can be verified, while also protecting the intellectual property. Proponents say the industry could use such a platform to create a common language that lets all sorts of machines — smart traffic lights, cars, drones, UAVs, “talk” to each other, and in the process, develop collective safety standards cheaply and more efficiently than the billions of dollars each company is throwing at it now.

Beuse describes VOICES as regulation in sheep’s clothing — designed to look like a nifty software tool that industry can use, but really, it’s a way for Uncle Sam to check their math.

“Five years from now, what does that actually look like?,” he asked. “My understanding of DOD land is, it’s how the government evaluated a widget from a supplier. That’s a different kettle of fish than, ‘Everybody’s in this learning and sharing environment.’”

So I’m not holding my breath for a single definition of what a “safe” AV is anytime soon. Tech and car companies will be the ones to figure it out — we’re just along for the ride.”

Wow, that seems like a lot to unpack. But there isn’t much to it. Let me restate what Beuse/Aurora said in plain English: we have no interest in the world seeing that the emperor is naked regarding our actual lack of capability or understanding of how to develop this system properly.

What VOICES does is provide a simulation networking system, based on DoD architectures that have existed for decades, in the form of message structures such as DIS, HLA, and DDS, as well as scenario control. The point is both to help multiple companies develop and test together AND to give the government insight into how things are going, so it can be better informed when it is time to create a “driver’s test” for the machine driver. (More on that in a minute.) Keep in mind, though, that Aurora does not use ANY of this technology, especially the greatly enhanced aerospace simulation, modeling, and real-time technology that Aurora and the industry need to effect the development paradigm shift required for them to be viable. (Note: VOICES is starting off with CARLA, meeting the companies where they are. Its plan, however, clearly notes this has to be improved.) So Beuse is right. But what is the problem with the government checking the math? (Which, by the way, is code for scenario and disengagement performance.) All the government wants is for the machine driver to undergo a driver’s test. And by the way, VOICES has no charter to judge anything; that “math check” is purely informational. When the government gets around to establishing the driver’s test, it will be through legislation that the SAE ORAD V&V Task Force, in which Aurora and every AV maker and OEM can participate, assists the government in creating. (I am on that task force as well as the one for simulation.)
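To make the interoperability idea above concrete, here is a minimal sketch of what a DIS-style entity state message looks like at the wire level. DIS (IEEE 1278.1) is real and its Entity State PDU carries far more fields than this (entity type records, dead-reckoning parameters, appearance, articulation); the field layout, names, and codes below are a radically simplified, hypothetical illustration of the general pattern, not the actual standard format.

```python
import struct

# Hypothetical, simplified DIS-style message: just a version byte, a PDU
# type byte, the 3-part entity ID (site, application, entity), and a
# 3-double world position. Real Entity State PDUs are much larger.
HEADER_FMT = ">BBHHH3d"  # big-endian, fixed layout: 1+1+2+2+2+24 = 32 bytes

DIS_VERSION = 7          # illustrative protocol-version value
PDU_ENTITY_STATE = 1     # illustrative PDU-type code

def pack_entity_state(site, app, entity, x, y, z):
    """Pack a simplified entity-state message as bytes for the network."""
    return struct.pack(HEADER_FMT, DIS_VERSION, PDU_ENTITY_STATE,
                       site, app, entity, x, y, z)

def unpack_entity_state(buf):
    """Decode the simplified message back into a dictionary."""
    version, pdu_type, site, app, entity, x, y, z = struct.unpack(HEADER_FMT, buf)
    return {"version": version, "pdu_type": pdu_type,
            "entity_id": (site, app, entity), "position": (x, y, z)}

# One simulator publishes its vehicle's state; another decodes it.
msg = pack_entity_state(site=42, app=1, entity=1001,
                        x=1113194.0, y=0.0, z=6310277.0)
state = unpack_entity_state(msg)
```

Because every participant agrees on the same message layout and codes, independently built simulators can exchange entity state without sharing source code or models, which is exactly the property that lets a VOICES-style environment federate tools while protecting each company's intellectual property.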

Again, lots of whining and transparent dodging, hype, and misleading statements from Beuse and Aurora. If they were doing the right thing the right way, would all of this be occurring? Wouldn’t they trip over themselves to fully disclose how stellar their development method and progress are? Or, as Gabrielle says: “So I’m not holding my breath for a single definition of what a ‘safe’ AV is anytime soon. Tech and car companies will be the ones to figure it out — we’re just along for the ride.”

More on my POV, including how to resolve this

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now


SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation


Systems Engineer, Engineering/Program Management -- DoD/Aerospace/IT - Autonomous Systems Air & Ground, FAA Simulation, UAM, V2X, C4ISR, Cybersecurity