Sebastian Thrun needs to do some more homework on autonomy development and Chris Anderson

Michael DeKort
11 min read · Jun 14, 2021


Recently, Larry Page’s Kitty Hawk announced it had purchased 3DR and made Chris Anderson its head of autonomy. Sebastian Thrun also laid out his position on how autonomy development should be approached and on Chris Anderson’s place in that pursuit.

Article — Billionaire Larry Page’s Kitty Hawk Is Making An All-In Bet On Robot Air Taxis. Its Program Head Is Out In Disagreement Over It. https://www.forbes.com/sites/jeremybogaisky/2021/06/11/larry-page-kitty-hawk-air-taxi-sebastian-thrun/?sh=32df4d2c738c

There are some significant issues with Kitty Hawk’s and Thrun’s POV:

· Pursuit of an autonomy-first or autonomy-only design

· Utilizing remote control, especially multiple aircraft controlled by one pilot

· Exaggerating how far drone autonomy development carries over to aircraft, especially passenger aircraft, and to FAA certification

· Utilizing Chris Anderson to lead this effort, buying 3D Robotics and using it as a development base

· The right way to develop and test an autonomous system

Note — Forbes reports Kitty Hawk terminated Damon Vander Lind, the engineer who created the first Heaviside aircraft. Apparently, Vander Lind thought Thrun’s approach was too risky. The article does not mention the areas of risk, and I was unable to find more information in a search. To the degree Vander Lind’s objections mirrored mine, he should have been listened to.

Overall domain or Operational Design Domain (ODD) comment

The domain or ODD here is urban environments and passenger aircraft, not a military or commercial drone flying up high or in other far less complex environments. That complexity includes not just other objects and their proximity to you, but buildings and other structures and the turbulence caused by those buildings. This makes all the difference between how drone autonomy works and what is needed here. And by here I don’t mean the rosy, Agile, bottom-up, simple scenarios folks hype now. Engineers have a responsibility to engineer to the worst-case scenarios, not the “happy path” ones. Meaning edge or corner cases, exceptions, or plausible what-if scenarios. That is where actual due diligence and safety live. The industry has created UAM (Urban Air Mobility) Maturity Levels (UML) to describe the associated complexity, summarized in the sketch below. UML4 is hundreds of entities in flight at the same time, up to 999. UML5 is thousands of simultaneous flights, up to 9,999. And UML6 (which is ridiculous) is tens of thousands of entities flying in a city at the same time, up to 99,999. (None of this includes birds, by the way.) In what city do you think a human pilot is going to fly multiple aircraft with 999 simultaneous flights in the air and birds at UML4, let alone UML5?
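As a quick reference, here is a minimal sketch of those maturity-level traffic bands. The band ceilings come from the figures above; the helper function and its field names are my illustrative assumptions, not part of the industry framework.

```python
# UAM Maturity Level (UML) traffic bands, per the figures above.
# Only levels 4-6 are described in the text; the lookup is illustrative.

UML_MAX_SIMULTANEOUS_FLIGHTS = {
    4: 999,     # UML4: hundreds of entities in flight at once
    5: 9_999,   # UML5: thousands of simultaneous flights
    6: 99_999,  # UML6: tens of thousands over a single city
}

def uml_for_traffic(simultaneous_flights: int) -> int:
    """Return the lowest listed UML band whose ceiling covers the traffic."""
    for level, ceiling in sorted(UML_MAX_SIMULTANEOUS_FLIGHTS.items()):
        if simultaneous_flights <= ceiling:
            return level
    raise ValueError("Traffic exceeds the bands described here")

print(uml_for_traffic(500))    # 4
print(uml_for_traffic(2_500))  # 5
```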

Pursuit of an autonomy-first or autonomy-only design

While there are good reasons to push for and go straight to autonomy, and I support the sentiment, I believe the end may not justify several of the means.

Assuming the development approach is the right one, which most companies in ground and air autonomy do not use (more later), there are several significant issues with not using, or not having the capability to use, a human pilot.

The first issue is how long it would take to get to autonomy, and the revenue forgone for that period. It is absolutely impossible to leapfrog human-piloted aircraft, especially passenger aircraft, with autonomy. Trying to do so with a drone system, especially one based on PX4, will fatally injure people and decimate this industry. Beyond that is the time required to do this right, assuming the development process were the right one (more below). At best you are looking at 5 years for a very tight ODD geofence (location and weather). Most likely 8–10 years for anything significant. Many human-piloted aircraft will be in operation and making money by then.

The next issue is that even if you built a legitimate autonomous system, you would be eliminating any edge-case scenario where a human pilot would mitigate the situation. Let’s say something in the autonomous system fails, whether software or hardware, and the system can get the passengers to safety, but not in a planned spot or even a backup waypoint. And that spot is remote. Now let’s say manual flight would be possible, but there is no pilot. What are you doing with those passengers and that aircraft? I am sure if we look at the wide array of things that could happen, and have even happened before, we can come up with more examples. Other than for short-term marketing, it seems to me this approach is short-sighted and will backfire, especially with regard to public trust. Which, as you know, may only need to happen once. (Regarding the use of remote operations, especially where the aircraft runs into problems: this process has a place, but there are limitations, especially regarding latency in the urban domain and from UML4 and up due to complexity and density.)

Utilizing remote control, especially multiple aircraft controlled by one pilot

The article says the pilot on the ground will take care of flight when the autonomous system fails, and be able to do so with many aircraft at a time, apparently by communicating with air traffic controllers and controlling the aircraft themselves. Before I go into how preposterous and grossly negligent this is overall, let me first state an extremely limited use case where this may be acceptable. If the pilot has a direct connection to a single craft, not one riddled with cloud latency, and has complete visual acquisition, or better yet is in an FAA Level A full-motion simulator, it would be possible to fly the aircraft to safety.

Beyond this, most of the reasons this is reckless and untenable are in my limited use case description. The cloud would create far too much latency. (Yes, 5G too. Electrons only flow so fast through cloud hardware.) That delay would not only create control and response delays, it would also preclude a motion system for the pilot to sit on, because the delay would make them sick. That delay would also cause a catastrophic ripple effect, with the pilot reacting too late to events they already caused or are responding to, as the back-of-the-envelope numbers below suggest. As for taking this to a level where a human could somehow fly multiple aircraft at a time remotely in the public domain, I should not have to explain how ridiculous that is.
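To make the latency point concrete, here is a back-of-the-envelope sketch. Every number in it is an assumption chosen for illustration, not a measurement, and the ~100 ms motion-cueing budget is a commonly cited rough figure rather than a regulatory limit.

```python
# Back-of-the-envelope control-loop latency for a cloud-routed remote pilot.
# All numbers below are illustrative assumptions, not measured values.

video_encode_ms = 50      # assumed onboard camera encode time
uplink_ms = 30            # assumed aircraft -> cloud network latency
cloud_processing_ms = 20  # assumed routing/processing in cloud hardware
downlink_ms = 30          # assumed cloud -> ground station latency
display_ms = 20           # assumed decode/render at the pilot station
pilot_reaction_ms = 250   # rough human reaction time to an unexpected event
command_uplink_ms = 80    # assumed command path back through the cloud

total_ms = (video_encode_ms + uplink_ms + cloud_processing_ms
            + downlink_ms + display_ms + pilot_reaction_ms + command_uplink_ms)

# Motion simulators are commonly held to tight transport-delay budgets
# (on the order of ~100 ms) before cueing feels wrong or induces sickness.
MOTION_CUE_BUDGET_MS = 100
transport_ms = total_ms - pilot_reaction_ms

print(f"Assumed end-to-end loop: {total_ms} ms")  # 480 ms with these numbers
print(f"Transport delay alone: {transport_ms} ms vs ~{MOTION_CUE_BUDGET_MS} ms motion budget")
```

Even with generous assumptions, the transport delay alone lands well above what motion cueing can tolerate, which is why the viable use case shrinks to a direct link and a single aircraft.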

Exaggerating how far drone autonomy development carries over to aircraft, especially passenger aircraft, and to FAA certification

Everything stated so far leads up to the proper way to develop and certify autonomous systems. Drone development largely relies on trial and error in the real world or on gaming-based simulation. Currently most drones are not allowed to fly over or near people. This drastically lowers the environmental and other flying-entity complexity, which is of course key. Once that use case goes from simple-world cargo deliveries to complex-world people deliveries, the game changes so much that you must utilize a development paradigm shift. Gaming-based simulation technology must give way to aerospace/DoD/FAA Level 6 or above simulation technology (more in my article below), and that technology must be used for primary development instead of the real world. Even though most of the developers in the air domain are forgoing machine learning for a rules-based approach (machine learning would make this task even more untenable and reckless from a time, cost and safety POV, as those using it for public road transportation show), the trial-and-error approach is still not viable for the same time, cost and safety reasons.

It is a myth that public shadow and safety flying can create a legitimate autonomous vehicle, and that the lives the process takes are necessary and for the greater good. It is impossible to fly the miles required, or spend the money, to stumble and restumble on all the scenarios necessary to complete the effort. With respect to using human guinea pigs, the process also harms people for no reason. The first safety issue is handover: the time to regain proper situational awareness and do the right thing, especially in time-critical scenarios, cannot be provided (see the rough numbers below). Another dangerous area is learning accident scenarios. AV makers will have to run thousands of accident scenarios thousands of times to accomplish this. That will cause thousands of injuries and deaths. The next issue is the use of gaming-based simulation technology, which has too many technical limitations to facilitate the creation of a legitimate real-world digital twin. As I stated, the solution is to use DoD/aerospace simulation technology, informed and validated by the real world, and shift most of the autonomous system development and testing over to it.
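To put rough numbers on the handover problem: the cruise speed and the range of situational-awareness recovery times below are my illustrative assumptions (the recovery times echo ranges reported in automotive handover research), not measured eVTOL data.

```python
# Rough arithmetic on handover: how far an aircraft travels while a remote
# or safety pilot regains situational awareness. Numbers are assumptions.

cruise_speed_mps = 45.0         # assumed ~87 kt cruise for a small eVTOL
handover_times_s = [3, 10, 30]  # illustrative SA-recovery range, echoing
                                # automotive handover research

for t in handover_times_s:
    print(f"{t:>2} s handover -> {cruise_speed_mps * t:,.0f} m traveled blind")
# 3 s -> 135 m, 10 s -> 450 m, 30 s -> 1,350 m
```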

Beyond this is the use of PX4-based open-source code as the basis for the autonomous system. PX4 has too many core control and security issues to use it. You are far better off starting from scratch. This is also why it cannot be type certified without dumbing down those certification procedures. Which brings me to Chris Anderson, and Kitty Hawk buying 3D Robotics and leveraging it.

Utilizing Chris Anderson to lead this effort, buying 3D Robotics and using it as a development base

Chris Anderson has a choppy past. First, going against the China juggernaut is a massive undertaking, so I don’t fault anyone for not unseating them. (Skydio seems to have found the best magic sauce, though they benefit from US government DJI bans that 3DR did not.) Beyond that though, Chris Anderson is trying to leverage drone technology, especially Pixhawk/PX4, for passenger aircraft autonomy, and 3D Robotics had a plethora of issues, most self-inflicted under Anderson’s watch. Many are highlighted in this article:

Behind The Crash Of 3D Robotics, North America’s Most Promising Drone Company — https://www.forbes.com/sites/ryanmac/2016/10/05/3d-robotics-solo-crash-chris-anderson/?sh=70cfac193ff5

From the article:

The drone’s GPS system sometimes failed to connect correctly to ensure stable flight, causing the drone to fly away or crash. The gimbal, or camera-stabilizing device, faced production delays and the first Solos hit the market without this add-on, making it unsuitable for photos and video, the chief use of most consumer drones. “Making the gimbal was harder than making the drone,” said Guinn, who noted that the devices didn’t get to customers until August, a full two months after Solo’s launch.

More on those fly-aways here:

One of the issues is random drone fly-aways, detailed in this forum thread — https://3drpilots.com/threads/solo-flew-away-and-crashed.7410/

For folks who think you can simply enhance drone autonomy code for the massively more complex task of creating an autonomous passenger aircraft: it’s pretty clear Chris Anderson has not learned much from that experience.

Another issue with Chris Anderson is his desire to dumb down FAA type certification, especially around safety. As drones progress to flying Beyond Visual Line of Sight (BVLOS), and do so autonomously, this is going to manifest itself in tragedies. Tragedies that will pale in comparison to passenger aircraft crashes. As this is air travel and not automobiles, this industry will not survive a single tragedy, especially one found to be avoidable if the right design and testing approach had been taken.

(Note — I tried to address these concerns with Chris Anderson on a couple of LinkedIn threads of his. He checked my profile, then blocked me. I understand my approach is direct. However, given that the health of the industry, and people’s lives, are literally at stake here, I think he should be able to handle a bit of scrutiny. Not responding to my objective POV with objective responses usually signals there aren’t any. The reason for the direct, if not interventionary, approach is that human beings, especially in large echo chambers built on hype and dangerous suggestions, which in turn are built on a significant lack of domain or systems engineering experience, greed and ego, don’t alter course on their own. They usually require increasing tragedies, press coverage and intervention from lawyers and lawmakers. I would like to skip that part. That means taking an unfortunately direct approach, since making suggestions and saying please never work in environments like this.)

The right way to develop and test an autonomous system

Now let’s talk about the approach to making an autonomous system. This involves both a tenable approach from a time, cost, and safety POV as well as using a human pilot in that development approach in a variety of ways. While many in the air domain are not using machine and deep learning as much as the ground domain, some are. If a company chose to rely on that approach it would literally never get close to autonomy, would go bankrupt trying, and would harm people, literally by design, in the attempt. More on this in my article here:

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now

https://medium.com/@imispgh/the-autonomous-vehicle-industry-can-be-saved-by-doing-the-opposite-of-what-is-being-done-now-b4e5c6ae9237

Next comes the degree to which machine learning is used (this assumes little if any deep learning). First, let’s go with none, and a pure rules-based system. Since you are flying passengers, the way you fly, especially in bad weather or edge/crash-case scenarios, matters a great deal. It goes to passenger comfort and trust. Are you going to create those response curves with no human pilot input? And if you did, would you limit that to an FAA Level C or D simulator and not verify it in the actual aircraft? Seems to me no human pilot means no human pilot, not in the actual aircraft or a simulator. With regard to machine learning, I would suggest imitation and reinforcement learning be used either in tandem with the rules process or to augment it by providing information that may be beneficial. An example might be to use imitation learning to have a professional pilot, using a Level C or D simulator, show the system what to do, especially for variations of the areas you provided rules for (a minimal sketch of that idea follows below). This would save massive amounts of time and money by making the process far more efficient.
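Here is a minimal behavioral-cloning sketch of that imitation-learning idea, assuming demonstrations logged from a pilot flying a Level C or D simulator. The network shape, state and action dimensions, and training loop are all illustrative assumptions, not a description of anyone’s actual system.

```python
# Minimal behavioral-cloning sketch: learn a pilot's control response from
# simulator demonstrations. All dimensions and data sources are assumptions.
import torch
import torch.nn as nn

class PilotPolicy(nn.Module):
    """Maps aircraft state (e.g., attitude, rates, winds) to control outputs."""
    def __init__(self, state_dim=12, action_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # normalized stick/throttle
        )

    def forward(self, state):
        return self.net(state)

def train_on_demonstrations(states, actions, epochs=100):
    """states/actions: tensors logged from a pilot flying a Level C/D sim."""
    policy = PilotPolicy(states.shape[1], actions.shape[1])
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), actions)  # match the pilot's inputs
        loss.backward()
        opt.step()
    return policy
```

In practice the cloned policy would only seed or cross-check the rules-based responses, not replace them.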

Regarding proper simulation: in order to develop your aircraft and machine pilot as fast, as cheaply and as safely as possible, as well as to get both certified, you will need an FAA Level A or above simulator, most likely a Level C or D, and the associated simulation technology, as motion cues are critical. The reasons to use it for developing the human or machine pilot were mentioned above. With regard to developing the aircraft, it is imperative that the various models have the requisite level of fidelity and that the entire system has the proper real-time capability (a sketch of that constraint follows below). The last area of concern here is certification of the aircraft and the human or machine pilot. Currently the FAA prefers to use simulators, from Level 5 and up, for various certifications. Given the aircraft and autonomous systems are new technology, FAA DERs are likely going to want to ensure due diligence is present in the simulator before they go up in the actual aircraft.
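As a minimal sketch of what “proper real-time capability” means in practice: every model update has to complete inside a fixed frame, every frame. The frame rate and the empty placeholder model below are assumptions for illustration; a real simulator host runs many coupled models under this constraint.

```python
# Sketch of the hard real-time constraint a certifiable simulator must meet:
# every model update must finish inside its fixed frame, every frame.
import time

FRAME_HZ = 60                 # assumed frame rate; real hosts often run faster
FRAME_BUDGET = 1.0 / FRAME_HZ

def step_models(dt):
    """Placeholder for flight model, engine model, motion cueing, etc."""
    pass

overruns = 0
for frame in range(600):      # ten simulated seconds
    start = time.perf_counter()
    step_models(FRAME_BUDGET)
    elapsed = time.perf_counter() - start
    if elapsed > FRAME_BUDGET:
        overruns += 1         # a gaming engine can drop or stretch frames;
                              # a qualified sim host cannot
    else:
        time.sleep(FRAME_BUDGET - elapsed)

print(f"Frame overruns: {overruns} (must be zero for deterministic cert runs)")
```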

With respect to the technology needed to build a proper simulator, simulation host and all the associated systems: gaming architecture-based systems, including X-Plane, do not have this capability. (X-Plane can only qualify to FAA Level 5, which is a generic cockpit.) If the proper simulation technology is not used, there will come a point where the models differ enough from the real world to cause errors and false confidence, many of which may not be caught until a tragedy occurs. That will in turn cause massive amounts of rework and financial expenditure, if not far worse. (In the commercial area, the only company I am aware of that makes a proper off-the-shelf aircraft development system and flight/engine model is j2 Aircraft Dynamics. The major aerospace companies make their own in-house.)
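To illustrate how a small fidelity gap turns into false confidence, here is a toy divergence example: the same one-dimensional flight integrated with a drag coefficient that is off by 10%. The dynamics and every value below are deliberately simplistic assumptions, only meant to show error compounding over time.

```python
# Toy example: a 10% error in one model coefficient compounds over a flight.
# One-dimensional point mass with quadratic drag; all values are assumptions.

def fly(drag_coeff, thrust=900.0, mass=400.0, dt=0.1, t_end=60.0):
    v, x, t = 0.0, 0.0, 0.0
    while t < t_end:
        accel = (thrust - drag_coeff * v * v) / mass  # F = ma with drag
        v += accel * dt
        x += v * dt
        t += dt
    return x

real_x = fly(drag_coeff=0.33)   # "real world" drag
model_x = fly(drag_coeff=0.30)  # low-fidelity model's drag
print(f"Position gap after 60 s: {model_x - real_x:,.0f} m")
```

A gap like this is invisible frame to frame, which is exactly why it breeds false confidence until the models are validated against the real world.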

More on the technical difference between these technologies in my article here. (There is no meaningful difference between the air and ground domains with regard to the points I make.)

SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation

https://www.sae.org/news/2020/08/new-gen-av-simulation

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aerospace/DoD/FAA Level D simulation, as the Software Engineering Manager for all of NORAD, as a PM on the Aegis Weapon System, as a C4ISR systems engineer for the DHS Deepwater program, and as the lead C4ISR engineer for the counter-terrorism team at the US State Department. I am now CEO/CTO at Dactle.

Industry Participation — Air and Ground

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member UNECE WP.29 SG2 Virtual Testing

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Member CIVATAglobal — Civic Air Transport Association

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

- Presented with the IEEE Barus Ethics Award for Post-9/11 DoD/DHS Whistleblowing Efforts
