An Unsolicited Opinion on Air Taxis Going Straight to Autonomy and How to Get There
While these are good reasons to push for going straight to autonomy, and I support the sentiment, I believe the end may not justify several of the means. I would also like to provide some guidance on getting there, regardless of when that may be.
Assuming the development approach is the right one, which most companies in ground and air autonomy do not use (more on that later), there are three significant issues with not using, or not having the capability to use, a human pilot.
The first issue is how long it would take to get to autonomy, and the revenue forgone for that period. At best you are looking at five years for a very tight ODD geofence (location and weather); most likely 8–10 years.
The next issue is that even if you built a legitimate autonomous system, you would be eliminating any edge case scenario where a human pilot would mitigate the situation. Let's say something in the autonomous system fails, whether software or hardware, and the system can get the passengers to safety, but not in a planned spot or even a backup waypoint. And that spot is remote. Now let's say manual flight is possible. What are you doing with those passengers and that aircraft? I am sure if we look at the wide array of things that could happen, and have even happened before, we can come up with more examples. Other than for short-term marketing, it seems to me this approach is short-sighted and will backfire, especially with regard to public trust. Which, as you know, may only need to happen once. (Regarding the use of remote operations, especially where the aircraft runs into problems: this process has a place, but there are limitations, particularly latency in the urban domain and, from UML 4 and up, complexity and density.)
Now let's talk about the approach to making an autonomous system. This involves both a tenable approach from a time, cost, and safety point of view and the use of a human pilot in that development approach in a variety of ways. While many in the air domain are not using machine and deep learning as much as the ground domain, some are. If a company chose to use that approach, it would never get close to autonomy; it would go bankrupt trying and harm people, literally by design, in the process. More on this in my article here.
The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now
Next comes the degree to which machine learning is used (this assumes little if any deep learning). First, let's go with none and a pure rules-based system. Since you are flying passengers, the way you fly, especially in bad weather or edge/crash case scenarios, matters a great deal. It goes to passenger comfort and trust. Are you going to create those response curves with no human pilot input? And if you did, would you limit that to an FAA Level C or D simulator and not verify it in the actual aircraft? It seems to me no human pilot means no human pilot: not in the actual aircraft or in a simulator. With regard to machine learning, I would like to suggest that imitation and reinforcement learning be used either in tandem with the rules process or to augment it by providing information that may be beneficial. An example might be to use imitation learning to have a professional pilot, in a Level C or D simulator, show the system what to do, especially for variations of the areas you provided rules for. This would save massive amounts of time and money by making the process far more efficient.
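As a rough illustration of what the imitation-learning piece could look like, here is a minimal behavioral-cloning sketch in Python. The state features, the control channels, and the synthetic "pilot" data standing in for demonstrations recorded in a Level C or D simulator are all hypothetical placeholders, and the linear least-squares fit is only a stand-in for whatever model a real program would choose.

```python
import numpy as np

# Minimal behavioral-cloning sketch (illustrative only).
# In practice the demonstrations would be state/control logs recorded while a
# professional pilot flies scenarios in a Level C or D simulator; here they
# are synthetic stand-ins so the example runs on its own.

rng = np.random.default_rng(0)

# Hypothetical state features per timestep: altitude error, sink rate,
# crosswind, pitch attitude (all normalized).
states = rng.normal(size=(5000, 4))

# Stand-in "pilot" behavior the demonstrations implicitly encode:
# two control channels as a smooth function of state, plus noise.
true_gains = np.array([[0.8, -0.3], [0.4, 0.1], [-0.2, 0.5], [0.1, -0.6]])
controls = states @ true_gains + 0.05 * rng.normal(size=(5000, 2))

# Behavioral cloning: fit a policy mapping states to the pilot's controls.
policy, *_ = np.linalg.lstsq(states, controls, rcond=None)

# Check how closely the learned policy reproduces the pilot on held-out data.
test_states = rng.normal(size=(1000, 4))
test_controls = test_states @ true_gains
rmse = np.sqrt(np.mean((test_states @ policy - test_controls) ** 2))
print(f"held-out RMSE vs. pilot demonstrations: {rmse:.4f}")
```

The point of a sketch like this is not to replace the rules-based response curves but to inform them: the learned policy gives the rules team a reference for how an experienced pilot actually handles the cases and their variations.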
Regarding proper simulation: in order to develop your aircraft and machine pilot as fast, as cheaply, and as safely as possible, as well as to get both certified, you will need an FAA Level A or above simulator, most likely a Level C or D, and the associated simulation technology, as motion cues are critical. The reasons to use it for developing the human or machine pilot were mentioned above. With regard to developing the aircraft, it is imperative that the various models have the requisite level of fidelity and that the entire system has the proper real-time capability. The last area of concern here is certification of the aircraft and the human or machine pilot. Currently the FAA prefers to use simulators, from Level 5 and up, for various certification activities. Given that the aircraft and autonomous systems are new technology, FAA DERs are likely going to want to ensure due diligence is present in the simulator before they go up in the actual aircraft.
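To make the "proper real-time capability" point concrete, here is a minimal sketch, not tied to any particular simulation host, of a fixed-timestep loop that flags frame overruns. The 60 Hz rate, the frame count, and the placeholder model step are assumptions chosen purely for illustration.

```python
import time

FRAME_HZ = 60                # hypothetical host iteration rate
FRAME_DT = 1.0 / FRAME_HZ    # hard real-time budget per frame

def step_models(dt: float) -> None:
    """Placeholder for the flight, engine, and environment model updates."""
    time.sleep(0.001)  # stand-in workload

overruns = 0
next_deadline = time.perf_counter()
for frame in range(600):     # roughly 10 seconds of wall-clock time
    next_deadline += FRAME_DT
    step_models(FRAME_DT)
    remaining = next_deadline - time.perf_counter()
    if remaining < 0:
        # A missed deadline means motion cues, visuals, and model state are
        # no longer synchronized with real time, which is the kind of drift
        # that quietly erodes fidelity.
        overruns += 1
    else:
        time.sleep(remaining)

print(f"frames: 600, deadline overruns: {overruns}")
```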
With respect to the technology needed to build a proper simulator, simulation host, and all the associated systems, gaming architecture-based systems, including X-Plane, do not have this capability. (X-Plane can only qualify to FAA Level 5, which is a generic cockpit.) If the proper simulation technology is not used, there will come a point where the models differ enough from the real world to cause errors and false confidence, many of which may not be caught until a tragedy occurs. That will in turn cause massive amounts of rework and financial expenditure, if not far worse. (In the commercial area the only company I am aware of that makes a proper off-the-shelf aircraft development system and flight/engine model is j2 Aircraft Dynamics. The major aerospace companies make their own in house.)
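One way to guard against that false confidence is to keep comparing the simulation against flight-test data within defined tolerance bands, in the spirit of the tolerance tables used for Level D qualification. The sketch below is illustrative only: the synthetic traces, the airspeed parameter, and the 2 kt tolerance are placeholders, not actual qualification criteria.

```python
import numpy as np

# Illustrative fidelity check: compare a simulated response against recorded
# flight-test data for the same maneuver. Data and tolerance are stand-ins.
time_s = np.linspace(0.0, 10.0, 101)
flight_test_airspeed = 120.0 + 5.0 * np.sin(0.5 * time_s)   # recorded aircraft data (stand-in)
simulated_airspeed = flight_test_airspeed + np.random.default_rng(1).normal(0.0, 0.8, time_s.size)

tolerance_kt = 2.0
error = np.abs(simulated_airspeed - flight_test_airspeed)
worst = error.max()

if worst <= tolerance_kt:
    print(f"within tolerance: max airspeed error {worst:.2f} kt <= {tolerance_kt} kt")
else:
    exceed = np.flatnonzero(error > tolerance_kt)
    print(f"fidelity drift: {exceed.size} samples exceed {tolerance_kt} kt, worst {worst:.2f} kt")
```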
More on the technical differences between gaming-based and proper simulation technologies in my article here. (There is no meaningful difference between the air and ground domains with regard to the points I make.)
SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation
https://www.sae.org/news/2020/08/new-gen-av-simulation
In closing, I strongly encourage air taxi/autonomous system makers to consider putting a pilot and pilot capabilities in the aircraft; at the very least it gives you the ability to earn trust by proving due diligence before removing the capability later. I also encourage them to utilize proper simulation for the development and certification of the aircraft and the human or machine pilot/autonomous system, and to consider machine learning, especially imitation learning, to at least augment development.
My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in Aerospace/DoD/FAA Level D simulation, as the Software Engineering Manager for all of NORAD, as a PM on the Aegis Weapon System, as a C4ISR systems engineer for the DHS Deepwater program, and as the lead C4ISR engineer for the Counter-terrorism team at the US State Department. I am now CEO/CTO at Dactle.
Industry Participation — Air and Ground
- Founder SAE On-Road Autonomous Driving Simulation Task Force
- Member SAE ORAD Verification and Validation Task Force
- Member UNECE WP.29 SG2 Virtual Testing
- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)
- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation
- Member CIVATAglobal — Civic Air Transport Association
- Stakeholder for UL4600 — Creating AV Safety Guidelines
- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee
- Presented with the IEEE Barus Ethics Award for Post 9/11 DoD/DHS Whistleblowing Efforts