Boeing should do the engineering due diligence on Wisk

Michael DeKort
12 min read · Sep 25, 2022


Recently, Boeing significantly increased its investment in its joint venture partner Wisk, as well as its marketing of the company. This includes tripling down on going autonomous first. I say tripling because Wisk is so invested in this approach that its aircraft has no pilot station or controls. This means Wisk cannot use the aircraft to generate revenue with a pilot on board while it develops its autonomous systems, as most of its competitors can. In my communications I raised the following concerns and relevant solutions. I am requesting that Boeing divorce itself from the hype and the Silicon Valley/IT echo chamber and do the engineering due diligence here. If the current path continues, Boeing's reputation, already tarnished by the 737 MAX debacle, which was itself void of engineering due diligence due to leadership decisions, will suffer significant additional damage from a safety, fiscal responsibility, ethics, and trust POV. That damage will likely include avoidable tragedy.

· Autonomy first with no human controls

· Reliance on, and underuse of, machine and deep learning due to nascent general learning capabilities

· The untenable and dangerous development approach Wisk and the industry are using to develop their autonomous systems

· The use of inferior gaming technology like X-Plane for aircraft and autonomy development, testing and certification as well as human and machine pilot training and certification

· The use of remote control in the public domain using the cloud and the associated latency

· The use of inferior design and development tools in the development of their autonomous systems and aircraft

· Misleading the public and enabling false confidence

· Sensor technology

Autonomy first with no human controls

Taking this approach is extremely unwise from a safety, cost, and time POV. I will explain the details below; the short version is that Wisk's expectation of being autonomous and safe in our lifetime is fanciful. Even if the issues below are resolved, the aircraft would likely not be production ready for at least 10 years. Can Wisk forgo operation and the associated revenue for that long? If all of those issues are not resolved, it will never reach autonomy.

This brings us to the dangerous hype cycle this industry is in, and to a comparison with its ground counterparts, which are farther down that road. Recently Waymo, Cruise, and Argo started lying about their capabilities and saying they are "driverless" while not only providing no proof of those capabilities but suing the state of California to avoid producing even the minimal data set it asked for. They can get away with this in the short term by advancing their tech and Operational Design Domain (ODD) limits to the point where they are good enough to play the odds, but they are nowhere near safe overall, particularly regarding relevant crash and edge cases. That situation is going to end very badly.

At some point Wisk will face the same self-inflicted pressures and field a system safe enough to play the odds. And then it will mislead the public and induce false confidence. The air domain can get away with that more easily and for longer than its terra firma counterparts because its ODD is much less complex. However, it will suffer far greater consequences when a tragedy occurs, because it does not get nearly the number of get-out-of-jail-free cards the ground domain does.

All of this is enabled by the FAA having no guardrails for the public development and testing due diligence of autonomous systems, especially where the public may be put at risk as needless human guinea pigs. This is likely due to the FAA buying the hype and being inundated over the past 10 years, like many other industries, by folks from Silicon Valley/IT who have little domain or proper engineering experience, and by a move fast and break things approach that is virtually void of systems engineering, domain engineering, or safety practices.

(I would like to note that since Wisk has no pilot controls and will rely on remote control, it will not be using needless human guinea pigs as safety pilots like many of its competitors do. That is excellent. However, there are issues with remote operations, which I discuss below.)

Reliance on, and underuse of, machine and deep learning due to nascent general learning capabilities

Machine and deep learning rely on general learning, which itself relies on inference. Humans use information we have gained to infer where it applies to things we have not experienced. This keeps us from having to endure eternal and dangerous pattern-recognition efforts. For example, we do not have to memorize every or even most objects, all their characteristics, and the associated downsides, nor every or most scenarios and situations. We infer that all stove burners can be hot regardless of the exact stove or burner type. Currently, general learning is in its infancy and creates very little inference. This leaves autonomous system developers to experience, and re-experience, an impossible number of scenarios, objects, and their variations in order to memorize them. The systems then travel the world comparing second-by-second real-world instances against the memorized patterns and executing the plans the machine and deep learning processes assigned to them.
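
To make the distinction concrete, here is a deliberately trivial sketch of memorization vs. inference. It is my own illustration, not anyone's actual perception stack, and every name in it is made up.

```python
# Toy contrast between pattern memorization and inference (illustrative only).

# "Memorization": the system only knows hazards it has explicitly experienced.
seen_hazards = {
    ("gas_stove", "burner"): "hot",
    ("electric_stove", "burner"): "hot",
}

def memorized_lookup(obj):
    # Any burner variant missing from the training set is a blind spot.
    return seen_hazards.get(obj, "unknown")

# "Inference": one general rule covers every variant, seen or unseen.
def inferred_lookup(obj):
    stove_type, part = obj
    if part == "burner":
        return "hot"  # generalizes from the category, not the instance
    return "unknown"

print(memorized_lookup(("induction_stove", "burner")))  # unknown -> coverage gap
print(inferred_lookup(("induction_stove", "burner")))   # hot -> covered by the rule
```

Today's machine and deep learning systems sit far closer to the first function than the second, which is why the scenario counts discussed below explode.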

The untenable and dangerous development approach Wisk and the industry are using to develop their autonomous systems

The use of machine and deep learning, and the reliance on the real world to facilitate their learning and testing, requires an impossible number of scenarios to be experienced and re-experienced. By impossible I mean from a safety, cost, and time POV. Set the inference problem aside for a moment. Even if there were inference, the systems would still have to experience crash scenarios to facilitate that inference and the associated development and testing. Even if that is a fraction of what would be needed without inference, this domain cannot endure one tragedy. Beyond that, proper use of remote operations in a test area to minimize human involvement in the cockpit or on the ground would still require crash-scenario interactions with other aircraft and objects. No one has the money to do that right. Now extend that to the current world of little inference: the issue gets far worse and includes a time element that makes it beyond untenable. For those using human pilot guinea pigs, the safety issue becomes insane.
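
Some rough arithmetic makes the scale of the problem obvious. All the numbers below are my own illustrative assumptions, not industry data, and even these are almost certainly conservative.

```python
# Back-of-envelope scenario-count estimate (assumed, illustrative numbers only).

object_types = 50    # aircraft, birds, drones, debris, etc. (assumed)
behaviors    = 10    # per object: climb, descend, turn, hover, etc. (assumed)
geometries   = 100   # relative position/speed/heading bins (assumed)
environments = 20    # lighting, weather, wind bins (assumed)
repeats      = 10    # runs per variation for statistical confidence (assumed)

runs = object_types * behaviors * geometries * environments * repeats
minutes_per_run = 5  # assumed
hours = runs * minutes_per_run / 60

print(f"{runs:,} runs -> ~{hours:,.0f} flight hours")  # 1,000,000 runs -> ~83,333 hours
```

And that is before counting the crash and edge-case runs no one can ethically fly in the real world at all.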

This brings me to rules-based development vs machine and deep learning. On the ground side, mining companies use a rules-based approach, for example, and as a result are successful. The reason is that mining domains, like air travel, are extremely coordinated and not very complex. The mining folks control the number of objects, control how they maneuver, and keep unwanted objects out. This means they have no reason to classify objects or figure out what those objects might do next; they simply stop if something unwanted crosses their path, take very broad evasive maneuvers, or let humans clear up the rare situation. The problem is that the air domain does not have the same options. First, the aircraft cannot just stop. Second, the autonomous system would then have to figure out where to land and accomplish that safely. That is an extremely complex and difficult situation. (While remote control is an option, it should be a very last resort, which I will explain below. And clearly ground-based aircraft operation could likely work much like mining.)

My concern about a rules-based system lies with the degree to which aircraft autonomous systems need to know what objects are in their world and what those objects might do. A common detect-and-avoid process will likely be problematic. For example, if every scenario were handled by assuming the other entity will do the worst-case thing, there might not be time or room to maneuver widely enough to accommodate it. Given this, it seems to me that machine and deep learning must be leveraged.
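
A crude feasibility check shows why blanket worst-case assumptions can leave no room to maneuver. This is my own sketch; every number is an assumption chosen only to illustrate the geometry, apart from the roughly 300-meter sensor range discussed in the sensor section below.

```python
import math

# Worst-case detect-and-avoid feasibility check (all values assumed).

own_speed   = 70.0   # m/s, eVTOL cruise (assumed)
intruder    = 70.0   # m/s, worst-case head-on closure (assumed)
detect_rng  = 300.0  # m, realistic sensor range today (see sensor section)
latency     = 0.5    # s, perceive -> classify -> plan -> actuate (assumed)
lateral_acc = 9.8    # m/s^2, ~1 g available for the avoidance turn (assumed)
miss_dist   = 150.0  # m, required lateral separation (assumed)

closure = own_speed + intruder
time_to_impact = detect_rng / closure  # ~2.1 s from first detection

# Time to displace laterally by miss_dist under constant lateral acceleration:
time_to_clear = latency + math.sqrt(2 * miss_dist / lateral_acc)  # ~6.0 s

print(f"time available: {time_to_impact:.1f} s, time needed: {time_to_clear:.1f} s")
```

Needing roughly six seconds while having roughly two is the shape of the problem: a pure worst-case rule either maneuvers constantly or cannot maneuver in time, which is why some learned prediction of what objects will actually do seems unavoidable.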

The use of inferior gaming technology like X-Plane for aircraft and autonomy development, testing and certification as well as human and machine pilot training and certification

As I stated above, this domain has become inundated with folks from Silicon Valley and IT. That means most folks think gaming simulation technology is either good enough or that no one else has anything better. While the visual engines out there are terrific, the simulation hosts themselves and their architectures are not. The issue is that they are too inefficient and do not facilitate running actual, specific, high-fidelity models in human real time or faster. This impacts flight and sensor models the most, especially en masse. Systems like X-Plane simply cannot provide enough model fidelity for critical use. I worked for a small simulation company that was the first to try. (A plan that was well under way before I got there.) They went bankrupt. When I reached out to the founder of X-Plane, Austin Meyer, to discuss this and sent him the FAA certification procedures for the systems required, he told me he was unaware of the requirements and that his system could not pass them. X-Plane is fine for basic use, for pilot training for generic flight or cockpit familiarization, or for FAA Level 5 devices. It is not suitable for training a pilot for any specific aircraft. This extends to aircraft design and test as well. (For an explanation of the technical differences between the technologies and some history on how we got here, please see my articles below.) I would also like to note that the use of a proper simulator for imitation learning and testing, with an expert pilot, may be advantageous to your development and testing process. If you use reinforcement learning, it could help a lot with testing as well, especially regarding comfort.
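
A simple real-time budget illustrates the architectural issue. The update rate and per-model costs below are assumptions I picked for illustration, not measurements of X-Plane or any specific host.

```python
# Real-time frame budget check for a simulation host (assumed numbers).

frame_rate_hz = 1000                   # high-fidelity flight model rate (assumed)
budget_us = 1_000_000 / frame_rate_hz  # 1000 microseconds per frame

# Assumed per-frame compute costs, in microseconds:
costs_us = {
    "flight dynamics model": 200,
    "radar model":           400,
    "lidar model":           350,
    "camera model":          600,
}

total = sum(costs_us.values())
verdict = "runs in real time" if total <= budget_us else "cannot run in real time"
print(f"spent {total} of {budget_us:.0f} us per frame -> {verdict}")
```

One aircraft with a handful of specific sensor models already blows the budget; add the "en masse" traffic a real scenario needs and a gaming-class architecture falls hopelessly behind real time.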

(One area I will expand on is the creation and use of real-world performance curves for the creation of exact sensor models, especially in the transition zones where various sensors in various conditions tap out. There is no way around doing this work. Specs or a couple of tests with one or two objects/materials won't do. This is something I have yet to see any sensor company, X-Plane, or any ground-domain simulation provider do.)
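
A minimal sketch of what I mean, with invented data points: measure the sensor's real-world performance across range and condition, then interpolate, so the model degrades where the hardware actually degrades rather than where the spec sheet says it should.

```python
import numpy as np

# Measured (range_m, detection_probability) points for one sensor, one
# material, one condition. These values are invented for illustration.
measured = np.array([
    [ 50, 0.99],
    [150, 0.97],
    [250, 0.85],
    [300, 0.60],  # transition zone: performance falls off fast here
    [350, 0.20],
    [400, 0.02],
])

def detection_prob(range_m):
    """Interpolate measured performance; never extrapolate optimism."""
    return np.interp(range_m, measured[:, 0], measured[:, 1])

print(f"{detection_prob(320):.2f}")  # ~0.44 -- the spec alone would never tell you
```

Now multiply that curve across sensors, materials, aspect angles, lighting, and weather, and it is clear why a couple of tests with one or two objects won't do.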

The use of remote control in the public domain using the cloud and the associated latency

The issue here is the combination of system and cloud latency and the problems it causes. That latency creates delays between what the remote pilot does, the aircraft responding, and the pilot getting the feedback. While the latency requirement is usually far less taxing for the air domain than the ground domain (roughly 100 ms vs 16 ms) and will likely not impact benign or normal flight, it will likely be an issue where extreme maneuvers are taken, not only in control delay but in scenarios where progressive critical maneuvers are made. That could cause a domino effect. There is also the issue of forgoing critical motion cues because delayed motion cues cause motion sickness. In those same critical flight scenarios, pilots need to feel what the aircraft is doing. (Given that the folks doing this now do not appear to use full-motion systems or FAA Level A or above simulators, I question whether the latency is an issue in normal ops now, or whether these folks simply don't understand the issue or don't want to spend the money on the right device, which costs considerably more than a gaming desktop.)
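
To see how fast the margin disappears, here is an illustrative latency budget. Every line item is an assumption; only the roughly 100 ms overall tolerance comes from the discussion above.

```python
# Illustrative remote-pilot control-loop latency budget (assumed values).

budget_ms = 100  # rough tolerable control latency for flight (see above)

path_ms = {
    "sensor capture + video encode":   15,
    "uplink to cloud (one way)":       25,
    "cloud processing / relay":        10,
    "downlink to pilot station":       25,
    "display + pilot reaction":        20,
    "command uplink back to aircraft": 25,
}

total = sum(path_ms.values())
status = "within budget" if total <= budget_ms else "over budget"
print(f"{total} ms of a {budget_ms} ms budget -> {status}")
```

At 120 ms the loop is already over budget before an extreme maneuver stacks several of these control-feedback cycles back to back.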

(I would also like to mention the issues with using a safety pilot, for those who do. It is not possible to give the safety pilot enough time to regain proper situational awareness and do the right thing the right way in time-critical scenarios, many of which are crash scenarios.)

The use of inferior design and development tools in the development of their autonomous systems and aircraft

Many of the companies and staff creating eVTOLs come from outside the aerospace industry, lack domain or even proper engineering experience, forgo proper systems and domain engineering, test, and safety practices, and utilize Agile and its associated "move fast and break things" approach. Beyond engineering knowledge and the use of proper simulation, this extends to aircraft development and testing due diligence and to which tools to use and how. In many cases X-Plane is used, or only basic MATLAB or other implementations, which are nowhere near as robust as what the legacy aircraft makers use. If you do not utilize the right approach and tools, you will likely gain a debilitating and dangerous level of false confidence. Basically, you will not know what you do not know, and that will come back to bite you. The only publicly available system I am aware of that provides a proper aircraft design and analysis toolset is j2 Aircraft Dynamics. If you are thinking you can make one on your own, even if you had the skill, I suggest that is a much more expensive option. (Full disclosure: I have worked with j2. If there were competing systems, I would list those here as well.)

Misleading the public and enabling false confidence

The issue and associated impacts here should be obvious from an ethical, trust, marketing and legal POV.

Sensor technology

I saved this one for last because it is an area where the downside impacts are not within the complete control of the companies making autonomous systems. In the ground domain this technology has been improving rapidly. That includes imaging radar with macro classification capability, and LiDAR being used beyond localization for classification and the creation of object tracks. The issue in the air domain is distance and update rate. (A problem some have on the ground side as well, due to ODD or the exact units they are using.) Most of these systems do not operate above 20 Hz, which is too slow, and do not work, or do not work well, beyond 300 meters, especially in degraded lighting or environmental conditions. Those issues are a bigger deal for this domain.
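
Quick arithmetic on the two numbers in this section (20 Hz and 300 meters), with an assumed head-on closure speed, shows why they are marginal in the air.

```python
# What 20 Hz and 300 m buy you at an assumed closure speed.

update_hz   = 20.0
max_range_m = 300.0
closure_ms  = 140.0  # m/s, two 70 m/s aircraft head-on (assumed)

gap_m   = closure_ms / update_hz    # distance covered between sensor updates
frames  = max_range_m / gap_m       # updates from first detection to impact
seconds = max_range_m / closure_ms  # total time from first detection to impact

print(f"{gap_m:.0f} m between updates, {frames:.0f} updates / {seconds:.1f} s total")
```

Seven meters per update and about two seconds total is very little time to build a track, classify the object, plan, and actually fly an avoidance maneuver.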

Solutions

· Alter the aircraft design to include a pilot station or controls, or ensure you have enough funding to last

· Use progressive development and testing due diligence approaches. That means using the right people, the right engineering processes, the right tools, and the right simulation to work toward the real world. (The FAA should create a regulation that covers this as well.)

· Use the proper tools and simulation technology. For simulation and simulators, that means using aerospace FAA Level 6 and above simulation technology for most development, testing, training, and certification of the aircraft and machine pilots

· Use only local, edge-based remote operations, with at least a Level A device, nowhere near people.

· Assist the FAA in creating the relevant aircraft, autonomy, and pilot development, test, and production certifications required to do this right. At the very least this will virtually eliminate the hype and the need for it, as the playing field and the associated delays will be even

· Regarding increasing machine and deep learning inference capability: I do not have an answer for this beyond adopting a rules-based approach where appropriate, and so far no one else on the planet does either. This means no one who relies on machine and deep learning can get near full autonomy, whether they follow my other recommendations or not. Having said that, I suggest pressing forward to learn and accomplish what needs to be done regardless. That includes perception/sensor, planning, and execution subsystem and integrated engineering; writing regulations and certification processes; formulating an inference development plan so it is as efficient as possible; etc.

· Stop the hype and stop misleading the public

· Regarding sensors: there is probably no way around influencing the relevant vendors to create what this domain needs while using alternate, lower-capability systems to make as much progress as you can. This is where simulation can be a major benefit. You can create the sensor models you need both to make overall progress and to assist in the design of the real-world tech you need others to make.

Full disclosure: I interviewed with Wisk in the spring and had this entire conversation with them at that time. I also sent a message on the subject to Brian Yutko, the VP and Chief Engineer of Sustainability and Future Mobility.

More relevant information is in my articles here:

How the failed Iranian hostage rescue in 1980 can save the Autonomous Vehicle industry

https://imispgh.medium.com/how-the-failed-iranian-hostage-rescue-in-1980-can-save-the-autonomous-vehicle-industry-be76238dea36

New FAA Certification Chief’s Words and Intentions Send Up Serious Red Flags

https://imispgh.medium.com/new-faa-certification-chiefs-words-and-intentions-send-up-serious-red-flags-f8ae5cd388ff

Is the FAA VTOL Parts 23 and 21.17(b) Certification Debate Counterproductive Noise?

https://imispgh.medium.com/is-the-faa-vtol-parts-23-and-27-17-b-certification-debate-counterproductive-noise-a9db14c64703

Cruise stops testing unprotected lefts in response to a crash, yet is somehow still driverless?

https://imispgh.medium.com/cruise-stops-testing-unprotected-lefts-in-response-to-a-crash-yet-is-somehow-still-driverless-a953e41d0d5

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now

https://medium.com/@imispgh/the-autonomous-vehicle-industry-can-be-saved-by-doing-the-opposite-of-what-is-being-done-now-b4e5c6ae9237

My name is Michael DeKort. I am a Navy veteran (ASW C4ISR) and a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, a software project manager on an Aegis Weapon System baseline, and a C4ISR systems engineer for DoD/DHS and the US State Department (counterterrorism). I was also a Senior Advisory Technical Project Manager for FTI to the Army AI Task Force at CMU NREC (National Robotics Engineering Center).

Autonomous Industry Participation — Air and Ground

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Member UNECE WP.29 SG2 Virtual Testing

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-35, Modeling, Simulation, Training for Emerging AV Tech

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Member Teleoperation Consortium

- Member CIVATAglobal — Civic Air Transport Association

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

The editor of SAE's Autonomous Vehicle Engineering magazine called me "prescient" regarding my position on Tesla and the overall driverless vehicle industry's untenable development and testing approach (page 2): https://assets.techbriefs.com/EML/2021/digital_editions/ave/AVE-202109.pdf

Presented with the IEEE Barus Ethics Award for post-9/11 DoD/DHS whistleblowing efforts
