Right alongside the hype in the autonomous vehicle industry (air and ground) are the air taxi and eVTOL makers. There are a couple of red flags one should look for to determine whether there may be significant problems with the design and testing approaches of these companies.
The first red flag is experience. If you peruse the LinkedIn resumes of these folks, as with their ground AV counterparts, you will see many with very little domain experience. Meaning, they do not have a lot of experience making aircraft. Before I go on, I should mention that I do not either. However, I have deep experience in fixed- and rotary-wing simulation to FAA Level D. That means I understand what it takes to make extremely detailed models of these systems, including on the flight and engine side. What I have learned from that experience, which applies directly here, is the major role simulation plays in aircraft development.

That in turn leads me to a second red flag beyond resume depth: whether simulation is used for development and testing at all, and if so, which simulation. If these folks use none, there is impending doom on the horizon. (Likely experienced when live flight tests that should have been done in simulation go very, very wrong.) Hopefully, that doom is bankruptcy from excessive failures and costs before the doom that brings human injury and death.

The next issue is which simulation they use. Use of X-Plane for critical design and testing is a huge red flag. While it can have value when coupled with serious development software like that provided by J2 Flight Dynamics, it should not be used as the primary simulation platform. The reason is two-fold. The first is the depth and breadth of flight and engine model fidelity. While the system is extremely well done and valued very highly by professional pilots for recreational flying, it does not have enough depth and breadth of tuning parameters to facilitate precise modeling. You tweak one thing and you wind up affecting another negatively, because the choices that exist are too broad in nature. The other issue is that the architecture is not deterministic and the models are not federated. This means the system cannot run in proper real-time, especially when the scenarios are complex and mathematically intense.

I know all of this because I had the extreme misfortune of working for a company that was the first to try to use X-Plane to build two different fixed-wing FAA Level 6 simulators. They, and others, had used X-Plane for Level 5 systems successfully in the past. The problem is that the one-level jump to Level 6 is vast. Level 5 uses a general aircraft model; those systems are used for cockpit familiarization. Level 6, however, is where you must have the form, fit and function of an exact aircraft. When I showed the FAA Part 60 Level 6 test set to X-Plane, who had never seen it before, they told me there was no way they could meet it. …
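To make the deterministic/federated point concrete, here is a minimal sketch of a fixed-timestep, federated simulation loop. The federate names, rates and toy physics are my own illustrative assumptions, not any vendor's or X-Plane's design; the point is only the structure: every model advances on the same fixed clock, in a fixed order, within an explicit per-frame budget.

```python
# Minimal sketch of a deterministic, federated, fixed-timestep simulation loop.
# The federates, rates and toy physics are illustrative assumptions only.
import time

DT = 1.0 / 120.0  # fixed step: every federate sees the same dt, every frame

class EngineModel:
    def __init__(self):
        self.thrust = 0.0
    def step(self, throttle, dt):
        # first-order lag toward commanded thrust (toy dynamics)
        self.thrust += (throttle * 10000.0 - self.thrust) * min(1.0, 2.0 * dt)
        return self.thrust

class FlightDynamicsModel:
    def __init__(self):
        self.velocity = 0.0
    def step(self, thrust, dt, mass=1500.0, drag_coeff=0.8):
        accel = (thrust - drag_coeff * self.velocity**2) / mass
        self.velocity += accel * dt
        return self.velocity

def run(frames, throttle=0.7):
    engine, fdm = EngineModel(), FlightDynamicsModel()
    vel = 0.0
    for i in range(frames):
        start = time.perf_counter()
        thrust = engine.step(throttle, DT)  # federates execute in a fixed order,
        vel = fdm.step(thrust, DT)          # exchanging state once per frame
        if (time.perf_counter() - start) > DT:
            print(f"frame {i}: real-time budget exceeded")  # must be detected and handled
    return vel

# Same inputs -> identical outputs, run after run. That reproducibility is the
# determinism a variable-timestep, non-federated gaming loop generally cannot guarantee.
assert run(1200) == run(1200)
```

That structure is what lets you reproduce a run exactly and prove you are holding real-time; once the math load grows, a variable-timestep, non-federated loop can do neither.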
Program references
· https://www.transportation.gov/hasscoe/voices
· https://usdot-voices.atlassian.net/wiki/spaces/VP/overview
I have reviewed the Virtual Open Innovation Collaborative Environment for Safety (VOICES) Proof-of-Concept (PoC) document and am very encouraged by what it says. It appears a paradigm shift may be in the works at DoT/NHTSA under the new Secretary of Transportation, Pete Buttigieg.
The folks behind this new effort clearly understand that use of the public domain for most driverless and ADAS system development and testing (SAE L1-L5) is untenable from a time, money and safety point of view, and that the solution is "high-fidelity" simulation. I put "high-fidelity" in quotation marks because this is the rub, or final epiphany, for the industry. While the paper mentions needing higher fidelity than what this industry uses now, it also mentions developing what is needed. I would like to suggest that no "development" is needed. Meaning, there is nothing to innovate. The technology and modeling approaches needed have existed in other industries like aerospace/DoD and the FAA for decades. They simply need to be adapted to this industry. (While gaming architectures and visual engines look fantastic, get some of the physics right and have benefit, they are simply not capable of the level of fidelity and real-time performance that is necessary. The solution is to look to other industries, especially aerospace/DoD/FAA. More in my articles below.) Beyond this, we need to ensure the simulation and models have the right fidelity for the associated use cases and scenarios. …
I just read the NASA UAM ConOps for UML4. Unfortunately, it seems to confirm several things:
· An unreasonable and dangerous over-reliance on UTM in complex environments. The definition of UML-5 specifically states "UTM inspired ATM", and that is with up to 9,999 SIMULTANEOUS ops
· Inclusion of an independent "surveillance" common operating picture, warning and control system is "supplemental" or optional, no matter how complex the environment
· No mention of or assistance with the V2X 5G 30 MHz limited-bandwidth issue, or the fact that the RF is shared with the ground-side V2X systems (see the rough arithmetic after this list)
· Urban and state governments are left on their own not only to figure out what the gaps are but to perform the end-state system-of-systems designs, create implementation plans and execute…
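To show why that shared, limited spectrum is worth worrying about, here is a rough channel-load calculation. Every number in it (usable throughput, message size, message rate) is an assumption I chose for illustration, not a figure from the ConOps or any standard.

```python
# Rough, illustrative V2X channel-load arithmetic.
# All figures below are assumptions chosen for the sketch, not measured values.

USABLE_THROUGHPUT_BPS = 5_000_000   # assume ~5 Mbps usable on a shared channel
MSG_BYTES = 300                     # assume ~300-byte basic safety / state message
MSG_RATE_HZ = 10                    # assume 10 broadcasts per second per participant

per_participant_bps = MSG_BYTES * 8 * MSG_RATE_HZ        # 24,000 bps each
max_participants = USABLE_THROUGHPUT_BPS // per_participant_bps

print(f"per participant: {per_participant_bps / 1000:.0f} kbps")
print(f"participants before saturation: {max_participants}")
# Roughly 200 participants under these assumptions. A dense urban intersection plus
# overflying UAM/drone traffic sharing the same spectrum gets there quickly, and that
# is before protocol overhead, retries or larger sensor-sharing messages are counted.
```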
Wow!! So many companies getting rid of "safety drivers" and attaining SAE L4 within a couple of months of each other. More will come soon as the hype and desperation dominoes pick up speed, and with them avoidable tragedies from the loss of in-vehicle safety drivers. (They should not exist at all. Having said this, they are far better off in the vehicle, since remote operation adds significant latency.)
· L4 and Removed Safety Drivers — Waymo, Cruise, Yandex, Gatik
· Removed Safety Drivers — Motional, AutoX, Baidu
· Zoox and Voyage appear to pop any day now
Of course, NONE of them provides any significant proof of their capabilities. You know why Tesla, Uber and Lyft can't say they have something they don't? They don't use employees as guinea pigs they can control. Tesla uses its customers, and the rideshare folks would not be able to silence their customers. This funding pump will not make it through the new year. …
As I have stated before, the current simulation systems being used in this industry have value and the visual rendering engines are fantastic from a visual and basic physics POV. However, the gaming architecture and modeling approaches will have model fidelity and real-time issues as they are used for increasingly complex and loaded scenarios. (More in my earlier article below.)
“Physics based”, “Digital Twin” and “Real-time” Simulation Terms can be Misleading — https://imispgh.medium.com/physics-based-digital-twin-and-real-time-simulation-terms-can-be-misleading-c7a9066eb3de
I have been watching the simulation companies in the autonomous vehicle space add to and morph their use of buzzwords for well over a year, in most cases using my words incorrectly to mislead the industry rather than fixing their technology. Miraculously, these folks discovered "determinism" and even "federation" recently, after adopting and ramping up use of "digital twin", "real-time", "physics", etc. I have been using most of these words for almost 4 years. I tried to get most of these companies to use the right tech and approach almost 4 years ago, offered to help them do so, and they refused. The reasons varied: not wanting to admit they took the wrong approach, that their customers would have to redo work, or that they would have to replace what was out there at their own cost or carry two baselines. (That last reason was predominantly from OEM simulation companies.) I was actually told by some of them that they would fix their systems when their customers figured out the tech was flawed and paid for it. Since that likely will not happen until a real-world tragedy occurs, I decided to take this on myself and created Dactle.

Having said all of this, as well as what I have been saying about the use of public shadow driving and "safety driving" over the same period, it should be obvious my issue here is not a competitive one. It is an ethical one. I do not want my words or POV to be hijacked and bastardized by people who not only have no interest in creating and using the better technology but have no experience with it. Instead, they want to mislead people into thinking they do, leaving their customers to use and get buried in inadequate technology, which will eventually cause significant problems, rework and harm. I have questioned folks here and there on this over the past year. However, AImotive has gone too far. Six days ago, I wrote the article I posted a link for above. Just prior to that I announced I had received a MegaGrant from Epic (Unreal) to continue our efforts to supply a legitimate "digital twin" that will facilitate FULL L4/5 development. Just today, and quite coincidentally, AImotive posted their LinkedIn entry and YouTube video. I posted a response to those items on LinkedIn today and they deleted it within 5 minutes rather than responding to my questions and concerns directly. …
Ever since the Uber tragedy a couple of years ago, simulation companies and AV makers in this industry have been upping their use of the terms "physics based", "digital twin" and "real-time". If these systems had physics and real-time capabilities to match the incredible visuals of the gaming engines, all would be well. But they do not. While some companies have actually increased their capabilities in this area, the vast majority are exaggerating them, many to the point of misleading people, creating false confidence and serious downstream problems. Few people are aware of this because they do not know there are gaps between the simulations or models and the real world, or that those gaps could be filled if different technology were used. Technology that comes from DoD and aerospace. Worst of all, many of these companies have come to realize these gaps and issues exist, but refuse to acknowledge or fix them because they do not want their customers to know the truth or question their actual understanding of the space, and they do not want to provide remedies on their own nickel or be set up for liability issues when real-world tragedies occur. When I have discussed this with simulation companies, several have told me they will fix the gaps when their customers figure out there are issues and pay for them to be fixed. The problem with this is that those issues will likely not be discovered until a succession of real-world tragedies forces a review of the various models' performance curves compared to the real world. When it comes to the use of the terms themselves, the absolute truth is legitimately fuzzy. Depending on the use cases or scenarios, all may be fine. …
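As a deliberately simplified illustration of the kind of model-versus-real-world gap I mean, compare a basic linear tire model to a Pacejka-style "magic formula" curve. The coefficients below are generic placeholder values I picked for the sketch, not anyone's product data; the point is that the cheap model tracks the higher-fidelity one in benign conditions and diverges badly right where the hard, safety-critical scenarios live.

```python
# Illustrative only: a simplified linear tire model vs. a Pacejka-style
# "magic formula" model. Coefficients are generic placeholder values.
import math

B, C, D, E = 10.0, 1.9, 1.0, 0.97  # assumed magic-formula shape coefficients

def pacejka(slip):
    """Normalized lateral force from a Pacejka-style curve (higher fidelity)."""
    return D * math.sin(C * math.atan(B * slip - E * (B * slip - math.atan(B * slip))))

def linear(slip):
    """Simplified model: force proportional to slip (what many fast sims use)."""
    return B * C * D * slip  # slope matched to the small-slip region

for slip in (0.01, 0.03, 0.05, 0.10, 0.20):
    hi, lo = pacejka(slip), linear(slip)
    print(f"slip {slip:.2f}: high-fidelity {hi:5.2f}  simplified {lo:5.2f}  "
          f"error {abs(lo - hi) / abs(hi) * 100:5.1f}%")

# The two agree for gentle maneuvers (small slip) and diverge sharply near and
# beyond the grip limit, i.e. exactly in the edge and emergency scenarios that
# matter most when validating an autonomous vehicle.
```

Run it and the error grows from a percent or two at small slip to several hundred percent near the limit; a vehicle "validated" against the simplified curve has effectively never seen the conditions that cause crashes.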
Article — ‘A legal first’: B.C. man accused of dangerous driving for sleeping in self-driving, speeding Tesla — https://globalnews.ca/news/7534323/a-legal-first-b-c-man-accused-of-dangerous-driving-for-sleeping-in-self-driving-speeding-tesla/
A 20-year-old named Learn Cai was criminally charged in Vancouver, Canada, for falling asleep at the wheel of a Tesla in "autopilot". He was also charged for speeding at 150 km/h.
Some Questions
· Why is a Tesla in “autopilot” going 150 km/h, especially with no driver input?
· I thought Teslas were supposed to slow down and stop if the system does not detect a driver driving?
· Were there alarms, and were they ignored?
Some Points
· Tesla's driver monitoring system is a reckless joke. There is no camera monitoring the driver. It relies on steering-wheel torque, which can be rigged with a water bottle, etc. (a rough illustration of the flaw is sketched below). …
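Here is a toy sketch of why a torque-only check is so easy to defeat. This is not Tesla's actual code, thresholds or signal processing, just an illustration of the general approach: any steady torque above a small threshold, including one produced by a weight wedged into the wheel, looks the same as a hand.

```python
# Toy illustration of a torque-only "hands on wheel" check and why a constant
# weight (water bottle) defeats it. Thresholds and signals are made up.
import random

TORQUE_THRESHOLD_NM = 0.3  # assumed: any torque above this counts as "hands on"

def torque_only_check(torque_samples):
    """Naive check: driver counted as present if recent torque exceeds a threshold."""
    return max(torque_samples) > TORQUE_THRESHOLD_NM

human_hand   = [abs(random.gauss(0.0, 0.6)) for _ in range(50)]   # varying human input
water_bottle = [0.5] * 50                                          # constant bias torque
hands_off    = [abs(random.gauss(0.0, 0.02)) for _ in range(50)]   # sensor noise only

print("human:       ", torque_only_check(human_hand))    # True (almost certainly)
print("water bottle:", torque_only_check(water_bottle))  # True  <- the problem
print("hands off:   ", torque_only_check(hands_off))     # False (almost certainly)

# A camera-based system would instead gate on gaze and eyelid state, which a
# bottle cannot fake; torque alone cannot tell a hand from a hanging weight.
```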
"Smart City" efforts appear to be significantly behind where they can and should be to ensure safe urban environments for new air and ground services and technology. We seem to be missing end-state design and implementation due diligence, whether for autonomous air and ground vehicles or those operated by a human. This includes use cases ranging from personal use to deliveries, inspections, ride sharing, air taxis, etc. The puzzling part is that, unlike creating autonomous vehicles, where machine learning and deep learning are still major challenges, the technology and know-how exist to design and deploy most of what is needed now, albeit from a variety of industries. (Which is part of the problem.) The most concerning part is the apparent lack of end-state designs, top-down systems engineering and deployment plans, including air and ground traffic control or "AWACS"-like capabilities. (At least as far as what the community is sharing, especially NASA and the FAA.) This is particularly concerning given how long it will take to design and field solutions, and given that there are already "driverless" vehicles and robots on our streets and in the air now, from ride sharing to delivery drones, air taxis and non-cooperative drones. It seems as if the industry is a bit Pollyanna and not giving nearly enough respect to Murphy, choosing to follow IT/Silicon Valley "Agile" methods that are often devoid of systems engineering, exception handling or what-if scenarios at a total-system level. Complex safety-based systems are not the time or place to "move fast and break things". (While waterfall users can try to know too much up front, many Agilists ignore too much up front.) One air tragedy, or a significant accident on the ground involving a child or family, and the whole thing comes to a grinding halt. Given an air safety rating of 6.4 sigma a couple of years ago, which meant no deaths from air travel in the US, followed by the recent Boeing disaster, seven people around the world killed needlessly by autonomous vehicles to date, and trust in the FAA and autonomous systems low, one would think the current approach and posture would be the opposite of what it is. …
Reference article — Autonomous vehicle makers should be held responsible for accidents, says Law Commission — https://snip.ly/vog5f2#https://thenextweb.com/shift/2020/12/18/autonomous-vehicle-makers-should-be-held-responsible-for-accidents-says-law-commission/
The law is meant to hold the autonomous vehicle makers responsible when the systems do not perform properly. But it is the part about adding time for handover that is like the meteor that ended the dinosaurs: both are very helpful to mankind, and in this case it saves the industry from itself.
“However, the Law Commission references a “transition period” of 10 to 40 seconds in which a driver would have to regain control of the vehicle. …
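To put that 10-to-40-second transition period in perspective, here is a back-of-envelope distance calculation. The speeds are my own illustrative choices (the 150 km/h echoes the Tesla story above), not figures from the Law Commission.

```python
# Back-of-envelope: how far a vehicle travels during the Law Commission's
# quoted 10-40 second handover "transition period".
# The speeds below are illustrative assumptions, not figures from the article.

def handover_distance_m(speed_kmh: float, transition_s: float) -> float:
    """Distance covered (meters) while the driver regains control."""
    return speed_kmh / 3.6 * transition_s

for speed in (50, 100, 150):   # km/h: city street, highway, the 150 km/h Tesla story
    for t in (10, 40):         # seconds: the Law Commission's quoted range
        print(f"{speed} km/h for {t:2d} s -> {handover_distance_m(speed, t):6.0f} m")

# e.g. 150 km/h for 40 s is roughly 1.7 km of travel with no one fully in control.
```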