Automatic Lane Keeping System (ALKS) — UK Government and TRL Putting People in Harm’s Way Needlessly

Michael DeKort
4 min read · Apr 29, 2021

The TRL report — Safe performance of other activities in conditionally automated vehicles

https://trl.co.uk/publications/safe-performance-of-other-activities-in--conditionally-automated-vehicles

The government report — Safe Use of Automated Lane Keeping System (ALKS)

Summary of Responses and Next Steps https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/980742/safe-use-of-automated-lane-keeping-system-alks-summary-of-responses-and-next-steps.pdf

First, in spite of the name Automated Lane Keeping System (ALKS), this system is not just an L1/ADAS driver-assist system. It allows the driver to cede steering control to the vehicle, which includes lane changes and hands-free, non-attentive driving. It is not a system confined to making sure you stay in your lane, while you retain control, by providing alarms, vibrations, or simple corrections.

This report is a dangerous wolf in sheep’s clothing. It mentions critical safety issues and leads one to believe it has your back. It does not. It cherry-picks data and focuses on the rosy statistical end of the safety and due-diligence scales: it maximizes the statistically common “happy path” while minimizing the statistically rare edge, corner, and crash scenarios, which are precisely where efforts like this should be focused. That last “10%” matters most. The root cause of this deception is in the section on trust. People must trust ALKS to use it. This report misleads people into false confidence by posing as a competent and ethical arbiter that is not just keeping them safe but making them safer. Nothing could be further from the truth.

An example is the handover and situational-awareness timing issue. The report acknowledges that there are situational-awareness timing issues associated with handover, but then cherry-picks scenarios by avoiding edge cases and many accident scenarios. This misleads people and creates dangerous false confidence, leading folks to believe all handovers can be made safe. They cannot be in time-critical scenarios, because the time needed to regain proper situational awareness cannot be provided. Stating there is no single takeover time is correct. However, a plethora of studies has shown the time needed ranges from 3 to 45 seconds, depending on the scenario or scenario thread. The report stays far away from times over 10 seconds. (Admittedly, the 60 km/h speed limit would likely not involve the higher end of that range.)

The 30-second driver-awareness check alarm is a Tesla-like joke. How far does a car go at 60 km/h in 15–30 seconds?

This statement shows TRL understands that what I am saying is correct, yet it plows forward anyway: “Physical and sensory disengagement with the driving task, and adoption of NDRTs might leave drivers ill-prepared for transition demands and resumption of control in certain circumstances. However, aspects of the ALKS system, as currently specified, should minimize some potential risks.” After this, the report notes that if speed and complexity increase, ALKS may be too dangerous to use: ODDs where there are pedestrians and other objects nearby, where there is no divided median, and where speeds are higher. The problem is that other vehicles can cause issues, and it is not impossible for pedestrians, animals, or other objects to appear in the currently associated ODD. (The second report I link to above includes a survey in which others made observations similar to mine.)

Beyond this is the double-edged 10-second handover time that is built in. While it may be helpful at 60 km/h, it also largely invalidates the use of the mode. Why? Because you need a 10-second gap to other cars at 60 km/h to ensure 10 seconds is available should something go wrong. How often will that gap be provided? And how often will the mode recklessly pop in and out as that threshold is crossed and uncrossed? Where is the requirement for these systems to prove their capabilities? Where are the requirements for critical artifacts like scenarios tested, and disengagements and their root causes? Where is the proof that deep-learning systems properly handle a requisite number of objects and degraded objects, rather than being confused by, misidentifying, or ignoring them?

ALKS and SAE Levels 2 and 3 should not exist. This effort makes things far worse. It creates the very situation it says it wants to avoid, will harm people as a result, and will hasten public mistrust and the industry’s collapse.

Note: The ONLY government organization on the planet that I am aware of that understands all of this is US DOT VOICES. Please check them out.

USDOT introduces VOICES Proof of Concept for Autonomous Vehicle Industry-A Paradigm Shift?

· https://imispgh.medium.com/usdot-introduces-voices-proof-of-concept-for-autonomous-vehicle-industry-a-paradigm-shift-87a12aa1bc3a

More in my articles here

The Autonomous Vehicle Industry can be Saved by doing the Opposite of what is being done now

· https://medium.com/@imispgh/the-autonomous-vehicle-industry-can-be-saved-by-doing-the-opposite-of-what-is-being-done-now-b4e5c6ae9237

UK Government is Ending (and Saving) the Autonomous Vehicle Industry as We Know It

· https://imispgh.medium.com/uk-government-is-ending-and-saving-autonomous-vehicle-industry-as-we-know-it-73d5be453330

SAE Autonomous Vehicle Engineering Magazine — Simulation’s Next Generation

· https://www.sae.org/news/2020/08/new-gen-av-simulation

The Hype of Geofencing for Autonomous Vehicles

· https://medium.com/@imispgh/the-hype-of-geofencing-for-autonomous-vehicles-bd964cb14d16

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS.

Industry Participation

- Founder SAE On-Road Autonomous Driving Simulation Task Force

- Member SAE ORAD Verification and Validation Task Force

- Stakeholder USDOT VOICES (Virtual Open Innovation Collaborative Environment for Safety)

- Member SAE G-34 / EUROCAE WG-114 Artificial Intelligence in Aviation

- Stakeholder for UL4600 — Creating AV Safety Guidelines

- Member of the IEEE Artificial Intelligence & Autonomous Systems Policy Committee

- Presented with the IEEE Barus Ethics Award for Post-9/11 DoD/DHS Whistleblowing Efforts
