By Michael Haines
Opinion

"Levels" of Driverless Autonomy Are Dangerous — Clarity Is Key

In 2016 we witnessed the first high-profile fatal accident involving a Tesla vehicle, where the driver had relinquished control to the autopilot. In that case, Tesla is denying liability as it — accurately — claims that the autopilot is sold as a “Level 2” device. This requires the driver to remain on alert and ready to respond if, for some reason, the autopilot fails to detect impending danger.

The trouble is, people are terrible at passive monitoring tasks. We get bored and distracted.

Tesla has released an upgrade, which it claims is even better and will avoid the problems of the earlier version. As a result, the driver will need to intervene far less often, and is all the more likely to become blasé. The unfortunate irony is that, as the device improves, it potentially becomes more dangerous. When it does fail, the distracted driver will have no hope of responding in time.

It is clear that the development of driverless cars is outpacing the ability of regulators to govern road safety.

In the US, the National Highway Traffic Safety Administration (NHTSA) and SAE International (a global association of more than 128,000 engineers and related technical experts in the aerospace, automotive and commercial-vehicle industries) have developed a set of guidelines. In addition, each state is writing its own legislation. In Europe, as elsewhere, various organisations and government bodies are grappling with how to define and regulate increasing levels of automation.

The problem is that the guidelines are far too general.

"There should never be any doubt as to who is in control."

This means that innovative companies like Tesla can be led down a path that seems to improve safety, but is actually dangerous.

For the sake of clarity, it would be better to recognise only two separate use cases: “driver-assist” and “driverless”.

In the first case — driver-assist — the car may control acceleration and braking to maintain speed and spacing (essentially active cruise control). However, the driver should never be permitted (even for a moment) to deliberately release control of steering. Nor should the car take over active steering, except to avert an accident. In this instance, the challenge is to monitor the driver’s behaviour and alertness, as well as the immediate surrounds, to ensure the driver is responding appropriately and, if not, to intervene. Intervention would include steering out of harm’s way, and perhaps even pulling over.
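To make this concrete, here is a minimal sketch of such an intervention policy in Python. The sensor fields, thresholds and action names are purely illustrative assumptions, not drawn from any manufacturer's actual system.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    hands_on_wheel: bool        # e.g. from a steering-wheel torque/touch sensor
    eyes_on_road: bool          # e.g. from a driver-facing camera
    seconds_inattentive: float  # how long the driver has been distracted

@dataclass
class SceneState:
    collision_risk: float       # 0.0 (clear) to 1.0 (imminent), from forward sensors

def driver_assist_action(driver: DriverState, scene: SceneState) -> str:
    """Decide what the driver-assist system should do on this control cycle.

    Steering stays with the driver; the car intervenes only to avert an
    accident or when the driver has clearly stopped paying attention.
    (All thresholds below are illustrative, not real calibration values.)
    """
    if scene.collision_risk > 0.8:
        return "emergency_brake_and_steer"    # steer/brake out of harm's way
    if not driver.hands_on_wheel or not driver.eyes_on_road:
        if driver.seconds_inattentive > 10.0:
            return "slow_down_and_pull_over"  # driver is not responding
        return "audible_warning"              # prompt the driver first
    return "maintain_speed_and_spacing"       # active cruise control only
```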

In the second case — driverless — the challenge for manufacturers is to expand the areas and conditions in which the car can operate in full control, with a managed handover between modes. This needs to operate like a handover between pilots, where the car alerts the person that it is approaching an area and/or conditions (such as impending rain) where the car is not rated to operate in driverless mode. The person will require time to orient themselves and to formally acknowledge they are back in control.

This may need to include monitoring of the person’s level of alertness before making the handover and, if there is any concern, having the car pull over. The problem could arise, for example, if the person has been in a deep sleep on a long highway journey.
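As a sketch of what that handover sequence could look like, consider the Python fragment below. The `car` interface, the timeout and the alertness checks are hypothetical assumptions, used only to illustrate the order of events: alert the person, wait for a formal acknowledgment, and otherwise pull over.

```python
import time

def request_handover(car, timeout_s: float = 60.0) -> None:
    """Hand control back to the person before leaving the driverless-rated zone.

    Illustrative only: `car` is a hypothetical interface, and the timeout,
    alertness check and fallback behaviour are assumptions, not a standard.
    """
    car.alert("Approaching area not rated for driverless mode. Please take over.")
    deadline = time.monotonic() + timeout_s

    while time.monotonic() < deadline:
        # The person must be awake, oriented, and must formally acknowledge.
        if car.driver_is_alert() and car.driver_acknowledged_control():
            car.set_mode("driver_assist")   # control formally handed over
            return
        time.sleep(1.0)

    # No confident handover: never leave any doubt about who is in control.
    car.pull_over_safely()
```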

There should never be any doubt as to who is in control.

A clear understanding of who — or what — is in control of the vehicle is important not only for operational but also legal reasons.

If it is the driver, they will be primarily responsible for any accident — unless they can show equipment failure. If it is the car, the manufacturer (and/or supplier and/or maintenance provider) will be liable.

No doubt, this takes the fun out of hands-free driving. But driver-assist technology is not there for fun — it is there to improve safety. As such, the autopilot must sit in the background, ready to respond to avoid a forward collision or any other threat it is programmed to manage.

Having the two classifications does not stop on-road testing of driverless mode.

It just means that, while in driver-assist mode, the manufacturer must put in place systems to compare what the driver actually does with how the car would have responded. With such a system, in a situation like the one that led to the Tesla accident, the software would detect that the driver braked unexpectedly on the highway. This would trigger an upload of the data, which would show in simulation that the autopilot had failed to detect the truck in its path, leading to the improvements just announced.
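A shadow-mode comparison of this kind could be as simple as the following Python sketch. The normalised brake signals and the disagreement threshold are illustrative assumptions, not a description of Tesla's actual logging.

```python
def shadow_disagrees(driver_braking: float, shadow_braking: float,
                     threshold: float = 0.5) -> bool:
    """Flag a disagreement between the human and the shadow autopilot.

    `driver_braking` and `shadow_braking` are normalised brake commands
    (0.0 = none, 1.0 = full); the threshold is an illustrative assumption.
    Returns True when the episode should be uploaded for offline analysis.
    """
    return abs(driver_braking - shadow_braking) > threshold


# Example: the driver brakes hard for a crossing truck that the shadow
# autopilot did not react to, so the discrepancy triggers a data upload.
if shadow_disagrees(driver_braking=0.9, shadow_braking=0.0):
    print("Upload sensor log: driver intervened where the autopilot would not have.")
```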

Under this scenario — as more and more learning and improvements take place — at some point, Tesla would become confident enough to release the autopilot for driverless mode. Perhaps, to start with, it would work only on specific highways, or in slow-moving traffic.

That’s when the fun starts.