Fatal Tesla Autopilot Accident Shows that Technology and Public Understanding Both Have a Long Way to Go

This article was originally published on LinkedIn Pulse.

We’ve grown used to the idea of self-driving or “driverless” cars in the realm of science fiction. Increasingly, that flight of fancy among novelists and screenwriters is merging with the real world. Every day there’s news about driverless car technology, whether it be the Google Car learning to recognize cyclists or announcements by automakers and technology companies about new collaborations intended to enhance safety on the road and make the trek between points A and B ever easier.

Sadly, leading the headlines over the past several days has been the announcement of the first fatality in a Tesla vehicle equipped with its Autopilot driver-assist system. Neither the driver nor the vehicle’s onboard sensors recognized the danger of a tractor-trailer turning across their path at an intersection, with deadly results.

Understanding What We’re Talking About

In the days since the news broke, much of the coverage has ranged from whether the Tesla incident would put the overall move toward driverless cars on hold (unlikely, according to observers) to simple clickbait (“Tesla Autopilot Kills Its Driver”) that calls to mind a Terminator movie.

Unfortunately, what the coverage has also done is create confusion about the technology we’re actually talking about. Headlines like “Self-driving Tesla was Involved in Fatal Crash, U.S. Says” misrepresent what Tesla’s Autopilot system actually is.

“Self-driving car” has suddenly become a catch-all term that covers every degree of vehicular automation, ranging from helping you parallel park to a fully autonomous vehicle. (Whether or not Tesla’s “Autopilot” feature branding is itself misleading is the subject for a completely separate debate.)

It’s important, within the scope of the discussions underway, to understand exactly what Tesla’s Autopilot is and where it fits on that spectrum. What Tesla offers is really an Advanced Driver Assistance System (ADAS) built on ultrasonic sensors, optical systems, a front-facing radar, and digitally controlled brakes.
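
To make the distinction concrete, here is a minimal sketch of the kind of forward-collision check a radar-based driver-assist system might run. Everything in it, from the function names to the 1.5-second threshold, is an illustrative assumption, not Tesla’s actual implementation:

    # Minimal sketch of a forward-collision check of the kind a radar-based
    # ADAS might run. All names and thresholds are illustrative assumptions,
    # not Tesla's actual implementation.

    BRAKE_TTC_SECONDS = 1.5  # assumed threshold below which the system brakes

    def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
        """Seconds until impact if the closing speed stays constant."""
        if closing_speed_mps <= 0:  # target is holding distance or pulling away
            return float("inf")
        return range_m / closing_speed_mps

    def should_auto_brake(range_m: float, closing_speed_mps: float) -> bool:
        """Decide whether to command the digitally controlled brakes."""
        return time_to_collision(range_m, closing_speed_mps) < BRAKE_TTC_SECONDS

    # A vehicle 30 m ahead, closing at 25 m/s (~56 mph): TTC = 1.2 s -> brake
    print(should_auto_brake(30.0, 25.0))  # True

The point is the division of labor: the system reacts to what its sensors report, but responsibility for the drive stays with the human.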

Autopilot was never intended to make the car a truly driverless vehicle, which is a very different beast when it comes to technology and expectations. Autonomous vehicles often incorporate a broader array of technology, including inertial measurement units (IMUs) or more advanced inertial navigation systems (INS), to provide the precision navigation and positioning information necessary for the vehicle to navigate on its own.
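
As a rough illustration of why that inertial data matters, here is a simplified two-dimensional dead-reckoning step of the sort an INS performs. Real systems work in three dimensions, correct for sensor bias, and fuse GNSS, so treat this only as a sketch:

    import math

    def propagate(x, y, heading_rad, speed_mps, yaw_rate_rps, accel_mps2, dt):
        """Advance the pose estimate one time step from inertial measurements."""
        heading_rad += yaw_rate_rps * dt   # integrate the gyro into heading
        speed_mps += accel_mps2 * dt       # integrate the accelerometer into speed
        x += speed_mps * math.cos(heading_rad) * dt
        y += speed_mps * math.sin(heading_rad) * dt
        return x, y, heading_rad, speed_mps

    # One second of travel at 20 m/s through a gentle left turn, sampled at 100 Hz
    state = (0.0, 0.0, 0.0, 20.0)
    for _ in range(100):
        state = propagate(*state, yaw_rate_rps=0.1, accel_mps2=0.0, dt=0.01)
    print(state)  # position and heading estimated purely from inertial data

When satellite positioning drops out – in a tunnel or an urban canyon, for example – integrating inertial measurements like this is what keeps the vehicle’s position estimate alive.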

Members of the media, the public, and most of all drivers need to recognize what these tools are designed for, as well as their limits and their risks.

The National Highway Traffic Safety Administration (NHTSA) has published one of several guides to the levels of vehicle automation and what each is expected to deliver (a short code sketch of the scale follows the list):

  • (Level 0): No-Automation. The driver is in complete and sole control of the primary vehicle controls – brake, steering, throttle, and motive power – at all times.
  • (Level 1): Function-specific Automation. Automation at this level involves one or more specific control functions. Examples include electronic stability control or pre-charged brakes, where the vehicle automatically assists with braking to enable the driver to regain control of the vehicle or stop faster than possible by acting alone.
  • (Level 2): Combined Function Automation. This level involves automation of at least two primary control functions designed to work in unison to relieve the driver of control of those functions. An example of combined functions enabling a Level 2 system is adaptive cruise control in combination with lane centering.
  • (Level 3): Limited Self-Driving Automation. Vehicles at this level of automation enable the driver to cede full control of all safety-critical functions under certain traffic or environmental conditions and in those conditions to rely heavily on the vehicle to monitor for changes in those conditions requiring transition back to driver control. The driver is expected to be available for occasional control, but with sufficiently comfortable transition time. The Google car is an example of limited self-driving automation.
  • (Level 4): Full Self-Driving Automation. The vehicle is designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip. Such a design anticipates that the driver will provide destination or navigation input, but is not expected to be available for control at any time during the trip. This includes both occupied and unoccupied vehicles.
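
For readers who think in code, the scale reduces to a simple ordered enumeration. The sketch below merely restates the list; the feature comments and the Autopilot comparison reflect this article’s discussion, not an official NHTSA mapping:

    from enum import IntEnum

    class NHTSALevel(IntEnum):
        NO_AUTOMATION = 0          # driver alone controls brake, steering, throttle
        FUNCTION_SPECIFIC = 1      # e.g., electronic stability control
        COMBINED_FUNCTION = 2      # e.g., adaptive cruise control + lane centering
        LIMITED_SELF_DRIVING = 3   # driver cedes control but must stay available
        FULL_SELF_DRIVING = 4      # vehicle handles the entire trip on its own

    # Where Autopilot sits on this scale is disputed, as the next section notes.
    tesla_view = NHTSALevel.COMBINED_FUNCTION
    ford_ceo_view = NHTSALevel.LIMITED_SELF_DRIVING
    print(ford_ceo_view - tesla_view)  # 1: the dispute spans a single level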

In his article on Slate.com, “Is Autopilot a Bad Idea?”, Will Oremus notes that there’s dispute over where Tesla’s Autopilot fits on this spectrum (Tesla claims that Autopilot is Level 2; Ford Motor Company’s CEO believes it’s really a Level 3). Oremus also observes that other companies, such as Ford and Google, are avoiding the Level 3 style of support, precisely because of the risk that drivers will assume that the car is going to be handling all aspects of driving and decision making at all times, which is not the case.

[A]t some point in its testing, Google decided that Level 3 automation was not a good idea. The problem? When machines are doing most of the routine work, humans become the weak link. Their attention naturally wavers, leaving them unready to take over in the sort of emergency that would necessitate human involvement. For that reason, Google fundamentally rethought its approach to vehicle automation and decided to devote all its resources to Level 4 technology. Accordingly, it came out with a self-driving car prototype that was truly “driverless”—it doesn’t even have a gas pedal, brake, or steering wheel. Taking the human out of the loop, Google came to believe, was the only way to make self-driving cars truly safe.

A Matter of Trust

Drivers trust that their vehicles will work properly – that their airbags will deploy, that their brakes will engage, and that the motor will start. With 100+ years of automotive experience behind us, there’s a track record upon which people can have faith. Any new technology being integrated into cars is usually tested to an incredible degree before it hits the road. Questions are already being raised about whether Tesla erred in allowing drivers to use a feature that Tesla itself says is “beta”.

People make the assumption that once a piece of tech is integrated into a car, it’s roadworthy. But their inexperience with the tech can also lead to misplaced trust, as noted in Popular Mechanics:

We’re at a dangerous moment in vehicular autonomy. Our cars can’t drive themselves in every scenario; they’re filled with imperfect driver aids that still require human oversight. Often these systems are so good that we trust them too much—enough to induce the sort of dependence that seems to have factored into the Tesla crash.

The Undeniable Benefits of Intelligent, Autonomous Vehicles and the Road Still Ahead

Among the promises of driver-assist systems and, eventually, truly self-driving cars are greater safety and increased efficiency on the roads. According to NHTSA’s July 2016 report, 35,200 people were killed in traffic accidents in the United States in 2015, or roughly 1.12 fatalities per 100 million vehicle miles traveled. This death, the first in a Tesla vehicle, came after 130 million miles driven with Autopilot activated.
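
The comparison is simple enough to check by hand; a quick sketch, with the national mileage total back-derived from the cited rate (so treat it as an approximation):

    # Back-of-the-envelope comparison of the two fatality rates cited above.
    US_DEATHS_2015 = 35_200
    US_MILES_100M = 31_480      # ~3.148 trillion vehicle miles, approximated
    AUTOPILOT_DEATHS = 1
    AUTOPILOT_MILES_100M = 1.3  # 130 million miles with Autopilot engaged

    print(f"US fleet:  {US_DEATHS_2015 / US_MILES_100M:.2f} deaths per 100M miles")           # ~1.12
    print(f"Autopilot: {AUTOPILOT_DEATHS / AUTOPILOT_MILES_100M:.2f} deaths per 100M miles")  # ~0.77

A single fatality is far too small a sample to declare Autopilot safer than human drivers, but it frames the order of magnitude at which the technology already operates.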

While the death in May was tragic, the trend thus far shows that driver-assist systems are already a boon to drivers, even though we’re still in the early years of the technology. In many instances, it’s actually the human drivers who pose the greatest risk:

The self-driving car, that cutting-edge creation that’s supposed to lead to a world without accidents, is achieving the exact opposite right now: The vehicles have racked up a crash rate double that of those with human drivers.

The glitch?

They obey the law all the time, as in, without exception…

Driverless vehicles have never been at fault, the study found: They’re usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to autonomous vehicles that always follow the rules and proceed with caution.

This is one of the many elements that will be addressed in years to come, along with the potential for increased network connectivity between individual cars and even the road itself, almost in the fashion of an air traffic control system. Much as a modern aircraft can virtually take off, travel to its destination, and land itself without human intervention, driverless cars will need huge improvements in road infrastructure and network communications if they are to succeed.
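
What that connectivity might look like at the message level is easy to sketch. The fields below are invented for illustration; real vehicle-to-vehicle standards, such as the SAE J2735 Basic Safety Message, define their own formats:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class VehicleStatus:
        vehicle_id: str
        lat: float
        lon: float
        speed_mps: float
        heading_deg: float
        braking: bool

    def encode(status: VehicleStatus) -> bytes:
        """Serialize a status update for broadcast to nearby cars and roadside units."""
        return json.dumps(asdict(status)).encode("utf-8")

    print(encode(VehicleStatus("veh-001", 41.49, -71.31, 24.6, 92.0, False)))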

The emergence of driver-assist technology and the tremendous advances in driverless or autonomous vehicles also portend a revolution in how we travel.

Aging baby boomers with declining driving skills or disabilities will be able to climb into a driverless car, tell it to take them to their children’s or grandchildren’s home, and enjoy renewed mobility to visit family easily.

The risk of accidents will decline, which, in addition to protecting the lives of drivers and passengers, will reduce congestion on the highways (no more fender benders on the side of the road, with gawking drivers slowing traffic for miles behind). And 16-year-olds will no longer need to fear the parallel parking portion of the driver’s test.

For that promise to be achieved, the technology needs to continue to evolve. It is already remarkable – the blend of software, lasers, radar, cameras, inertial systems, ultrasonic sensors, and more is mind-bending.

Here at KVH, we’re proud to have played a role from the very start, when our fiber optic gyros were incorporated into robotic vehicles for the 2005 DARPA Grand Challenge. Our inertial systems are already being used in the development and testing of new autonomous vehicle prototypes, and we’re creating new low-cost inertial technology specifically for self-driving cars.

That said, we know there is still much work to be done. The Tesla crash was a tragic accident that could have been avoided. It is also one that illustrates how far the technology and the public’s understanding of it still need to go. And just as importantly, it is one that we believe will help make our roads and vehicles safer in the future.

Following the news of the Tesla crash, a colleague noted that she’d lost her mother 20 years ago in a fatal car crash:

I just have a different view of the common perception that driverless cars are dangerous; I understand how dangerous human driving can be.

As the technology develops, the public and media will gain a greater understanding of the nuances of these systems, what they really can do, where the technology has to yield and the human needs to get involved, and how these advances will benefit all of us.

 
