<p>This Tesla Model S crashed on May 7, 2016, while in self-driving mode, killing its driver, Joshua Brown. (<em>Photo: The Star</em>)</p>

Autonomous technology may encourage a false sense of security

Is the public over-estimating the capabilities of today’s autonomous and semi-autonomous vehicles?

While many are hailing the potential benefits of fully- and semi-autonomous vehicle systems, Andrew Silver – co-founder and chief technology officer for Tango Networks, a mobile device management system developer – argues in the first of two guest columns that the technology might actually be creating more distracted driving problems, not fewer. His second column will address potential solutions to such distraction.

The idea of self-driving cars has captured the popular imagination, promising to cut accidents and to stem the surge of distracted driving fatalities.

But is the public over-estimating the capabilities of today’s semi-autonomous vehicles?

In June, the U.S. National Transportation Safety Board (NTSB) released findings into the May 2016 fatal crash of a Tesla that killed the driver.

The driver had been using Tesla’s “autopilot” driving feature when the car slammed into a tractor-trailer that was crossing a highway in Florida.

The agency reported that the driver did not adequately respond to the car’s repeated warnings to place his hands on the steering wheel.


That report also deepens the mystery about what the driver was doing at the time of the crash.

In fact, one witness claimed that the driver might have been watching a movie. But the NTSB reported no evidence that this was the case.

What seems apparent, based on data extracted from the car, is that the driver may have had an unrealistic expectation of the car’s Autopilot and its level of driving automation.

“The data shows that out of those 37.5 minutes during which the Autopilot was active,” the report explained, “the system did not detect driver’s hands on the steering wheel for approximately 37 minutes.”
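The report’s figures imply the driver’s hands were off the wheel for nearly the entire engagement, a quick back-of-the-envelope check:

```python
# Figures quoted from the NTSB report above
autopilot_active_min = 37.5   # total minutes Autopilot was active
hands_off_min = 37.0          # minutes with no hands detected on the wheel

hands_off_fraction = hands_off_min / autopilot_active_min
print(f"Hands off the wheel ~{hands_off_fraction:.0%} of the time")
```

That works out to roughly 99 percent of the time Autopilot was engaged.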

As a result of this finding, we must ask this question: Has the hype around “self-driving” vehicles created a false sense of security among drivers that can lead to very risky driving behaviors?

If it has, which safety-enhancing technologies can we implement to reduce the potential for these behaviors to have deadly results?

Distracted Driving is on the Upswing

The Tesla Autopilot feature, along with the autonomous car initiatives of Google, Uber and a host of other companies, has arrived at a time when the issue of distracted driving has become critical.

In the U.S., motor vehicle deaths are rising at the most dramatic rate reported in nearly half a century, according to data released by the National Highway Traffic Safety Administration (NHTSA).

In analyzing the trend, the National Safety Council (NSC) points to distracted driving – including mobile phone calls and texting – as one of the most significant contributors to accident risk.

The distracted driving issue has spawned legislation to criminalize deliberate use of distracting devices, like texting while driving. But laws and penalties can only go so far in deterring risky distracted driving behaviors.

Publicity around automated driving has hailed it as the “holy grail” for solving the distracted driving issue. Yet this automated driving trend could potentially worsen the problem in the near term, according to a report in MIT Technology Review, published by the Massachusetts Institute of Technology.

Why? Because today’s semi-autonomous cars are a long way from “automated driving” or “driverless vehicles,” despite how they may be regarded in the popular imagination.

In fact, there is a wide range of capabilities in automated driving systems. Yet a lack of appreciation for these distinctions may lead some people to think they can engage in risky behaviors like texting while a vehicle is on “autopilot.”

Spectrum of Autonomy

Last year the NHTSA adopted a framework describing six levels of driving autonomy. As defined by SAE International, a global association of vehicle engineers formerly known as the Society of Automotive Engineers, there are six levels within the international standard for automated driving:

  • Level Zero: No Automation. The human does all the driving all the time.
  • Level One: Driver Assistance. The vehicle’s system provides assistance with steering or acceleration and braking in certain circumstances, such as when something blocks the road.
  • Level Two: Partial Automation. The system performs steering, acceleration and braking with a human monitoring the road and environment.
  • Level Three: Conditional Automation. The system performs the driving tasks and monitors the environment while the human stands by to intervene if the system requests it.
  • Level Four: High Automation. The system performs all driving tasks and can continue to do so even if a human driver fails to intervene when requested.
  • Level Five: Full Automation. The vehicle does all driving all the time.
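The six levels above can be sketched as a simple lookup. This is a hypothetical illustration of the taxonomy, not any official encoding from the SAE standard:

```python
# Hypothetical sketch of the six SAE automation levels described above,
# mapping each level to its name and to whether a human must remain
# ready to take over the driving task.
SAE_LEVELS = {
    0: "No Automation",
    1: "Driver Assistance",
    2: "Partial Automation",
    3: "Conditional Automation",
    4: "High Automation",
    5: "Full Automation",
}

def human_must_stand_by(level: int) -> bool:
    """Levels 0-3 require an alert human ready to intervene; 4-5 do not."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return level <= 3

# A Level Two system such as Tesla's Autopilot still needs the driver
print(human_must_stand_by(2))   # prints True
print(human_must_stand_by(5))   # prints False
```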

Today’s production vehicles are a long way from “full automation” as defined in the standard. In the case of Tesla, the system requires drivers to keep their attention on the road and hands on the wheel even when Autopilot is engaged, which means the system is Level Two at most.

The standard also makes it clear why today’s semi-autonomous vehicles are no cure for the problem of distracted driving. In fact, levels zero through three require the human to be alert and ready to take control immediately if requested.

So it will be some time before common production vehicles will have autonomous driving systems that are sophisticated enough to solve the distracted driving issue, among other challenges.
