The past few weeks have not been kind to Tesla’s Autopilot feature. Three separate high-profile crashes involving the semi-autonomous driving system have been reported since June 30th, including one fatality, raising serious questions about self-driving cars and their implementation on public roads. Are autonomous cars safe? Are regulators doing enough to protect the public? And how will these crashes affect the advance of self-driving technology?

Well, I’m here to tell you the hard truth – what we’re witnessing now are the growing pains of our inevitable autonomous future.

Perhaps it’s crass to label a deadly car accident as part of the “growing pains” of technological progression, but the reality is that any tech has the potential to be dangerous. Add in a few tons of metal traveling at highway speeds, and you raise the stakes. Of course, the fearmongers will be quick to point fingers and assign blame, but when considered logically, each of these three incidents is the unavoidable result of humanity’s perpetual experiment for a better tomorrow.

Basically, it’s like this – it was bound to happen, it’ll happen again, and there’s nothing you can do about it.

The Story So Far

Three High-Profile Crashes

On June 30th, it was reported that 40-year-old Joshua Brown, an active member of the Tesla community, was killed when his 2015 Model S collided with an 18-wheeler semi on U.S. 27 near Williston, Florida. The impact tore the roof off the Tesla sedan as it passed underneath the trailer. The incident occurred on May 7, but did not come to light until late last month.

Then on July 6th, another incident was reported, this time involving a Tesla Model X in Pennsylvania. The SUV apparently collided with the guardrail, bounced into the concrete median, and rolled onto its roof. Fortunately, both the driver and passenger were uninjured.

Finally, a third incident was reported earlier this week involving a second Model X. The incident occurred in Montana, and reportedly, the SUV swerved into a series of wooden stakes, mangling the right side of the car. Both the driver and passenger were uninjured.

In each of the three incidents, Autopilot mode was supposedly engaged at the time of the crash.

Tesla Responds

Following the revelation of the Model S crash in Florida and the death of Joshua Brown, Tesla published a blog post in defense of its Autopilot feature, stating, “This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the U.S., there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles.”
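For context, those figures can be put on a common footing by normalizing each to fatalities per 100 million miles driven. The short Python sketch below is just that bit of arithmetic, using only the numbers quoted in Tesla’s post; the comparison is, of course, only as meaningful as the underlying figures.

```python
# Rough comparison of the fatality rates Tesla cites, normalized to
# deaths per 100 million miles driven. The mileage figures are taken
# straight from the blog post quoted above; this is illustrative
# arithmetic only, not an independent safety analysis.

MILES_PER_FATALITY = {
    "Autopilot (Tesla's figure)": 130_000_000,
    "All U.S. vehicles": 94_000_000,
    "Worldwide": 60_000_000,
}

for label, miles in MILES_PER_FATALITY.items():
    rate = 100_000_000 / miles  # fatalities per 100 million miles
    print(f"{label}: {rate:.2f} fatalities per 100 million miles")
```

Run as-is, the Autopilot figure works out to roughly 0.8 fatalities per 100 million miles, against about 1.1 for U.S. driving overall and 1.7 worldwide.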

The post continues: “What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.”

The blog post acknowledges that Autopilot is considered “an assist feature” that’s still in the “public beta phase,” and that when activated, drivers are notified to keep their hands on the steering wheel and maintain control at all times.

Following the Pennsylvania Model X rollover, Tesla was unable to confirm whether or not Autopilot was actually engaged at the time of the crash. The driver was later charged with careless driving, raising further doubts as to whether the autonomous drive system played any role in the incident.

On Tuesday, Tesla issued a statement regarding the Montana crash, saying, “This vehicle was being driven along an undivided mountain road shortly after midnight with autosteer enabled. The data suggests that the driver’s hands were not on the steering wheel, as no force was detected on the steering wheel for over two minutes after autosteer was engaged (even a very small amount of force, such as one hand resting on the wheel, will be detected). This is contrary to the terms of use that are agreed to when enabling the feature and the notification presented in the instrument cluster each time it is activated.”

The statement goes on to say that the vehicle repeatedly warned the driver to retake control, but that the driver failed to do so, leading to the crash.

The statement also points out that autosteer is “best suited either for highways with a center divider or any road while in slow-moving traffic,” reiterating that the feature should not be used “at high speeds on undivided roads.”
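Tesla hasn’t published the logic behind the behavior described in that statement, but the pattern it outlines (sense torque on the steering wheel, warn when none is felt for a while, eventually hand control back to the driver) is a straightforward watchdog. The sketch below is a hypothetical illustration of that pattern; the thresholds, function names, and sensor hooks are invented for the example and are not Tesla’s implementation.

```python
import time

# Hypothetical hands-on-wheel watchdog: if no torque is sensed on the
# steering wheel for too long while autosteer is active, escalate from a
# visual warning to disengagement. All thresholds and hooks are invented
# for this sketch.

WARN_AFTER_S = 30          # show a "hold the wheel" prompt after this long
DISENGAGE_AFTER_S = 120    # hand control back to the driver after this long
TORQUE_THRESHOLD_NM = 0.1  # even a hand resting on the wheel registers torque

def monitor_hands_on_wheel(read_wheel_torque, warn_driver, disengage):
    """Poll steering-wheel torque and escalate when the driver lets go."""
    last_torque_time = time.monotonic()
    while True:
        if abs(read_wheel_torque()) > TORQUE_THRESHOLD_NM:
            last_torque_time = time.monotonic()  # driver is holding the wheel
        hands_off = time.monotonic() - last_torque_time
        if hands_off > DISENGAGE_AFTER_S:
            disengage()      # give up and require the driver to take over
            return
        if hands_off > WARN_AFTER_S:
            warn_driver()    # prompt repeats until torque is sensed again
        time.sleep(0.1)
```

The core idea is simply a timer that resets whenever torque is sensed on the wheel and escalates when it isn’t.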

The latest news is that Tesla is prepping another blog post outlining how Autopilot works and how drivers should use it, information that’s already available in the Tesla owner’s manual.

Government Investigation

The National Highway Traffic Safety Administration (NHTSA) is currently conducting two investigations into the automated functions of the Tesla Autopilot system and their possible role in the Model X rollover crash in Pennsylvania and the fatal Model S crash in Florida. The National Transportation Safety Board (NTSB) is also looking into the Florida crash.

Widespread Criticism

Unsurprisingly, the media was quick to criticize Tesla for what’s perceived as irresponsible behavior in the rollout and marketing of its “unproven” Autopilot technology.

Consumer Reports, for example, lambasted the automaker for promoting “a dangerously premature assumption that the Model S was capable of truly driving on its own,” and argued that the public shouldn’t be subjected to the testing of any technology still in the “beta” phase.

Many critics, including Consumer Reports, have called on Tesla to rename Autopilot to more accurately reflect the driver’s responsibility while the system is active, as well as provide training on its proper use. Some have even demanded Tesla disable the Autopilot system altogether.

Meanwhile, Tesla isn’t budging, stating it won’t disable the feature and will instead continue to refine the system based on internal testing and real-world data, rather than the advice of “any individual or group.”

Tesla isn’t the only carmaker associated with the advent of autonomous passenger vehicles. Audi, BMW, Infiniti, Mercedes-Benz, Volvo – each is working furiously towards its own system for consumers. The difference is this – Tesla is by far the most aggressive.

While the competition is content to introduce the tech in a piecemeal fashion (automatic braking, adaptive cruise control, etc.), Tesla is one of the first to offer a single, cohesive package. Furthermore, the critics are right to call the company out for marketing Autopilot as a total self-driving tool, even though the fine print clearly states that the human behind the wheel is ultimately responsible for control of the vehicle.

So then – what we have here is an unorthodox, tech-savvy, upstart carmaker proudly leading the charge into the next phase of transportation. But here’s the thing – none of this is Tesla’s fault.

Why Tesla, And Why Now?

While we’ll have to wait until the NHTSA and NTSB conclude their reports for official confirmation, I’m confident the Autopilot system won’t be labeled as the cause of any of the three crashes currently in the headlines.

Why?

Let’s look at the Florida crash first. Tesla acknowledges Autopilot was engaged at the time, but neither the onboard sensors nor the driver saw the semi crossing the highway. The brakes were never applied. So if you take Autopilot out of the equation, would the outcome be any different?

Perhaps the driver was distracted, content that the autonomous system would handle everything without his inputs (a likely scenario considering he posted a video of Autopilot avoiding a collision only a month before he was killed). Either way, the fault is still attributable to the driver, not the system.

In Pennsylvania, it’s still unclear whether Autopilot was even engaged at the time of the crash, but considering the driver was charged with careless driving, it’s quite likely operator error played a significant role. The same goes for the Montana crash, where the driver engaged the system on exactly the sort of road where Tesla advises against using it.

In each scenario, it was the brain behind the wheel calling the shots, not the Autopilot.

Our Inevitable Autonomous Future

Unfortunately, these three most recent Autopilot incidents are just the tip of the iceberg when it comes to controversy and autonomous vehicles. Their occurrence, and indeed the future occurrence of similar crashes and tragedies, is the inevitable result of humanity’s push for more convenience, more efficiency, and more safety. A fully autonomous future promises all those things, but how we get there from here is going to be messy.

That’s just the nature of the beast, and it’s entirely reasonable to assume there will be future incidents where the autonomous system is completely at fault.

The reason is simple. When dealing with something as complicated as driving, it’s literally impossible to test for every single possible scenario in a lab. The real world just has too many surprises in store.

I’m reminded of a recall that Acura issued last year that saw 48,000 MDX and RLX models sent in for a malfunctioning Collision Mitigation Braking System. When working properly, the system would use radar to identify a potential collision, then apply the brakes to stop short. However, an issue arose wherein if the system detected a vehicle accelerating in front while traveling parallel to an iron fence or metal guardrail, it would unnecessarily apply the brakes, potentially causing a rear-end collision.
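To make that concrete, a forward-collision check of this general kind often reduces to a time-to-collision test on radar returns. The sketch below is a generic illustration, not Acura’s actual algorithm, and every threshold and field name in it is an assumption; it simply shows the shape of the decision and why a roadside return that isn’t filtered by lateral position can trigger phantom braking.

```python
from dataclasses import dataclass

@dataclass
class RadarTarget:
    range_m: float            # distance ahead of the car
    closing_speed_mps: float  # positive = we are closing on the target
    lateral_offset_m: float   # distance left/right of our lane center

# Thresholds are invented for illustration; real systems tune these carefully.
TTC_BRAKE_THRESHOLD_S = 2.0   # brake if a predicted collision is this close
LANE_HALF_WIDTH_M = 1.8       # ignore targets well outside our lane

def should_apply_brakes(target: RadarTarget) -> bool:
    """Very simplified forward-collision check based on time to collision."""
    if target.closing_speed_mps <= 0:
        return False  # target is pulling away; no collision predicted
    time_to_collision = target.range_m / target.closing_speed_mps
    # Without this lateral check, a strong return from a guardrail or fence
    # running alongside the car could be mistaken for an in-path obstacle
    # and trigger unnecessary braking.
    in_our_path = abs(target.lateral_offset_m) < LANE_HALF_WIDTH_M
    return in_our_path and time_to_collision < TTC_BRAKE_THRESHOLD_S
```

As a quick usage example, should_apply_brakes(RadarTarget(range_m=25.0, closing_speed_mps=15.0, lateral_offset_m=0.2)) returns True, while the same return shifted three meters to the side does not.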

This scenario may sound pretty basic, but it’s a good example of a common occurrence that can go unidentified before technology is released to the public.

If the public demands autonomous cars (and it does), it’s going to have to be comfortable with the fact that, to some extent, the system is imperfect. That includes anyone on the road, not just those driving an autonomous car, as the transitional phase to full autonomy will be gradual, with self-driving cars sharing the road with fully manual vehicles.

And that’s where it can get messy.

Adding to the danger is inherent technology complacency. As self-driving cars become more commonplace, folks will become overly reliant on the technology and start to abuse it. It’s something we’re already seeing today, as videos of Autopilot pranks and drivers sleeping in traffic surface on YouTube.

But that’s just how it goes, and you gotta take the good with the bad. Of course, pinpointing the benefits of autonomous systems will be a lot trickier. We’ll eventually see them in the statistics as the tech becomes more widespread, but for now, tragedies like the Joshua Brown story make the headlines.

And to answer the questions posed in the introductory paragraph –

Are autonomous cars safe?

Yes, if they are used properly, autonomous cars are actually safer than fully manual cars.

Are the regulators doing enough to protect the public?

As is often the case, the bodies regulating autonomous cars are completely clueless and reactionary. Odds are we’ll see more government involvement in the near future, but for now, the biggest check comes from the threat of a lawsuit and public perception of what is and what isn’t safe.

How will these crashes affect technological progression?

If anything, they’ll speed up progress. Demand for the benefits of autonomy isn’t going away, even in the face of potentially lethal danger, and with each incident comes an improvement, be it to the system or to the way it interacts with the operator.

Conclusion

A fully autonomous future is inevitable, but we won’t get there overnight. Personal and corporate greed, unforeseen circumstances, apathy, and miscalculations all stand in the way. But, as has happened in the past, humanity will push past the roadblocks and strive for something better.