At this point, you already know about Uber’s fatal crash, and you know that neither the vehicle (the artificial intelligence) nor the person behind the wheel applied the brakes. Well, a report from The Information cites sources close to the matter, saying that the software identified the cyclist but chose to ignore it. Let me repeat that. The car’s sensors realized there was a pedestrian in the road, but the software decided not to react right away.

This sounds like a clear-cut case of AI taking the chance to kill (let’s not forget about that robot that said it wanted to kill the human race), but the truth is that Uber’s software was “tuned” to ignore false positives. So, what is a false positive? Think about a plastic bag in the road or somebody’s old beer can rolling around in the street. It happens, and we all ignore those things too. Uber claims this is simply a case of tuning gone wrong; in other words, Uber’s software was set to react less to certain objects in the road. So much for erring on the side of caution.
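To see how “tuning out false positives” can go wrong, here’s a deliberately simplified sketch. Uber’s actual software is not public, so everything below — the `Detection` class, the `should_brake` function, and the threshold values — is a made-up illustration of the general idea: a confidence threshold that is raised to suppress plastic-bag false alarms can also suppress a real pedestrian detected with middling confidence.

```python
# Hypothetical sketch only; none of these names come from Uber's software.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the perception system thinks the object is
    confidence: float  # how sure it is, from 0.0 to 1.0

def should_brake(detection: Detection, threshold: float = 0.9) -> bool:
    """Return True if the object warrants an emergency reaction.

    Raising `threshold` filters out false positives (plastic bags,
    rolling cans), but it also raises the risk of filtering out a
    true positive, like a pedestrian detected with low confidence.
    """
    return detection.confidence >= threshold

# A bag flapping across the road is correctly ignored...
bag = Detection(label="unknown", confidence=0.4)
# ...but an uncertain pedestrian detection gets ignored too.
pedestrian = Detection(label="pedestrian", confidence=0.7)

print(should_brake(bag))         # False
print(should_brake(pedestrian))  # False: a tuned-out true positive
```

The point of the sketch is that a single number, chosen to make the ride smoother, quietly trades false alarms against missed pedestrians.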

No, AI Isn’t Trying to Kill Us Yet

This is just an issue with software design and how the software was tuned, but it does give us a huge glimpse of how easy it is for artificial intelligence, or in this case autonomous cars, to turn into pedestrian-killing machines that feel zero remorse. Since the report from The Information, Uber has released the following statement:

“We’re actively cooperating with the NTSB in their investigation. Out of respect for that process and the trust we’ve built with NTSB, we can’t comment on the specifics of the incident. In the meantime, we have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture. Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.”

Of course, that’s a bunch of PR and an attempt to save face, but the truth is, these kinds of things are bound to happen, and this isn’t the last time something like this will occur. Uber is, of course, committed to self-driving cars but, in the end, autonomous technology just isn’t there yet. So, no, AI hasn’t set out to kill us yet, and this isn’t SkyNet in the making, but if simple mistuning can cause this to happen, how easy will it be for machines to really make this decision for themselves, for whatever reason? What about hacking? Will hackers be able to send out a virus that turns off a self-driving car’s ability to recognize pedestrians or, even worse, makes it go rogue and start running people down? These are questions we all need to ask ourselves.

Sure, none of this is really all that possible at this point, but it’s something that could be in our future if automakers aren’t careful. The sheer fact that Uber’s “safety” driver didn’t do their job (Facebook was just too important, I guess) goes to show how little thought automakers and those developing autonomous technology have given to human oversight. A single miswritten line of code, a semicolon out of place, or an inattentive driver at the wheel (in a self-driving car, no less), and people die.

But, I digress. In the end, this is all about some improperly tuned software and a safety driver who couldn’t be bothered to do their job. Self-driving cars aren’t out to kill us yet, but it’s probably best to keep looking both ways before you cross the street.

Video of Uber’s Fatal Accident
