Will Your Car Be Programmed To Kill You?
An ethical dilemma is brewing, and it could have dire effects on the future of autonomous cars. By Robert Moore.
Remember that movie I, Robot? You know, the one where robots attempt to take over in order to protect humans at all costs? Well, in that movie, the main character, portrayed by Will Smith, had a sincere hatred of robots even before they started revolting against the human population, and he had good reason. At one point in the movie, it is revealed that a robot let a child drown while saving Smith’s character, simply because he was a little more likely to survive. The robot had no soul, just algorithms that helped it decide which person to save in the middle of a tragic situation.
There’s that word again: algorithm. Sound familiar? It should. Algorithms are exactly what make autonomous technology work. And, as we transition to cars with better autonomous systems, the artificial intelligence in those cars will be tasked with making more, and potentially deadly, decisions. Think about that for a minute. Potentially deadly. But aren’t autonomous cars supposed to make the roads safer for everyone? They are, but a serious dilemma is unfolding with each day that passes.
What happens if an autonomous car has to choose between its rider and a pedestrian? At a quick glance, the answer seems pretty obvious, right? The pedestrian has the right of way, so the pedestrian should draw the long straw. Well, that’s not the way everyone sees it, and a recent study shows the general public is torn over whether they would be willing to give their own life to save others in the event that an autonomous car can’t avoid an accident. Worse, that tension could end up making the roads, and the use of autonomous cars, less safe than they are now.
Keep reading for the full story.
Let this sink in for a minute. You’re riding down the road in your 2025 Chevy Cruze. This baby is brand new and is smart enough that you can turn on autodrive and kick back while the car does the work of getting you to your next destination. It’s been a long day at work, and you figure, “why not relax for the next 40 minutes and get a good nap in before dealing with that nagging wife and annoying kids?” As it turns out, this might be the last time you drift off to sleep, because 10 minutes into your trip, your Cruze’s ethical subroutine is called into play as a serious situation presents itself.
A hundred feet up the road, a group of pedestrians is standing in the middle of the street as your Cruze travels at 60 mph on its chosen path. The car can make decisions extremely quickly, but that doesn’t change the fact that it simply can’t stop in time without nailing one of those pesky pedestrians. What should the car do? Should it brake as hard as possible and hope the pedestrians it hits manage to survive, or should it swerve out of the way and smash itself into the building next to you, potentially injuring or killing you? The car has only milliseconds to make a decision that will impact at least one life one way or another.
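A quick back-of-the-envelope sketch shows just how tight these margins are. The numbers here are illustrative assumptions (round figures and a roughly 0.8 g panic-braking deceleration), not specs for any real vehicle:

```python
# Back-of-the-envelope physics for the scenario above.
# Assumptions: 60 mph initial speed, ~0.8 g peak braking deceleration.

MPH_TO_FPS = 5280 / 3600   # 1 mph = ~1.4667 ft/s
G_FT = 32.2                # gravity, ft/s^2

def braking_distance_ft(speed_mph: float, decel_g: float = 0.8) -> float:
    """Distance needed to stop from speed_mph under constant deceleration (v^2 / 2a)."""
    v = speed_mph * MPH_TO_FPS
    return v ** 2 / (2 * decel_g * G_FT)

def time_to_impact_s(distance_ft: float, speed_mph: float) -> float:
    """Seconds until impact if speed stays constant."""
    return distance_ft / (speed_mph * MPH_TO_FPS)

print(round(braking_distance_ft(60)))        # ~150 ft just to brake to a stop
print(round(time_to_impact_s(150, 60), 2))   # ~1.7 s of total decision window
```

In other words, from 60 mph the car needs something like 150 feet of pavement to stop at all; inside that envelope, braking alone can only trade impact speed, which is exactly the trade-off the ethical subroutine has to make.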
The study was published in Science not long ago, and it shows we’re not sure we can accept the kind of ethical programming autonomous cars need to make travel safer. In short, respondents agreed that an autonomous car should be able to crash itself to save a crowd of pedestrians but, at the same time, said they wouldn’t actually ride in a car programmed to kill them to spare a crowd. Needless to say, something has to give.
Iyad Rahwan, a professor at MIT and one of the study’s co-authors, said, “Most people want to live in a world where cars will minimize casualties, but everybody wants their own car to protect them at all costs.”
Obviously, this isn’t the way things can work. There is no way to program vehicles with an algorithm that reconciles moral values with our natural desire to keep breathing. That isn’t all the study showed, however: people also don’t like the idea of the government enforcing a utilitarian principle, like favoring a pedestrian’s life over the lives of those riding in the car. In short, this could lead to slower adoption of safer technology, which could ultimately leave the roads less safe than they could be.
Solving the Dilemma
At this point, there is no real solution. Obviously, the goal is to minimize casualties as much as possible. So, autonomous cars will probably be programmed to evaluate the situation as it happens and choose the course of action least likely to cause a casualty, or at least keep casualties to a minimum. With four passengers in the car and one lonely pedestrian in the road, the car may choose to protect its occupants and do its best to slow down before hitting the pedestrian. Then again, with one person in the car and several pedestrians in the road, the car will likely sacrifice itself and its passenger to save the many.
But what about a one-on-one situation – one pedestrian and one rider? If autonomous cars had the same desire to live that we do, the pedestrian would get smoked every time. They probably won’t, so the car would have to weigh the odds on both sides. Maybe it can slow to as little as 20 mph before hitting the pedestrian, resulting in, say, a 40 percent chance of death or serious injury, while swerving into a light pole or parked vehicle at 60 mph to avoid the pedestrian would mean a 90 percent likelihood of serious injury or death for the rider. In that case, the pedestrian draws the short straw, but the numbers could just as easily go the other way.
The Bottom Line
The bottom line is that if we’re going to have autonomous cars – and at this point, I don’t think we have much of a choice, as they will probably be forced on us – an ethical, unbiased system needs to be in place for them to make the decisions deemed best. The few will always be sacrificed for the many; that’s just the way it goes. In other words, you’re going to have to deal with the fact that your car, once cars are fully autonomous, will make the decision to kill you if it needs to.
As scary as that sounds, it’s really the basic principle, right? At the end of the day, chances are you would make the same gut decision to sacrifice yourself to save a crowd, just out of instinct. So is it really that big of a deal that artificial intelligence could be making that decision instead? For me, I’m not a fan of the idea, but I understand the concept even if I don’t trust artificial intelligence. Then again, I’m not one to put my life in the hands of a computer, so don’t feel bad if you’re not thrilled about it either.