Innovation or Desolation: The Self-Driving Car
April 25, 2018
About sixty years ago, the world dove into one of the most exciting aerospace engineering competitions of all time: the Space Race. Across the globe, nations put their most brilliant minds together to make the impossible possible. People speculated and doubted, but after 18 years of innovation and work, the Race ended. A race of such epic proportions has not been seen since, until now. This century's new and improved Space Race is the self-driving car. It is the new buzzing topic, the marvel of societal advancement, the ultimate innovation. Autonomous vehicles have become every innovator's fantasy. However, when mainstream companies get excited about an idea this rewarding, everyone wants to be the first to accomplish it. As a result, corners are cut, products are rushed, mistakes are made, and every possible contingency out on the road simply goes unaccounted for.
At midday on May 7, 2016, Joshua Brown became the first person to die in a self-driving car. Brown was traveling down a highway in Williston, Florida, in his Tesla Model S with Autopilot engaged. While he cruised along, a tractor-trailer crossed the highway from an intersecting road. Brown's car never slowed down and slammed directly into the trailer at 74 mph, killing him. After the accident, the National Highway Traffic Safety Administration found Brown at fault: the car's Autopilot feature was designed to prevent rear-end collisions, but it was never designed to detect vehicles crossing its path.
Yet the disturbing part is that although the car was supposedly never intended to fully replace a human driver, it still could not handle the simple scenario of another vehicle crossing the road. Any driver knows that anything and everything can happen on the road, and constant awareness is the key to making sure nothing bad happens. A program and high-tech sensors on a car can only do so much. The number one problem programmers have to solve with these cars is human discretion. These cars may do a great job of parking precisely and staying between the lane lines, but what about a situation where an accident is inevitable? How can you program a car to make a judgment call?
In an accident, a car may have to choose whether to harm its own occupant, the pedestrians nearby, or the people in the car beside it. Jean-Francois Bonnefon, a researcher in economics at the University of Toulouse in France, surveyed public opinion on how a car should choose who lives and who dies. Surprisingly, the results showed that people approved of a car sparing pedestrians rather than its own passengers if it had to make the call. The catch is that although most people are comfortable with this idea in principle, they would not want to be the passenger who is sacrificed. In Brown's situation, had the car noticed the trailer at the last second but also detected people on the side of the road, it would have had to decide whether to hit the trailer and kill Brown or veer off the road into the pedestrians.
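To see why this is so hard to encode, consider a deliberately oversimplified sketch in Python. The `Outcome` class, the harm numbers, and the weights below are all invented for illustration, not anything a real manufacturer has published; the point is that even a toy version forces a programmer to write down, as a number, how much each life counts.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    """One possible maneuver and its predicted harm (numbers invented)."""
    action: str
    occupant_harm: float    # estimated chance of killing the occupant
    pedestrian_harm: float  # estimated chance of killing a pedestrian

# These weights ARE the judgment call: raising PEDESTRIAN_WEIGHT bakes
# "spare the pedestrians" into the code before any crash ever happens.
OCCUPANT_WEIGHT = 1.0
PEDESTRIAN_WEIGHT = 1.0

def choose_action(outcomes: List[Outcome]) -> Outcome:
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(
        outcomes,
        key=lambda o: (OCCUPANT_WEIGHT * o.occupant_harm
                       + PEDESTRIAN_WEIGHT * o.pedestrian_harm),
    )

# A Brown-style dilemma: brake into the trailer, or swerve toward pedestrians.
dilemma = [
    Outcome("brake into the trailer", occupant_harm=0.9, pedestrian_harm=0.0),
    Outcome("swerve off the road", occupant_harm=0.1, pedestrian_harm=0.7),
]
print(choose_action(dilemma).action)
```

With equal weights this toy car swerves (a weighted harm of 0.8 beats 0.9); double the pedestrian weight and it brakes into the trailer instead, sacrificing the occupant. That is exactly the trade-off Bonnefon's respondents endorsed in the abstract but would not want applied to themselves.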
Moral hypotheticals do not end with worst-case scenarios in which someone might die. There are other variables to consider, such as weather and road quality. In an article from Vox, reporter David Roberts explained that "each autonomous vehicle will gather experiences and, crucially, share them with all the others." It is practical and helpful that a car can share its mistakes with the rest of the fleet, but it also means that somewhere along the way, people will fall victim to the first mistake in every scenario the cars have not yet encountered. Tesla has hopefully tweaked its Autopilot system in light of Brown's death, but fixing that mistake will not bring Brown back. And since so many things can go wrong on the road, a lot of people would have to suffer from first mistakes before the cars become wise enough to know everything there is to know.
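Roberts' point can be pictured with another tiny, hypothetical sketch. The `fleet_knowledge` set and `encounter` function here are invented for illustration, and a real system involves far more than a lookup table, but the shape of the problem is the same: the fleet only handles a hazard safely after some first car has already met it.

```python
# Hypothetical fleet learning: scenarios the fleet has already seen are
# handled; the FIRST encounter with a new scenario is not.
fleet_knowledge = {"rear-end risk", "lane drift"}  # scenarios already learned

def encounter(car_id: str, scenario: str) -> str:
    if scenario in fleet_knowledge:
        return f"car {car_id}: known scenario '{scenario}', handled safely"
    # The first car to meet an unknown scenario bears the cost...
    fleet_knowledge.add(scenario)  # ...then every other car learns from it.
    return f"car {car_id}: UNKNOWN scenario '{scenario}', possible crash"

print(encounter("A", "rear-end risk"))     # handled safely
print(encounter("B", "crossing trailer"))  # the Brown case: first exposure
print(encounter("C", "crossing trailer"))  # now known fleet-wide
```

The knowledge base only grows at the price of that first, unlucky encounter.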
Under the pressure to be the first company with a successful product on the road, mistakes are hard to avoid. Of course, humans are still humans and will likely get into many accidents that a computer would not, but the computers in question are still programmed by humans, and humans make mistakes and miss things like anyone else.
It is hard to say what kind of future self-driving cars have, but at the rate companies are pushing products and experimental cars onto public roads, we could be looking at many more fatalities like Brown's.