You have probably heard these phrases your entire life: “trust your instincts” and “trust your gut.” Think of the moments when something felt wrong, you decided not to do it, and you later realized you had dodged a bullet. When a situation comes down to instinct, are you willing to let a programmed machine make that decision for you?
Picture this: it’s a cold winter day and you’re riding to work in your self-driving vehicle on a tight four-lane highway glazed with black ice. Cars surround you on one side and a barrier hems you in on the other, when suddenly a truck slams on its brakes and swerves unsteadily in front of you. If you were in manual control of the car, your first move would not be something you had decided in advance; it would be a reaction, pure human instinct and impulse. But you are sitting in a programmed car with zero control over the vehicle, and the autonomous vehicle cannot stop in time to avoid a collision, so it has to make a choice.

The self-driving car can crash into the barrier on its left, putting its own occupants in harm's way, or swerve right into a family SUV. It can also prioritize the people around it by not veering at all and hitting the truck ahead, which again puts its occupants at stake. Robots may hold the upper hand in predictable situations, reacting faster and more precisely than a person. Yet humans excel at dealing with the unexpected, because in the end it comes down to instinct, experience, and reaction.
In the hypothetical above, the self-driving car is forced to make a deliberate decision. Its ultimate choice carries no malice or purposeful intent, yet a programmer can predetermine what a car like that will do in a situation like this one. If so, what have we really gained by programming morals into a machine to minimize human error and improve road safety, when we are still left with one of the biggest moral problems of all?
You might say, “But they lower emissions, minimize human error, and dramatically reduce traffic.” That is true; studies show that 94% of car crashes are due to human error. Nevertheless, even with all those positives, there is still a bigger issue at hand: an issue that forces the self-driving car to make a very difficult ethical decision in a split second. State-of-the-art innovations like the autonomous vehicle are not something every person truly needs; for most, they are something wanted out of comfort.
The benefits of self-driving cars range from greater mobility for seniors and people with disabilities to reduced traffic. But for people with no real barriers to driving, it is all just a matter of convenience.
Our world has been shaped for thousands of years around human benefit and convenience. Since the beginning of the automobile, people have worked diligently to make the car easier to drive, safer, and faster. In the early 20th century, most automobiles lacked synchromesh gearboxes or relied on preselector transmissions, so learning to drive was difficult. As technology evolved, the manual gearbox was refined and synchronized gears became standard in cars. After manuals dominated for decades, the automatic transmission took over the industry. Brakes were added to the front wheels, and drum brakes soon gave way to disc brakes. Dependent suspension led to independent suspension on all four wheels. Skinny tires grew wider as fuel injection replaced carburetion. It has been one ongoing cycle of development from the late 1800s to now.
Decade after decade, automobiles evolved year by year for our benefit. We want things now, we want things easy, and some people long for a time when they never need to touch a steering wheel and can leave the car to do it all by itself. But convenience comes at a cost.

Imagine a scenario with an SUV on your left and another on your right, when suddenly something happens ahead and the driverless car has to make a decision. The family in the SUV on the right all have their seatbelts buckled, but some people in the SUV on the left do not. Does the autonomous vehicle hit the car on the right because those people are more likely to survive? Or does it hit the people on the left because they made the choice not to wear their seatbelts? When you slam the brakes for a jaywalking pedestrian, you are making a moral judgment; self-driving cars will have to make those same ethical choices on their own, and it is far from straightforward to write a universal moral code into an algorithm.
The vehicles surrounding such a car will bear the consequences of decisions made by a programmed machine that lacks human instincts, and there are no universal regulations to give these machines a perfect set of rules. The Moral Machine, a survey created by Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology, presented 13 scenarios in which someone's death was unavoidable and asked people to choose whom to spare, mixing combinations of the young, the old, the rich, the poor, and so forth.

Drivers make subtle moral decisions every day, and both carmakers and governments must take that into account if they want self-driving cars to become the norm. Many social scientists push back against big tech companies like Google and Tesla because these autonomous cars raise ethical issues that could harm people in the near future. Convenience and competition shaped cars into what they are today, but is too much convenience a good thing? Or does it come with a dangerous cost?