Algorithmic decision making has been shown to have a tremendous impact on human behavior and society. With the vast amount of data available in the world, algorithms allow companies and governments to collect and analyze that data and apply it to complex societal problems. This type of machine learning is used in hiring, lending, policing, stock trading, and even criminal sentencing; if you can think of an industry, there is likely an algorithm for it. Algorithms allow for enhanced efficiency, speed, real-time feedback, and predictions based on previously recorded data. Some argue that human predictions and decision making are full of bias and flaws, and that algorithms allow for a more objective approach to tackling problems than the subjective judgments of human psychology (Lepri et al., 2018).
On this page we discuss the impacts of self-driving cars and the algorithms associated with them. Self-driving cars affect safety, equity, and the environment: they have the potential to save lives, reduce traffic, and lower emissions (Union of Concerned Scientists, n.d.). On the flip side, the algorithms used in self-driving cars can pose a threat if they are not properly implemented.
Unfortunately, algorithms are not without their flaws, and they raise a variety of social and ethical issues. Because algorithms are designed by humans, it is difficult to keep human influences from seeping into algorithmic design. Over the years, algorithms have been criticized for discrimination, bias, and a lack of transparency and accountability. These flaws can have major consequences for social equality, civil rights, and corporate responsibility. We must address the issues of algorithmic design to create a better future with advancing technology.
In “Weapons of Math Destruction,” O’Neil discusses the idea of transparency. Transparency refers to whether or not participants are aware that they are being modeled. Large companies have a history of not being completely transparent with the public (O’Neil, 2016).
According to a news article on self-driving cars, the researchers behind the “Predictive Inequity in Object Detection” report did not test the object-detection models actually used by self-driving cars; instead, they tested models trained on publicly available datasets, because many companies do not allow their data to be used for scrutiny. This directly undermines company transparency with the public. Guarding data so that it cannot be further studied by scientists and researchers is a giant red flag when it comes to participant knowledge, and it is problematic for both large corporations and customers. If companies are not willing to share their findings, they will lose the trust of the people who may someday want to invest in self-driving cars (Samuel, 2019).
One of the biggest problems we face when using human-oriented object-detection algorithms in self-driving cars is bias. Algorithmic bias, which arises when human bias seeps into algorithmic decision-making systems, can pose a serious threat to safety. The study found that the models were better at detecting lighter-skinned individuals than darker-skinned pedestrians. If a car is unable to detect a darker-skinned person, that person is statistically more likely to be hit, which poses a huge public-safety concern. This is likely because the object-detection systems were fed more examples of light-skinned people. The issue opens a much larger door to discrimination: if algorithms discriminate against people of color, social inequality becomes embedded in the automotive industry.
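To make this concrete, here is a minimal sketch (with hypothetical data and field names) of the kind of fairness audit the researchers describe: compute the detection rate separately for lighter- and darker-skinned pedestrians and compare the two.

```python
# Minimal audit sketch: compare per-group pedestrian detection (recall) rates.
# The data and field names are hypothetical, not the study's actual format.
from dataclasses import dataclass

@dataclass
class Annotation:
    skin_tone: str   # e.g., "light" (Fitzpatrick I-III) or "dark" (IV-VI)
    detected: bool   # did the detector produce a matching box for this person?

def recall_by_group(annotations):
    """Return the fraction of pedestrians the model detected, per group."""
    totals, hits = {}, {}
    for a in annotations:
        totals[a.skin_tone] = totals.get(a.skin_tone, 0) + 1
        hits[a.skin_tone] = hits.get(a.skin_tone, 0) + int(a.detected)
    return {group: hits[group] / totals[group] for group in totals}

# Toy numbers mirroring the reported pattern: more misses for darker skin.
sample = ([Annotation("light", True)] * 90 + [Annotation("light", False)] * 10
          + [Annotation("dark", True)] * 75 + [Annotation("dark", False)] * 25)
print(recall_by_group(sample))  # {'light': 0.9, 'dark': 0.75}
```

A gap between the two rates is exactly the "predictive inequity" that the report's title refers to.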
Accidents are still possible with self-driving cars, and the outcomes may be determined in advance by programmers and policymakers. For example, if your car is blocked on all four sides by vehicles and large debris falls off the truck in front of you, what should the car do? Should it continue forward and hit the debris, causing harm to you, or swerve right or left, causing harm to others in the next lane but decreasing the harm to you? These questions pose ethical issues that rest in the hands of programmers. They also give the computer more power than the driver: no matter the decision, it was preset into the algorithm, effectively choosing a target to crash into. Are we putting too much faith in programmers and companies, and should we trust them to make such decisions? (Lin, n.d.)
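A toy sketch (entirely hypothetical, not any real vehicle's logic) makes the point vivid: however the maneuver is chosen, the outcome of the dilemma was decided in advance by whoever wrote and weighted the harm estimates.

```python
# Hypothetical illustration of a preset crash policy. The harm scores below
# encode an ethical judgment made by a programmer long before the accident.

def choose_maneuver(harm_estimates):
    """Pick the maneuver with the lowest estimated harm score."""
    return min(harm_estimates, key=harm_estimates.get)

# Does harm to the occupant count for more, less, or the same as harm to
# the people in the next lane? The weights, not the driver, decide.
scenario = {
    "continue_and_hit_debris": 0.8,  # harm mostly to the occupant
    "swerve_left": 0.6,              # harm to the vehicle on the left
    "swerve_right": 0.6,             # harm to the vehicle on the right
}
print(choose_maneuver(scenario))  # swerve_left: chosen before anyone drove
```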
Algorithms are not inherently biased, so it is up to humans to provide them with a variety of data (in this specific case, examples of all types of people, varying in race, gender, and physical features) so that object-detection algorithms achieve accurate predictions for everyone. This is a much simpler fix than getting rid of object detection altogether (Shrestha & Yang, 2019).
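As one illustration of what feeding an algorithm a variety of data could look like in practice, here is a minimal sketch (hypothetical labels and toy data) that oversamples an underrepresented group until every group is equally represented in the training set; reweighting the loss per group is another common option.

```python
# Sketch of dataset balancing by oversampling. Labels and data are toy
# placeholders; a real pipeline would balance annotated images.
import random
from collections import Counter

def balance_by_oversampling(examples, group_of):
    """Duplicate random members of each group until all groups match the largest."""
    counts = Counter(group_of(e) for e in examples)
    target = max(counts.values())
    balanced = list(examples)
    for group, n in counts.items():
        members = [e for e in examples if group_of(e) == group]
        balanced += random.choices(members, k=target - n)
    return balanced

# Toy imbalance: five times more light-skinned examples than dark-skinned ones.
data = [("img_light", "light")] * 1000 + [("img_dark", "dark")] * 200
balanced = balance_by_oversampling(data, group_of=lambda e: e[1])
print(Counter(g for _, g in balanced))  # Counter({'light': 1000, 'dark': 1000})
```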
Object detection has been used in a variety of industries and continues to have the potential to grow. Scalability refers to how readily a model can be deployed at a much larger scale. Object detection can scale at an extremely fast rate because it is versatile across many fields. For example, image-recognition technology has been used to track down suspected criminals using vast collections of facial data from around the world. Facial recognition is used by the beauty industry, which compares your facial features against thousands of other people's to predict your age and recommend products for “a more youthful look.” The military also uses object-recognition software to identify potential targets: when conducting aerial surveillance, it can collect data and identify people and compounds that could be a threat to national security. As you can see, object detection has the potential to be used, and to grow, at exceptional rates. Because we now live in an almost completely digital age, more algorithms have the potential to replace human labor.
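For readers unfamiliar with the underlying technology, here is a minimal sketch of running an off-the-shelf object detector using torchvision's pretrained Faster R-CNN. The image file name is a placeholder, and this is purely illustrative; it is not the pipeline any particular company uses.

```python
# Run a pretrained object detector on one image (requires torch/torchvision).
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()  # COCO-trained weights

# "street_scene.jpg" is a placeholder path for any test image.
img = convert_image_dtype(read_image("street_scene.jpg"), torch.float)
with torch.no_grad():
    preds = model([img])[0]  # dict of boxes, labels, and confidence scores

keep = preds["scores"] > 0.8          # keep only confident detections
print(preds["labels"][keep])          # class ids; 1 is "person" in COCO
print(preds["boxes"][keep])           # matching bounding boxes
```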
It is also difficult to scale a technology that is not fully accepted by the public. According to The Washington Post, some Silicon Valley residents want self-driving cars off their streets: many feel the cars pose a safety issue because they have not been widely tested (Siddiqui, 2019).
Object detection in self-driving cars is a relatively new technology that needs to be studied more before it can grow and be properly marketed. Being able to detect a person walking in the street and automatically hit the brakes has real potential to save lives. At the same time, if self-driving cars can only detect light-skinned people and fail to detect darker-skinned individuals, we as humans are allowing discrimination to persist. This poses an extreme public-safety risk for a large proportion of the country, not to mention the world.
For algorithms to work for our benefit, we need to feed them a variety of data so that no one group is discriminated against. We also need to make sure algorithms are transparent to the public, and we must hold large companies accountable for what they create.
SOURCES:
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes. Philosophy & Technology, 31(4), 611-627. doi:10.1007/s13347-017-0279-x
Lin, P. (n.d.). The ethical dilemma of self-driving cars [Video]. TED-Ed. Retrieved from https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin
O'Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books.
Samuel, S. (2019, March 6). Study finds a potential risk with self-driving cars: Failure to detect dark-skinned pedestrians. Vox. Retrieved from https://www.vox.com/future-perfect/2019/3/5/18251924/self-driving-car-racial-bias-study-autonomous-vehicle-dark-skin
Schwartz, S. I., & Kelly, K. (2018). No One at the Wheel: Driverless Cars and the Road of the Future (1st ed.). PublicAffairs.
Shrestha, Y. R., & Yang, Y. (2019). Fairness in algorithmic decision-making: Applications in multi-winner voting, machine learning, and recommender systems. Algorithms, 12(9). doi:10.3390/a12090199
Siddiqui, F. (2019, October 3). Silicon Valley pioneered self-driving cars. But some of its tech-savvy residents don't want them tested in their neighborhoods. The Washington Post. Retrieved from https://www.washingtonpost.com/technology/2019/10/03/silicon-valley-pioneered-self-driving-cars-some-its-tech-savvy-residents-dont-want-them-tested-their-neighborhoods/
Union of Concerned Scientists. (n.d.). Self-driving cars explained. Retrieved from https://www.ucsusa.org/resources/self-driving-cars-101