Humans are to blame for the majority of autonomous car accidents. Data analyzed by the state of California shows that among self-driving fender benders, the most common kind, nearly two thirds, is the rear-end collision. [1] This is believed to happen because humans simply don't expect autonomous cars to be such good drivers. A good driver, in this case, is one who follows each and every law: stopping at a yellow light, making a complete three-second stop at stop signs, and cruising exactly at the speed limit. Looking at stop signs specifically, human drivers rarely make any real attempt to come to a full stop, and we expect, even rely on, the car in front of us to do the same.
The main point of this article is that human drivers drive differently every day. The article attempts to sort these drivers into two groups, egoistic and altruistic, that is, the selfish and the generous. Researchers are attempting to derive a formula, based on patterns from sociology and psychology, that can differentiate egoistic drivers from altruistic ones in under two seconds, which could improve a vehicle's performance by 25 percent. [2]
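As a toy sketch of what a two-second classification might look like, the snippet below watches a short window of another car's behavior and sorts the driver into egoistic or altruistic. The feature (time headway to the car ahead) and the 1.5-second threshold are my own assumptions for illustration, not the researchers' formula.

```python
# Toy sketch: classify a driver from roughly two seconds of observations.
# The headway feature and 1.5 s cutoff are assumptions, not from the paper.

def classify_driver(headways_s: list[float]) -> str:
    """Tailgating over a short window reads as selfish; generous gaps as selfless."""
    mean_headway = sum(headways_s) / len(headways_s)
    return "egoistic" if mean_headway < 1.5 else "altruistic"

print(classify_driver([0.8, 0.9, 0.7, 0.8]))  # tailgater -> egoistic
print(classify_driver([2.4, 2.6, 2.5, 2.3]))  # keeps distance -> altruistic
```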
Although research and testing have been underway, the reality of teaching robot cars human behavior is improbable. We are humans, after all; there is simply no way to predict what humans are going to do. We make sudden changes and can go from altruistic to egoistic in under a second, so the current two-second window the robot needs to detect a driver's type is just too long. Of course, with more data we can train these autonomous cars to better model human behavior, but that is going to take time. Humans can't even understand humans, so how are we supposed to train robots to?
Aarian Marshall wrote this article to explain the complicated relationship between autonomous vehicles and humans, and to ask whether there will ever be a perfect solution. In a rather short article, Marshall opens with a few paragraphs explaining how most AV accidents in California are due to human error. The news today is full of clickbait articles attempting to degrade the effectiveness of autonomous vehicles; AVs crashing is the story some writers want to report, as it sways public opinion against the cars. Marshall is trying to do the opposite, telling the reader that the cars are safe and it is actually humans who are at fault. This hook is key to making readers want to keep reading, as they are now curious why humans are at fault. The title, “Teaching Self-Driving Cars to Watch for Unpredictable Humans”, is also key to drawing the reader to the article from the WIRED homepage.
The bulk of this piece educates the reader on the feasibility of teaching robots to interpret human behavior. Marshall begins this section with the words “But first, game theory”. It is clear that Marshall is not writing for a specific Discourse, but for anyone interested in an intriguing topic. He does not throw mathematical formulas at us or introduce very complex ideas. Instead, he makes the subject of machine learning easy to digest by comparing drivers to ‘players’ of a game, explaining that a player will first watch the opponent's move before optimizing their own. He then cites a few simple figures that show how predicting human drivers can increase the efficiency of autonomous vehicles.
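To make that game-theory framing concrete, here is a minimal sketch (my own toy example, not from the article) of a “player” observing the opponent's move and then choosing the best response; the payoff numbers are invented for illustration.

```python
# Minimal game-theory sketch: watch the other "player" first, then pick
# the move that scores best against what was observed.
# Rows: our move; columns: opponent's observed move. Payoffs are invented.
PAYOFFS = {
    ("go",    "yield"): 2,   # we merge smoothly while they hold back
    ("go",    "go"):   -5,   # both commit: near-miss or collision
    ("yield", "yield"): 0,   # everyone waits: wasted time
    ("yield", "go"):    1,   # we wait safely while they pass
}

def best_response(observed_opponent_move: str) -> str:
    """Pick the move with the highest payoff against the observed move."""
    return max(("go", "yield"),
               key=lambda my_move: PAYOFFS[(my_move, observed_opponent_move)])

print(best_response("yield"))  # -> "go": use the gap a generous driver leaves
print(best_response("go"))     # -> "yield": back off against an aggressive driver
```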
This article is not directed at computer science students or researchers. Marshall does not dive deep into the topic of AI; he stays shallow enough for anyone to understand the current issues with AVs and how researchers are working to fix them. This allows any reader to feel more educated about a much-debated, emerging topic.
A gif Marshall included in the article. Its purpose is to help the reader visualize the different driving styles and how a self-driving car can base its decisions on them.
The PNAS article, written by Wilko Schwarting, Alyssa Pierson, Javier Alonso-Mora, Sertac Karaman, and Daniela Rus, gets straight to the point: “...It requires understanding the intent of human drivers and adapting to their driving styles.... We integrate tools from social psychology into autonomous-vehicle decision making to quantify and predict the social behavior of other drivers and to behave in a socially compliant way…” [3]
The researchers immediately introduce SVO, which stands for Social Value Orientation. This measure quantifies the degree of another driver's selfishness or altruism, which helps predict how that driver will act on the road and interact with others.
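As I understand the paper's formulation, SVO is an angle that trades off a driver's own reward against another driver's reward. The sketch below assumes that angle formulation; the labeling threshold is a common convention in the SVO literature, not a value quoted from the paper.

```python
import math

# Sketch of Social Value Orientation (SVO) as an angle that weighs a
# driver's own reward against the reward of another driver.
# The 22.5-degree labeling threshold is an assumption for illustration.

def svo_utility(own_reward: float, other_reward: float, svo_angle_deg: float) -> float:
    """Blend self- and other-regarding reward according to the SVO angle."""
    phi = math.radians(svo_angle_deg)
    return math.cos(phi) * own_reward + math.sin(phi) * other_reward

def label(svo_angle_deg: float) -> str:
    """Rough labels: ~0 degrees = egoistic, ~45 degrees = prosocial/altruistic."""
    return "altruistic" if svo_angle_deg > 22.5 else "egoistic"

# An egoistic driver (0 deg) values only their own progress; an altruistic
# one (45 deg) weighs the other driver's progress almost equally.
for angle in (0.0, 45.0):
    print(label(angle), svo_utility(own_reward=1.0, other_reward=1.0, svo_angle_deg=angle))
```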
AVs lack an understanding of human behavior and are therefore overly cautious: they take turns slowly, hesitate, and as a result cause traffic. A human making a left-hand turn, for example, recognizes when the oncoming driver slows down to yield and then proceeds with the turn. AVs don't yet recognize this behavior.
To clear traffic and boost the effectiveness of autonomous vehicles, researchers are attempting to train the robotic cars to understand human driving patterns and to sort drivers into two groups, selfish and selfless. Once this measure is calculated, the car can base its next move on which type of driver it is dealing with.
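A hypothetical sketch of that decision rule, tying the left-turn example above to the classification: if the oncoming driver appears to be yielding, commit to the turn; otherwise wait. The deceleration heuristic and thresholds are my assumptions, not the paper's method.

```python
# Hypothetical left-turn decision based on an estimate of whether the
# oncoming driver is yielding. Heuristic and thresholds are assumptions.

def estimate_is_yielding(speeds_mps: list[float]) -> bool:
    """Treat a clearly decelerating oncoming car as a yielding (selfless) driver."""
    return len(speeds_mps) >= 2 and speeds_mps[-1] < speeds_mps[0] * 0.8

def left_turn_decision(oncoming_speeds_mps: list[float]) -> str:
    if estimate_is_yielding(oncoming_speeds_mps):
        return "proceed with left turn"
    return "wait for a clear gap"

print(left_turn_decision([14.0, 12.0, 9.5]))   # slowing down -> proceed
print(left_turn_decision([14.0, 14.1, 13.9]))  # holding speed -> wait
```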
The article published in PNAS was written by researchers, for researchers. It is written in such a way that it is not meant to be read by the average person. For example, below is one of the many formulas the writers of this paper supply us with. The paragraphs accompanying these formulas do not, to the common eye, help us in any way to further understand them.
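(Paraphrasing, since the paper's original typesetting is not reproduced here: as I read it, the paper's central SVO-weighted utility blends driver i's own reward with the other driver's reward according to the SVO angle.)

```latex
% Representative formula, paraphrased from the paper's SVO formulation:
% driver i's utility g_i blends their own reward r_i with the other
% driver's reward r_j according to the SVO angle \varphi_i.
g_i = \cos(\varphi_i)\, r_i + \sin(\varphi_i)\, r_j
```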
Obviously, the majority of people do not understand these formulas, and certainly cannot apply them. From this, it is clear that the paper is designed specifically for people within the computer science/robotics Discourse. The writers use formulas and diagrams that are very clearly intended for this specific Discourse; they do not use any easy-to-digest examples, and there is no catchy title to draw in readers. People reading this article are meant to use the research to benefit their own work. It is a building block for other researchers to grow more ideas that benefit autonomous vehicles.
This paper is a tough read. It is almost ten pages of dense terminology, small, compressed text, and few graphics; any graphics included are plots of some aspect of the formulas mentioned above. It is full of mathematically intense formulas and Discourse-specific jargon, such as the Nash equilibrium and the Karush-Kuhn-Tucker conditions. The writers expect the reader to have prior knowledge of these terms, as they are not defined anywhere in the paper.
Easy and fun to read: this is the main goal of the WIRED article, with user-friendly examples and pictures, no matter the reader's Discourse. The PNAS article is for a specific Discourse, and if you are not part of it, you will not understand 99 percent of the paper. Dissecting the titles alone, “Teaching Self-Driving Cars to Watch for Unpredictable Humans” for WIRED and “Social behavior for autonomous vehicles” for PNAS, it is clear there is going to be a difference in writing styles. “Self-driving cars” is a term almost everyone has heard, whereas “autonomous vehicles” is one many people might not be able to define. Even these small differences in the titles are an indicator of how each article is written.