Machine Learning - AI
AI
Artificial intelligence (AI) is sometimes described as getting a computer to do complex tasks that humans find easy. Examples would be walking, seeing, and understanding speech. These activities, which come naturally to us, are very difficult to develop traditional step-by-step algorithms for.
But AI researchers have developed an approach known as machine learning that enables computers to perform these complex tasks. With machine learning, a computer learns how to perform a task or solve a problem not by being given a traditional step-by-step program, but by being given lots of examples of correct and incorrect solutions to the problem. Machine learning algorithms, in other words, learn intelligent behavior from large amounts of training data.
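The idea of learning from labeled examples rather than from hand-written rules can be sketched with one of the oldest machine learning algorithms, the perceptron. This is a minimal illustration, not any particular library's implementation; the example data (the logical OR function) is chosen only because it is small enough to follow by hand. Notice that the program is never told the rule, only correct answers.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) examples by nudging the
    weights a little each time a prediction is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = label - pred          # 0 if correct, +/-1 if wrong
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: examples of the logical OR function, given as
# (inputs, correct answer) pairs -- the "lots of examples" above.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])  # → [0, 1, 1, 1]
```

After training, the learned weights reproduce OR on all four examples, even though no programmer ever wrote an OR rule into the code.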
Artificial intelligence (AI) is a field of computer science loosely defined as "trying to get computers to think."
This definition has led to a lot of arguments about whether a computer can ever really think, so John McCarthy, one of the founders of AI, defined AI as "getting computers to do things that, when done by human beings, are said to involve intelligence." This definition allows AI researchers to do their work instead of spending time arguing about what "thinking" means.
Interestingly, tasks that human beings generally consider to be hard to do (like playing chess) have turned out to be easier for computers than tasks people think of as being so easy that we do them "without thinking," like walking. Another example is seeing, that is, recognizing images, which is a big field of research in AI.
Algorithmic Bias
Can an algorithm be biased? Yes. Even though computers are machines, they are not free from the intentional or unintentional biases of the people who program them or of the human-generated data they are given. Computing innovations can reflect existing human biases because of bias written into algorithms at any level of software development, or bias in the data the innovation uses. Machine learning and data mining have enabled innovation in medicine, business, and science, but information discovered in this way can be biased depending on the data source, and it can also be used to discriminate against groups of individuals. Programmers need to take action to reduce bias in the algorithms used for computing innovations as a way of combating existing human biases.
Five Types of Algorithmic Bias
The data reflects an existing bias in society. For example, an image search for nurses may return more female nurses than male nurses.
The training data is biased or incomplete. For example, facial recognition algorithms that are trained on photos of mostly white faces may not work as well for other skin colors.
The data is oversimplified into quantitative values. Some qualities are too complicated to measure directly, so simpler quantitative stand-ins are used, and these can introduce bias. For example, counting sentence length is an oversimplified measure of good writing.
Data can be affected by a feedback loop. If biased data is fed back into the algorithm that generated it, the algorithm produces even more biased data. For example, predictive policing software may recommend increased police presence in certain neighborhoods based on previous arrests while ignoring other neighborhoods. The increased presence leads to more arrests there, which feeds back into the software and further biases its recommendations.
Data can be manipulated. In 2016 Microsoft launched Tay, a chatbot that learned from its conversations on Twitter. People bombarded Tay with racist comments, and soon many of its responses were racist in nature. Microsoft pulled the plug on Tay after 24 hours.
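The predictive-policing feedback loop above can be made concrete with a toy simulation. All the numbers here are hypothetical: two neighborhoods have the same true crime rate, but the software starts from skewed historical arrest data and always sends patrols where the most arrests are recorded.

```python
def run_feedback_loop(initial_arrests, patrols=100, arrest_rate=0.5, rounds=5):
    """Simulate biased data feeding back into the algorithm.
    initial_arrests: historical arrest counts per neighborhood
    (hypothetical numbers). True crime is equal everywhere, so arrests
    depend only on where the patrols go."""
    arrests = list(initial_arrests)
    for _ in range(rounds):
        # The software sends all patrols to the neighborhood with the
        # most recorded arrests, ignoring the others.
        target = arrests.index(max(arrests))
        # Patrols produce arrests at the same rate in any neighborhood.
        arrests[target] += arrest_rate * patrols
        # These new arrests become the input data for the next round.
    return arrests

final = run_feedback_loop([60, 40])
print(final)  # → [310.0, 40]
```

Starting from a modest 60/40 skew, neighborhood A's arrest record grows every round while neighborhood B's never changes, even though both have identical crime rates. The algorithm's own output has amplified the bias in its input data.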
Still Curious?
In this video two Googlers, Nat and Lo, interview a couple of Google AI researchers who describe how machine learning works.
This video was made as part of their "20% project." One of the cool features of working at Google and other technology companies is that employees get to spend part of their time (in this case, one day per week) working on projects that they choose themselves.

The Google Self-Driving Car is an example of the research being done by the car industry to create fully autonomous vehicles. As the video points out, an autonomous vehicle is very different from the computer-assisted vehicles currently available.
Computer vision is a long-standing AI research area. In this TED talk, Fei-Fei Li of Stanford University describes how she used machine learning and crowdsourcing to teach a computer to understand pictures.
Here is a TED Talk on bias in facial recognition by Joy Buolamwini, and another arguing that blind faith in big data must end, by Cathy O'Neil.
This is a report on police crime prediction software and bias.