Machine Learning - AI


AI

Artificial intelligence (AI) is sometimes described as getting a computer to do complex tasks that humans find easy, such as walking, seeing, and understanding speech. These activities come naturally to us, yet they are very difficult to capture in traditional step-by-step algorithms.

But AI researchers have developed an approach known as machine learning that enables computers to perform these complex tasks. With machine learning, a computer learns how to perform a task or solve a problem not by being given a traditional program that solves the problem, but by being given many examples of correct and incorrect solutions to it. Machine learning algorithms are simply algorithms that learn intelligent behavior from lots of training data.
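To make this concrete, here is a minimal sketch of learning from examples, written in Python with entirely made-up data about hours studied and hours slept. It uses a one-nearest-neighbor classifier: the program never contains an explicit pass/fail rule; it simply labels a new case the same way as the most similar example it has already seen.

```python
import math

# Hypothetical training data: (hours studied, hours slept) -> exam result.
# The numbers are invented purely for illustration.
training_examples = [
    ((8.0, 7.0), "pass"),
    ((7.5, 8.0), "pass"),
    ((9.0, 6.5), "pass"),
    ((2.0, 4.0), "fail"),
    ((1.0, 6.0), "fail"),
    ((3.0, 3.5), "fail"),
]

def distance(a, b):
    """Euclidean distance between two (studied, slept) points."""
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def predict(new_point):
    """Label a new student with the label of the closest training example."""
    closest = min(training_examples, key=lambda ex: distance(ex[0], new_point))
    return closest[1]

print(predict((7.0, 7.0)))  # "pass" -- nearest to the studied-a-lot examples
print(predict((2.5, 5.0)))  # "fail" -- nearest to the studied-little examples
```

Nothing in the code says "students who study more pass"; that behavior emerges entirely from the labeled examples, which is the core idea of machine learning.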

Artificial intelligence (AI) is a field of computer science loosely defined as "trying to get computers to think."

This definition has led to a lot of arguments about whether a computer can ever really think, so John McCarthy, one of the founders of AI, defined AI as "getting computers to do things that, when done by human beings, are said to involve intelligence." This definition allows AI researchers to do their work instead of spending time arguing about what "thinking" means.

Interestingly, tasks that human beings generally consider hard (like playing chess) have turned out to be easier for computers than tasks people think of as so easy that we do them "without thinking," like walking. Another example is seeing, that is, recognizing what is in an image, which is a major area of AI research.

Intro to AI

Machine Learning

Neural Networks

Training AI & Bias

Computer Vision

Social Impact of AI

Equal Access & Algorithmic Bias

Privacy & The Future of Work

Algorithmic Bias

Can an algorithm be biased? Yes. Even though computers are machines, they are not free from the intentional or unintentional biases of the people who program them or of the human-generated data they take as input. Computing innovations can reflect existing human biases because bias can be written into algorithms at any stage of software development, or can be present in the data the innovation uses. Machine learning and data mining have enabled innovation in medicine, business, and science, but information discovered in this way can be biased depending on its source, and it can also be used to discriminate against groups of individuals. Programmers need to take action to reduce bias in the algorithms behind computing innovations as a way of combating existing human biases; a sketch of how data bias carries over into decisions follows below.
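As a simple, hedged illustration of that last point, the Python sketch below uses entirely fictional loan data. A naive "model" that just repeats the most common historical decision for each neighborhood ends up reproducing whatever discrimination was present in its training data.

```python
from collections import Counter

# Hypothetical historical loan decisions, biased against neighborhood "B"
# even when applicants had similar qualifications. All data is invented.
historical_decisions = [
    ("A", "approved"), ("A", "approved"), ("A", "approved"), ("A", "denied"),
    ("B", "denied"),   ("B", "denied"),   ("B", "denied"),   ("B", "approved"),
]

# "Training": record the most common past outcome for each neighborhood.
most_common_outcome = {}
for neighborhood in ("A", "B"):
    outcomes = [decision for n, decision in historical_decisions if n == neighborhood]
    most_common_outcome[neighborhood] = Counter(outcomes).most_common(1)[0][0]

# "Prediction": two equally qualified applicants get different answers
# purely because of where they live -- the bias in the data becomes
# the bias of the algorithm.
print(most_common_outcome["A"])  # approved
print(most_common_outcome["B"])  # denied
```

Real machine learning models are far more complex than this lookup table, but the underlying risk is the same: if the training data reflects past discrimination, the learned behavior will too unless programmers actively check for and correct it.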

Five Types of Algorithmic Bias

Still Curious?