- AlphaGo - A breakthrough system for playing the board game Go, developed by a team at DeepMind. Beat world-champion Go player Lee Sedol four games to one in a tournament in Seoul, Korea, in March 2016.
- Ann - A character that appears in one of our examples. She has an unfaithful partner, Bob.
- antecedent - In an “IF…THEN…” rule, as used in an expert system, the antecedent is the condition — the part immediately after the “IF”. For example, in the rule “IF animal has udder THEN animal is mammal”, the antecedent is “animal has udder”.
- Artificial General Intelligence (AGI) - The ambitious goal of building AI systems that have the full range of intellectual abilities that humans have: the ability to plan, reason, engage in natural language conversation, make jokes, tell stories, understand stories, play games — everything.
- Asilomar principles - A set of principles for ethical AI developed by AI scientists and commentators in two meetings held in Asilomar, California, in 2015 and 2017.
- augmented reality - The idea that we perceive the world through some mechanism (such as Google Glass) that augments it, for example by identifying people we are looking at. Not restricted to AI, but AI provides rich new opportunities for augmented reality.
- Autonomous Vehicle Disengagement Report - In California, companies testing driverless car technologies on public highways are required to file reports indicating how many miles they have driven autonomously, and how often they were forced to have a human take control. One of the ways in which we can track how quickly this technology is improving.
- Autopilot - Tesla’s driverless car technology. At the time of writing, some way from “full” (level 5) autonomy, although anecdotally many Tesla owners seem to view it as having a much higher level of autonomy than it actually does.
- axons - The components that connect neurons together. See also synapse.
- backprop/backpropagation - The most important algorithm for training neural nets.
- backward chaining - In knowledge-based systems, the idea that we start with a goal that we are trying to establish (e.g., “animal is carnivore”) and try to establish it by seeing if the goal is justified using the data we have (e.g., “animal eats meat”). A counterpart to forward chaining.
- Bayes’ Theorem/Bayesian inference - Bayes’ Theorem is a core result in probability theory, which in AI gives us a way to adjust our beliefs about the world when given new data or evidence. Crucially, the new evidence may be “noisy” or uncertain: Bayes’ Theorem gives us the proper way to handle such uncertain information. These are now the main methods in AI for reasoning with uncertain information.
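To make the idea concrete, here is a minimal Python sketch of a Bayesian update; the disease-and-test numbers are illustrative assumptions, not figures from the text:

```python
# A minimal sketch of Bayesian inference: revising a belief given noisy
# evidence. All numbers below are illustrative assumptions.

def posterior(prior, true_positive_rate, false_positive_rate):
    """Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_evidence = true_positive_rate * prior + false_positive_rate * (1 - prior)
    return true_positive_rate * prior / p_evidence

# Belief that a patient has a rare disease (prior 1%), after a positive
# result from a test that is 90% sensitive but false-positives 5% of the time.
print(posterior(prior=0.01, true_positive_rate=0.9, false_positive_rate=0.05))
# ~0.15: the noisy evidence raises our belief, but far less than intuition suggests.
```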
- Bayesian networks - A knowledge representation scheme for capturing complex networks of connected probabilistic data, yielding a form of Bayesian inference using Bayes’ Theorem.
- behavioural AI - An alternative to symbolic AI that gained a lot of attention in the period from about 1985 to 1995. The idea was to build a system by focusing on the behaviours that the system should exhibit, and then to worry about how these behaviours are interrelated. The subsumption architecture was the most popular approach to behavioural AI.
- beliefs - The information an AI system has about its environment. In logic-based AI, the beliefs the system had would be the contents of its knowledge-base and working memory.
- Blocks World - A simulated “micro-world” in which the task is to arrange various objects like blocks and boxes. Most famously used in SHRDLU, Blocks World scenarios were subsequently criticised because they abstracted away many of the really hard problems that an AI system would face in the real world, notably perception.
- Bob - A character that appears in one of our examples. He is an unfaithful partner to Ann.
- branching factor - When solving a problem, the number of alternatives you have to consider every time you make a decision. Thus, when playing a game, the branching factor will be the number of moves you can make on average from any given board position. In TIC-TAC-TOE, the branching factor is about 4; for chess, it is about 35; for Go, it is about 250. Larger branching factors lead to search trees that very quickly get impossibly large, necessitating the use of heuristics to focus search.
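A back-of-the-envelope Python snippet, using the branching factors quoted above, shows how quickly search trees grow (the depth of ten moves is an arbitrary illustration):

```python
# Roughly b**d positions in a search tree of branching factor b and depth d.
for game, b in [("tic-tac-toe", 4), ("chess", 35), ("Go", 250)]:
    print(f"{game}: about {b**10:.1e} positions at depth 10")
# Go reaches roughly 9.5e+23 positions after just ten moves of lookahead.
```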
- certainty factors - The basic technique used in MYCIN for handling uncertain information, using degrees of belief and disbelief in pieces of information. In contemporary AI, Bayesian approaches are used instead.
- ceteris paribus preferences - The idea that when we specify our preferences to an AI system, we do so on the assumption that “all other things are held equal” (i.e., as close to how they are now).
- chatbot - A simple program that attempts to converse with users in something like natural language, usually using naïve canned scripts to drive the dialogue. Not regarded as AI.
- Chinese room - A scenario proposed by the philosopher John Searle in an attempt to show that strong AI is impossible.
- choice under uncertainty - A situation in which we must make a decision that has multiple possible outcomes, and all we know is, for each possible choice we might make, the probability that each outcome will occur. See expected utility.
- closed environment - An environment in which the only thing that can cause change is the decision-maker itself. An example is SHRDLU. Not usually a realistic assumption.
- combinatorial explosion - Where we must make a succession of choices, and each successive choice multiplies the number of possibilities we need to consider. A fundamental problem in AI, which arises in search: it causes the size of search trees to grow very, very rapidly.
- common-sense reasoning - A broad term, but basically the kind of informal reasoning about the world, of which we are all capable, but which proved very hard for logic-based AI.
- Computers and Thought Award - The leading award that can be given to a young AI scientist.
- confirmation bias - The tendency to look for evidence supporting our existing beliefs. AI can give rise to confirmation bias, for example, by giving us news stories that we like.
- consensus reality - The view of reality as accepted by society. Augmented reality systems run the risk of destroying consensus reality, because they might allow us all to perceive the world in different ways.
- consequent - In an “IF…THEN…” rule, as used in an expert system, the consequent is the conclusion — the part immediately after the “THEN”. For example, in the rule “IF animal has udder THEN animal is mammal”, the consequent is “animal is mammal”.
- contradiction - A situation that arises in logical reasoning when you conclude that something is both true and false. Logical reasoning utterly fails in the presence of contradictions: it simply cannot cope.
- credit assignment - A problem that arises in machine learning: deciding which of your actions were the good ones and the bad ones. For example, your machine learning program plays a game of chess and loses: how does it know which were the critical moves?
- curse of dimensionality - In machine learning, the problem that including more features in your training data necessitates vastly more training data and training time.
- Cyc hypothesis - The hypothesis that general AI is primarily a problem of knowledge, and that a suitably equipped knowledge-based system will be capable of general AI.
- Cyc project - A famous (or notorious) experiment from the days of knowledge-based AI, which attempted to construct a generally intelligent AI system by giving it all the knowledge about the world that a reasonably educated person has. It didn’t work.
- decidable problem - A decision problem that can be solved by some algorithm.
- decision problems - A decision problem is a mathematical problem that has a yes/no answer. Examples might be “is it the case that the square root of 16 is 4?” and “is it the case that 7920 is a prime number?”. The Entscheidungsproblem, solved by Alan Turing, asked whether or not there are decision problems that cannot be solved by some algorithm. He showed that there are decision problems (notably the halting problem) for which there is no algorithm to solve them. Problems of this type are said to be undecidable.
- deduction - Logical reasoning: deriving new knowledge from existing knowledge.
- deep learning - The breakthrough technique that has driven machine learning research this century. Characterised by deeper, more interconnected neural nets, the use of larger, carefully curated training data sets, and some new techniques.
- DeepFakes - Images or video that have been altered by an AI system in ways that are undetectable to humans, for example inserting the faces of celebrities into pornographic movies.
- DENDRAL - A classic early expert system, which helped users to identify unknown organic compounds.
- depth-first search - A type of search technique used in problem solving, in which instead of expanding the entire search tree layer by layer, we just expand one branch of the tree.
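As a sketch, depth-first search can be written in a few lines of Python; the successors() function here is a hypothetical stand-in for whatever generates the states reachable in one action:

```python
# A minimal sketch of depth-first search: follow one branch of the search
# tree as deep as it goes before backtracking.
def depth_first_search(state, goal, successors, visited=None):
    visited = set() if visited is None else visited
    if state == goal:
        return [state]                      # found a path to the goal
    visited.add(state)
    for nxt in successors(state):
        if nxt not in visited:
            path = depth_first_search(nxt, goal, successors, visited)
            if path is not None:
                return [state] + path       # prepend our state to the path
    return None                             # dead end: backtrack
```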
- design stance - The idea that we try to understand and predict the behaviour of some entity with reference to what it was designed to do. A clock, for example, is designed to show the time, so we can understand the numbers it displays as the time. Contrast with the physical stance and the intentional stance.
- desktop metaphor - A common approach to building user interfaces, in which the user interface is supposed to appear like a desktop in the physical world, with documents, folders, trash can, and so on.
- disengagement - When an autonomous vehicle decides to hand control back to a human driver, typically because it has encountered a situation that it doesn’t know how to handle.
- drones - A remote-controlled unmanned aerial vehicle.
- Dunbar’s number - The social group size for humans — the number of close relationships that a typical human can manage. Usually quoted as 150. Named for Robin Dunbar.
- ELIZA - A seminal experiment in conversational AI from the 1960s, developed by Joseph Weizenbaum. ELIZA used simple canned scripts to simulate a psychotherapist.
- emergent phenomena - When a system composed of multiple components exhibits some property that arises, typically in an unexpected or unpredictable way, from interactions of the component systems.
- epiphenomenalism - In the study of the mind, the idea that mind and conscious experience do not drive behaviour, but are the by-product of the processes that actually govern behaviour.
- error correction procedure - A technique to train perceptrons, developed by Frank Rosenblatt. See also backpropagation.
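A minimal sketch of the idea in Python, assuming a single perceptron with 0/1 outputs (the learning rate and epoch count are arbitrary choices):

```python
# Rosenblatt-style error correction: nudge the weights towards the input
# whenever the perceptron's prediction is wrong.
def train_perceptron(samples, epochs=25, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            error = target - (1 if activation > 0 else 0)   # -1, 0, or +1
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learns logical AND, which is linearly separable and hence learnable.
weights, bias = train_perceptron([((0, 0), 0), ((0, 1), 0),
                                  ((1, 0), 0), ((1, 1), 1)])
```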
- expected utility - In a problem of decision-making under uncertainty, the expected utility of a particular course of action is the average utility that you could expect to gain from that choice.
- expert system - A system that uses human expert knowledge to solve problems in a tightly constrained area. Classic examples are MYCIN, DENDRAL, and R1/XCON. Building expert systems was a key focus of AI research from the late 1970s to mid 1980s.
- fake AI - When systems that don’t have any AI are passed off as being AI, typically by having humans controlling the system behind the scenes.
- feature extraction - In machine learning, the problem of deciding which features of the raw data should be presented to the learning program.
- Fifth Generation Computer Systems Project - A large research and development programme in 1980s Japan, centred on parallel computing and logic programming. PROLOG was a core element.
- fires - In the context of knowledge-based systems, a rule fires if the information we have in working memory correctly matches the antecedent of the rule, allowing us to add the consequent of the rule to working memory.
- first-order logic - A very general language and reasoning system that was developed to give a precise foundation for mathematical reasoning. Widely studied in the paradigm of logical AI.
- forward chaining - In knowledge-based systems, reasoning from information to conclusions. Contrast with backward chaining.
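A toy forward-chaining loop in Python, reusing the rule from the antecedent entry, might look like this (the rules and facts are just illustrations):

```python
# Forward chaining: repeatedly fire rules whose antecedents are in working
# memory, adding their consequents, until nothing new can be derived.
rules = [("animal has udder", "animal is mammal"),
         ("animal eats meat", "animal is carnivore")]
working_memory = {"animal has udder"}

changed = True
while changed:
    changed = False
    for antecedent, consequent in rules:
        if antecedent in working_memory and consequent not in working_memory:
            working_memory.add(consequent)      # the rule fires
            changed = True

print(working_memory)   # {'animal has udder', 'animal is mammal'}
```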
- game theory - The theory of strategic reasoning. Widely used in AI as a framework to understand how AI systems can and should interact with one another.
- gamification - The process of turning a task into a game, with the goal of encouraging participation in the task.
- general AI - See Artificial General Intelligence.
- gig economy - An increasingly common mode of employment, characterized by short-term contracts, piece work, and casual contracts. Some AI technologies may facilitate the gig economy.
- goal state - In problem solving, a goal state describes how we want our problem to look when we have successfully completed the task.
- Golden Age of AI - The early period of AI research, from about 1956-75 (followed by the AI Winter). Work in this period focussed on the “divide and conquer” approach: build systems that demonstrate the components of intelligent behaviour, in the hope they can later be integrated.
- gradient descent - A technique when training neural nets. See also backpropagation.
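In a one-dimensional caricature (real networks adjust millions of weights the same way), gradient descent looks like this; the error function and learning rate are arbitrary illustrations:

```python
# Gradient descent on the error function error(w) = (w - 3)**2, whose
# minimum is at w = 3. Repeatedly step in the downhill direction.
w, learning_rate = 0.0, 0.1
for _ in range(100):
    gradient = 2 * (w - 3)          # derivative of (w - 3)**2 at w
    w -= learning_rate * gradient   # step against the gradient
print(w)                            # close to 3: the error-minimising weight
```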
- Grand Challenge - A competition for driverless cars, organised by US military funding agency DARPA, which led to the triumph of robot STANLEY in October 2005, and which heralded the age of driverless cars.
- Graphics Processing Units (GPUs) - A computer processor, originally developed (as the name suggests) to support computer graphics, but now widely used for training neural nets.
- hard problem of consciousness - The problem of understanding how and why physical processes lead to subjective conscious experiences. See also qualia.
- Harm Assessment Risk Tool (HART) - A machine learning system developed to help the police force in Durham, UK, decide whether someone should be detained in custody.
- heuristic, heuristic search - A heuristic is a “rule of thumb” to focus search. Heuristics are rules of thumb in the sense that they are not guaranteed to focus search in the right direction. See also A*.
- high-level programming languages - A programming language that abstracts away the low-level details of the actual computer the program is running on. High-level programming languages are machine independent, at least in principle: the same program will work on different types of computers. Examples include Python and Java. John McCarthy’s LISP was an early example.
- HOMER - An agent developed in the 1980s, which operated in a simulated “sea world” environment. HOMER could converse in (a subset of) English, be given tasks to accomplish in the sea world, and had some common sense understanding of its actions.
- homunculus problem - A classic problem in the theory of mind, which occurs when we try to explain the problem of mind by inadvertently delegating it to another mind.
- human-computer interaction (HCI) - How we interact with computers. The graphical user interface is a widely-used paradigm for human computer interaction. See also agent-based interface.
- image captioning - A classic problem for machine learning: we give the computer a picture, and we want it to come back with an appropriate caption (e.g., “a woman smiling at a dog”).
- ImageNet - A database of labelled images, developed by Fei-Fei Li, which was enormously influential in deep learning for training programs to do image captioning.
- inference engine - The part of an expert system that does reasoning, deriving new knowledge from rules and facts in working memory.
- initial state - In problem solving, the initial state describes what the problem looks like before we have carried out our task. See also goal state.
- intentional stance - The idea of predicting and explaining the behaviour of some entity by attributing to it mental states such as beliefs and desires, and assuming that it will act rationally on the basis of these beliefs and desires.
- intentional system - Any system that is amenable to an intentional stance characterisation.
- inverse reinforcement learning - When a machine learning program observes what a human does, and tries to learn a reward system from these observations.
- knowledge base - In an expert system, the knowledge base consists of the human expert knowledge, typically encoded in the form of rules.
- knowledge elicitation - The process of extracting and encoding human expert knowledge from the relevant experts when building an expert system.
- knowledge engineer - Someone trained to construct knowledge-based systems. A knowledge engineer will spend a lot of time working on knowledge elicitation.
- knowledge graph - A very large knowledge-based system developed by Google by automatically extracting knowledge from the World Wide Web.
- Knowledge is Power - The doctrine that explicitly capturing and using human knowledge is the key to AI.
- Knowledge Navigator - A concept video developed by Apple in the 1980s, which introduced the idea of the agent-based interface.
- knowledge representation - The problem of explicitly encoding knowledge in a form that can be processed by computers. In the era of expert systems, the dominant approach was to use rules, although logic was also widely used.
- knowledge-based AI - The dominant paradigm for AI from about 1975-85, focussed on using explicit knowledge about problems, often in the form of rules.
- layered neural net - The standard way of organizing a neural net, into a series of layers, where the outputs of each layer feed into the next layer. A key problem in the early days of neural net research was that there was no way to train layered neural nets, and networks with a single layer are greatly restricted in terms of what they can do.
- Lighthill Report - A UK report into AI in the early 1970s, which was fiercely critical of AI research at the time. The report led to funding cuts, and is generally recognised as one of the factors that led to the AI winter.
- LISP - A high-level programming language that was widely used in the era of symbolic AI. Developed by John McCarthy. See also PROLOG.
- LISP machines - Computers that were designed specifically to run the LISP programming language.
- Loebner Prize Competition - An annual competition in which programs attempt to pass the Turing Test. The competition is not well-regarded within the AI community.
- logic - A formal framework for reasoning. See also first-order logic and logic-based AI.
- logic programming - An approach to programming in which we simply state what we know about a problem and what our goal is — the machine does the rest. See also PROLOG.
- logic-based AI - An approach to AI in which intelligent decision-making is reduced to logical reasoning, for example in first-order logic.
- Luddites - A short-lived movement in the early 19th century, where workers rebelled against the automation of labour and the emerging factory system.
- machine learning - One of the core capabilities of an intelligent system. A machine learning program learns an association between inputs and outputs without being explicitly told how. Neural nets and deep learning are popular approaches to machine learning.
- maximising expected utility - In decision-making under uncertainty, the principle that, given a choice between multiple alternatives, a rational agent will choose the alternative which gives the maximum utility on average, i.e., the one which maximises expected utility.
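A minimal sketch in Python; the two choices and their probability/utility numbers are invented for illustration:

```python
# Each choice maps to a list of (probability, utility) outcome pairs.
choices = {
    "safe bet":     [(1.0, 50)],
    "risky gamble": [(0.5, 120), (0.5, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(choices, key=lambda c: expected_utility(choices[c]))
print(best)   # "risky gamble": expected utility 60 beats the safe bet's 50
```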
- mean social group size - See Dunbar’s number.
- mental states - A key component of mind: beliefs, desires, and the like. See also intentional stance.
- mind-body problem - One of the most fundamental problems in science: how are the physical processes in the brain/body related to the mind and conscious experience?
- minimax search - A key search technique in game playing, in which you seek to maximise your benefit assuming that an adversary is trying to make you do as badly as possible. See also search trees.
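In outline, and assuming hypothetical game-specific helpers moves() and evaluate(), minimax can be sketched as:

```python
# Minimax: choose the move that maximises your value, assuming the
# adversary will in turn choose the move that minimises it.
def minimax(state, depth, maximising):
    successors = moves(state, maximising)   # hypothetical move generator
    if depth == 0 or not successors:
        return evaluate(state)              # hypothetical heuristic value
    values = [minimax(nxt, depth - 1, not maximising) for nxt in successors]
    return max(values) if maximising else min(values)
```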
- Monte Carlo tree search - A search technique involving random choices, used in the AlphaGo system.
- Moore’s law - A well-known dictum in computing, which says that every two years, the number of transistors you can fit on a processor will double. Implies that computers get more powerful and cheaper at roughly the same rate. Moore’s law held good for decades, although some processor technologies are now hitting physical limits.
- moral agency - An entity is a moral agent if it can understand the consequences of its actions and the distinction between right and wrong, and can therefore be held accountable for its actions. The prevailing view is that AI systems should not and indeed cannot be treated as moral agents. Responsibility lies with the people that build and run an AI system, not with the system itself.
- Moral Machines - An online experiment in which users were asked what choices should be made in various Trolley Problems.
- multi-agent systems - Systems in which multiple agents interact with one another.
- multi-layer perceptron - An early form of layered neural net.
- MYCIN - A classic expert system from the 1970s, which was intended to act as a doctor’s assistant, diagnosing blood diseases in humans.
- naive exhaustive search - The simplest type of search, in which we try to solve a problem by looking at every possible solution. Doesn’t work for anything but the simplest problems. See also combinatorial explosion and heuristics.
- narrow AI - In contrast to general AI, the idea of building AI systems that focus on very specific problems, rather than trying to be capable of the full range of human intellectual abilities. The term is mainly used in the media: it isn’t really used in AI itself.
- Nash equilibrium - A core concept in game theory, where a group of decision-makers are all simultaneously satisfied that they did the best they could given the choices made by others.
- natural language understanding - The problem of building programs that can interact in ordinary human languages like English.
- neural networks/neural nets - An approach to machine learning using “artificial neurons”. The basic technique used in deep learning. See also perceptron.
- neurons - A nerve cell that is connected to other nerve cells, communicating with them by an axon. The basic information processing unit of the brain, and the inspiration for neural nets.
- Nixon diamond - A classic problem in common sense reasoning, in which we are forced to conclude that something both is and is not true — such inconsistencies are very hard to handle in logical AI.
- NP-complete - A class of computational problems that resist attempts to solve them efficiently. The theory of NP-completeness was developed in the 1970s, and many AI problems were soon discovered to be NP-complete. See also P vs NP problem.
- ontological engineering - In expert systems (and more generally, knowledge-based systems), the task of defining the conceptual vocabulary you will use to represent knowledge in your system.
- opaqueness of neural nets - The problem that the expertise a neural network has is encoded in a series of numeric weights: we have no way of being able to tell what those weights “mean”. This means that current neural nets can’t explain or justify their decisions.
- optimisation problems - A class of problems in which the challenge is to find the “best” (optimal) solution, typically aiming to minimise cost or maximise reward. Optimisation problems are very widely studied in mathematics.
- P vs NP problem - The question of whether NP-complete problems definitely can or cannot always be solved efficiently. One of the biggest open problems in mathematics today. Not likely to be settled any time soon. (“P” stands for “polynomial time”; “NP” stands for “non-deterministic polynomial time”: technically, the P vs NP problem is whether problems that can be solved in non-deterministic polynomial time can be solved in polynomial time.)
- perception - The process of understanding what is around you in your environment. This was a fundamental sticking point for symbolic AI.
- perceptrons - A type of neural net, studied in the 1960s but still relevant today. Research in perceptrons died out in the early 1970s when it was shown that there are severe limits to what one-layer perceptrons can learn.
- perverse instantiation - When an AI system does what you asked it to, but not in the way you anticipated. For example, imagine a robot that burns down your house to prevent it being burgled.
- physical stance - The idea that we try to predict and explain the behaviour of an entity with respect to its physical structure and physical laws. Contrast with the intentional stance.
- planning - The problem of finding a sequence of actions that will transform an initial state into a goal state. See also search.
- preferences, preference relation - A description of your preferences, in which you rank every possible pair of alternatives. If you want an agent to act on your behalf, it needs to know your preferences so that it can make the best choice possible on your behalf.
- premises - In logic, the premises are the knowledge you start from. You then use logical reasoning to derive conclusions from these premises.
- prior probability - The probability that you attach to an hypothesis before receiving any further information. “Prior” in this sense thus simply means “before you get any more information”.
- proactive - The ability of an agent to exhibit goal-directed behaviour. See also reactive.
- problem solving - In AI, problem solving means finding the right sequence of actions to transform a problem from an initial state to a goal state. Search is the standard approach to problem solving in AI.
- program - A (computer) program is an algorithm expressed in an actual programming language like Python or Java.
- PROLOG - A programming language based on first-order logic, which was particularly popular in the era of logic-based AI.
- PROMETHEUS - A seminal European experiment in driverless car technology from the 1980s.
- qualia - Personal mental experiences. An example might be smelling coffee, or drinking a cold drink on a hot day.
- R1/XCON - A classic expert system from the 1970s, developed by DEC for configuring their VAX computers. An early example of profitable AI.
- reactive - The ability of an agent to be in tune with and respond to changes in its environment. See also proactive.
- reinforcement learning - A form of machine learning, in which an agent acts in its environment, and receives feedback on its actions in the form of rewards.
- representation harm - A form of bias, in which stereotypes are developed or reinforced.
- reward - See reinforcement learning.
- rule - A discrete piece of knowledge, expressed in the form of an “IF…THEN…” expression. For example, consider this rule: “IF animal has udder THEN animal is mammal”. This rule tells us that if we have the information that an animal has an udder, then we can derive some new information, namely that the animal is a mammal.
- Sally-Anne test - A test which aims to determine whether some entity has a Theory of Mind, i.e., the ability to reason about the beliefs and desires of others. Originally developed as a test for autism.
- scripts - A knowledge representation scheme developed in the 1970s, which aims to capture stereotypical sequences of events in common situations.
- search, search tree - A fundamental AI problem solving technique, in which we try to find how to achieve a goal starting from some initial state, using a repertoire of actions, generating a search tree.
- semantic nets - A knowledge representation scheme, in which we capture relationships between concepts and entities using a graphical notation.
- sensors - Devices that give robots raw perceptual data. Typical sensors are cameras, laser radar, ultrasonic range finders, and bump detectors. Interpreting the raw perceptual data is a major challenge.
- SHAKEY - A seminal experiment in autonomous robots, developed at SRI in the late 1960s, which pioneered several key AI technologies.
- singularity - The hypothesized point at which machine intelligence exceeds that of humans. After that, the fear is that machines will be out of control.
- situated agent - A central idea in the era of behavioural AI, to the effect that progress on AI requires developing agents that are actually embedded in and acting on some environment, rather than being disembodied (as is usually the case with expert systems).
- social ability (in agents) - The Golden Age of AI emphasized individual capabilities for intelligence — planning, reasoning, problem solving. The era of agent-based AI emphasized social abilities: cooperation, coordination, negotiation, which would be required if agents were to interact with one another.
- social welfare - Any attempt to measure the aggregate utility of a society, i.e., how well society is doing as a whole, is a measure of social welfare.
- software agents - Agents that inhabit a software environment rather than the physical world as in robots. Think of them as software robots.
- solution concept - In game theory, a solution concept tries to formulate what will happen if every player acts optimally (rationally), taking into account the preferences and choices of other players, and the fact that they will also act rationally. Nash equilibrium is the best-known solution concept.
- sound reasoning - Reasoning is said to be sound if the conclusions derived are warranted from the premises.
- STANLEY - The robot that won the 2005 DARPA Grand Challenge for driverless cars, autonomously driving approximately 140 miles, averaging about 19 miles per hour. Developed at Stanford University.
- STRIPS - A seminal planning system, developed as part of the SHAKEY robotics project at SRI.
- strong AI - The goal of building AI systems that really do have minds, consciousness, awareness, and so on, in the way that we do. See also weak AI and general AI. Nobody knows whether strong AI is possible or what it would be like.
- subsumption architecture and subsumption hierarchy - An architecture for robots from the era of behavioural AI, in which we organize the desired behaviours of the robot into a hierarchy, with lower layers taking precedence over higher layers.
- supervised learning - The simplest form of machine learning, where we train a program by showing it examples of inputs and desired outputs. See also training.
- syllogism - A very simple form of logical reasoning, introduced in antiquity. An example is: “All humans are mortal; Emma is a human; therefore Emma is mortal”.
- synapses - The “junctions” that connect neurons allowing them to communicate.
- Theory of Mind (ToM) - The everyday ability that clinically normal human adults have to reason about the mental states (beliefs, desires, intentions) of other people. See also intentional stance and Sally-Anne test.
- Three Laws of Robotics - Introduced by science fiction writer Isaac Asimov in the 1940s, the three laws are a kind of ethical framework governing AI behaviour. While they are very ingenious, it isn’t possible to directly implement them in practice — and it’s not even clear what they would mean.
- ToMnet - An experiment in building machine learning systems that develop a primitive Theory of Mind.
- TouringMachines - A typical agent design from the mid 1990s, in which control of the agent is divided into three layers, responsible for reacting, planning, and modelling, respectively.
- Towers of Hanoi - A recreational puzzle that was widely studied in AI, particularly in the context of search.
- tractable problem - A problem is said to be tractable if we have an efficient algorithm to solve it. NP-complete problems are not tractable: we don’t have algorithms that are guaranteed to solve them efficiently. See also P vs NP problem.
- training (in machine learning) - The task of a machine learning program is to learn associations between inputs and outputs without being told how to compute the association. To do this, the program is typically trained by giving it examples of inputs and the desired corresponding outputs. See also supervised learning.
- travelling salesman problem - A classic NP-complete problem, involving determining whether a salesman can tour a number of cities and return to his starting point using a fixed fuel budget.
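A naive exhaustive solution to the decision version can be sketched in a few lines of Python (dist here is a hypothetical distance table); with n cities there are (n-1)! tours to try, which is exactly why the problem resists efficient solution:

```python
from itertools import permutations

# Decision version: is there a tour visiting every city and returning to
# the start whose total length fits the budget? Brute force over all tours.
def tour_within_budget(cities, dist, budget):
    start, rest = cities[0], cities[1:]
    for order in permutations(rest):
        tour = (start,) + order + (start,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost <= budget:
            return True
    return False
```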
- Trolley Problem - A problem in ethical reasoning, originally posed in the 1960s: if you do nothing, five people will die, while if you act, then only one person will die; should you act? Often discussed in the context of driverless cars, although mostly dismissed as irrelevant by that community.
- Turing machines - A mathematical model of a problem-solving machine: each Turing machine embodies a particular recipe for solving a problem. Any mathematical problem that can be solved by a computer can be solved by a Turing machine. Invented by Alan Turing to solve the Entscheidungsproblem. See also Universal Turing Machine.
- Turing test - A test proposed by Turing to address the question of whether machines can “think”. If, after interacting with some entity for some time, you can’t be confident whether it is a machine or person, you should accept it has human-like intelligence. Ingenious and very influential, although not to be taken too seriously as an actual test for AI.
- uncertainty - A ubiquitous problem in AI: the information we receive is rarely certain (definitely true or false) — there is usually some uncertainty associated with it. Similarly, when we make decisions, we rarely know with certainty what the consequences of those decisions are: there are usually multiple possible outcomes, with differing degrees of likelihood. Dealing with uncertainty is therefore a fundamental topic in AI. See also Bayesian networks and Bayes’ Theorem.
- undecidable problem - A problem that we know, in a precise mathematical sense, cannot be solved by a computer (or, more precisely, by a Turing machine).
- universal basic income - The idea that everyone should receive a basic income, irrespective of their personal circumstances. It has been suggested that the economic benefits of AI and other technologies will make universal basic income feasible and desirable. I think this is highly unlikely any time soon.
- Universal Turing Machines - A general type of Turing machine, which provided the template for the modern computer. While a Turing machine encodes just one specific recipe/algorithm, a Universal Turing Machine can be given any recipe/algorithm.
- unsound reasoning - In logic, where we derive conclusions that are not warranted by the premises. See also sound reasoning.
- Urban Challenge - A 2007 follow-on to the DARPA Grand Challenge, in which autonomous vehicles were required to autonomously traverse a built-up urban environment.
- utilitarianism - The idea that we should choose to act so as to maximise the benefit of society. In a Trolley Problem, a utilitarian would choose to kill one person in order to save five lives. See also virtue ethics.
- utilities - A standard technique for representing preferences in AI programs: we attach numeric values, called utilities, to all possible outcomes. The AI system then tries to compute the course of action which would lead to the outcome which maximises utility, i.e., is the most preferred outcome. See also expected utility and maximising expected utility.
- utopians - Those who believe that AI and other new technologies will lead us to a utopian future (where technology frees us from work, etc).
- virtue ethics - The idea that, when facing ethical problems, we identify an ethical person who embodies the ethical principles that we value, and that we should then choose to do what that ethical person would do.
- weak AI - The goal of building machines which appear to have understanding (consciousness, mind, self-awareness, etc) without claiming that they actually have these things. See also strong AI and general AI.
- wearable technology - Devices such as fitness trackers, which we carry with us and wear, and which continually monitor aspects of our physiology. Expected to play a role in AI-driven healthcare in the years to come.
- weights (in neural nets) - In neural nets, connections between neurons (axons) are given numeric weights. The higher the weight, the more the connection will influence the neuron to which it is linked. A neural network ultimately boils down to these weights, and the task of training a neural net is all about finding appropriate weights.
- Winograd schemas - An alternative to the Turing test. We are given two sentences that differ in just one word, but which have fundamentally different meanings. A classic example: “The city councilmen refused the demonstrators a permit because they feared violence” versus “The city councilmen refused the demonstrators a permit because they advocated violence” (who does “they” refer to in each case?). The test requires understanding the difference. The idea is that Winograd schemas resist the cheap tricks often used in the Turing test, because they require comprehension of the text.
- WordNet - A freely available computational thesaurus, widely used in AI research.
- working memory - In an expert system, the part of the system that has information about the current problem being solved (as opposed to the knowledge about the problem encoded in rules).