AI - Artificial Intelligence

This topical guide is designed to provide writers and communicators with some terminology to explain the societal impacts of AI and generative AI models — including their potential, their inherent risks, and their varying effects on different groups.

Some key points, as noted in the first entry below: Avoid language that attributes human characteristics to these systems, since they do not have thoughts or feelings but can respond in ways that give the impression that they do. Do not use gendered pronouns in referring to AI tools. And keep in mind that such systems are built by people who have their own human biases and aims.


Below are selected entries from The Associated Press Stylebook.

artificial intelligence: Computer systems, software, or processes that can emulate aspects of human work and cognition. Such systems are not conscious but are trained on vast datasets to accomplish tasks such as visual perception, analyzing and using speech, and learning — although in many cases, only to a limited extent. The term itself has been the subject of debate over the definition of intelligence as the technology has evolved in scope and influence.

AI is acceptable in headlines and on second reference in text.

Avoid language that attributes human characteristics to these systems, since they do not have thoughts or feelings but can respond in ways that give the impression that they do. Do not use gendered pronouns in referring to AI tools. And keep in mind that such systems are built by people who have their own human biases and aims.

The terms artificial intelligence and artificial general intelligence are not synonymous.

artificial general intelligence: An emerging branch of artificial intelligence that aims to build AI systems that can perform just as well as — or even better than — humans in a wide variety of tasks, including reasoning, planning, and the ability to learn from experience.

Some developers say the technology could result in broadly intelligent, context-aware machines that could adapt to be used effectively in a variety of settings. However, other researchers say that it would take a very long time for such systems to achieve "human-level" intelligence, which relies on inherent human traits such as sensory perception, creativity, understanding emotion, and critical reasoning.

For now, AI systems can emulate only aspects of human work and cognition; they are not sentient.

The definition of artificial general intelligence, and concepts about how it differs from human intelligence, have changed over the years. When evaluating claims of artificial general intelligence, consider the source and their motivations, and beware of so-called breakthroughs, since few truly qualify.

Don’t use AGI on second reference unless necessary in a direct quotation, in which case explain the term.

algorithm: Detailed computational instructions that describe how to solve a problem or perform a specific task. A simple, real-life example is a recipe, which describes both a set of inputs — i.e., ingredients — and an output consisting of the dish itself. Machine learning algorithms are tools that are "trained" with large datasets to improve the predictions they generate. For example, some algorithms try to predict which posts a person wants to see on a social media platform, while other algorithms dispense targeted recommendations to each user who visits a shopping website. Some of the most highly complex machine learning algorithms are not always fully understood by their creators.
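To illustrate the recipe analogy above, here is a minimal, hypothetical example of an algorithm expressed as code: a fixed set of steps that turns inputs into an output.

```python
def average(numbers):
    """A simple algorithm: the list of numbers is the input,
    and the arithmetic mean is the output."""
    total = 0
    for n in numbers:
        total += n               # step 1: add up every value
    return total / len(numbers)  # step 2: divide by the count

print(average([2, 4, 6]))  # prints 4.0
```

Machine learning algorithms follow the same input-to-output structure, but the steps themselves are tuned automatically based on training data rather than being fully specified by a programmer.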

Algorithmic bias, in which decisions guided by AI systems have discriminatory or disproportionate effects on certain groups of people, has emerged as a major issue raised by critics and government agencies alike.

algorithmic bias/AI bias: Decisions guided by AI tools that replicate and amplify human biases, leading to discriminatory outcomes that can systematically impact specific groups of people.

Algorithmic bias is often used to describe the negative impacts of tools that draw from large datasets that are skewed by historical or selection bias.

For instance, an AI sentencing tool trained on historical data showing that Black offenders typically received longer prison terms than white offenders for the same crimes could yield predictions that incorporate that bias — in effect, reproducing past injustices.

In another instance, U.S. civil rights agencies have found that some computer-based tools for hiring workers or monitoring their performance may disadvantage people with disabilities.

Such bias can occur along lines of age, race, color, ethnicity, sex, gender identity, sexual orientation, religion, disability, veteran status, class, and many other variables.

These technologies also can perpetuate AI bias, a different class of errors that can result from the creation or use of AI tools. For example, AI bias can arise as a result of decisions made in the design of the AI tool or the historical models it draws from, or in the societal context in which humans use the AI system.

Explain either term if writing for general audiences.

ChatGPT: An artificial intelligence text chatbot made by the company OpenAI that was released as a free web-based tool in late 2022. It relies on technology known as a large language model, which is trained to mimic human writing by processing a large database of digitized books and online writings and analyzing how words are sequenced together.

People can ask ChatGPT — and similar chatbots made by OpenAI's many competitors — to answer a question or generate new passages of text, including poems, letters, and essays. It responds by predicting which words would best answer the prompt it was given.

Tools such as ChatGPT show a strong command of human language, grammar, and writing styles but are sometimes factually incorrect. Avoid language that attributes human characteristics to these tools, since they do not have thoughts or feelings but can sometimes respond in ways that give the impression that they do.

Like other AI models, ChatGPT can be prone to algorithmic bias that may skew its responses and analyses. Outside researchers’ inability to probe its training dataset also complicates efforts to understand how it settles on its responses, what information it relies on, and how it reaches conclusions.

ChatGPT’s popularity after its release helped spark public fascination and commercial interest in similar technologies, and numerous competitors also have introduced their own chatbots built with large language models. Some companies, including Google, Microsoft, and startups, have released their own publicly accessible chatbots, while others use the technology internally or sell it directly to businesses.

Some but not all commercially available chatbots are powered by GPT, which is an abbreviation for generative pretrained transformer. Use chatbot as the generic term. Don’t use GPT or ChatGPT to refer to all chatbots.

face recognition: A technology for automatically detecting human faces in an image and identifying individual people. It is a form of biometric technology that relies on comparing aspects of a face against a database of images to find a match. Techniques for comparing facial features to recognize individual faces have existed since the 1960s, but the technology has improved through advancements in computer vision, machine learning, and data processing.

Face recognition raises privacy and accuracy concerns because governments and others can scan images from video cameras or the internet and track individual people without their knowledge, and some systems have been shown to work unevenly across demographic groups. Some lawmakers have sought to curtail the technology as it becomes more widely used by law enforcement, businesses, and consumers.

Similar technologies include gait recognition, for detecting people in video images based on their body shape and how they move; and object recognition, for detecting objects in an image, such as a traffic cone in the path of a self-driving car.

Face recognition technology is sometimes called facial recognition technology or face scanning.

generative AI: A term for AI systems capable of creating text, images, video, audio, code, and other media in response to queries. Humans can interact with generative AI models in a seemingly natural way, but the models aren’t reliably capable of distinguishing between facts and falsehoods or fantasy. Generative AI systems often are powered by large language models.

They sometimes generate inaccurate or fabricated responses to queries, an effect AI scientists call hallucination. If using the term hallucination, describe it as an issue associated with the technology that produces falsehoods or inaccurate or illogical information. Some in the field prefer confabulation or simpler terms that, unlike hallucination, don’t draw comparisons with human mental illness.

large language models: AI systems that use advanced statistics to uncover patterns in vast troves of written texts that they can apply to generate responses. Such systems are increasingly capable of applying the syntax and semantics of human speech and can also be used to generate a variety of media (see generative AI). The models work based on the probability that certain words and phrases appear together, and their level of sophistication and accuracy can vary across human languages. GPT, an AI system created by the Microsoft-backed company OpenAI, is a large language model. Do not abbreviate as LLM outside technical contexts.
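As a rough illustration of the word-probability idea, the toy sketch below (not a real large language model, and vastly simpler than one) counts which word most often follows another in a small sample text and uses those counts to "predict" the next word:

```python
from collections import Counter, defaultdict

# Toy sketch, not a real large language model: tally which word
# follows each word in a tiny sample text.
text = "the cat sat on the mat and the cat ran".split()

follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Return the word most often observed after `word`.
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

A production model works on the same principle at incomparably larger scale, estimating probabilities over vast vocabularies and long stretches of context rather than single-word counts.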

machine learning (n., adj.): An AI process in which computer systems identify patterns in datasets to make or refine the decisions and predictions that they generate without being explicitly programmed to do so. Examples of machine learning applications include face recognition, language translation, and self-driving cars.

Explain the term if writing for general audiences. Machine learning is not a synonym for AI.
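As a minimal sketch of the pattern-finding idea (not how any production system is built), the code below "learns" the slope of a line from example data points instead of having it programmed in:

```python
# Minimal sketch of learning from data: estimate the slope w in
# y ≈ w * x from examples, rather than hard-coding w.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # noisy observations of roughly y = 2x

# Least-squares estimate for a line through the origin.
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(round(w, 2))          # prints 1.99, close to the true slope of 2
```

The fitted value of w comes entirely from the data; change the examples and the "learned" behavior changes with them, which is the core distinction from explicitly programmed rules.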

training data: A dataset used to teach an algorithm or a machine learning model how to make predictions. Because the models learn to find patterns from the training data, it is important to consider the specific information it may contain. The types of training data used in different AI tools can vary widely, from large quantities of written texts to vast digital libraries of images of human faces to historical arrest records from specific geographic areas.