A machine learning technique in which the model actively queries the user or an oracle for labels or feedback on specific instances, often used to reduce the amount of labelled data required for training and improve model performance.
Specially crafted input data that can deceive and manipulate the output of a machine learning model, often used to test the robustness and security of AI systems.
A machine learning technique used to identify unusual patterns or outliers in data that deviate from the norm, often used for fraud detection, network security, or quality control.
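For illustration, a minimal Python sketch of one simple approach, z-score thresholding (the sensor readings and the threshold of 2.0 are illustrative assumptions, not a production method):

```python
import numpy as np

def zscore_outliers(values, threshold=2.0):
    """Flag points whose z-score (distance from the mean in standard deviations) exceeds the threshold."""
    values = np.asarray(values, dtype=float)
    z = np.abs(values - values.mean()) / values.std()
    return z > threshold

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 42.0, 10.2]  # toy sensor data; 42.0 is anomalous
print(zscore_outliers(readings))  # [False False False False False  True False]
```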
The development of computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and natural language understanding.
A specialized hardware device or chip designed to accelerate the computation and processing of AI workloads, such as training and inference in neural networks, often used to improve the performance and efficiency of AI systems.
The study and development of ethical principles, guidelines, and practices for the design, deployment, and governance of artificial intelligence systems, addressing issues such as fairness, accountability, transparency, and privacy.
The study and development of methods, techniques, and best practices to ensure that artificial intelligence systems are safe, reliable, and aligned with human values, addressing risks such as unintended consequences, adversarial attacks, or loss of control.
A technique used in neural networks, particularly in natural language processing, to selectively focus on specific parts of the input data, allowing the model to weigh the importance of different elements and improve its performance on tasks such as translation, summarization, or sentiment analysis.
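As a rough sketch, the scaled dot-product attention at the heart of this technique can be written in a few lines of NumPy (the random vectors below stand in for real token representations):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each output is a weighted sum of the values V, weighted by how well keys K match queries Q."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # query-key similarity, scaled
    weights = softmax(scores, axis=-1)       # attention weights sum to 1 per query
    return weights @ V

rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))  # 3 toy tokens, 4 dimensions each
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (3, 4)
```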
Machines or systems that can operate independently, without human intervention or supervision, by perceiving their environment, making decisions, and taking actions based on their goals and objectives.
Autonomous agents can understand objectives, create tasks to achieve those objectives, execute the tasks, adapt priorities and learn from their actions until the desired goals are reached, all without human intervention. Building systems that allow cooperation between humans and autonomous agents can lead to better productivity, decision-making and problem-solving.
Examples: autonomous cars, Auto-GPT, BabyAGI, AgentGPT, virtual assistants such as Siri, Alexa and Google Assistant
A probabilistic graphical model that represents a set of variables and their conditional dependencies using a directed acyclic graph (DAG), often used for reasoning under uncertainty and modelling complex systems.
The presence of systematic errors in the predictions or decisions made by an AI system, often resulting from biased training data, algorithmic bias, or other factors, which can lead to unfair or discriminatory outcomes.
A type of neural network architecture proposed by Geoffrey Hinton that aims to overcome the limitations of convolutional neural networks by encoding hierarchical relationships between features and preserving spatial information, potentially improving the robustness and interpretability of the model.
A computer program or AI system designed to interact with users through natural language conversation, often used for customer support, information retrieval, or entertainment purposes.
A field of AI that focuses on enabling computers to interpret and understand visual information from the world, such as images and videos, and make decisions based on this understanding.
A type of deep learning architecture specifically designed for processing grid-like data, such as images, by using convolutional layers to detect local patterns and features.
A technique used in machine learning to increase the size and diversity of a training dataset by applying various transformations, such as rotation, scaling, or flipping, to the original data samples.
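For illustration, a minimal Python sketch that turns one image array into several training variants (the 3x3 array is a stand-in for a real image):

```python
import numpy as np

def augment(image, rng):
    """Yield simple variants of a 2-D image array: flips, rotation, and light noise."""
    yield np.fliplr(image)                          # horizontal flip
    yield np.flipud(image)                          # vertical flip
    yield np.rot90(image)                           # 90-degree rotation
    yield image + rng.normal(0, 0.01, image.shape)  # small random noise

image = np.arange(9, dtype=float).reshape(3, 3)     # stand-in for a real image
variants = list(augment(image, np.random.default_rng(0)))
print(len(variants))  # 4 extra samples derived from one original
```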
A type of machine learning algorithm that builds a tree-like structure to model decisions and their possible consequences, often used for classification and regression tasks.
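As a quick illustration, scikit-learn can fit a small decision tree and print its human-readable structure (a sketch assuming scikit-learn is installed; the iris dataset is just a convenient example):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree))    # each node tests one feature against a threshold
print(tree.predict(X[:1]))  # predicted class of the first sample
```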
A subset of machine learning, loosely inspired by the way the human brain processes information, in which models are built from many stacked layers of artificial neural networks. Given enough data, deep learning models can learn features and make intelligent decisions on their own.
For example, deep learning is used in autonomous vehicles to interpret visual input and navigate roads.
A machine learning technique that combines the predictions of multiple models, such as decision trees, neural networks, or support vector machines, to improve the overall accuracy and robustness of the predictions.
A computer program that uses AI techniques to simulate the decision-making ability of a human expert in a specific domain, providing solutions or recommendations based on a knowledge base and a set of rules.
An approach to artificial intelligence that focuses on creating AI systems that can provide clear, understandable explanations for their decisions and actions, enabling humans to trust and effectively collaborate with AI.
A technique used in machine learning to identify and select the most relevant and informative features from a dataset, reducing the dimensionality of the data and improving the performance and interpretability of the models.
A distributed machine learning approach that enables multiple devices or clients to collaboratively train a shared model while keeping their data locally, often used to improve privacy, security, and data efficiency in AI systems.
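A minimal sketch of the aggregation step in the widely used FedAvg scheme, assuming each client has already trained locally (the weight vectors and dataset sizes below are toy values):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine locally trained weights, weighted by each client's dataset size; raw data never leaves the clients."""
    fractions = np.asarray(client_sizes, dtype=float)
    fractions /= fractions.sum()
    return sum(f * w for f, w in zip(fractions, client_weights))

local_weights = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.8, 2.2])]
print(federated_average(local_weights, client_sizes=[100, 50, 50]))  # [1. 2.]
```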
A machine learning technique in which a model learns to recognize new objects or classes based on a small number of labelled examples, aiming to improve the model's ability to generalize from limited data.
Also known as Artificial General Intelligence (AGI) or strong AI, it refers to a type of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence.
Generative AI refers to a category of artificial intelligence algorithms that can create new content, such as text, images, audio, and more, based on the data they have been trained on. These models use deep learning techniques to generate outputs that are often indistinguishable from those created by humans. Examples include models like ChatGPT, which generates human-like text, and DALL-E, which creates images from textual descriptions. The underlying technology typically involves transformer architectures, which excel at understanding and producing complex sequences of data.
A deep learning architecture consisting of two neural networks, a generator and a discriminator, that compete against each other in a zero-sum game, often used for generating new data samples that resemble a given dataset.
A search heuristic inspired by the process of natural selection, used to find approximate solutions to optimization and search problems in AI and machine learning.
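For illustration, a minimal genetic algorithm that evolves bit strings toward a toy objective (maximising the number of 1s); the population size, mutation rate, and other parameters are illustrative assumptions:

```python
import random

random.seed(0)

def fitness(bits):
    return sum(bits)  # toy objective: count of 1s

def evolve(pop_size=20, length=16, generations=30, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]                  # crossover
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                   # mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best), "/ 16 ones:", best)
```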
In the context of AI, hallucinations refer to the generation of false or unrealistic outputs by a model, often caused by overfitting, insufficient training data, or biases in the data.
The process of optimizing the hyperparameters, i.e., the external configuration settings, of a machine learning model, such as the learning rate, number of layers, or activation function, to achieve the best performance on a given task.
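As a sketch of one common approach, grid search with cross-validation in scikit-learn (the candidate values and the synthetic dataset are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Try every combination of candidate settings with 5-fold cross-validation.
search = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": [1, 3, 5, 9], "weights": ["uniform", "distance"]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```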
A structured representation of knowledge in the form of nodes (entities) and edges (relationships) that can be used to store, organize, and query information, often used in AI systems for reasoning, question answering, and knowledge discovery.
A large language model is an artificial intelligence system designed to understand, generate, and manipulate human language. It's called "large" because it has been trained on vast amounts of text data and consists of many parameters (often billions or more). These models use deep learning techniques, particularly transformer architectures, to predict and generate text based on the input they receive. Essentially, they can perform tasks like translation, summarization, and conversation by recognizing patterns and structures in language. (GPT 4o, 24.5.24)
A type of recurrent neural network architecture designed to address the vanishing gradient problem, allowing the network to learn long-term dependencies more effectively.
A subset of AI that focuses on designing algorithms and statistical models that enable computers to learn and improve from experience without being explicitly programmed. The "experience" might be a dataset describing previous instances, not necessarily the experience of this particular system.
Examples: weather forecasting based on historical weather data.
A subfield of AI that studies the behaviour and interaction of multiple autonomous agents, which could be robots, software agents, or humans, in a shared environment, with the aim of developing systems that can coordinate and cooperate to achieve common goals.
Multimodal AI refers to artificial intelligence systems that integrate and process multiple types of data, or modes, such as text, images, audio, and numerical data. By combining these different data types, multimodal AI can create more accurate and insightful outputs, leading to a more holistic understanding of the information. This approach enhances the AI's ability to draw conclusions and make predictions based on a richer set of inputs.
Also known as weak AI or specific AI, it refers to artificial intelligence systems designed to perform specific tasks or solve narrowly defined problems, without possessing the broad cognitive abilities of general AI, e.g. speech recognition.
A field of AI that focuses on enabling computers to understand, interpret, and generate human language in a way that is both meaningful and useful.
A computational model inspired by the structure and function of biological neural networks, consisting of interconnected nodes (neurons) that process and transmit information.
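For illustration, the forward pass of a tiny two-layer network in NumPy (random untrained weights; training would adjust them to reduce prediction error):

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 inputs -> 4 hidden neurons -> 1 output
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # ReLU activation in the hidden layer
    return hidden @ W2 + b2              # linear output layer

print(forward(np.array([0.5, -1.0, 2.0])))
```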
A machine learning technique in which a model learns to recognize new objects or classes based on just one or very few examples, inspired by the human ability to learn from limited experience.
An AI language model developed by OpenAI that can understand and generate human-like text based on prompts, often used for tasks such as code generation, natural language understanding, and content creation.
A phenomenon in machine learning where a model learns the training data too well, capturing noise or random fluctuations instead of the underlying patterns, resulting in poor generalization to new, unseen data.
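A toy demonstration of the effect, assuming a noisy sine curve as the hidden pattern: a degree-9 polynomial fits the 10 training points almost perfectly but typically generalizes worse than a degree-3 fit:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)  # noisy samples
x_test = np.linspace(0, 1, 100)
y_test = np.sin(2 * np.pi * x_test)                              # the true pattern

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```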
In the context of AI and natural language processing, a prompt is an input text or signal given to a model to trigger a response or generate an output, often used to guide the model's behaviour during training or inference.
A technique that combines information retrieval with text generation. Rather than relying only on what was learned during training, these systems look up relevant information, typically from external documents or databases, at query time and use it to ground their output. This can improve the quality, accuracy, and relevance of the generated responses.
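A heavily simplified sketch of the retrieval step, using a toy bag-of-words matcher (real systems use learned embeddings and an actual language model; the documents and query are made up):

```python
import numpy as np

documents = [
    "The Eiffel Tower is in Paris.",
    "Python is a programming language.",
    "The Nile is a river in Africa.",
]

def words(text):
    return text.lower().replace(".", "").replace("?", "").split()

vocab = sorted({w for d in documents for w in words(d)})

def embed(text):
    return np.array([words(text).count(w) for w in vocab], dtype=float)

def retrieve(query):
    scores = [embed(query) @ embed(d) for d in documents]  # word-overlap score
    return documents[int(np.argmax(scores))]

query = "Where is the Eiffel Tower?"
context = retrieve(query)
# A real system would now send this augmented prompt to a language model:
print(f"Answer using this context: {context}\nQuestion: {query}")
```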
A type of neural network architecture designed for processing sequences of data by maintaining an internal state or memory that can capture information from previous time steps.
A type of machine learning where an agent learns to make decisions by interacting with an environment, receiving feedback in the form of rewards or penalties, and optimizing its actions to maximize cumulative rewards.
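For illustration, tabular Q-learning in a five-cell corridor where the agent earns a reward of 1 for reaching the rightmost cell (the environment and all parameters are toy assumptions):

```python
import random

random.seed(0)

N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]; action 0 = left, 1 = right
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration rate

def choose(state):
    """Epsilon-greedy: explore sometimes, otherwise take the best-known action."""
    if random.random() < epsilon or q[state][0] == q[state][1]:
        return random.randrange(2)
    return 0 if q[state][0] > q[state][1] else 1

for _ in range(200):                       # episodes
    state = 0
    for _ in range(100):                   # step limit per episode
        action = choose(state)
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: move the estimate toward reward + discounted future value.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        if next_state == GOAL:
            break
        state = next_state

print([round(max(v), 2) for v in q[:GOAL]])  # learned values increase toward the goal
```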
The branch of AI that deals with the design, construction, operation, and application of robots, which are machines capable of carrying out tasks autonomously or semi-autonomously.
An extension of the World Wide Web that aims to make the meaning and relationships of data on the web machine-readable and understandable, enabling more intelligent and efficient search, data integration, and knowledge discovery.
A subfield of natural language processing that focuses on determining the sentiment or emotion expressed in a piece of text, such as positive, negative, or neutral, often used for analysing opinions and attitudes in social media or customer reviews.
Also known as the technological singularity, it refers to a hypothetical point in the future when artificial intelligence surpasses human intelligence, leading to rapid technological advancements and potentially unpredictable consequences for humanity.
A type of machine learning where the algorithm is trained on a labelled dataset, i.e., a dataset with known input-output pairs, to make predictions for unseen data.
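As a quick sketch of the workflow (labelled data in, predictions on unseen data out), using scikit-learn and the iris dataset as a convenient example:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Labelled dataset: flower measurements (inputs) paired with species (outputs).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learn from labelled pairs
print(model.score(X_test, y_test))                               # accuracy on unseen data
```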
A supervised learning algorithm that can be used for classification or regression tasks by finding the optimal hyperplane or decision boundary that separates the data into different classes or predicts the target values.
A form of AI inspired by the collective behaviour of social insects, such as ants or bees, which focuses on the development of algorithms and multi-agent systems that can self-organize and solve problems through decentralized cooperation and communication.
In the context of natural language processing (NLP) and text-based machine learning models, a token refers to a single unit or element of a text. Tokens are usually words, phrases, or individual characters that are extracted from a text during the tokenisation process.
Tokenisation is the process of breaking down a text into a sequence of tokens, which can then be used as input for various NLP tasks such as text classification, sentiment analysis, or language modelling. Tokens serve as the basic building blocks for understanding and processing textual data in AI systems.
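A minimal rule-based tokeniser for illustration; production language models typically use learned subword schemes such as byte-pair encoding instead:

```python
import re

def tokenise(text):
    """Split text into lowercase word and punctuation tokens."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

print(tokenise("AI systems don't read text; they read tokens."))
# ['ai', 'systems', 'don', "'", 't', 'read', 'text', ';', 'they', 'read', 'tokens', '.']
```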
The process of teaching a machine learning model to recognize patterns, make decisions, or generate outputs based on a dataset, typically involving adjusting the model's parameters to minimize the error between its predictions and the actual target values.
A machine learning technique where a pre-trained model is fine-tuned or adapted to solve a different but related problem, leveraging the knowledge gained from the initial task to improve performance on the new task.
Transformer architectures are a type of deep learning model designed to handle sequential data, such as natural language, by using a mechanism called self-attention. Developed by Google and introduced in the 2017 paper "Attention Is All You Need," transformers are highly effective in natural language processing (NLP) and other machine learning tasks due to their ability to consider the entire context of a sentence or sequence simultaneously rather than sequentially.
Key components of transformer architectures include:
Self-Attention Mechanism: This allows the model to weigh the importance of different words in a sentence relative to each other, capturing dependencies regardless of their distance in the sequence.
Multi-Head Attention: Multiple attention mechanisms operate in parallel, enabling the model to focus on different parts of the input simultaneously, thus capturing various aspects of the data.
Positional Encoding: Since transformers do not process data sequentially, positional encodings are added to the input embeddings to give the model information about the position of each word in the sequence.
These features make transformers highly scalable and efficient, significantly advancing the field of NLP and enabling models like BERT and GPT. (GPT 4o, 24.5.24)
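For illustration, the sinusoidal positional encoding from "Attention Is All You Need" in NumPy (the sequence length and model dimension below are toy values):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sine on even dimensions, cosine on odd ones, at wavelengths that grow with dimension."""
    positions = np.arange(seq_len)[:, None]  # (seq_len, 1)
    dims = np.arange(d_model)[None, :]       # (1, d_model)
    angles = positions / np.power(10000, (2 * (dims // 2)) / d_model)
    encoding = np.zeros((seq_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])
    encoding[:, 1::2] = np.cos(angles[:, 1::2])
    return encoding

print(positional_encoding(seq_len=4, d_model=8).round(2))
```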
A test proposed by the British mathematician and computer scientist Alan Turing in 1950 to determine whether a machine can exhibit intelligent behaviour indistinguishable from that of a human, typically involving a human evaluator who engages in a natural language conversation with the machine and another human.
A type of machine learning where the algorithm is trained on an unlabelled dataset, i.e., a dataset without known input-output pairs, to discover hidden patterns, structures, or relationships within the data.
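As a sketch, k-means clustering discovering structure in unlabelled data (the two synthetic blobs are an illustrative assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Unlabelled data: two blobs of points; the algorithm is never told which is which.
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_.round(1))  # the two blob centres emerge from the data
```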
A machine learning technique in which a model learns to recognize new objects or classes without any labelled examples, often by leveraging prior knowledge or semantic relationships between known and unknown classes.
These are the backbone of deep learning and are designed to loosely replicate the way humans think and learn. A neural network takes in inputs, which are processed in hidden layers using weights that are adjusted during training. The model then outputs a prediction. Neural networks can learn to perform tasks by considering examples, generally without task-specific programming.
For example, they can learn to recognize images that contain cats by analysing example images that have been manually labelled as "cat" or "no cat".
Large language models, such as GPT-3.5 Turbo, are advanced artificial intelligence systems that have been trained on vast amounts of text data.
These models are designed to understand and generate human-like text, making them capable of performing a wide range of language-related tasks. By leveraging their immense size and complexity, large language models can comprehend and generate coherent and contextually relevant responses to user queries. They have the ability to understand nuances, generate creative content, and even engage in meaningful conversations.
These models have the potential to revolutionize various industries, including customer service, content creation, and language translation, by providing efficient and accurate language processing capabilities.
The application of artificial intelligence techniques and algorithms to healthcare, including medical imaging, diagnostics, drug discovery, personalized medicine, and patient monitoring, with the aim of improving patient outcomes, reducing costs, and advancing medical research.
The application of artificial intelligence techniques and algorithms to finance, including credit scoring, fraud detection, algorithmic trading, risk management, and customer service, with the aim of improving efficiency, accuracy, and decision-making in financial institutions.
The application of artificial intelligence techniques and algorithms to education, including adaptive learning, intelligent tutoring, educational data mining, and student assessment, with the aim of personalizing learning experiences, improving outcomes, and enhancing educational resources.
The application of artificial intelligence techniques and algorithms to transportation, including autonomous vehicles, traffic management, route optimization, and logistics, with the aim of improving safety, efficiency, and sustainability in transportation systems.
The application of artificial intelligence techniques and algorithms to manufacturing, including process optimization, quality control, predictive maintenance, and supply chain management, with the aim of increasing productivity, reducing costs, and enhancing product quality.