Artificial Intelligence (AI): A field of computer science that focuses on creating computer systems and software that can perform tasks that typically require human intelligence, such as understanding natural language, recognizing patterns, making decisions, and learning from experience. AI systems aim to mimic human cognitive functions such as reasoning, problem-solving, and learning. Artificial Intelligence is often used as the umbrella term for these technologies.
Artificial General Intelligence (AGI): AI systems that have human-like intelligence and can understand, learn, and perform a wide range of tasks as flexibly as humans.
Artificial Narrow Intelligence (ANI): AI systems specialized in performing a specific task or a narrow set of tasks. These systems "learn" by identifying patterns in data and using those patterns to make predictions.
Black Box: In the context of AI and machine learning, refers to a machine learning model or system whose internal workings are not transparent or easily understandable by humans. Such a system can make predictions or decisions, but it is challenging to interpret how or why it arrives at those outcomes. This lack of transparency raises ethical concerns, especially when such systems are used in areas like healthcare, finance, or criminal justice, because a hidden decision-making process cannot be evaluated to ensure fairness, prevent discrimination, or verify compliance with regulations.
Deep Learning: A subset of machine learning that uses neural networks with multiple layers to learn complex patterns from data.
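For intuition, here is a rough sketch in Python with NumPy (untrained, randomly initialized weights; the layer sizes are arbitrary) showing how layers stack, each one transforming the previous layer's output. A real deep learning system would also adjust these weights during training, e.g., via backpropagation:

    import numpy as np

    rng = np.random.default_rng(0)

    def layer(x, n_out):
        # One layer: a linear transform followed by a ReLU nonlinearity.
        w = rng.standard_normal((x.shape[0], n_out))
        return np.maximum(0, x @ w)

    x = rng.standard_normal(8)   # an 8-feature input vector (made-up data)
    h1 = layer(x, 16)            # first hidden layer
    h2 = layer(h1, 16)           # second hidden layer, built on the first
    out = layer(h2, 1)           # output layer
    print(out)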
Generative AI: A subset of artificial intelligence, built on machine learning and deep learning technologies, that creates new content such as text, images, or audio rather than only analyzing existing data.
Generative Models: Models that aim to generate new data resembling the training data. Common types include Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
GANs (Generative Adversarial Networks): GANs consist of two neural networks, a generator and a discriminator, which are trained simultaneously in an adversarial setup. The generator tries to create data that resembles real data, while the discriminator tries to distinguish between real and generated data.
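A minimal sketch of one adversarial training step, assuming PyTorch is installed; the network shapes and the stand-in "real" data below are placeholders for illustration, not a recommended architecture:

    import torch
    import torch.nn as nn

    # Toy setup: "real" data is 2-D points; all sizes are arbitrary.
    generator = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
    discriminator = nn.Sequential(nn.Linear(2, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

    real = torch.randn(32, 2)    # stand-in for a batch of real data
    fake = generator(torch.randn(32, 4))

    # Discriminator step: learn to label real data 1 and generated data 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()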
Large Language Model (LLM): A type of artificial intelligence algorithm that uses deep learning techniques and massive datasets to perform natural language processing (NLP) tasks. LLMs can recognize, understand, summarize, translate, predict, and generate text and other content.
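As an illustration of an LLM in use, the sketch below assumes the Hugging Face transformers library is installed and can download the small public gpt2 checkpoint (chosen only because it is freely available; production LLMs are far larger):

    from transformers import pipeline

    # Load a small public text-generation model.
    generate = pipeline("text-generation", model="gpt2")

    # The model predicts a likely continuation of the prompt.
    result = generate("Natural language processing is", max_new_tokens=30)
    print(result[0]["generated_text"])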
Machine Learning: A subset of AI involving training algorithms on data to enable them to make predictions or decisions without being explicitly programmed. It trains the computer to recognize patterns and make decisions based on examples.
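A small illustration, assuming scikit-learn is installed and using made-up example data: the model is never given an explicit rule, only labeled examples, and it infers a decision rule from them:

    from sklearn.linear_model import LogisticRegression

    # Toy examples: [hours studied, hours slept] -> passed the exam (1) or not (0).
    X = [[2, 9], [1, 5], [5, 2], [8, 3], [9, 7], [3, 4]]
    y = [0, 0, 0, 1, 1, 0]

    model = LogisticRegression()
    model.fit(X, y)                 # learn from the examples

    print(model.predict([[7, 6]]))  # predict for an unseen case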
Natural Language Processing (NLP): A machine learning technology that gives computers the ability to interpret, manipulate, and comprehend human language.
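As a framework-free sketch of the first step most NLP pipelines share, the snippet below turns raw text into tokens and counts them (a simple bag-of-words view); the sample sentence is invented for illustration:

    import re
    from collections import Counter

    text = "Computers process language by breaking text into tokens. Tokens make text measurable."

    # Tokenize: lowercase the text and split it into words.
    tokens = re.findall(r"[a-z']+", text.lower())

    # Counting tokens yields one of the simplest NLP representations.
    print(Counter(tokens).most_common(3))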
Neural Networks: Computing systems inspired by the structure of the human brain, consisting of interconnected nodes (neurons) that process information.
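For intuition, a single node can be sketched in pure Python as a weighted sum of its inputs passed through an activation function (the weights and inputs below are made-up values):

    import math

    def neuron(inputs, weights, bias):
        # Each node computes a weighted sum of its inputs plus a bias...
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        # ...then passes it through a nonlinear activation (here, a sigmoid).
        return 1 / (1 + math.exp(-total))

    # A node with three inputs; a network connects many such nodes in layers.
    print(neuron([0.5, -1.0, 2.0], [0.4, 0.3, -0.2], bias=0.1))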
Reinforcement Learning: A type of machine learning in which a program (an agent) learns to make decisions by interacting with an environment and receiving rewards or penalties for its actions. It's often likened to teaching a computer how to make decisions through trial and error.
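A minimal tabular Q-learning sketch in pure Python, using a toy five-state corridor environment and made-up hyperparameters, to show the reward-driven trial-and-error loop (Q-learning is just one common reinforcement learning algorithm):

    import random

    # Toy environment: states 0..4 in a corridor; reaching state 4 earns a reward.
    # Actions: 0 = step left, 1 = step right.
    n_states, n_actions = 5, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.2

    for _ in range(500):
        state = 0
        while state != 4:
            # Trial and error: mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state = max(0, state - 1) if action == 0 else state + 1
            reward = 1.0 if next_state == 4 else 0.0
            # Update: nudge the estimate toward reward + discounted future value.
            Q[state][action] += alpha * (reward + gamma * max(Q[next_state])
                                         - Q[state][action])
            state = next_state

    print(Q)  # after training, "right" scores higher than "left" in every state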
Training Data: The initial dataset used to train the generative model. It serves as the basis for the model to learn patterns and generate new content.