Artificial General Intelligence (AGI) - A hypothetical type of AI that can understand, learn, and apply intelligence to solve any problem a human can.
How is this different from our existing AI? (The following bullet points were generated by AI)
Scope of Intelligence - AI focuses on specific domains (image generation, image recognition, language translation, etc.). AGI can learn to do anything (not limited to a specific domain).
Learning Ability - AI requires massive data sets to find patterns for solving specific tasks. AGI learns from everything around it and can solve novel problems.
Knowledge Transfer - AI can't apply concepts learned in one area to a new, unrelated area. AGI can connect concepts from one field to another.
Reasoning Depth - AI is a pattern-matching machine and lacks common sense. AGI can perform abstract reasoning, logical deduction, and complex text comprehension.
Autonomy - AI has boundaries and goals set by humans. AGI is capable of autonomous goal-setting and self-improving feedback loops.
Hardware Requirements - AI runs on existing GPU/NPU architectures and cloud infrastructure. AGI likely requires technology that hasn't been developed yet, with significantly more processing power and memory.
It's important to recognize that AI is something we're living with today, while AGI is a major research focus but still hypothetical. Then again, mainstream AI built on neural networks and machine learning felt hypothetical to most people before ChatGPT's 2022 release. AGI could be closer than we think.
(Contrast AGI with Superintelligence, defined below.)
Agentic refers to an AI's ability to act as an agent, making choices on its own based on inputs it receives. Though there are some features built into Gemini and ChatGPT that make them Agentic, we typically categorize them as Generative AI rather than Agentic AI. Think of Agentic AI more like an autonomous partner.
Agentic AI is typically something you set up that takes in inputs from the world around it, has certain goals built into it, is adaptable, and uses tools to complete tasks. Some examples:
An AI bot set up to read your emails, draft a summary, and text you a list of tasks you need to complete from those emails.
An AI bot that scans major news outlets every morning and emails you a summary of news articles relevant to your specific interests (such as AI, soccer, or politics).
An AI bot that analyzes weekly academic progress and creates a study plan as well as an assignment plan to help you get (and stay) caught up.
An AI bot that sends you a text reminder every day of events planned for that day as well as text reminders shortly before those events start.
An AI bot that's set up to collect student work, provide feedback to the student, and update the gradebook with the student's results.
To set up your own Agentic AI bot, you would need to designate the following in a service that provides Agentic AI tools (a minimal code sketch follows the list):
The brain (Model) - Gemini, ChatGPT, etc.
Tools & APIs - These connect you to the services used for inputs and outputs, such as email, calendars, text messages, news sites, and weather sites.
Knowledge Base - Data that the AI needs to complete the tasks
Memory - Long-term storage of information such as user preferences or past task outcomes.
Reasoning & Constraints - Your actual instructions for the model: what it should do, what its limits are, etc.
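Here's a minimal sketch, in Python, of how those five pieces fit together for the email-summary bot above. Everything in it is hypothetical scaffolding: call_model, fetch_unread_emails, and send_text are stand-ins for whatever model API and services your Agentic AI platform actually connects.

```python
# Minimal Agentic-AI sketch: the five pieces wired into a simple
# sense -> reason -> act loop. All three stub functions are hypothetical
# stand-ins for whatever model API and services your platform connects.

INSTRUCTIONS = (  # Reasoning & Constraints: what to do and what's off-limits
    "Summarize each email and extract a to-do list. "
    "Never delete or send mail on the user's behalf."
)

memory: list[str] = []  # Memory: user preferences, past task outcomes

def call_model(prompt: str) -> str:
    """The brain (Model) - swap in Gemini, ChatGPT, etc."""
    raise NotImplementedError("connect your model API here")

def fetch_unread_emails() -> list[str]:
    """Tools & APIs (input side) - e.g., an email service."""
    raise NotImplementedError("connect your email API here")

def send_text(message: str) -> None:
    """Tools & APIs (output side) - e.g., a text-message service."""
    raise NotImplementedError("connect your messaging API here")

def run_agent() -> None:
    emails = fetch_unread_emails()              # take in inputs
    context = "\n---\n".join(emails)            # Knowledge Base: task data
    summary = call_model(f"{INSTRUCTIONS}\n\n{context}")  # reason
    send_text(summary)                          # act via a tool
    memory.append(summary)                      # remember the outcome
```

The structure, not the specific calls, is the point: inputs come in through tools, the model reasons under your constraints, actions go out through tools, and outcomes are stored for next time.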
Blackbox refers to the unknown way that AI works. Unlike traditionally coded software, where we can see the algorithms and processes that make it work, modern AI is a self-taught pattern-finding machine that doesn't disclose how it reaches its conclusions. The phrase "Blackbox" comes from the idea of a box you can't see into: it has a hole on one side where you can give it inputs (like numbers), and it provides outputs from the other end (like answers to an equation based on those inputs). What happens within that box is hidden.
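A toy illustration of the contrast, in Python. The "weights" below are made-up numbers standing in for the millions of learned parameters in a real model; both functions take the same inputs and give the same kind of answer, but only the first one can explain itself.

```python
# Traditionally coded software: the algorithm is right there to read.
def approve_coded(income: float, debt: float) -> bool:
    return income > 3 * debt  # a visible, explainable rule

# "Blackbox" model: same inputs in, same kind of answer out, but the
# logic is just learned weights. These two numbers are made up; a real
# neural network has millions of them, and none maps to a plain reason.
WEIGHTS = [0.73, -2.41]

def approve_model(income: float, debt: float) -> bool:
    score = WEIGHTS[0] * income + WEIGHTS[1] * debt
    return score > 0  # the answer comes out; the "why" stays in the box
```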
Multimodal AI takes inputs from multiple sources and of multiple types and processes them together, whereas traditional AI typically focuses on one input type (such as a text prompt).
Humans are multimodal in that we take in vision, sound, taste, touch, and smell and process them together to understand the world around us. Using these senses together gives us a better understanding of a situation, such as when we listen to someone while also reading their facial expressions and posture to get a better idea of their actual mood and interest in the conversation.
Modalities supported by common AI (such as ChatGPT and Gemini), which can be combined in a single request (see the sketch after this list):
Text
Images
Audio
Video
Code
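A minimal sketch of a multimodal request, assuming the openai Python package (v1+) and an API key in your environment; the model name, photo URL, and exact message format are illustrative and vary by provider and version.

```python
# One request mixing two modalities: text + an image.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY env variable
response = client.chat.completions.create(
    model="gpt-4o",  # a multimodal model; name varies by provider/version
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "What mood does the person in this photo convey?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```

Note that the model sees both inputs at once, much like we read tone and facial expression together.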
Where we're likely to really see the effects of multimodal technology:
Wearables, like watches, using sensors that track multiple health metrics in real time, creating digital snapshots of you and your health that combine to become your "digital twin".
Autonomous Vehicles, using sensors inside and outside the vehicle. These can be used to take over driving when the driver is fatigued or when there's a danger that the car recognizes but the driver does not yet see.
Smart Glasses, using visual and audio sensors to record information and recall it based on user requests.
Neural Processing Unit - Like CPUs, which are the brains of computers, NPUs are the brains of AI-focused devices. They are faster and more energy-efficient at AI workloads than general-purpose chips.
Superintelligence - A hypothetical AI surpassing human intellect in every domain. Take AGI, which aims to match human intelligence, and keep going until the AI can do things we've never done before.
Key features:
Superior cognition
Rapid self-improvement
Vast data processing
Advanced reasoning
Because a Superintelligence would be self-learning, and would do so at a rapid pace, there are major concerns about the risks of such a machine, especially around control and alignment with human values.
User-Generated Content (UGC) - Content created by users, not brands, influencers, or companies.
Key aspect of UGC - Authenticity: made by someone out of passion, not for profit.
Though not specific to AI, this could extend to emphasizing human-made content over AI-generated content as well.
Vibecoding is the process of prompting AI to write your code for you. Modern AI can do more than write short snippets of code; it can build entire projects, even adding user interfaces, in multiple programming languages. If you want AI to write an app for you, you simply tell it what language or platform it will run on and give it the specifications (see the sketch below).
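A minimal sketch of vibecoding through an API, assuming the openai Python package and an API key; the model name and the to-do-app spec are just illustrative. Pasting the same spec into any chat interface works equally well.

```python
# Vibecoding via an API: hand the model a spec, get code back to review.
from openai import OpenAI

client = OpenAI()  # expects an API key in the OPENAI_API_KEY env variable
spec = (
    "Write a Python 3.11 command-line to-do app that stores tasks in a "
    "JSON file and supports add, list, and done commands."
)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": spec}],
)
print(response.choices[0].message.content)  # generated code: read it first!
```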
This brings up the question: "Do I need to learn to code if AI can do it for me?" Currently, the answer is still yes. AI doesn't always get it right, and you'll need to be able to read and analyze the AI's code in order to fix some of its issues. Programming also makes you a better problem solver, so even if AI were able to code any problem correctly for us, I'd still argue that learning to program is important for students.
Article: 10 Pieces of Tech Jargon that Confused Us in 2025 (Dec 2025) - Mostly on AI Jargon