When people say “AI” today, they often mean chat-based systems called large language models (LLMs)—tools that read and write natural language.
This page focuses on LLMs. How an LLM works, in one line: it predicts the most likely next piece of text, again and again, until a full answer appears. This is pattern matching, not certainty.
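To make that loop concrete, here is a toy sketch in Python. The `TOY_MODEL` table and `predict_next_token` function are made up for illustration only; a real LLM scores every possible next token with a neural network instead of looking it up.

```python
# Toy illustration of next-token prediction. The "model" is a tiny
# hard-coded lookup table, not a real LLM.

TOY_MODEL = {
    "The capital of France": " is",
    "The capital of France is": " Paris",
    "The capital of France is Paris": ".",
}

def predict_next_token(text):
    """Return the 'most likely' next piece of text, or None to stop."""
    return TOY_MODEL.get(text)

def generate(prompt):
    text = prompt
    while True:
        token = predict_next_token(text)
        if token is None:      # no continuation found -> answer is complete
            return text
        text += token          # append the prediction and repeat

print(generate("The capital of France"))
# -> The capital of France is Paris.
```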
Some products connect the model to tools (e.g., web search) to fetch information. Those tools can improve answers, but the core process is still next-token prediction.
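Here is a rough sketch of that idea. The `fake_model` and `fake_search` functions are stand-ins, not any real product's API: the model asks for a search, the result is pasted back into the prompt, and prediction continues from there.

```python
# Sketch of tool use: the "model" can request a web search, the result is
# added to its context, and generation continues. All names here are made up.

def fake_search(query):
    return "Result: the 2024 Summer Olympics were held in Paris."

def fake_model(prompt):
    # A real model decides on its own whether to call a tool.
    if "Result:" not in prompt:
        return "SEARCH: 2024 Summer Olympics host city"
    return "The 2024 Summer Olympics were held in Paris."

def answer(question):
    prompt = question
    reply = fake_model(prompt)
    if reply.startswith("SEARCH:"):
        # Fetch information, append it, and let the model predict again.
        prompt += "\n" + fake_search(reply.removeprefix("SEARCH: "))
        reply = fake_model(prompt)
    return reply

print(answer("Where were the 2024 Summer Olympics held?"))
```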
Before we go deeper, a few quick terms:
- Traditional programs: a person writes the rules by hand.
- Machine learning: the rules (parameters) are learned from data by a training algorithm.
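A deliberately tiny contrast, with a one-parameter line fit standing in for "machine learning": the hand-written rule is fixed by a programmer, while the learned rule comes from example data.

```python
# Hand-coded rules vs. learned parameters, in miniature.

# Traditional program: the rule is written by hand.
def fahrenheit_by_rule(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: the rule (slope and intercept) is estimated from examples.
data = [(0.0, 32.0), (100.0, 212.0), (37.0, 98.6)]

def fit_slope_and_intercept(pairs):
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in pairs)
             / sum((x - mean_x) ** 2 for x, _ in pairs))
    return slope, mean_y - slope * mean_x

slope, intercept = fit_slope_and_intercept(data)
print(fahrenheit_by_rule(25.0))      # rule written by a person
print(slope * 25.0 + intercept)      # rule learned from the data
```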
Finally, keep a few limitations in mind:
- Answers can be wrong or vague, especially if your prompt is vague.
- The model may sound confident even when it is mistaken, so *verify important information*.
- The context window is limited; very long inputs may need trimming or summarizing to stay effective (see the sketch after this list).
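As a rough illustration of trimming, here is a toy `trim_to_window` helper that keeps only the most recent words. Real systems count tokens rather than words, and often summarize older content instead of discarding it.

```python
# Minimal sketch of fitting text into a limited context window by keeping
# only the newest part of the input.

def trim_to_window(text, max_words=8):
    words = text.split()
    if len(words) <= max_words:
        return text
    return "... " + " ".join(words[-max_words:])  # keep the most recent words

long_input = "This is a very long conversation that will not fit into the window"
print(trim_to_window(long_input))
```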