Generative AI is a type of machine learning model, not a human being. It can’t think for itself or feel emotions. It’s just very good at finding patterns.
In the past, AI was used to understand and recommend information. Now, generative AI can also help us create new content, like images, music, and code.
How are machine learning models trained?
Machine learning models, including generative AI, learn through a process of observation and pattern matching known as training. For a model to understand what a sneaker is, it’s trained on millions of photos of sneakers. Over time, it learns that sneakers are objects people wear on their feet, typically with laces, soles, and often a logo.
The model can use training to:
Take an input like “Generate an image of sneakers with a goat charm.”
Connect what it’s learned about sneakers, goats, and charms.
Generate an image, even if it hasn’t seen an image like that before.
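To make those steps concrete, here’s a toy Python sketch of what “learning patterns from examples” can look like. The traits, labels, and overlap-counting rule are all invented for illustration; real image models learn from pixels with neural networks, not from a hand-written trait list.

```python
# Toy training data: each example is a set of observed traits plus a label.
# (The traits and labels here are hypothetical, purely for illustration.)
training_examples = [
    ({"laces", "sole", "logo", "worn on feet"}, "sneaker"),
    ({"laces", "sole", "worn on feet"}, "sneaker"),
    ({"sole", "logo", "worn on feet"}, "sneaker"),
    ({"horns", "hooves", "fur"}, "goat"),
    ({"horns", "fur", "beard"}, "goat"),
]

def classify(traits):
    """Label a new input by how strongly it overlaps with traits seen for each label."""
    scores = {}
    for example_traits, label in training_examples:
        scores[label] = scores.get(label, 0) + len(traits & example_traits)
    return max(scores, key=scores.get)

# A brand-new object the "model" has never seen before: the patterns it
# picked up from past sneaker examples still drive the answer.
print(classify({"laces", "sole", "worn on feet", "goat charm"}))  # -> sneaker
```

A real model does something analogous at a vastly larger scale, learning which patterns matter directly from the training data rather than from a list someone wrote out.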
How do Large Language Models power generative AI?
Generative AI and Large Language Models (LLMs) are closely related: generative AI can be trained on any type of data, while LLMs use words as their main source of training data.
Experiences powered by LLMs, like Gemini and AI Overviews, can predict words that might come next based on your prompt and the text they’ve generated so far. They’re given flexibility to pick probable next words that match the patterns they learned during training. This flexibility lets them generate creative responses.
If you prompt them to fill in the phrase “Harry [blank],” they might predict the next word is “Styles” or “Potter.”
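Here’s a minimal Python sketch of the “predict a probable next word” idea. It uses a simple bigram frequency count over a made-up corpus rather than the neural network a real LLM uses, and the corpus and function names are invented for illustration.

```python
import random
from collections import Counter, defaultdict

# A tiny, made-up corpus standing in for the model's training text.
corpus = (
    "harry potter cast a spell . harry potter went to hogwarts . "
    "harry styles released a song . harry styles went on tour ."
).split()

# Count how often each word follows each other word (a bigram table).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Pick a probable next word, weighted by how often it followed `word` in training."""
    counts = next_word_counts[word]
    words, weights = list(counts), list(counts.values())
    # Sampling by weight is the "flexibility": usually a common continuation,
    # but sometimes a less likely one, which is what makes outputs vary.
    return random.choices(words, weights=weights, k=1)[0]

print(next_word_counts["harry"])  # Counter({'potter': 2, 'styles': 2})
print(predict_next("harry"))      # "potter" or "styles", chosen by frequency
```

A real LLM makes the same kind of weighted pick over its whole vocabulary at every step, just with probabilities computed by a neural network instead of raw counts.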