In generative AI, the goal is to generate data, as opposed to, say, a label. For example, given a dataset of images of cats and dogs, one could:
- Develop a model that predicts whether a given new image is of a cat or of a dog -- this model is said to be discriminative.
- Develop a model to generate new images of cats or dogs -- this model is said to be generative. (A sketch contrasting the two interfaces follows this list.)
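To make the contrast concrete, here is a minimal sketch of the two interfaces. The function names and stub bodies are hypothetical stand-ins, not real models -- the point is only the shape of input and output: a discriminative model maps an image to a label, while a generative model maps a label (plus randomness) to a new image.

```python
import random
from typing import List

def discriminative_model(image: List[float]) -> str:
    """Maps an input image to a label, i.e. models p(label | image)."""
    # A real classifier would run the image through a trained network;
    # here a placeholder rule stands in for that computation.
    return "cat" if sum(image) > 0 else "dog"

def generative_model(label: str) -> List[float]:
    """Maps a label (and randomness) to a new image, i.e. models p(image | label)."""
    # A real generator would sample from a learned distribution;
    # here random pixel values stand in for a generated 28x28 image.
    return [random.random() for _ in range(28 * 28)]

print(discriminative_model([0.5, -0.2, 0.9]))  # -> "cat"
print(len(generative_model("cat")))            # -> 784 "pixels"
```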
Modern generative models are trained on vast amounts of data, sometimes labeled, sometimes not. One prominent example is GPT -- Generative Pre-trained Transformer -- which is trained to predict the next token (a word or part of a word) given a series of preceding tokens. Notice that any text can be used as a source of data for this model: we just set aside the last token in a chunk of text as the "label" for the previous tokens.
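To illustrate, here is a minimal sketch of how raw text yields self-supervised training pairs for next-token prediction. For simplicity it treats whole words as tokens; real models like GPT use subword tokenizers (e.g. byte-pair encoding), and this is not any particular model's actual pipeline.

```python
text = "the cat sat on the mat"
tokens = text.split()  # crude whole-word tokenization, for illustration only

# Slide over the sequence: each prefix is the input, the next token is the "label".
training_pairs = [
    (tokens[:i], tokens[i])  # (context, target)
    for i in range(1, len(tokens))
]

for context, target in training_pairs:
    print(f"{context!r} -> {target!r}")
# ['the'] -> 'cat'
# ['the', 'cat'] -> 'sat'
# ['the', 'cat', 'sat'] -> 'on'
# ... and so on
```

Every chunk of text thus produces many (context, target) pairs with no human labeling required, which is what makes training on web-scale corpora feasible.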
Circa 2022-2023, there were many breakthroughs in generative text models, particularly in their application as chatbots (or "assistants"). Examples include ChatGPT (by OpenAI), Bard (by Google), and Claude (by Anthropic). Likewise, various image-generation AIs became popular -- e.g. Stable Diffusion (by Stability AI), Midjourney (by Midjourney, Inc.), and DALL-E 2 (by OpenAI).