What are large language models?
Large language models (LLMs), such as GPT-4 (the model behind ChatGPT), are AI models trained to take a string of text as input and output a likely continuation of that text. LLMs are typically trained on massive amounts of text from the Internet.
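As a concrete illustration of "predicting a likely continuation", the sketch below queries the small, openly available GPT-2 model through the Hugging Face transformers library. This is only an illustrative stand-in (GPT-4's weights are not public): it prints the model's most probable next tokens for a prompt, then a short continuation built by repeatedly taking the most likely next token.

```python
# Minimal sketch: an LLM assigns a probability to every possible next token,
# and a "continuation" is produced by picking likely tokens one at a time.
# GPT-2 is used here purely as a small, openly available example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The first person to walk on the Moon was"
inputs = tokenizer(prompt, return_tensors="pt")

# One forward pass yields scores for every token in the vocabulary
# at the position right after the prompt.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Show the five tokens the model considers most likely to come next.
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={p:.3f}")

# Repeating this step, always keeping the most likely token (greedy decoding),
# extends the prompt into a full continuation.
output_ids = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output_ids[0]))
```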
In the course of being trained to predict continuations of text, LLMs have acquired a variety of abilities they were not explicitly trained for, such as solving math problems, translating between languages, performing basic contextual reasoning, finding mistakes in code, and referencing large sets of data. LLMs tend to perform better on these tasks as the number of parameters in the model increases, with performance on different metrics often improving simultaneously. Modern LLMs achieve human-level or above-human performance on many such metrics.