Large language model

A large language model (LLM) is a language model consisting of a neural network with many parameters (typically billions of weights or more), trained on large quantities of unlabelled text using self-supervised learning. LLMs emerged around 2018 and perform well at a wide variety of tasks. This has shifted the focus of natural language processing research away from the previous paradigm of training specialized supervised models for specific tasks.[1]

Properties

Though the term large language model has no formal definition, it often refers to deep learning models having a parameter count on the order of billions or more.[2] LLMs are general-purpose models which excel at a wide range of tasks, as opposed to being trained for one specific task (such as sentiment analysis, named entity recognition, or mathematical reasoning).[1][3] The skill with which they accomplish tasks, and the range of tasks at which they are capable, seem to be a function of the amount of resources (data, parameter count, computing power) devoted to them, in a way that does not depend on additional breakthroughs in design.[4]

Though trained on simple tasks along the lines of predicting the next word in a sentence, neural language models with sufficient training and parameter counts are found to capture much of the syntax and semantics of human language. In addition, large language models demonstrate considerable general knowledge about the world, and are able to "memorize" a great quantity of facts during training.[1]

Hallucinations

In artificial intelligence in general, and in large language models in particular, a "hallucination" is a confident response that does not seem to be justified by the model's training data.[5]

Emergent abilities

On a number of natural language benchmarks involving tasks such as question answering, models perform no better than random chance until they reach a certain scale (as measured, for example, by training computation), at which point their performance sharply increases. These are examples of emergent abilities.

Unpredictable abilities that have been observed in large language models but that were not present in simpler models (and that were not explicitly designed into the model) are usually called "emergent abilities". Researchers note that such abilities "cannot be predicted simply by extrapolating the performance of smaller models".[3] These abilities are discovered rather than programmed-in or designed, in some cases only after the LLM has been publicly deployed.[4] Hundreds of emergent abilities have been described. Examples include multi-step arithmetic, taking college-level exams, identifying the intended meaning of a word,[6] chain-of-thought prompting,[3] decoding the International Phonetic Alphabet, unscrambling a word’s letters, identifying offensive content in paragraphs of Hinglish (a combination of Hindi and English), and generating a similar English equivalent of Kiswahili proverbs.[7]

Architecture and training

Large language models have most commonly used the transformer architecture, which, since 2018, has become the standard deep learning technique for sequential data (previously, recurrent architectures such as the LSTM were most common).[1] LLMs are trained in an unsupervised manner on unannotated text. A left-to-right transformer is trained to maximize the probability assigned to the next word in the training data, given the previous context.[8] Alternatively, an LLM may use a bidirectional transformer (as in BERT), which assigns a probability distribution over words given access to both the preceding and following context.[9] In addition to the task of predicting the next word or "filling in the blanks", LLMs may be trained on auxiliary tasks which test their understanding of the data distribution, such as Next Sentence Prediction (NSP), in which pairs of sentences are presented and the model must predict whether they appear consecutively in the training corpus.[9]
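The left-to-right (next-word prediction) objective can be illustrated with a minimal sketch, assuming PyTorch; the toy single-token model, vocabulary size, and batch shapes below are purely illustrative stand-ins for a real transformer, which would attend to all previous tokens.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy stand-in for an autoregressive language model: an embedding layer
    # followed by a projection back to the vocabulary. Only the training
    # objective is illustrated here, not the transformer architecture.
    vocab_size, embed_dim = 1000, 64
    model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                          nn.Linear(embed_dim, vocab_size))

    # A batch of token-id sequences drawn from the training corpus.
    tokens = torch.randint(0, vocab_size, (8, 33))   # (batch, sequence length)
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # target = the next token

    logits = model(inputs)                           # (batch, 32, vocab_size)

    # Maximizing the probability assigned to each next token is equivalent to
    # minimizing this cross-entropy (negative log-likelihood) loss.
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()                                  # gradients for one update step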

The earliest LLMs were trained on corpora having on the order of billions of words. The first model in OpenAI's GPT series was trained in 2018 on BookCorpus, consisting of 985 million words. In the same year, BERT was trained on a combination of BookCorpus and English Wikipedia, totalling 3.3 billion words.[9] In the years since then, training corpora for LLMs have increased by orders of magnitude, reaching up to hundreds of billions or trillions of tokens.[9]

LLMs are computationally expensive to train. A 2020 study estimated the cost of training a 1.5 billion parameter model (1-2 orders of magnitude smaller than the state of the art at the time) at $1.6 million.[10]

A 2020 analysis found that neural language models' capability (as measured by training loss) increased smoothly in a power law relationship with number of parameters, quantity of training data, and computation used for training.[11][12] These relationships were tested over a wide range of values (up to seven orders of magnitude) and no attenuation of the relationship was observed at the highest end of the range (including for network sizes up to trillions of parameters).[12]
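Schematically, each relationship takes the form of a power law, where the constants N_c, D_c, C_c and the exponents α_N, α_D, α_C are fit empirically (the notation here is a paraphrase of the cited work, not its exact formulation):

    L(N) ≈ (N_c / N)^α_N        L(D) ≈ (D_c / D)^α_D        L(C) ≈ (C_c / C)^α_C

where L is the cross-entropy training loss, N the number of parameters, D the dataset size in tokens, and C the amount of training compute.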

Application to downstream tasks

Between 2018 and 2020, the standard method for harnessing an LLM for a specific natural language processing (NLP) task was to fine-tune the model with additional task-specific training. It has subsequently been found that more powerful LLMs such as GPT-3 can solve tasks without additional training via "prompting" techniques, in which the problem to be solved is presented to the model as a text prompt, possibly with some textual examples of similar problems and their solutions.[1]

Fine-tuning

Fine-tuning is the practice of modifying an existing pretrained language model by training it (in a supervised fashion) on a specific task (e.g. sentiment analysis, named entity recognition, or part-of-speech tagging). It is a form of transfer learning. It generally involves the introduction of a new set of weights connecting the final layer of the language model to the output of the downstream task. The original weights of the language model may be "frozen", such that only the new layer of weights connecting them to the output is learned during training. Alternatively, the original weights may receive small updates (possibly with earlier layers frozen).[9]
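A minimal sketch of the frozen-weights variant might look as follows, assuming PyTorch and the Hugging Face transformers library; the model name, label count, and hyperparameters are illustrative only.

    import torch
    import torch.nn as nn
    from transformers import AutoModel, AutoTokenizer

    # Load a pretrained encoder and freeze its original weights.
    encoder = AutoModel.from_pretrained("bert-base-uncased")  # illustrative choice
    for param in encoder.parameters():
        param.requires_grad = False

    # New task-specific head mapping the encoder's sentence representation
    # to the downstream labels (here, two sentiment classes).
    head = nn.Linear(encoder.config.hidden_size, 2)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    batch = tokenizer(["This movie stinks.", "This movie is fantastic!"],
                      padding=True, return_tensors="pt")
    labels = torch.tensor([0, 1])  # 0 = negative, 1 = positive

    # Only the new head's weights receive gradient updates.
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
    features = encoder(**batch).last_hidden_state[:, 0]  # [CLS] token vector
    loss = nn.functional.cross_entropy(head(features), labels)
    loss.backward()
    optimizer.step()

Unfreezing the original weights, possibly with a lower learning rate or with earlier layers kept frozen, corresponds to the second variant described above.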

Prompting

In the prompting paradigm, popularized by GPT-3,[3] the problem to be solved is formulated via a text prompt, which the model must solve by providing a completion (via inference). In "few-shot prompting", the prompt includes a small number of examples of similar (problem, solution) pairs.[1] For example, a sentiment analysis task of labelling the sentiment of a movie review could be prompted as follows:[3]

Review: This movie stinks.
Sentiment: negative

Review: This movie is fantastic!
Sentiment:

If the model outputs "positive", then it has correctly solved the task. In zero-shot prompting, no solved examples are provided.[10][13] An example of a zero-shot prompt for the same sentiment analysis task would be "The sentiment associated with the movie review 'This movie is fantastic!' is".[14]

With few-shot prompting, LLMs have been shown to achieve competitive results on NLP tasks, sometimes surpassing prior state-of-the-art fine-tuning approaches. Examples of such NLP tasks are translation, question answering, cloze tasks, unscrambling words, and using a novel word in a sentence.[13] The creation and optimisation of such prompts is called prompt engineering.
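Such prompts are ordinary strings and can be assembled programmatically, as in this minimal sketch; the complete function is a hypothetical stand-in for whatever text-completion interface is being used.

    # Few-shot prompt: solved (review, sentiment) examples followed by the new
    # review, which the model is left to complete.
    examples = [("This movie stinks.", "negative")]
    query = "This movie is fantastic!"

    few_shot_prompt = "".join(f"Review: {review}\nSentiment: {label}\n\n"
                              for review, label in examples)
    few_shot_prompt += f"Review: {query}\nSentiment:"

    # Zero-shot prompt: the task is described, but no solved examples are given.
    zero_shot_prompt = f"The sentiment associated with the movie review '{query}' is"

    # answer = complete(few_shot_prompt)  # `complete` is a hypothetical API call;
    #                                     # the task is solved if it returns "positive".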

Instruction tuning

Instruction tuning is a form of fine-tuning designed to facilitate more natural and accurate zero-shot prompting interactions. Given a text input, a pretrained language model will generate a completion which matches the distribution of text on which it was trained. A naive language model given the prompt "Write an essay about the main themes of Hamlet." might provide a completion such as "A late penalty of 10% per day will be applied to submissions received after March 17." In instruction tuning, the language model is trained on many examples of tasks formulated as natural language instructions, along with appropriate responses. Various techniques for instruction tuning have been applied in practice. OpenAI's InstructGPT protocol involves supervised fine-tuning on a dataset of human-generated (prompt, response) pairs, followed by reinforcement learning from human feedback (RLHF), in which a reward function is learned from a dataset of human preferences.[15] Another technique, "self-instruct", fine-tunes the language model on a training set of examples which are themselves generated by an LLM (bootstrapped from a small initial set of human-generated examples).[16]
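A minimal sketch of how such instruction-response data might be prepared for supervised fine-tuning; the field names and prompt template are illustrative and not those of any particular system.

    # Each example pairs a natural-language instruction with an appropriate
    # response; the pair is flattened into a single training string on which
    # the pretrained model is further fine-tuned.
    examples = [
        {"instruction": "Write an essay about the main themes of Hamlet.",
         "response": "Hamlet explores revenge, mortality, and indecision..."},
        {"instruction": "Translate 'good morning' into French.",
         "response": "Bonjour."},
    ]

    def to_training_text(example):
        # Illustrative template; real systems define their own formats.
        return (f"Instruction: {example['instruction']}\n"
                f"Response: {example['response']}")

    training_corpus = [to_training_text(ex) for ex in examples]
    # The model is then trained with the usual next-token prediction objective
    # on these strings, often with the loss restricted to the response tokens.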

List of large language models

Name | Release date[lower-alpha 1] | Developer | Number of parameters[lower-alpha 2] | Corpus size | License[lower-alpha 3] | Notes
BERT | 2018 | Google | 340 million[17] | 3.3 billion words[17] | Apache 2.0[18] | An early and influential language model,[1] but encoder-only and thus not built to be prompted or generative[19]
GPT-2 | 2019 | OpenAI | 1.5 billion[20] | 40 GB[21] (~10 billion tokens)[22] | MIT[23] | General-purpose model based on the transformer architecture
GPT-3 | 2020 | OpenAI | 175 billion[10] | 499 billion tokens[22] | Public web API | A fine-tuned variant of GPT-3, termed GPT-3.5, was made available to the public through a web interface called ChatGPT in 2022.[24]
GPT-Neo | March 2021 | EleutherAI | 2.7 billion[25] | 825 GiB[26] | MIT[27] | The first of a series of free GPT-3 alternatives released by EleutherAI. GPT-Neo outperformed an equivalent-size GPT-3 model on some benchmarks, but was significantly worse than the largest GPT-3.[27]
GPT-J | June 2021 | EleutherAI | 6 billion[28] | 825 GiB[26] | Apache 2.0 | GPT-3-style language model
Megatron-Turing NLG | October 2021[29] | Microsoft and Nvidia | 530 billion[30] | 338.6 billion tokens[30] | Restricted web access | Standard architecture but trained on a supercomputing cluster.
Ernie 3.0 Titan | December 2021 | Baidu | 260 billion[31][32] | 4 TB | Proprietary | Chinese-language LLM. Ernie Bot is based on this model.
Claude[33] | December 2021 | Anthropic | 52 billion[34] | 400 billion tokens[34] | Closed beta | Fine-tuned for desirable behavior in conversations.[35]
GLaM (Generalist Language Model) | December 2021 | Google | 1.2 trillion[36] | 1.6 trillion tokens[36] | Proprietary | Sparse mixture-of-experts model, making it more expensive to train but cheaper to run inference compared to GPT-3.
Gopher | December 2021 | DeepMind | 280 billion[37] | 300 billion tokens[38] | Proprietary |
LaMDA (Language Models for Dialog Applications) | January 2022 | Google | 137 billion[39] | 1.56T words,[39] 168 billion tokens[38] | Proprietary | Specialized for response generation in conversations. Used in the Google Bard chatbot.
GPT-NeoX | February 2022 | EleutherAI | 20 billion[40] | 825 GiB[26] | Apache 2.0 | Based on the Megatron architecture
Chinchilla | March 2022 | DeepMind | 70 billion[41] | 1.4 trillion tokens[41][38] | Proprietary | Reduced-parameter model trained on more data. Used in the Sparrow bot.
PaLM (Pathways Language Model) | April 2022 | Google | 540 billion[42] | 768 billion tokens[41] | Proprietary | Aimed to reach the practical limits of model scale
OPT (Open Pretrained Transformer) | May 2022 | Meta | 175 billion[43] | 180 billion tokens[44] | Non-commercial research[lower-alpha 4] | GPT-3 architecture with some adaptations from Megatron
YaLM 100B | June 2022 | Yandex | 100 billion[45] | 1.7 TB[45] | Apache 2.0 | English-Russian model based on Microsoft's Megatron-LM.
Minerva | June 2022 | Google | 540 billion[46] | 38.5B tokens from webpages filtered for mathematical content and from papers submitted to the arXiv preprint server[46] | Proprietary | LLM trained for solving "mathematical and scientific questions using step-by-step reasoning".[47] Minerva is based on the PaLM model, further trained on mathematical and scientific data.
BLOOM | July 2022 | Large collaboration led by Hugging Face | 175 billion[11] | 350 billion tokens (1.6 TB)[48] | Responsible AI | Essentially GPT-3 but trained on a multilingual corpus (30% English, excluding programming languages)
AlexaTM (Teacher Models) | November 2022 | Amazon | 20 billion[49] | 1.3 trillion[50] | Public web API[51] | Bidirectional sequence-to-sequence architecture
LLaMA (Large Language Model Meta AI) | February 2023 | Meta | 65 billion[52] | 1.4 trillion[52] | Non-commercial research[lower-alpha 5] | Trained on a large 20-language corpus to aim for better performance with fewer parameters.[52] Researchers from Stanford University trained a fine-tuned model based on LLaMA weights, called Alpaca.[53]
GPT-4 | March 2023 | OpenAI | Unknown[lower-alpha 6] | Unknown | Public web API | Available for ChatGPT Plus users and used in several products.
Cerebras-GPT | March 2023 | Cerebras | 13 billion[55] | | Apache 2.0 | Trained with the Chinchilla formula.
Falcon | March 2023 | Technology Innovation Institute | 40 billion[56] | 1 trillion tokens (1 TB)[56] | Proprietary | The model is claimed to use only 75% of GPT-3's training compute, 40% of Chinchilla's, and 80% of PaLM-62B's.

Notes

  1. This is the date that documentation describing the model's architecture was first released.
  2. In many cases, researchers release or report on multiple versions of a model having different sizes. In these cases, the size of the largest model is listed here.
  3. This is the license of the pre-trained model weights. In almost all cases the training code itself is open-source or can be easily replicated.
  4. The smaller models including 66B are publicly available, while the 175B model is available on request.
  5. Facebook's license and distribution scheme restricted access to approved researchers, but the model weights were leaked and became widely available.
  6. As stated in Technical report: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method ..."[54]

References

  1. Manning, Christopher D. (2022). "Human Language Understanding & Reasoning". Daedalus.
  2. Carlini, Nicholas; Tramer, Florian; Wallace, Eric; Jagielski, Matthew; Herbert-Voss, Ariel; Lee, Katherine; Roberts, Adam; Brown, Tom B; Song, Dawn; Erlingsson, Ulfar (2021). Extracting Training Data from Large Language Models (PDF). USENIX Security Symposium. Vol. 6.
  3. Wei, Jason; Tay, Yi; Bommasani, Rishi; Raffel, Colin; Zoph, Barret; Borgeaud, Sebastian; Yogatama, Dani; Bosma, Maarten; Zhou, Denny; Metzler, Donald; Chi, Ed H.; Hashimoto, Tatsunori; Vinyals, Oriol; Liang, Percy; Dean, Jeff; Fedus, William (31 August 2022). "Emergent Abilities of Large Language Models". Transactions on Machine Learning Research. ISSN 2835-8856.
  4. Bowman, Samuel R. "Eight Things to Know about Large Language Models" (PDF). {{cite journal}}: Cite journal requires |journal= (help)
  5. Ji, Ziwei; Lee, Nayeon; Frieske, Rita; Yu, Tiezheng; Su, Dan; Xu, Yan; Ishii, Etsuko; Bang, Yejin; Dai, Wenliang; Madotto, Andrea; Fung, Pascale (November 2022). "Survey of Hallucination in Natural Language Generation" (pdf). ACM Computing Surveys. Association for Computing Machinery. 55 (12): 1–38. doi:10.1145/3571730. S2CID 246652372. Retrieved 15 January 2023.
  6. "Characterizing Emergent Phenomena in Large Language Models". ai.googleblog.com.
  7. Ornes, Stephen (March 16, 2023). "The Unpredictable Abilities Emerging From Large AI Models". Quanta Magazine.
  8. "A Short Survey of Pre-trained Language Models for Conversational AI - A New Age in NLP". ResearchGate. https://www.researchgate.net/publication/338931711_A_Short_Survey_of_Pre-trained_Language_Models_for_Conversational_AI-A_New_Age_in_NLP
  9. Jurafsky, Dan; Martin, James H. (7 January 2023). Speech and Language Processing (PDF) (3rd edition draft ed.). Retrieved 24 May 2022.
  10. Wiggers, Kyle (28 April 2022). "The emerging types of language models and why they matter". TechCrunch.
  11. Ananthaswamy, Anil (8 March 2023). "In AI, is bigger always better?". Nature.
  12. Kaplan, Jared; McCandlish, Sam; Henighan, Tom; Brown, Tom B.; Chess, Benjamin; Child, Rewon; Gray, Scott; Radford, Alec; Wu, Jeffrey; Amodei, Dario (2020). "Scaling Laws for Neural Language Models". CoRR. abs/2001.08361. arXiv:2001.08361.
  13. Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (Dec 2020). Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.F.; Lin, H. (eds.). "Language Models are Few-Shot Learners" (PDF). Advances in Neural Information Processing Systems. Curran Associates, Inc. 33: 1877–1901.
  14. Bosma, Maarten; Wei, Jason (6 October 2021). "Introducing FLAN: More generalizable Language Models with Instruction Fine-Tuning". Google Research.
  15. Ouyang, Long; Wu, Jeff; Jiang, Xu; Almeida, Diogo; Wainwright, Carroll L.; Mishkin, Pamela; Zhang, Chong; Agarwal, Sandhini; Slama, Katarina; Ray, Alex; Schulman, John; Hilton, Jacob; Kelton, Fraser; Miller, Luke; Simens, Maddie; Askell, Amanda; Welinder, Peter; Christiano, Paul; Leike, Jan; Lowe, Ryan (2022). "Training language models to follow instructions with human feedback". arXiv:2203.02155.
  16. Wang, Yizhong; Kordi, Yeganeh; Mishra, Swaroop; Liu, Alisa; Smith, Noah A.; Khashabi, Daniel; Hajishirzi, Hannaneh (2022). "Self-Instruct: Aligning Language Model with Self Generated Instructions". arXiv:2212.10560.
  17. Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (11 October 2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805v2 [cs.CL].
  18. "BERT". March 13, 2023 via GitHub.
  19. Patel, Ajay; Li, Bryan; Rasooli, Mohammad Sadegh; Constant, Noah; Raffel, Colin; Callison-Burch, Chris (2022). "Bidirectional Language Models Are Also Few-shot Learners". ArXiv.
  20. "GPT-2: 1.5B Release". OpenAI. 2019-11-05. Archived from the original on 2019-11-14. Retrieved 2019-11-14.
  21. "Better language models and their implications". openai.com.
  22. "OpenAI's GPT-3 Language Model: A Technical Overview". lambdalabs.com.
  23. "gpt-2". GitHub. Retrieved 13 March 2023.
  24. "ChatGPT: Optimizing Language Models for Dialogue". OpenAI. 2022-11-30. Retrieved 2023-01-13.
  25. "GPT Neo". March 15, 2023 via GitHub.
  26. Gao, Leo; Biderman, Stella; Black, Sid; Golding, Laurence; Hoppe, Travis; Foster, Charles; Phang, Jason; He, Horace; Thite, Anish; Nabeshima, Noa; Presser, Shawn; Leahy, Connor (31 December 2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling". arXiv:2101.00027.
  27. Iyer, Abhishek (15 May 2021). "GPT-3's free alternative GPT-Neo is something to be excited about". VentureBeat.
  28. "GPT-J-6B: An Introduction to the Largest Open Source GPT Model | Forefront". www.forefront.ai. Retrieved 2023-02-28.
  29. Alvi, Ali; Kharya, Paresh (11 October 2021). "Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, the World's Largest and Most Powerful Generative Language Model". Microsoft Research.
  30. Smith, Shaden; Patwary, Mostofa; Norick, Brandon; LeGresley, Patrick; Rajbhandari, Samyam; Casper, Jared; Liu, Zhun; Prabhumoye, Shrimai; Zerveas, George; Korthikanti, Vijay; Zhang, Elton; Child, Rewon; Aminabadi, Reza Yazdani; Bernauer, Julie; Song, Xia (2022-02-04). "Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model". arXiv:2201.11990.
  31. "China's ChatGPT Black Market Is Thriving" via www.wired.co.uk.
  32. Wang, Shuohuan; Sun, Yu; Xiang, Yang; Wu, Zhihua; Ding, Siyu; Gong, Weibao; Feng, Shikun; Shang, Junyuan; Zhao, Yanbin; Pang, Chao; Liu, Jiaxiang; Chen, Xuyi; Lu, Yuxiang; Liu, Weixin; Wang, Xi; Bai, Yangfan; Chen, Qiuliang; Zhao, Li; Li, Shiyong; Sun, Peng; Yu, Dianhai; Ma, Yanjun; Tian, Hao; Wu, Hua; Wu, Tian; Zeng, Wei; Li, Ge; Gao, Wen; Wang, Haifeng (December 23, 2021). "ERNIE 3.0 Titan: Exploring Larger-scale Knowledge Enhanced Pre-training for Language Understanding and Generation". arXiv:2112.12731. {{cite journal}}: Cite journal requires |journal= (help)
  33. "Product". Anthropic. Retrieved 14 March 2023.
  34. Askell, Amanda; Bai, Yuntao; Chen, Anna; et al. (9 December 2021). "A General Language Assistant as a Laboratory for Alignment". arXiv:2112.00861.
  35. Bai, Yuntao; Kadavath, Saurav; Kundu, Sandipan; et al. (15 December 2022). "Constitutional AI: Harmlessness from AI Feedback". arXiv:2212.08073.
  36. Dai, Andrew M; Du, Nan (December 9, 2021). "More Efficient In-Context Learning with GLaM". ai.googleblog.com. Retrieved 2023-03-09.
  37. "Language modelling at scale: Gopher, ethical considerations, and retrieval". www.deepmind.com. Retrieved 20 March 2023.
  38. Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; et al. (29 March 2022). "Training Compute-Optimal Large Language Models". arXiv:2203.15556.
  39. Cheng, Heng-Tze; Thoppilan, Romal (January 21, 2022). "LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything". ai.googleblog.com. Retrieved 2023-03-09.
  40. Black, Sidney; Biderman, Stella; Hallahan, Eric; et al. (2022-05-01). GPT-NeoX-20B: An Open-Source Autoregressive Language Model. Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models. Vol. Proceedings of BigScience Episode #5 -- Workshop on Challenges & Perspectives in Creating Large Language Models. pp. 95–136. Retrieved 2022-12-19.
  41. Hoffmann, Jordan; Borgeaud, Sebastian; Mensch, Arthur; Sifre, Laurent (12 April 2022). "An empirical analysis of compute-optimal large language model training". Deepmind Blog.
  42. Narang, Sharan; Chowdhery, Aakanksha (April 4, 2022). "Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance". ai.googleblog.com. Retrieved 2023-03-09.
  43. "Democratizing access to large-scale language models with OPT-175B". ai.facebook.com.
  44. Zhang, Susan; Roller, Stephen; Goyal, Naman; Artetxe, Mikel; Chen, Moya; Chen, Shuohui; Dewan, Christopher; Diab, Mona; Li, Xian; Lin, Xi Victoria; Mihaylov, Todor; Ott, Myle; Shleifer, Sam; Shuster, Kurt; Simig, Daniel; Koura, Punit Singh; Sridhar, Anjali; Wang, Tianlu; Zettlemoyer, Luke (21 June 2022). "OPT: Open Pre-trained Transformer Language Models". arXiv:2205.01068.
  45. Khrushchev, Mikhail; Vasilev, Ruslan; Petrov, Alexey; Zinov, Nikolay (2022-06-22), YaLM 100B, retrieved 2023-03-18
  46. Lewkowycz, Aitor; Andreassen, Anders; Dohan, David; Dyer, Ethan; Michalewski, Henryk; Ramasesh, Vinay; Slone, Ambrose; Anil, Cem; Schlag, Imanol; Gutman-Solo, Theo; Wu, Yuhuai; Neyshabur, Behnam; Gur-Ari, Guy; Misra, Vedant (30 June 2022). "Solving Quantitative Reasoning Problems with Language Models". arXiv:2206.14858.
  47. "Minerva: Solving Quantitative Reasoning Problems with Language Models". ai.googleblog.com. Retrieved 20 March 2023.
  48. "bigscience/bloom · Hugging Face". huggingface.co.
  49. "20B-parameter Alexa model sets new marks in few-shot learning". Amazon Science. 2 August 2022.
  50. Soltan, Saleh; Ananthakrishnan, Shankar; FitzGerald, Jack; et al. (3 August 2022). "AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model". arXiv:2208.01448.
  51. "AlexaTM 20B is now available in Amazon SageMaker JumpStart | AWS Machine Learning Blog". aws.amazon.com. 17 November 2022. Retrieved 13 March 2023.
  52. "Introducing LLaMA: A foundational, 65-billion-parameter large language model". Meta AI. 24 February 2023.
  53. "Stanford CRFM". crfm.stanford.edu.
  54. "GPT-4 Technical Report" (PDF). OpenAI. 2023. Archived (PDF) from the original on March 14, 2023. Retrieved March 14, 2023.
  55. "Abu Dhabi-based TII launches its own version of ChatGPT". tii.ae.