GPT-J
GPT-J is an open-source artificial intelligence language model developed by EleutherAI.[1] It performs very similarly to OpenAI's GPT-3 on various zero-shot downstream tasks and can even outperform it on code generation tasks.[2] The newest version, GPT-J-6B, is a 6-billion-parameter language model trained on a dataset called The Pile.[3] The Pile is an open-source 825-gibibyte language-modelling dataset that is split into 22 smaller datasets.[4] GPT-J is similar to ChatGPT in ability, although it does not function as a chatbot, only as a text predictor.[5] In March 2023, Databricks released Dolly, an Apache-licensed, instruction-following model based on GPT-J and fine-tuned on the Stanford Alpaca dataset.[6]
| Developer(s) | EleutherAI |
| --- | --- |
| Initial release | June 9, 2021 |
| Type | Language model |
| License | Open-source |
| Website | [6b.eleuther.ai](https://6b.eleuther.ai) |
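Because GPT-J is a text predictor rather than a chatbot, it is typically used by giving it a prompt and letting it continue the text. The following is a minimal sketch of that workflow, assuming the Hugging Face `transformers` library and the publicly hosted `EleutherAI/gpt-j-6B` checkpoint (neither is prescribed by this article), along with enough memory to load a 6-billion-parameter model.

```python
# Minimal text-completion sketch for GPT-J-6B via Hugging Face transformers.
# Assumes the EleutherAI/gpt-j-6B checkpoint and sufficient RAM/VRAM to hold
# the 6-billion-parameter model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# GPT-J simply predicts a continuation of the prompt; there is no chat format.
prompt = "The Pile is an 825 GiB dataset consisting of"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=50,   # length of the generated continuation
    do_sample=True,      # sample rather than decode greedily
    temperature=0.8,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```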
References
- Demo, GPT-3. "GPT-J | Discover AI use cases". gpt3demo.com. Retrieved 2023-02-28.
- "GPT-J-6B: An Introduction to the Largest Open Source GPT Model | Forefront". www.forefront.ai. Retrieved 2023-02-28.
- Wang, Ben (2023-02-28). Table of contents. Retrieved 2023-02-28.
- "The Pile". pile.eleuther.ai. Retrieved 2023-02-28.
- Mueller, Vincent (2022-01-25). "How you can use GPT-J". Medium. Retrieved 2023-02-28.
- Conover, Mike; Hayes, Matt; Mathur, Ankit; Meng, Xiangrui; Xie, Jianwei; Wan, Jun; Ghodsi, Ali; Wendell, Patrick; Zaharia, Matei (2023-03-24). "Hello Dolly: Democratizing the magic of ChatGPT with open models". Databricks. Retrieved 2023-04-05.