
GPT-Neo: The Open-Source Cure For GPT-3 FOMO

GPT-3 was the largest language model when OpenAI released it last year. Now, Google Brain’s 1.6-trillion-parameter language model (as opposed to GPT-3’s 175 billion parameters) has replaced GPT-3 as the largest. Both GPT-3 and the Google Brain model are transformer-based neural networks.

Developers across the world were waiting for the arrival of GPT-3 with bated breath. But much to their disappointment, and in a departure from its earlier signals, OpenAI exclusively licensed GPT-3 to Microsoft instead of releasing it openly. Interestingly, GPT and GPT-2 were open-source projects.

Ever since, the internet hive mind of developers and researchers has ached for an open-source version of GPT-3. Now, it looks like their wish has finally come true.

Enter GPT-Neo, the brainchild of EleutherAI.

Connor Leahy, Leo Gao, and Sid Black founded EleutherAI in July 2020. It is a decentralized grassroots collective of volunteer developers, engineers, and researchers focused on AI alignment, scaling, and open-source AI research.

According to EleutherAI’s website, GPT‑Neo is the code name for a family of transformer-based language models loosely styled around the GPT architecture. The stated goal of the project is to replicate a GPT‑3 DaVinci-sized model and open-source it to the public, for free. 

GPT‑Neo is an implementation of model- and data-parallel GPT‑2 and GPT‑3-like models that uses Mesh TensorFlow for distributed training. The codebase is optimized for TPUs but also works on GPUs. Interestingly, Leahy had earlier attempted to replicate GPT-2 through Google’s TensorFlow Research Cloud (TFRC) program, an experience that worked to the team’s advantage while building GPT-Neo.

The researchers have now announced the release of two mid-sized models in the GPT-Neo library, with 1.3 billion and 2.7 billion parameters respectively, a far cry from GPT-3’s 175 billion parameters but in the ballpark of GPT-2’s 1.5 billion.
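For readers who want to try the released checkpoints, the snippet below is a minimal sketch using the Hugging Face transformers library, assuming the models are published on the Hugging Face Hub under the EleutherAI/gpt-neo-1.3B and EleutherAI/gpt-neo-2.7B identifiers; the prompt text is purely illustrative.

```python
# Minimal sketch: text generation with the released 1.3B-parameter GPT-Neo model.
# Assumes the checkpoint is available on the Hugging Face Hub as
# "EleutherAI/gpt-neo-1.3B"; swap in "EleutherAI/gpt-neo-2.7B" for the larger model.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")

prompt = "The release of an open-source GPT-3 alternative means"
outputs = generator(prompt, max_length=60, do_sample=True, temperature=0.9)

print(outputs[0]["generated_text"])
```

Note that the 1.3B checkpoint is several gigabytes, so the first run spends most of its time downloading weights; a GPU helps but is not strictly required for inference.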

An EleutherAI researcher tweeted:

https://twitter.com/arankomatsuzaki/status/1373732645444579331


GPT-Neo: A GPT-3-Sized Model

The researchers are working on two repositories: GPT-Neo (for training on TPUs) and GPT-NeoX (for training on GPUs).

The original GPT-Neo codebase was built for TPUs, Google’s custom AI accelerator chips. The EleutherAI team realised that even the generous amount of TPU compute provided through TFRC wouldn’t be sufficient to train GPT-Neo. The work got a huge boost when CoreWeave, a US-based cryptocurrency miner, approached EleutherAI and offered the team access to its hardware in exchange for an open-source GPT-3-like model. No money changed hands as part of the deal.

While the released model is quite similar to GPT-3, it differs in its training dataset, which was refined through extensive bias analysis. The dataset, called The Pile, is an 825GB corpus made up of 22 smaller datasets combined to ensure broad generalisation abilities. GPT-Neo was trained on The Pile, and the model’s weights and configs can be freely downloaded here.

With the GPT-Neo implementation, researchers can build GPT-2 and GPT-3-like models and scale them up to GPT-3 sizes and beyond, using the Mesh TensorFlow library.

It includes alternative model architectures and linear attention implementations that should enable scaling to even larger model sizes and context lengths, including local attention, linear attention, Mixture of Experts, axial positional embedding and masked language modelling.
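To make the local attention idea concrete, the sketch below builds a sliding-window attention mask in plain NumPy: each token may attend only to itself and a few previous positions, which is what keeps attention costs manageable at long context lengths. This is an illustrative toy, not EleutherAI’s Mesh TensorFlow implementation; the window size and sequence length are arbitrary choices.

```python
# Illustrative sketch of a local (sliding-window) causal attention mask.
# Not EleutherAI's code; window_size and seq_len are arbitrary choices.
import numpy as np

def local_attention_mask(seq_len: int, window_size: int) -> np.ndarray:
    """Return a (seq_len, seq_len) boolean mask where mask[i, j] is True
    if query position i is allowed to attend to key position j."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    causal = j <= i                  # never attend to future tokens
    local = (i - j) < window_size    # only the last `window_size` tokens
    return causal & local

mask = local_attention_mask(seq_len=8, window_size=3)
print(mask.astype(int))
# Each row has at most 3 ones (the token itself plus two predecessors),
# so attention cost grows linearly with sequence length instead of quadratically.
```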

Way Forward

Researchers are now focusing on developing the GPT-NeoX library using the hardware and compute capabilities provided by CoreWeave. EleutherAI is currently waiting for CoreWeave to finish building the final hardware for training. GPT-NeoX will offer features such as 3D parallelism, model structuring and straightforward configuration through configuration files.

GPT-NeoX is under active development and will be based on the DeepSpeed library. It is designed to be able to train models in the hundreds of billions of parameters or larger.
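As a rough illustration of what building on DeepSpeed involves, the sketch below writes out a minimal DeepSpeed-style training configuration enabling mixed precision and ZeRO optimizer-state sharding. The exact options GPT-NeoX will expose are not described here, so treat these keys and values as generic DeepSpeed settings rather than the project’s actual configuration.

```python
# Hedged sketch: a minimal, generic DeepSpeed-style training configuration.
# These are standard DeepSpeed options, not GPT-NeoX's actual config.
import json

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "fp16": {"enabled": True},           # mixed-precision training
    "zero_optimization": {"stage": 1},   # shard optimizer states across GPUs
    "optimizer": {
        "type": "Adam",
        "params": {"lr": 1.6e-4, "betas": [0.9, 0.95]},
    },
}

# DeepSpeed typically reads a JSON file like this through its launcher or
# deepspeed.initialize(); here we simply write the file to disk.
with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
```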


