
What is GPT-3?

Designed by the American company OpenAI, GPT-3 is a deep learning model from the GPT family (for generative pre-trained transformer), which the same publisher initiated in 2018. These are pre-trained generative models for natural language processing (NLP).

With 175 billion parameters, GPT-3 belongs to the family of large language models (LLMs). It is designed for question answering, translation, text composition, problem solving and application code generation tasks.

How does GPT-3 work?

Like the other members of the GPT family, and like most large language models (LLMs), GPT-3 relies on a transformer architecture. Similar to recurrent neural networks (RNNs), transformers are artificial neural networks designed to ingest sequential data, which makes them particularly well suited to natural language processing.

Unlike RNNs, however, transformers do not process the data as a continuous stream that follows the word order of a sentence token by token. This allows them to split up the processing and parallelize the computations of the training phase, which makes them much faster to train than RNNs.
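
To make that contrast concrete, here is a minimal sketch in Python with NumPy (not code from GPT-3 itself) of the scaled dot-product attention at the heart of a transformer: the whole sequence is handled in one matrix operation, whereas an RNN has to loop over the tokens one at a time.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend over the whole sequence at once: one big matrix product, easy to parallelize."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (seq_len, seq_len) similarity matrix
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over each row
    return weights @ V                                   # weighted mix of every position in parallel

def rnn_step_by_step(inputs, W_h, W_x):
    """For contrast: an RNN must visit tokens sequentially, one hidden state at a time."""
    h = np.zeros(W_h.shape[0])
    for x in inputs:                                     # this loop over time cannot be parallelized
        h = np.tanh(W_h @ h + W_x @ x)
    return h

# Toy example: 5 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(X, X, X).shape)       # (5, 8), computed in one shot
print(rnn_step_by_step(X, rng.normal(size=(8, 8)), rng.normal(size=(8, 8))).shape)  # (8,)
```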

What is the main advantage of GPT-3?

One of GPT-3's main strengths is that it is accessible even to junior data scientists. It can carry out a specific task, such as handling a specialized domain or vocabulary, after being shown only a few examples (few-shot learning), or even without any examples at all (zero-shot learning).
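
As an illustration, here is what a few-shot prompt can look like in practice. The reviews and labels below are purely hypothetical; they simply show how a handful of demonstrations placed in the prompt steers the model toward a specialized task without any retraining.

```python
# A hypothetical few-shot prompt for sentiment classification:
# a few labeled examples are embedded in the prompt itself, and the
# model is asked to continue the pattern for a new input.
few_shot_prompt = """Classify the sentiment of each review as Positive or Negative.

Review: "The onboarding flow was effortless and fast."
Sentiment: Positive

Review: "The dashboard keeps crashing on export."
Sentiment: Negative

Review: "Support resolved my issue within an hour."
Sentiment:"""

# Sending this prompt to GPT-3 (via the Playground or the API) should
# yield " Positive" as the completion, even though the model was never
# fine-tuned on this classification task.
print(few_shot_prompt)
```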

What is the GPT-3 license?

GPT-3 is a deep learning model distributed under a proprietary license. OpenAI markets it in the form of several versions trained for different tasks:

  • Davinci for summarization and text generation,
  • Curie for translation, classification, sentiment analysis and text summarization,
  • Babbage for semantic search-oriented classification,
  • Ada for text analysis, simple classification and address correction,
  • Codex for generating programming code or technical documentation.

OpenAI has licensed GPT-3 to Microsoft for its own products and services. The Windows publisher has made the Codex version the basis of its GitHub Copilot development assistant.

How was GPT-3 trained?

GPT-3 was trained on large corpora of English data in a self-supervised manner. “This means that it was pretrained on raw text only, without any human labeling it. An automatic process then generates the inputs and labels from these texts. Specifically, the model was trained to guess the next word in a sentence,” explains Houssam AlRachid, lead data scientist at Devoteam.
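
Here is a minimal sketch of what that automatic labeling amounts to, assuming a naive word-level split rather than GPT-3's real byte-pair-encoding tokenizer: every prefix of a raw sentence becomes an input, and the word that follows it becomes the label.

```python
def next_word_pairs(text):
    """Turn raw text into (input, label) pairs for next-word prediction.

    No human annotation is needed: the labels are simply the words
    that already follow each prefix in the corpus.
    """
    words = text.split()  # naive word-level split; GPT-3 actually uses BPE tokens
    return [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

for context, target in next_word_pairs("the cat sat on the mat"):
    print(f"input: {context!r:28} label: {target!r}")
```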

What about GPT-2?

Released by OpenAI in February 2019, GPT-2 is limited to 1.5 billion parameters. It nevertheless covers multiple use cases: translation, question answering, summarization and text generation. Its main limitation: unlike GPT-3, it has to be retrained for each of its tasks, as it is not powerful enough for zero-shot learning.

What is GPT-J?

Developed by EleutherAI, GPT-J is an open source alternative to GPT-3 built with the Mesh Transformer JAX framework. With 6 billion parameters, it targets translation, chat and text generation tasks.

GPT-J matches the performance of its proprietary big brother on a large number of tasks without requiring retraining, thanks to zero-shot learning. It is even said to surpass GPT-3 in code generation.
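
Because GPT-J is open source, its weights can be downloaded and run locally, for example through the Hugging Face transformers library. The sketch below assumes the EleutherAI/gpt-j-6B checkpoint published on the Hugging Face Hub; note that the full-precision weights need roughly 24 GB of memory, so a reduced-precision variant is often used instead.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the EleutherAI/gpt-j-6B checkpoint from the Hugging Face Hub;
# loading the full-precision weights requires a large amount of RAM.
model_id = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```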

You don’t need to use the API to try out GPT-3

I think a big reason people have been put off trying out GPT-3 is that OpenAI market it as the OpenAI API. This sounds like something that’s going to require quite a bit of work to get started with.

But access to the API includes access to the GPT-3 playground, which is an interface that is incredibly easy to use. You get a text box, you type things in it, you press the “Execute” button. That’s all you need to know.

How to sign up

To try out GPT-3 for free you need three things: an email address, a phone number that can receive SMS messages, and to be located in one of the supported countries and regions.

  1. Create an account at https://openai.com/join/ (you can sign up with an email address and password, or with your Google or Microsoft account)
  2. Verify your email address (click the link in the email they send you)
  3. Enter your phone number and wait for their text
  4. Enter the code that they texted to you

New accounts get $18 of credit for the API, which expires after three months. Each query should cost single-digit cents to execute, so you can do a lot of experimentation without needing to spend any money.

How to use the playground

Once you’ve activated your account, head straight to the Playground at https://beta.openai.com/playground
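
If you later want to move from the Playground to your own scripts, the same request can be issued programmatically. Below is a minimal sketch using the legacy openai Python package (the pre-1.0 Completion interface that accompanied beta.openai.com); the engine name is taken from the version list above as an assumption, and the exact identifier may differ by the time you read this.

```python
import os
import openai  # legacy pre-1.0 client, matching the beta.openai.com era

openai.api_key = os.environ["OPENAI_API_KEY"]  # the secret key shown in your account settings

# Roughly equivalent to typing a prompt in the Playground and submitting it.
response = openai.Completion.create(
    engine="davinci",            # engine name is an assumption; pick any version from the list above
    prompt="Write a two-sentence summary of what GPT-3 is.",
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```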

ABOUT LONDON DATA CONSULTING (LDC)

We, at London Data Consulting (LDC), provide all sorts of Data Solutions. This includes Data Science (AI/ML/NLP), Data Engineering, Data Architecture, Data Analysis, CRM & Lead Generation, Business Intelligence and Cloud solutions (AWS/GCP/Azure).

For more information about our range of services, please visit: https://london-data-consulting.com/services

Interested in working for London Data Consulting, please visit our careers page on https://london-data-consulting.com/careers
