GPT-3 might be the iPhone moment for AI


GPT-3 is a transformer-based neural network architecture for natural language processing (NLP) tasks. It is an unsupervised learning approach whose core idea is that if you feed the network enough text, it will eventually find the patterns in it. It was developed by the research company OpenAI, which is backed by Silicon Valley's who's who and has been working on and publishing language models for artificial intelligence for quite a while. Its predecessor GPT-2 (running on 1.5 billion parameters) was already quite impressive, but GPT-3 (running on 175 billion parameters) knocks your socks off. If you want to dive into what those parameters are, Jesse Vig has written a post that explains it nicely.
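
If you want a feel for this feed-it-text-and-let-it-predict idea without waiting for API access, the publicly released predecessor GPT-2 can be run locally. Here is a minimal sketch using Hugging Face's transformers library - the "gpt2" checkpoint is the small (roughly 124-million-parameter) one, and the prompt is just an example:

    from transformers import pipeline

    # Load the small, publicly available GPT-2 checkpoint.
    generator = pipeline("text-generation", model="gpt2")

    # Feed the model the beginning of a text and let it predict the rest.
    result = generator(
        "How to run an effective board meeting:",
        max_length=60,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])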

The texts it generates are almost indistinguishable from what a human author would have written, and the level of understanding it appears to have is quite astonishing. If you feed it some input, it can predict the rest of the text. In this tweet it was given the first half of a blog post on "How to run an effective board meeting" and GPT-3 wrote a 3-step process on how to recruit board members. Check it out:

People are starting to tinker with GPT-3 and are creating really interesting experiments. In this tweet someone generates layouts straight from a written specification:

He later refined it to generate React components from a specification:

Someone else built a Figma plugin that creates layouts from a specification:

But how reliably does it work? Kevin Lacker has written a nice post, "Giving GPT-3 a Turing Test", where he checks out the rough edges. Max Woolf has also written a post, "Tempering Expectations for GPT-3 and OpenAI’s API", where he dives into the limitations - especially noting that GPT-3 is not a public model but is locked up behind a proprietary API.
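
For anyone wondering what working against that API looks like, here is a minimal sketch of a completion call with OpenAI's Python bindings - the engine name "davinci", the sampling parameters and the prompt are assumptions on my part, and you need an approved beta API key to run it:

    import openai

    openai.api_key = "YOUR_API_KEY"  # only works with an approved beta account

    # Hand the model a prompt and let it complete the rest of the text.
    response = openai.Completion.create(
        engine="davinci",          # assumed name of the largest engine in the beta
        prompt="How to run an effective board meeting:\n\n1.",
        max_tokens=150,
        temperature=0.7,
    )

    print(response.choices[0].text)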

Some might argue that since the paper has been published, you could replicate the training of an equivalent model - but as Lambda Labs has pointed out, it is not that easy. To be specific: it would take 355 GPU-years and about $4,600,000 to train GPT-3 on the lowest-priced GPU cloud on the market. OpenAI actually had a supercomputer built together with Microsoft, with 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity per GPU server, to make it work. I think that's the engineering secret sauce you would need to replicate GPT-3, and it's a shame that Europe doesn't have a single research facility that is even trying to replicate what has been achieved at OpenAI.
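
To see where numbers of that magnitude come from, here is a rough back-of-envelope calculation along the lines of Lambda's reasoning - the total compute, the per-GPU throughput and the hourly price are assumed round figures, not official ones:

    # Back-of-envelope estimate of single-GPU training time and cost.
    # All inputs are rough assumptions, not official figures.
    training_flops = 3.14e23       # total training compute estimated for GPT-3
    v100_flops = 28e12             # sustained mixed-precision FLOPS of one V100
    price_per_gpu_hour = 1.50      # cheap V100 cloud rate in USD (assumed)

    seconds = training_flops / v100_flops
    gpu_years = seconds / (365 * 24 * 3600)
    cost_usd = seconds / 3600 * price_per_gpu_hour

    print(f"{gpu_years:.0f} GPU-years, roughly ${cost_usd:,.0f}")
    # -> on the order of 355 GPU-years and roughly $4.7 million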


What can I do with GPT-3?

Besides the examples listed above, think about content tasks like:

  • Text Summarisation (see the sketch after this list)
  • Rephrasing Content
  • Unique Content Creation
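
As a concrete illustration of the summarisation item, a popular trick is to append "tl;dr:" to a text and let the model complete it with a summary. A hedged sketch against the same assumed API as above:

    import openai

    article = ("GPT-3 is a 175-billion-parameter language model trained on large "
               "parts of the public internet, currently only reachable via an "
               "invite-only API.")

    # Appending "tl;dr:" nudges the model towards producing a summary.
    # Engine name and sampling parameters are assumptions, as above.
    response = openai.Completion.create(
        engine="davinci",
        prompt=article + "\n\ntl;dr:",
        max_tokens=60,
        temperature=0.3,
    )
    print(response.choices[0].text.strip())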

All of these content tasks are fairly manual today and could be semi- or fully automated in the near future (bye-bye unique content for SEO). But GPT-3 is not only for linguists - since it has eaten large parts of the internet during training, it is capable of understanding programming languages too. So other use cases could be:

  • Create entity-relationship models from written use cases
  • Context-sensitive autocomplete, where an IDE could detect your intent and then suggest the boilerplate for you (see the sketch after this list)
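
To make the autocomplete idea concrete, here is a hedged sketch of how such a suggestion could be prompted - the function stub, the comment describing the intent and the API parameters are all made up for illustration:

    import openai

    # The IDE hands over a comment stating the intent plus the function header;
    # the model is asked to fill in the boilerplate body.
    prompt = ("# Python 3\n"
              "# Read a CSV file of users (name, email) and return a list of dicts.\n"
              "def load_users(path):\n")

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=120,
        temperature=0,
        stop=["\n\n"],
    )
    print(prompt + response.choices[0].text)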

GPT-3 is still locked behind a closed API. But soon it will be available to the public (for fun and profit). So what would you build with GPT-3?


We're open for business, so feel free to contact us for your next project.

Photo by Franki Chamaki on Unsplash

Cofounder of 9elements. Software alchemist since he was a kid. Knee-deep in tech, building things in React, Rails or Solidity.