How does ChatGPT really work?

The Most Mediocre Web Content Imaginable
by Hilary Mason

How does ChatGPT work, really?

It can feel like a wizard behind a curtain, or like a drunk friend, but ChatGPT is just… math. Let’s talk about how to think about Large Language Models (LLMs) like ChatGPT so you can understand what to expect from them and better imagine what you might actually use them for.

A language model is a compressed representation of the patterns in language, trained on all of the language on the internet, and some books. That’s it.

The model is trained by taking a very large data set — in this case, text from sites like Reddit (including any problematic content 🤮) and Wikipedia, and books from Project Gutenberg — extracting tokens, which are roughly the words (or word pieces) in the sentences, and then computing the statistical relationships in how those tokens are used together. That representation can then be used to generate patterns that look like the patterns it's seen before.
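A minimal sketch of that counting idea, using a toy corpus, whitespace tokenization, and bigram (word-pair) counts. Real models use subword tokenizers and billions of learned neural-network parameters rather than raw counts, but the spirit is the same: the "model" is just a record of which tokens tend to follow which.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "all of the language on the internet".
corpus = "happy birthday to you happy birthday to you happy birthday dear friend"

# Tokenize: here, tokens are just whitespace-split words.
tokens = corpus.split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

# The "model" is these counts: patterns of how words are used together.
print(follows["birthday"])  # → Counter({'to': 2, 'dear': 1})
```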

If I sing the words 🎼Haaappy biiiirthdayyy to ____🎼, most English speakers know that the next word is likely to be “you.” It might be other words like “me,” or “them,” but it’s very unlikely to be words like “allocative” or “pudding.” (No shade to pudding. I wish pudding the happiest of birthdays.) You weren’t born knowing the birthday song; you know the next likely words in the song because you’ve heard it sung lots of times. That’s basically what’s happening under the hood with a language model.
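The birthday-song intuition can be made concrete. Using hypothetical counts (the numbers below are made up for illustration, not taken from any real model), the next-word guess is just the highest-probability entry in a distribution:

```python
from collections import Counter

# Hypothetical counts of what follows "Haaappy biiiirthdayyy to",
# as if learned from many renditions of the song.
counts = Counter({"you": 980, "me": 12, "them": 7, "pudding": 1})

# Turn counts into a probability distribution.
total = sum(counts.values())
probs = {word: n / total for word, n in counts.items()}

# The most probable continuation is "you", just as a listener expects.
best = max(probs, key=probs.get)
print(best, probs[best])  # → you 0.98
```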

This is why a language model can generate coherent sentences without understanding the things it writes about. It's why it hallucinates (another technical term!) plausible-sounding information but doesn't have any idea what's factual. Inevitably, “it’s gonna lie.” The model understands syntax, not semantics.

When you ask ChatGPT a question, you get a response drawn from the probability distribution of language the model has seen before. It does not reflect knowledge, facts, or insights.
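Generation is that drawing process repeated, one token at a time. Here's a sketch with a hand-written toy distribution (the words and probabilities are assumptions for illustration, not anything a real model contains); note that the output is whatever the distribution makes likely, whether or not it's true:

```python
import random

# Toy next-token distributions: P(next word | current word).
model = {
    "happy": {"birthday": 0.9, "days": 0.1},
    "birthday": {"to": 0.7, "dear": 0.3},
    "to": {"you": 0.95, "me": 0.05},
}

def generate(start, steps, seed=0):
    """Generate text by repeatedly sampling the next token."""
    random.seed(seed)
    out = [start]
    for _ in range(steps):
        dist = model.get(out[-1])
        if dist is None:  # no known continuations: stop
            break
        words = list(dist)
        out.append(random.choices(words, weights=list(dist.values()))[0])
    return " ".join(out)

print(generate("happy", 3))
```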

And to make this even more fun, in that compression of language patterns, we also magnify the bias in the underlying language. This is how we end up with models that are even more problematic than what you find on the web. ChatGPT specifically does use human feedback (humans rating and correcting outputs) to reduce the worst of this behavior, but those efforts are far from foolproof, and the model still reflects the biases of the humans doing the correcting.

Finally, because of the way these models are designed, they are at best, a representation of the average language used on the internet. By design, ChatGPT aspires to be the most mediocre web content you can imagine.

With all of that said, these language models are tremendously useful. They can take minimal inputs and make coherent text for us! They can help us draft, or translate, or change our writing style, or trigger new ideas. And they'll be completely ✨transformative✨.

If you’re thinking about the impact of AI on games, we’d love to hear from you. We’ll be at GDC later this month, showing off our most recent development work. Please be in touch if you’d like to see us there!