Quick answer
ChatGPT is a large language model trained on enormous amounts of text. It does not "think" like a human — it predicts what word should come next based on patterns it learned. That is both why it is so impressive and why it sometimes gets things wrong.
ChatGPT launched in November 2022 and became the fastest-growing consumer application in history — 100 million users in two months. But most people who use it every day still have no idea how it actually works. Here is the honest, jargon-free explanation.
Step 1: Training on text
ChatGPT was trained on a huge dataset of text from the internet — books, websites, Wikipedia, code, articles, forums, and more. We are talking hundreds of billions of words. During training, the model read all of this text and learned the statistical patterns: which words tend to follow which other words, how ideas connect, what makes a good answer to a question.
Think of it like a student who has read every book in an enormous library — not to memorise facts, but to develop an intuition for how language works.
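For the technically curious, the flavour of "learning which words follow which" can be shown with a toy Python sketch. Real models use neural networks over billions of documents rather than simple word-pair counts, so treat this as an illustration of the idea, not of how GPT is actually built:

```python
from collections import Counter, defaultdict

# Toy illustration only: real models learn far richer patterns with
# neural networks, but the spirit of "count what follows what" is the same.
training_text = "the cat sat on the mat and the cat slept"
words = training_text.split()

follows = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

# In this tiny corpus, "the" was followed by "cat" twice and "mat" once,
# so "cat" is the more likely next word.
print(follows["the"].most_common())  # [('cat', 2), ('mat', 1)]
```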
Step 2: Predicting the next word (over and over)
When you send a message to ChatGPT, here is what actually happens: the model looks at your words and predicts what word should come next in the response. Then it predicts the word after that. Then the one after that. It repeats this thousands of times until the response is complete.
This sounds simple, but the model is doing something extremely sophisticated: it weighs the context of every previous word in the conversation to decide what comes next. That is why giving it clear, relevant context tends to produce better answers.
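If you write that loop down, it is surprisingly short. In this sketch, predict_next_word is a hypothetical stand-in for the neural network, which is where all the real complexity lives; everything around it really is just a loop:

```python
import random

def predict_next_word(context):
    """Stand-in for the real neural network.

    The real model turns the entire context into a probability
    distribution over every word it knows; this toy version ignores
    the context and uses a fixed, made-up distribution.
    """
    distribution = {"the": 0.4, "cat": 0.3, "sat": 0.2, "<end>": 0.1}
    return random.choices(list(distribution), weights=distribution.values())[0]

def generate_response(prompt, max_words=50):
    context = prompt.split()       # everything said so far
    response = []
    for _ in range(max_words):     # one word at a time...
        word = predict_next_word(context)
        if word == "<end>":        # ...until the model decides to stop
            break
        response.append(word)
        context.append(word)       # each new word becomes part of the context
    return " ".join(response)

print(generate_response("Tell me about cats"))
```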
Step 3: Human feedback made it helpful (not just clever)
Early versions of these models could predict text well but were not actually helpful: they would sometimes say harmful things or give rambling answers. OpenAI addressed this through a process called RLHF (Reinforcement Learning from Human Feedback). Human trainers rated candidate responses, and the model was tuned to favour the kind of answers people rated highly. This is why ChatGPT is polite and cautious and tries to be genuinely useful.
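The shape of that feedback loop can be sketched in a few lines. This is a heavy simplification (real RLHF trains a separate "reward model" on human preferences and then uses reinforcement learning), but it shows the direction of travel:

```python
def human_preference_score(response):
    """Hypothetical stand-in for a human trainer's judgement."""
    score = 0
    if "step by step" in response:
        score += 1                      # clear, structured answers rate well
    if len(response.split()) > 200:
        score -= 1                      # rambling answers rate poorly
    return score

candidates = [
    "It is complicated. Anyway, here is a very long ramble...",
    "Let's work through this step by step.",
]

# Humans rank the candidates; training then nudges the model to make
# the highly rated style of answer more likely in future.
best = max(candidates, key=human_preference_score)
print("Preferred response:", best)
```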
Why does it make things up?
Because it is always predicting what word comes next, not looking things up. If the model did not see a specific fact during training, it will sometimes generate text that sounds correct but is not. This is called "hallucination." It is not lying; it has no concept of true or false, only of what sounds plausible based on patterns. This is why you should always verify facts from ChatGPT against reliable sources.
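A tiny sketch makes the point. The probabilities below are invented for illustration, but notice what is missing: nothing in this picture ever checks a fact, only how likely a word is to appear.

```python
# Invented numbers for illustration: the model scores continuations by
# plausibility, not by truth, and has no fact-checking step at all.
prompt = "The capital of Australia is"
next_word_probs = {
    "Sydney": 0.55,    # famous city, often mentioned alongside "Australia"
    "Canberra": 0.40,  # the actually correct answer
    "Melbourne": 0.05,
}

best_guess = max(next_word_probs, key=next_word_probs.get)
print(prompt, best_guess)  # "The capital of Australia is Sydney" -- wrong!
```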
Important: ChatGPT does not browse the internet unless it is given tools for it. Its base knowledge has a training cutoff date, meaning it genuinely does not know about recent events.
What it is genuinely good at
- Drafting and editing text of all kinds
- Explaining complex concepts simply
- Summarising long documents
- Helping you think through problems
- Writing and debugging code
- Answering questions where approximate accuracy is fine
What it is not reliable for
- Specific facts, statistics, and citations (verify these)
- Recent events beyond its training cutoff
- Precise calculations (use a calculator instead)
- Legal, medical, or financial advice (consult a professional)
Bottom line
ChatGPT is an incredibly sophisticated autocomplete system, one trained on so much text that it has developed something that looks a lot like understanding. It is genuinely useful, and knowing its limitations makes you a much better user.
