Introduction to AI
Understand how Large Language Models work, what tokens are, and why AI responses vary.
Now that you've seen a glimpse of what's possible on Replit, let's take a quick look under the hood. What actually powers Replit Agent and most chat-based AI tools you use today?
These tools are powered by Large Language Models, or LLMs - systems trained on huge amounts of text from the internet, books, code, and even conversations with humans! Over time, we've trained these statistical models to recognize patterns in how humans write, think, and solve problems.
Think of an LLM like a super-charged autocomplete. When you ask a question or give it a prompt, it predicts the next most likely word, one token at a time, based on everything it has learned. A token is a fundamental unit of text that the model uses to process language. It could be a whole word like "cat," part of a word like "ing," or even a single character.
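To make tokens a little more concrete, here is a toy sketch in Python. Real tokenizers learn their vocabularies from huge amounts of text, but the core idea is the same: break text into reusable pieces. The vocabulary below is made up purely for illustration.

```python
# A made-up vocabulary of text pieces: whole words, word parts, and characters.
vocabulary = ["read", "ing", "cat", "s", " ", "!"]

def tokenize(text):
    """Greedily match the longest known piece at each position."""
    tokens = []
    while text:
        for piece in sorted(vocabulary, key=len, reverse=True):
            if text.startswith(piece):
                tokens.append(piece)
                text = text[len(piece):]
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[0])
            text = text[1:]
    return tokens

print(tokenize("reading cats!"))  # ['read', 'ing', ' ', 'cat', 's', '!']
```

Notice how "reading" becomes two tokens ("read" + "ing") while "cat" is one. Splitting words into pieces like this lets a model handle words it has never seen whole.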
These days, those predictions have become fast and accurate enough to make LLMs remarkably powerful.
If you ask an LLM the same question twice, you might notice the answers are a little different each time. This comes from natural variance, or non-determinism, in how the model chooses each token. Each time an LLM responds, it predicts and evaluates many possible options for the next token, a bit like rolling weighted dice. Some options are weighted more heavily than others, but the roll can still land on a less likely one. This randomness can lead LLMs to make mistakes, but it's also what makes AI great at creative tasks (a model that always picked the single most likely word would sound robotic).
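The "weighted dice" idea can be sketched in a few lines of Python. The candidate words and their weights below are invented for illustration; a real model scores tens of thousands of tokens at each step.

```python
import random

# Hypothetical next-token options for "The weather today is ___",
# with made-up probabilities.
options = ["sunny", "cloudy", "rainy", "purple"]
weights = [0.50, 0.30, 0.19, 0.01]  # "sunny" is most likely, but not guaranteed

# Run the same "prompt" three times; the completions can differ.
for _ in range(3):
    next_token = random.choices(options, weights=weights, k=1)[0]
    print("The weather today is", next_token)
```

Run it a few times and you'll see the sentence usually ends with "sunny" but occasionally with something else, which is exactly the kind of variance you see when re-asking an LLM the same question.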


The quality of the input into an LLM will also dictate the quality of the output. Providing necessary context, being specific, and adding constraints can help ensure the model does what you expect it to.
LLMs are typically reactive, meaning they wait for some input or question and then respond. An agent, on the other hand, is software that enables an LLM to use predefined tools and resources to solve problems or accomplish tasks over multiple steps.
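The difference can be sketched as a simple loop. Everything here is a stand-in for illustration: the "model" is a hard-coded function rather than a real LLM, and the calculator is the only predefined tool.

```python
def calculator(expression):
    """A predefined tool the agent is allowed to use."""
    return eval(expression)  # fine for a toy sketch; never eval untrusted input

TOOLS = {"calculator": calculator}

def fake_model(task, observations):
    """Stand-in for an LLM deciding the next step (hard-coded for this demo)."""
    if not observations:
        return ("use_tool", "calculator", "2 + 2")
    return ("finish", f"The answer is {observations[-1]}")

def run_agent(task):
    """The agent loop: keep asking the model what to do until it finishes."""
    observations = []
    while True:
        decision = fake_model(task, observations)
        if decision[0] == "use_tool":
            _, tool_name, tool_input = decision
            observations.append(TOOLS[tool_name](tool_input))
        else:
            return decision[1]

print(run_agent("What is 2 + 2?"))  # The answer is 4
```

A plain LLM would stop after one response; the loop is what lets an agent call tools, look at the results, and keep going until the task is done.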
Try It: See AI Variability
AI models can produce different outputs for the same input. Click the button to see how completing the same sentence generates different words each time.
Prompt
“I would like to go to the ___”
While LLMs and agents are powerful, they're not perfect. They can hallucinate (make things up when they don't know the answer), be biased based on the training data, or misinterpret your instructions. That's why even when using AI, we need to be diligent in testing and quality control to make sure what we create meets our standards.
Check Your Understanding
Below are two prompts. Which one is likely to produce a more accurate and consistent response from an AI model?