Token

What is a Token?

A token is the basic unit of text that a large language model uses to process and generate language.

In the context of artificial intelligence, a token is a piece of a word. Large Language Models (LLMs) don’t read entire words or sentences at once; instead, they break text down into these smaller, manageable chunks. A token can be a whole word, part of a word, a single character, or a punctuation mark. For example, the word “networking” might be broken into two tokens: “network” and “ing”. This method allows the AI to handle a vast vocabulary and complex grammar with greater flexibility.

Understanding tokens is crucial because they are the primary unit used to measure the cost and capacity of an AI model. Both the text you input (your prompt) and the text the AI generates are measured in tokens. The “context window” of a model refers to the maximum number of tokens it can remember and process at one time. Therefore, the length and complexity of your prompts and the AI’s responses are directly tied to token consumption, which impacts both the cost of using the API and the model’s performance.

Think of it this way: Tokens are like the currency of an AI. Every word you “say” to the AI and every word it “says” back costs a certain number of tokens. Your prompt is the payment, and the AI’s response is the product you receive, with the price of both being calculated in this token currency.
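A common rule of thumb (not exact; real tokenizers vary by model) is that one token is roughly four characters of English text. This minimal Python sketch shows how that estimate turns prompt and response length into a dollar figure; the per-token price is a made-up placeholder, not a real API rate:

```python
# Rough token-and-cost estimator, using the common rule of thumb that
# one token is roughly four characters of English text.
# PRICE_PER_1K_TOKENS is a hypothetical placeholder, not a real API rate.

PRICE_PER_1K_TOKENS = 0.002  # hypothetical: $0.002 per 1,000 tokens

def estimate_tokens(text: str) -> int:
    """Approximate token count (~4 characters per token)."""
    return max(1, len(text) // 4)

def estimate_cost(prompt: str, response: str) -> float:
    """Both the prompt (input) and the response (output) are billed."""
    total_tokens = estimate_tokens(prompt) + estimate_tokens(response)
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

prompt = "Write a concise, 150-word email inviting members to our AGM on Oct 25."
print(f"~{estimate_tokens(prompt)} tokens in the prompt")
```

Note that the estimator charges for both sides of the exchange: asking for a 500-word report costs you the tokens of the request *and* the tokens of the report itself.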

Why It Matters for You

For a non-profit or Chamber leader using AI tools, tokens translate directly to your budget. When you use a paid service like the ChatGPT API, you are billed based on the number of tokens you use. A long, rambling prompt or a request for a very long report will consume more tokens and therefore cost more money than a concise, well-structured one. Being “token-aware” means you can write more efficient prompts that get you the results you need without wasting your organization’s resources on unnecessary processing.

Example: The Efficient vs. Inefficient Prompt

Let’s say you need to draft an email to your members about an upcoming AGM.

  • Weak (High Token Cost): “Hey ChatGPT, I’m the manager of a Chamber of Commerce. We have our AGM coming up in a few weeks and I need to write an email to get people to register. Can you write a long email that talks about all the great things we did this year, like our advocacy work and our networking events, and make sure it sounds really professional and encourages people to sign up for the event which is on October 25th at the community hall? Please make it about 500 words.”
  • Strong (Low Token Cost): “Act as a Chamber of Commerce manager. Write a concise, 150-word email to members. The goal is to drive registrations for our AGM on Oct 25. Key message: Celebrate this year’s wins and connect with fellow leaders. Include a clear call-to-action to register.”

The second prompt is shorter and more direct, consuming fewer tokens while producing a better, more focused result.
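To make that difference concrete, this short sketch compares the two prompts above using the same rough four-characters-per-token rule of thumb (actual token counts depend on the model's tokenizer):

```python
# Compare the rough token cost of the weak and strong prompts.
# Assumption: ~4 characters per token (actual tokenizers vary by model).

weak = ("Hey ChatGPT, I'm the manager of a Chamber of Commerce. We have our "
        "AGM coming up in a few weeks and I need to write an email to get "
        "people to register. Can you write a long email that talks about all "
        "the great things we did this year, like our advocacy work and our "
        "networking events, and make sure it sounds really professional and "
        "encourages people to sign up for the event which is on October 25th "
        "at the community hall? Please make it about 500 words.")

strong = ("Act as a Chamber of Commerce manager. Write a concise, 150-word "
          "email to members. The goal is to drive registrations for our AGM "
          "on Oct 25. Key message: Celebrate this year's wins and connect "
          "with fellow leaders. Include a clear call-to-action to register.")

def estimate_tokens(text: str) -> int:
    """Approximate token count (~4 characters per token)."""
    return len(text) // 4

print(f"Weak prompt:   ~{estimate_tokens(weak)} tokens")
print(f"Strong prompt: ~{estimate_tokens(strong)} tokens")
```

And remember the weak prompt also asks for a ~500-word reply while the strong one asks for ~150 words, so the output side of the bill shrinks too.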

Key Takeaways

  • A token is a piece of a word that AI models use to process text.
  • Tokens are the basis for calculating the cost of using many AI tools.
  • Both your input (prompt) and the AI’s output (response) consume tokens.
  • Writing concise, efficient prompts can save your organization money.

Go Deeper

  • Learn More: Understand the limit on how many tokens an AI can handle by reading our definition of Context Window.
  • Related Term: See how tokens are processed by the core technology behind ChatGPT by learning about Large Language Models (LLM).