Intermediate · 6 min read

Chain of Thought

Improve reasoning quality by prompting models to think step-by-step before answering.

The Problem with Direct Answers

When you ask a complex question and demand an immediate answer, models often jump to plausible-sounding conclusions without rigorous reasoning. For math, logic, multi-step analysis, and judgment calls, this produces errors.

Chain-of-thought (CoT) prompting fixes this by asking the model to show its work.

Basic Chain of Thought

Add "Let's think step by step" to any prompt:

Question: A store has 120 items. They sell 40% on Monday and 25% of the remainder on Tuesday. How many remain?

Let's think step by step.

This simple addition consistently improves accuracy on reasoning tasks.
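Worked out in plain arithmetic, the reasoning chain the prompt is asking for looks like this (integer math used to keep the steps exact):

```python
items = 120
sold_monday = items * 40 // 100        # 40% of 120 = 48 sold on Monday
remainder = items - sold_monday        # 120 - 48 = 72 left
sold_tuesday = remainder * 25 // 100   # 25% of 72 = 18 sold on Tuesday
remaining = remainder - sold_tuesday   # 72 - 18 = 54 remain
print(remaining)  # 54
```

A common failure mode without CoT is computing 25% of the original 120 instead of the remainder; spelling out the intermediate quantities is exactly what "think step by step" elicits.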

Zero-Shot vs Few-Shot CoT

Zero-shot CoT — Append "Let's think step by step" to your prompt. Works surprisingly well for math and logic.

Few-shot CoT — Provide examples that include the reasoning chain, not just the final answer. The model learns your specific reasoning style.
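A minimal sketch of assembling a few-shot CoT prompt, where each example carries a `reasoning` field alongside the answer (the example questions and the `build_few_shot_prompt` helper are illustrative, not a specific library's API):

```python
# Few-shot CoT: each example demonstrates the reasoning chain, not just the answer.
examples = [
    {
        "question": "A train travels 60 km in 1.5 hours. What is its speed?",
        "reasoning": "Speed = distance / time = 60 km / 1.5 h = 40 km/h.",
        "answer": "40 km/h",
    },
]

def build_few_shot_prompt(examples, question):
    """Assemble a prompt whose examples model the desired reasoning style."""
    parts = []
    for ex in examples:
        parts.append(
            f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nA: {ex['answer']}"
        )
    # End with "Reasoning:" so the model continues the chain before answering.
    parts.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt(
    examples, "A car travels 90 km in 2 hours. What is its speed?"
)
print(prompt)
```

Ending the prompt mid-pattern at `Reasoning:` nudges the model to produce its chain of thought first, in the same style as the examples.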

Structured CoT

For complex multi-step problems, structure the reasoning explicitly:

Analyze this architectural decision. Use this format:

**Step 1: Understand the constraints**
[your analysis]

**Step 2: Identify tradeoffs**
[your analysis]

**Step 3: Evaluate options**
[your analysis]

**Step 4: Recommendation**
[final answer]
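The template above can be generated programmatically when the steps vary per task. This is a hypothetical helper, not part of any library:

```python
def structured_prompt(task, steps):
    """Build a prompt that forces explicit, ordered reasoning steps.

    The last step is treated as the final answer; earlier steps get
    an analysis placeholder, mirroring the template shown above.
    """
    lines = [task, "", "Use this format:", ""]
    for i, name in enumerate(steps, 1):
        placeholder = "[final answer]" if i == len(steps) else "[your analysis]"
        lines += [f"**Step {i}: {name}**", placeholder, ""]
    return "\n".join(lines).rstrip()

prompt = structured_prompt(
    "Analyze this architectural decision.",
    ["Understand the constraints", "Identify tradeoffs",
     "Evaluate options", "Recommendation"],
)
print(prompt)
```

Keeping the step names in data rather than prose makes it easy to reuse the same scaffold across different decision types.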

When CoT Helps Most

CoT is most valuable for: math and quantitative reasoning, multi-step logic, strategic decisions with tradeoffs, debugging, and any task where the path to the answer matters.

For simple factual retrieval or creative tasks, CoT adds token cost without benefit. Use it selectively.
