How to Write Better AI Prompts: 7 Techniques That Actually Work
Most people treat AI prompts like search queries. That's why they get mediocre results. These 7 techniques close the gap between what you typed and what you actually wanted.
Kevin Zai
Most people treat AI prompts like search queries. Type a few words, hit enter, hope for the best. That's why most people get mediocre results.
Writing better AI prompts is a learnable skill. After thousands of hours working with language models — building products on top of them, deploying them in production, and helping companies integrate them — I've distilled what actually moves the needle into 7 techniques you can apply today.
Why Prompt Quality Matters More Than Model Choice
Before we get to the techniques: the quality of your prompt often matters more than which model you're using. A well-crafted prompt to a mid-tier model will outperform a poorly written prompt to the best model available.
This isn't intuition — it's testable. Try the same mediocre prompt on GPT-4o and Claude Opus. Then try a well-engineered prompt on the same models. The gap from prompt improvement is almost always larger than the gap between models.
That said, better prompts and better models compound. Here's how to write prompts that get the most out of whatever model you're using.
Technique 1: Specify the Role Before the Task
Models perform dramatically better when you establish a clear role before stating the task.
Weak: "Summarize this document."
Strong: "You are a senior executive at a consulting firm. Summarize this document as a 3-bullet executive brief for a CEO who has 90 seconds to read it."
The role establishes register, vocabulary, perspective, and implied constraints. "Senior executive" tells the model to be concise, direct, and strategic. "CEO with 90 seconds" tells it what matters and what to cut.
Role-first prompting works because models are trained on vast amounts of human-written text, and each role comes loaded with implicit behavioral patterns. Activate the right pattern first.
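If you build prompts programmatically, the role-first pattern is easy to template. A minimal sketch — `role_prompt` is a hypothetical helper, not part of any library:

```python
def role_prompt(role: str, task: str) -> str:
    """Compose a role-first prompt: establish who the model is,
    then state the task so the role's register carries into the answer."""
    return f"You are {role}. {task}"

# The weak/strong pair from above, expressed as data:
weak = "Summarize this document."
strong = role_prompt(
    "a senior executive at a consulting firm",
    "Summarize this document as a 3-bullet executive brief "
    "for a CEO who has 90 seconds to read it.",
)
```

Putting the role in its own parameter also makes it trivial to A/B test roles against the same task.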
Technique 2: Show the Format You Want
Describe the output structure before you ask for content. If you want a numbered list, say so. If you want markdown headers, specify that. If you want exactly three paragraphs with no bullet points, make it explicit.
Without format spec: "Explain the difference between supervised and unsupervised learning."
With format spec: "Explain the difference between supervised and unsupervised learning. Use exactly 2 paragraphs. No bullet points. Write for a non-technical business audience. End with one concrete example of each from a retail context."
The format instruction isn't constraining creativity — it's giving the model the information it needs to produce what you actually want rather than its default output shape.
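The same idea as a sketch: keep the task and the format constraints separate, then join them. `with_format` is a hypothetical helper for illustration:

```python
def with_format(task: str, *constraints: str) -> str:
    """Append explicit output-format constraints to a base task
    so the model doesn't fall back to its default output shape."""
    return " ".join((task,) + constraints)

prompt = with_format(
    "Explain the difference between supervised and unsupervised learning.",
    "Use exactly 2 paragraphs.",
    "No bullet points.",
    "Write for a non-technical business audience.",
)
```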
Technique 3: Include Negative Constraints
Tell the model what NOT to do. This is chronically underused and consistently effective.
Examples of useful negative constraints:
- "Do not use jargon."
- "Do not hedge or qualify every sentence."
- "Do not repeat the question before answering."
- "Do not suggest I consult a professional."
- "Do not use the word 'delve'."
Models have default behaviors — they hedge, they caveat, they use filler phrases. Negative constraints override those defaults directly.
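Because negative constraints are so formulaic, they are a natural fit for a reusable list. A sketch, with `BANNED` and `add_negative_constraints` as hypothetical names:

```python
# Default behaviors to override, phrased as completions of "Do not ...".
BANNED = [
    "use jargon",
    "hedge or qualify every sentence",
    "repeat the question before answering",
]

def add_negative_constraints(task: str, banned: list[str]) -> str:
    """Append one explicit 'Do not ...' line per unwanted default behavior."""
    return "\n".join([task] + [f"Do not {b}." for b in banned])

prompt = add_negative_constraints("Summarize this report for me.", BANNED)
```

Keeping the list in one place means every prompt in a project picks up the same house rules.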
Technique 4: Provide Reference Examples
If you have an example of what "good" looks like, include it. Explicitly. Don't assume the model will infer your standard from abstract description.
"Here's an example of a headline I consider effective: [example]. Write 5 more in the same style."
"Here's how I normally sign off emails: [example]. Match this tone."
Few-shot prompting — providing 2-3 examples before the actual request — is one of the most reliable quality boosters available without any fine-tuning or model changes. Use it.
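Few-shot assembly is mechanical enough to template. A sketch — the helper name and the sample headlines are invented placeholders:

```python
def few_shot_prompt(instruction: str, examples: list[str], request: str) -> str:
    """Assemble a few-shot prompt: instruction, then 2-3 reference
    examples, then the actual request, so the model infers the
    standard from the examples rather than from abstract description."""
    parts = [instruction]
    parts += [f"Example {i}: {ex}" for i, ex in enumerate(examples, 1)]
    parts.append(request)
    return "\n\n".join(parts)

prompt = few_shot_prompt(
    "Here are headlines I consider effective.",
    ["Ship Less, Learn More", "Your Roadmap Is Lying to You"],
    "Write 5 more in the same style.",
)
```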
Technique 5: Ask for Reasoning First, Answer Second
For complex tasks where accuracy matters, ask the model to reason through the problem before giving an answer.
"Before you give me your recommendation, walk through the key tradeoffs. Then make your recommendation based on that analysis."
This is the prompt-level equivalent of chain-of-thought reasoning. It forces the model to develop its thinking step by step rather than jumping to a conclusion and then rationalizing backward. The answer quality at the end of a reasoning chain is meaningfully better.
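The reasoning-first wrapper can be applied to any question. A sketch, with `reasoning_first` as a hypothetical helper:

```python
def reasoning_first(question: str) -> str:
    """Wrap a question so the model works through the tradeoffs
    before committing to a recommendation."""
    return (
        f"{question}\n"
        "Before you give me your recommendation, walk through the key tradeoffs. "
        "Then make your recommendation based on that analysis."
    )

prompt = reasoning_first("Should we build or buy our internal CRM?")
```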
Technique 6: Iterate in Layers
Don't try to get a perfect result in one prompt. Build in layers.
- Layer 1: Get the structure right. "Give me a rough outline of the 5 main sections."
- Layer 2: Expand each section. "Expand section 2 into 3 paragraphs."
- Layer 3: Refine the tone. "Rewrite this section to be more direct and less formal."
Single-prompt outputs are often the equivalent of a first draft. Treating prompts as a conversation — iterating and refining — produces dramatically better final outputs than trying to engineer a perfect prompt upfront.
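The layered flow maps directly onto a sequence of calls. A sketch under one assumption: `ask` stands in for whatever client function you use to send a prompt and get text back — it is not a real API:

```python
from typing import Callable

def layered_refine(ask: Callable[[str], str]) -> str:
    """Run the three layers as separate calls, feeding each
    result into the next prompt. `ask` is any function that
    sends a prompt and returns model text."""
    outline = ask("Give me a rough outline of the 5 main sections.")   # layer 1: structure
    draft = ask(f"Expand each section into 3 paragraphs:\n{outline}")  # layer 2: content
    return ask(f"Rewrite this to be more direct and less formal:\n{draft}")  # layer 3: tone
```

Because each layer is a separate call, you can inspect and correct the output at every stage instead of debugging one monolithic prompt.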
Technique 7: Calibrate Length Explicitly
Be specific about length. "Short" means different things in different contexts. "A few sentences" gives the model room to write a paragraph that feels short to it but too long to you.
Better alternatives:
- "In exactly 2 sentences..."
- "In under 100 words..."
- "In 3 bullet points of no more than 10 words each..."
- "A 300-word article..."
Explicit word counts or sentence counts eliminate the ambiguity that leads to outputs that miss the mark on length.
Putting It Together
These techniques compound. A prompt that uses all seven — establishes a role, specifies format, includes negative constraints, provides examples, asks for reasoning, builds iteratively, and gives explicit length targets — will get results that feel qualitatively different from a casual query.
The underlying principle is simple: models are doing their best to infer what you want from incomplete information. Every technique on this list reduces that ambiguity. Less ambiguity means better outputs.
Want to apply these techniques without starting from scratch every time? Try our free Prompt Enhancer — paste your draft prompt and it applies these principles automatically, showing you the before and after. No sign-up required.
Ready to Start?
Find your highest-leverage AI opportunity
Take the free AI Readiness Scorecard to identify where agents can save the most time in your business — or book a strategy session and we will map out your first deployment together.