Best prompting techniques for Claude
Quick answer
Use clear, explicit instructions with Claude models, specifying roles and context to guide responses effectively. Incorporate few-shot examples and avoid ambiguous language to improve output quality.

Recommendation

For best results with Claude, combine explicit role and context setting with few-shot prompting to guide the model's behavior precisely.

| Use case | Best choice | Why | Runner-up |
|---|---|---|---|
| Creative writing | Explicit role + few-shot examples | Guides style and tone clearly, improving creativity and coherence | Context-only prompts |
| Code generation | Few-shot with detailed instructions | Demonstrates expected code style and logic, reducing errors | Explicit role prompts |
| Customer support | Role specification + context | Ensures polite, helpful tone aligned with brand voice | Few-shot examples |
| Data extraction | Clear, structured instructions | Minimizes ambiguity for precise extraction | Role + context prompts |
Top picks explained
Use explicit role specification to tell Claude what persona or function it should adopt, which improves response relevance. Combine this with few-shot prompting by providing examples to demonstrate desired output style or format. Context setting helps by giving background information that guides the model’s understanding.
For instance, in coding tasks, few-shot examples showing input-output pairs reduce errors. In creative tasks, role specification ensures tone consistency.
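As an illustrative sketch of few-shot prompting for the data-extraction use case (the field schema and example sentences here are assumptions, not part of any official API), a worked input-output pair can be prepended to the real task so the model sees the expected format before answering:

```python
import json

# One worked example (few-shot) showing the exact output format we want.
few_shot_examples = [
    {"role": "user",
     "content": "Extract name and city as JSON: 'Alice moved to Berlin in 2019.'"},
    {"role": "assistant",
     "content": json.dumps({"name": "Alice", "city": "Berlin"})},
]

def build_messages(text):
    """Append the real extraction task after the worked example."""
    return few_shot_examples + [
        {"role": "user",
         "content": f"Extract name and city as JSON: '{text}'"}
    ]

messages = build_messages("Bob relocated to Osaka last spring.")
```

The resulting `messages` list is passed unchanged to the Messages API, as shown in the "In practice" section.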
In practice
```python
import anthropic
import os

# Read the API key from the environment rather than hardcoding it
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Role and context go in the system prompt
system_prompt = "You are a helpful assistant specialized in writing professional emails."

# One worked example (few-shot) demonstrating the desired tone and format
few_shot_examples = [
    {"role": "user", "content": "Write a polite email declining a meeting request."},
    {"role": "assistant", "content": "Dear John,\n\nThank you for your invitation. Unfortunately, I am unavailable at that time. I hope we can connect another time.\n\nBest regards,\nJane"}
]

user_prompt = {"role": "user", "content": "Write a polite email rescheduling a meeting to next week."}

messages = [
    *few_shot_examples,
    user_prompt
]

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=300,
    system=system_prompt,
    messages=messages
)

print(response.content[0].text)
```

Output:

```
Dear [Name],

Thank you for your message. I would like to reschedule our meeting to next week at a time convenient for you. Please let me know your availability.

Best regards,
[Your Name]
```
Pricing and limits
| Model | Free tier | Cost (input / output, per million tokens) | Max output | Context window |
|---|---|---|---|---|
| claude-3-5-sonnet-20241022 | No (API usage is pay-as-you-go) | $3.00 / $15.00 | 8,192 tokens | 200K tokens |
| claude-3-5-haiku-20241022 | No | $0.80 / $4.00 | 8,192 tokens | 200K tokens |
| claude-3-opus-20240229 | No | $15.00 / $75.00 | 4,096 tokens | 200K tokens |
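The Messages API response includes a `usage` object with `input_tokens` and `output_tokens`, so per-request cost can be estimated directly. The helper below is a sketch; the per-million-token rates are hardcoded assumptions and should be checked against current Anthropic pricing before use:

```python
# Sketch: estimate the dollar cost of one Messages API call.
# Rates are USD per million tokens (input, output) and are assumptions
# baked in at time of writing -- verify against current pricing.
RATES = {
    "claude-3-5-sonnet-20241022": (3.00, 15.00),
    "claude-3-5-haiku-20241022": (0.80, 4.00),
    "claude-3-opus-20240229": (15.00, 75.00),
}

def estimate_cost(model, input_tokens, output_tokens):
    """Return the estimated cost in USD for one request."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# After a real call you would pass response.usage.input_tokens and
# response.usage.output_tokens; here we use illustrative counts.
cost = estimate_cost("claude-3-5-sonnet-20241022", 1200, 300)
print(f"${cost:.4f}")  # prints $0.0081
```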
What to avoid
- Avoid vague or ambiguous prompts without clear instructions, as Claude may produce off-topic or generic responses.
- Do not rely solely on single-shot prompts for complex tasks; few-shot examples improve accuracy.
- Avoid mixing multiple unrelated questions in one prompt, which confuses the model.
- Do not omit role or context when tone or style consistency is critical.
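One way to act on these points in code is to send unrelated questions as separate requests instead of one combined prompt. A minimal sketch (the question list and system prompt are illustrative):

```python
# Sketch: one request payload per question, rather than one
# multi-topic prompt that mixes unrelated tasks.
questions = [
    "Summarize our refund policy in two sentences.",
    "Draft a subject line for the March newsletter.",
]

def to_requests(questions, system_prompt):
    """Turn each question into its own single-message request payload."""
    return [
        {"system": system_prompt,
         "messages": [{"role": "user", "content": q}]}
        for q in questions
    ]

payloads = to_requests(questions, "You are a concise marketing assistant.")
# Each payload maps onto one client.messages.create(model=..., max_tokens=..., **payload) call.
```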
Key takeaways
- Always specify the role and context to guide Claude’s behavior precisely.
- Use few-shot prompting with examples to improve output quality for complex tasks.
- Avoid ambiguous or multi-topic prompts to reduce irrelevant or confusing responses.