Best For Intermediate · 3 min read

Best prompting techniques for Claude

Quick answer
Use clear, explicit instructions with Claude models, specifying roles and context to guide responses effectively. Incorporate few-shot examples and avoid ambiguous language to improve output quality.

RECOMMENDATION

For best results with Claude, use explicit role and context setting combined with few-shot prompting to guide the model’s behavior precisely.
| Use case | Best choice | Why | Runner-up |
|---|---|---|---|
| Creative writing | Explicit role + few-shot examples | Guides style and tone clearly, improving creativity and coherence | Context-only prompts |
| Code generation | Few-shot with detailed instructions | Demonstrates expected code style and logic, reducing errors | Explicit role prompts |
| Customer support | Role specification + context | Ensures polite, helpful tone aligned with brand voice | Few-shot examples |
| Data extraction | Clear, structured instructions | Minimizes ambiguity for precise extraction | Role + context prompts |

Top picks explained

Use explicit role specification to tell Claude what persona or function it should adopt, which improves response relevance. Combine this with few-shot prompting by providing examples to demonstrate desired output style or format. Context setting helps by giving background information that guides the model’s understanding.

For instance, in coding tasks, few-shot examples showing input-output pairs reduce errors. In creative tasks, role specification ensures tone consistency.
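The combination above can be packaged in a small helper. This is a hypothetical utility (the function name and signature are not part of any Anthropic SDK) that assembles a system prompt from a role and context, then interleaves few-shot user/assistant pairs ahead of the real task:

```python
def build_messages(role_description, context, examples, task):
    """Combine role, context, and few-shot examples into a system prompt
    plus a Messages-API-style message list (hypothetical helper)."""
    # Role and background context both go in the system prompt.
    system_prompt = f"{role_description}\n\nContext: {context}"
    messages = []
    # Each few-shot example becomes a user/assistant pair.
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    # The actual task comes last, as the final user turn.
    messages.append({"role": "user", "content": task})
    return system_prompt, messages

system, msgs = build_messages(
    role_description="You are a data-extraction assistant.",
    context="Extract fields as JSON with keys name and date.",
    examples=[('Invoice from Acme, 2024-01-05',
               '{"name": "Acme", "date": "2024-01-05"}')],
    task="Invoice from Globex, 2024-03-12",
)
```

The returned `system` string and `msgs` list can be passed directly to `client.messages.create(system=..., messages=...)`.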

In practice

```python
import anthropic
import os

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Role specification goes in the system prompt.
system_prompt = "You are a helpful assistant specialized in writing professional emails."

# One few-shot user/assistant pair demonstrating the desired tone and format.
few_shot_examples = [
    {"role": "user", "content": "Write a polite email declining a meeting request."},
    {"role": "assistant", "content": "Dear John,\n\nThank you for your invitation. Unfortunately, I am unavailable at that time. I hope we can connect another time.\n\nBest regards,\nJane"}
]

user_prompt = {"role": "user", "content": "Write a polite email rescheduling a meeting to next week."}

# The example pair precedes the real request in the message list.
messages = [
    *few_shot_examples,
    user_prompt
]

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=300,
    system=system_prompt,
    messages=messages
)

print(response.content[0].text)
```

Example output:

```text
Dear [Name],

Thank you for your message. I would like to reschedule our meeting to next week at a time convenient for you. Please let me know your availability.

Best regards,
[Your Name]
```

Pricing and limits

API access is pay-as-you-go; prices are per million tokens (MTok), billed separately for input and output. Free usage is available through the Claude.ai web app, not the API.

| Model | Input price | Output price | Max output tokens | Context window |
|---|---|---|---|---|
| claude-3-5-sonnet-20241022 | $3 / MTok | $15 / MTok | 8,192 | 200K tokens |
| claude-3-5-haiku-20241022 | $0.80 / MTok | $4 / MTok | 8,192 | 200K tokens |
| claude-3-opus-20240229 | $15 / MTok | $75 / MTok | 4,096 | 200K tokens |

What to avoid

  • Avoid vague or ambiguous prompts without clear instructions, as Claude may produce off-topic or generic responses.
  • Do not rely solely on single-shot prompts for complex tasks; few-shot examples improve accuracy.
  • Avoid mixing multiple unrelated questions in one prompt, which confuses the model.
  • Do not omit role or context when tone or style consistency is critical.
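The first and third pitfalls can be caught with rough heuristics before a prompt is sent. This is an illustrative sketch, not an official API; the thresholds are arbitrary assumptions:

```python
def prompt_warnings(prompt):
    """Flag common prompt problems with crude heuristics (illustrative only)."""
    warnings = []
    # Very short prompts tend to be vague; 5 words is an arbitrary cutoff.
    if len(prompt.split()) < 5:
        warnings.append("prompt may be too vague; add explicit instructions")
    # Several question marks often signal multiple unrelated questions.
    if prompt.count("?") > 1:
        warnings.append("multiple questions detected; split unrelated topics")
    return warnings

print(prompt_warnings("Fix it?"))
print(prompt_warnings("What is tokenization? And how do embeddings work? Also, pricing?"))
```

A check like this is best treated as a reminder during development, not a substitute for reviewing prompts by hand.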

Key Takeaways

  • Always specify the role and context to guide Claude’s behavior precisely.
  • Use few-shot prompting with examples to improve output quality for complex tasks.
  • Avoid ambiguous or multi-topic prompts to reduce irrelevant or confusing responses.
Verified 2026-04 · claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022, claude-3-opus-20240229