How to use the Claude API for batch processing
Quick answer
Use the anthropic Python SDK to send multiple prompts in a loop by calling client.messages.create for each input. Batch processing here means iterating over your inputs and collecting the responses, either synchronously or asynchronously, with the claude-3-5-sonnet-20241022 model.

Prerequisites

- Python 3.8+
- An Anthropic API key
- pip install anthropic>=0.20
Setup
Install the Anthropic Python SDK and set your API key as an environment variable.
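On macOS or Linux, the key can be exported in your shell before running the scripts below (the key value here is a placeholder; substitute the real key from your Anthropic console):

```shell
# Placeholder value: replace with your actual API key
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```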
```shell
pip install anthropic>=0.20
```

Step by step
This example demonstrates batch processing by sending multiple prompts to Claude in a loop and collecting their responses synchronously.
```python
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

prompts = [
    "Explain the theory of relativity in simple terms.",
    "Summarize the plot of 'To Kill a Mockingbird'.",
    "What are the benefits of renewable energy?",
]

# Send each prompt as its own request and collect the replies in order.
responses = []
for prompt in prompts:
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=300,
        system="You are a helpful assistant.",
        messages=[{"role": "user", "content": prompt}],
    )
    # message.content is a list of content blocks; the first holds the text.
    responses.append(message.content[0].text)

for i, response in enumerate(responses, 1):
    print(f"Response {i}:\n{response}\n")
```

Output
Response 1: The theory of relativity, developed by Albert Einstein, explains how space and time are linked for objects moving at a consistent speed in a straight line...

Response 2: 'To Kill a Mockingbird' is a novel about racial injustice and childhood innocence in the American South, narrated by Scout Finch...

Response 3: Renewable energy sources like solar and wind reduce greenhouse gas emissions, decrease dependence on fossil fuels, and promote sustainable development...
Common variations
- Use asynchronous calls with asyncio and the Anthropic SDK for parallel batch processing.
- Adjust the max_tokens and model parameters for different use cases.
- Implement error handling and retries for robust batch processing.
```python
import os
import asyncio
import anthropic

# Use the async client for concurrent requests.
client = anthropic.AsyncAnthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

async def fetch_response(prompt):
    return await client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=300,
        system="You are a helpful assistant.",
        messages=[{"role": "user", "content": prompt}],
    )

async def main():
    prompts = [
        "Explain quantum computing.",
        "List benefits of meditation.",
        "Describe the water cycle.",
    ]
    # Launch all requests concurrently and wait for every result.
    tasks = [fetch_response(p) for p in prompts]
    results = await asyncio.gather(*tasks)
    for i, message in enumerate(results, 1):
        print(f"Response {i}:\n{message.content[0].text}\n")

if __name__ == "__main__":
    asyncio.run(main())
```

Output
Response 1: Quantum computing uses quantum bits that can represent multiple states simultaneously, enabling faster problem solving for certain tasks...

Response 2: Meditation improves mental clarity, reduces stress, and enhances emotional well-being...

Response 3: The water cycle describes how water evaporates, condenses into clouds, and precipitates back to Earth...
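When batches grow large, firing every request at once with asyncio.gather can trip rate limits. One common pattern, sketched below with a hypothetical gather_limited helper, uses a semaphore to cap the number of requests in flight at any moment (tune the limit to your API tier):

```python
import asyncio

async def gather_limited(coro_fns, limit=5):
    """Run zero-argument coroutine factories with at most `limit` in flight."""
    sem = asyncio.Semaphore(limit)

    async def run(fn):
        # Each task waits for a free slot before starting its request.
        async with sem:
            return await fn()

    return await asyncio.gather(*(run(fn) for fn in coro_fns))
```

With the async example above, you would pass factories such as `lambda p=p: fetch_response(p)` rather than bare coroutines, so each request is only created once a slot frees up.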
Troubleshooting
- If you receive authentication errors, verify that your ANTHROPIC_API_KEY environment variable is set correctly.
- For rate limit errors, implement exponential backoff and retry logic.
- Ensure your prompts are within token limits to avoid truncation or errors.
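A minimal backoff wrapper for the rate-limit case might look like the following. This is a sketch: with_retries is a hypothetical helper, and in production you would catch the SDK's specific exception types (such as anthropic.RateLimitError) rather than bare Exception.

```python
import random
import time

def with_retries(fn, max_attempts=5, base_delay=1.0):
    """Call fn(); on failure, sleep base_delay * 2**attempt (+ jitter) and retry."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Exponential backoff with random jitter to avoid retry stampedes.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, base_delay))

# Usage with the synchronous client:
# message = with_retries(lambda: client.messages.create(...))
```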
Key Takeaways
- Use a loop or async calls to batch process multiple prompts with the Anthropic SDK.
- Set the system parameter to guide Claude's behavior consistently across batch requests.
- Handle API errors and rate limits to ensure reliable batch processing.