
How to use AWS Bedrock with LangChain

Quick answer
Use the boto3 bedrock-runtime client to call AWS Bedrock models, then wrap those calls in a custom class that subclasses LangChain's LLM base class. This lets you orchestrate Bedrock models within LangChain pipelines for chat or text generation.

PREREQUISITES

  • Python 3.8+
  • AWS account with Bedrock access
  • AWS CLI configured or environment variables for AWS credentials
  • pip install boto3 langchain
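
If you opt for environment variables, a typical credential setup looks like the following (the values here are placeholders, not real credentials — substitute your own):

```shell
# Placeholder credentials — replace with your own values
export AWS_ACCESS_KEY_ID="AKIAEXAMPLEKEY"
export AWS_SECRET_ACCESS_KEY="example-secret-key"
export AWS_DEFAULT_REGION="us-east-1"

# Or configure a named profile interactively with the AWS CLI:
# aws configure
```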

Setup

Install the required Python packages and configure AWS credentials to access AWS Bedrock.

  • Install boto3 and langchain:

```bash
pip install boto3 langchain
```

Step by step

This example defines a custom LangChain LLM wrapper for AWS Bedrock using boto3. It sends a prompt to an Anthropic Claude model, using the Anthropic Messages request format that Bedrock expects for Claude, and returns the generated text.

```python
import json
from typing import Any, List, Optional

import boto3
from langchain_core.language_models.llms import LLM


class BedrockLLM(LLM):
    """Minimal LangChain wrapper around the Bedrock runtime for Anthropic Claude models."""

    # LangChain's LLM base is a pydantic model, so configuration is declared
    # as fields rather than assigned to arbitrary attributes in __init__.
    model_id: str
    region_name: str = "us-east-1"
    client: Any = None  # boto3 bedrock-runtime client, created below

    def __init__(self, **kwargs: Any) -> None:
        super().__init__(**kwargs)
        self.client = boto3.client("bedrock-runtime", region_name=self.region_name)

    @property
    def _llm_type(self) -> str:
        return "bedrock"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        # Claude models on Bedrock use the Anthropic Messages request schema;
        # build the body with json.dumps rather than string formatting.
        body = json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{"role": "user", "content": prompt}],
        })
        response = self.client.invoke_model(
            modelId=self.model_id,
            body=body,
            contentType="application/json",
            accept="application/json",
        )
        result = json.loads(response["body"].read())
        # The response carries a list of content blocks; join the text blocks.
        return "".join(
            block["text"] for block in result["content"] if block["type"] == "text"
        )


# Usage example
if __name__ == "__main__":
    model_id = "anthropic.claude-3-5-sonnet-20241022-v2:0"  # Replace with your Bedrock model ID
    llm = BedrockLLM(model_id=model_id)
    prompt = "Explain the benefits of using AWS Bedrock with LangChain."
    output = llm.invoke(prompt)
    print("Generated text:", output)
```
Output:

```text
Generated text: AWS Bedrock enables seamless access to foundation models from multiple providers, and integrating it with LangChain allows you to orchestrate these models easily in your AI workflows.
```

Common variations

You can extend the BedrockLLM wrapper to support streaming via Bedrock's invoke_model_with_response_stream API, or add async calls with a library such as aioboto3 or aiobotocore (boto3 itself is synchronous). You can also switch models by changing the model_id parameter to any Bedrock model you have access to, keeping in mind that each model family expects its own request and response JSON schema.
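
As a sketch of the streaming variation: invoke_model_with_response_stream yields events whose chunks carry JSON payloads, and for Claude models the text arrives in content_block_delta events. The helper below only parses such events, using synthetic input; treat the exact event shapes as assumptions to verify against the Bedrock documentation:

```python
import json
from typing import Iterable, Iterator


def iter_stream_text(events: Iterable[dict]) -> Iterator[str]:
    """Yield text fragments from a Bedrock response stream (Claude-style events)."""
    for event in events:
        chunk = event.get("chunk")
        if not chunk:
            continue
        payload = json.loads(chunk["bytes"])
        # Claude streaming emits incremental text in content_block_delta events.
        if payload.get("type") == "content_block_delta":
            yield payload["delta"].get("text", "")


# Synthetic events in the shape Bedrock returns (chunk bytes are JSON):
fake_events = [
    {"chunk": {"bytes": json.dumps({"type": "content_block_delta",
                                    "delta": {"type": "text_delta", "text": "Hello, "}}).encode()}},
    {"chunk": {"bytes": json.dumps({"type": "content_block_delta",
                                    "delta": {"type": "text_delta", "text": "Bedrock!"}}).encode()}},
    {"chunk": {"bytes": json.dumps({"type": "message_stop"}).encode()}},
]
print("".join(iter_stream_text(fake_events)))  # → Hello, Bedrock!
```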

For example, to use Amazon Titan Text, set model_id = "amazon.titan-text-express-v1" and adjust the request body accordingly: Titan expects an "inputText" field rather than the Anthropic Messages format, and returns its output under a different key.
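
To illustrate the schema difference, here is how the two request bodies might be built; treat these shapes as assumptions to check against each model's documentation in the Bedrock console:

```python
import json

# Titan Text expects an "inputText" field with generation settings alongside it...
titan_body = json.dumps({
    "inputText": "Explain AWS Bedrock in one sentence.",
    "textGenerationConfig": {"maxTokenCount": 256},
})

# ...while Claude models use the Anthropic Messages schema.
claude_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Explain AWS Bedrock in one sentence."}],
})

print(titan_body)
print(claude_body)
```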

Troubleshooting

  • If you get AccessDeniedException, verify your AWS credentials and Bedrock permissions.
  • If the response body is empty or malformed, check the contentType and the model's expected input/output format.
  • Ensure your AWS region supports Bedrock and your client is configured with the correct region_name.

Key Takeaways

  • Use boto3 bedrock-runtime client to invoke Bedrock models programmatically.
  • Wrap Bedrock calls in a custom LangChain LLM class to integrate with LangChain pipelines.
  • Configure AWS credentials and region properly to avoid access errors.
  • Switch models easily by changing the model_id parameter in your wrapper.
  • Handle JSON input/output formats according to the Bedrock model specification.
Verified 2026-04 · anthropic.claude-3-5-sonnet-20241022-v2:0, amazon.titan-text-express-v1