How to build a code review tool with an LLM
Quick answer
Build a code review tool by sending code diffs to an LLM such as gpt-4o via the OpenAI API and surfacing its feedback. Call client.chat.completions.create with a prompt that combines the code snippet and clear review instructions to get actionable suggestions.
Prerequisites
- Python 3.8+
- An OpenAI API key
- pip install "openai>=1.0"
Setup
Install the OpenAI Python SDK and set your API key as an environment variable so it stays out of your source code. Quote the requirement so the shell doesn't treat `>=` as a redirect.

```shell
pip install "openai>=1.0"
```

Step by step
This example shows how to send a code snippet to gpt-4o for review and receive improvement suggestions.
```python
import os
from openai import OpenAI

# The client reads the key from the OPENAI_API_KEY environment variable
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

code_to_review = '''
def add_numbers(a, b):
    return a + b

print(add_numbers(2, 3))
'''

prompt = f"""You are a senior developer performing a code review. \
Analyze the following Python code and provide constructive feedback, \
improvements, and potential bugs.

Code:
{code_to_review}

Review:"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print("Code review feedback:\n", response.choices[0].message.content)
```

Output
Code review feedback: The function `add_numbers` is simple and correct for adding two numbers. Consider adding type hints for clarity, e.g., `def add_numbers(a: int, b: int) -> int:`. Also, add error handling if inputs might not be numbers. The print statement is fine for testing but should be removed or replaced with proper unit tests in production.
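The same pattern extends from single snippets to the diff-based reviews mentioned in the quick answer. A minimal sketch, assuming a local git repository with staged changes; the `build_diff_review_prompt` helper name is ours, not part of any SDK:

```python
import os
import subprocess

def build_diff_review_prompt(diff: str) -> str:
    """Wrap a unified diff in review instructions (hypothetical helper)."""
    return (
        "You are a senior developer performing a code review. "
        "Review the following diff and point out bugs, style issues, "
        "and missing tests.\n\nDiff:\n" + diff + "\n\nReview:"
    )

if __name__ == "__main__":
    # Collect the staged changes from the local repository
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True
    ).stdout
    if diff and os.environ.get("OPENAI_API_KEY"):
        from openai import OpenAI

        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": build_diff_review_prompt(diff)}],
        )
        print(response.choices[0].message.content)
```

Hooking this into a pre-commit hook or CI step turns it into a lightweight automated reviewer for every change.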
Common variations
- Use asynchronous calls with asyncio and await for non-blocking review requests.
- Stream responses for real-time feedback display.
- Switch models: gpt-4o-mini for cost-effective reviews, or an alternative LLM such as claude-sonnet-4-5 (via the Anthropic SDK).
```python
import os
import asyncio
from openai import AsyncOpenAI  # async client; plain OpenAI() is synchronous

async def async_code_review(code: str):
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    prompt = f"You are a code reviewer. Review this code:\n{code}\nReview:"
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print("Async review feedback:\n", response.choices[0].message.content)

if __name__ == "__main__":
    code_sample = "def foo():\n    pass"
    asyncio.run(async_code_review(code_sample))
```

Output
Async review feedback: The function `foo` is defined but does nothing. Consider implementing the function or removing it if unused.
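Streaming, the other variation listed above, layers onto the same call by passing stream=True; the response then arrives as chunks whose delta.content fields you concatenate. A minimal sketch; the `assemble` helper is a hypothetical name of ours:

```python
import os

def assemble(deltas):
    """Join streamed text deltas, skipping the None values the API
    emits for non-content chunks (hypothetical helper)."""
    return "".join(d for d in deltas if d)

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Review this code:\ndef foo(): pass"}],
        stream=True,  # yields chunks instead of one final response
    )
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)  # display tokens as they arrive
        parts.append(delta)
    print("\nFull review:", assemble(parts))
```

Printing deltas as they arrive is what gives interactive tools their real-time feel; the assembled string is what you'd store or post as the final review comment.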
Troubleshooting
- If you get authentication errors, verify your OPENAI_API_KEY environment variable is set correctly.
- For rate limits, implement exponential backoff or switch to a smaller model like gpt-4o-mini.
- If code snippets are too long, split them into smaller chunks or summarize before sending.
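For the rate-limit case, a small exponential-backoff wrapper is usually enough. A sketch under stated assumptions: `with_backoff` is our own helper name, and in practice you would pass `retry_on=(openai.RateLimitError,)`; the test-friendly default of catching any exception is for illustration only:

```python
import time
import random

def with_backoff(call, retries=5, base=1.0, retry_on=(Exception,)):
    """Retry `call` with exponential backoff plus jitter (hypothetical
    helper). Pass retry_on=(openai.RateLimitError,) in real use."""
    for attempt in range(retries):
        try:
            return call()
        except retry_on:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            # Sleep base*1, base*2, base*4, ... seconds, with jitter
            time.sleep(base * 2 ** attempt + random.random() * 0.1)
```

Usage would look like `with_backoff(lambda: client.chat.completions.create(...), retry_on=(openai.RateLimitError,))`, keeping the retry policy out of the review logic itself.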
Key Takeaways
- Use gpt-4o with OpenAI SDK v1 for effective code review automation.
- Structure prompts to include clear instructions and code context for best feedback.
- Async and streaming calls improve user experience in interactive tools.
- Handle API limits and large code inputs by chunking or model selection.
- Secure API keys via environment variables to protect credentials.