How-to · Beginner · 3 min read

Prompt engineering for code generation

Quick answer
Use clear, specific prompts with context and examples to guide LLMs such as gpt-4o-mini toward correct code. Specify the language, style, and output format to get precise, runnable code.

PREREQUISITES

  • Python 3.8+
  • OpenAI API key (free tier works)
  • pip install "openai>=1.0" (quoted so the shell does not treat >= as a redirect)

Setup

Install the openai Python package and set your API key as an environment variable for secure access.

bash
pip install "openai>=1.0"

Step by step

This example shows how to craft a prompt for generating Python code that calculates Fibonacci numbers. The prompt specifies language, function name, and output format.

python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

prompt = (
    "Write a Python function named fibonacci that returns the nth Fibonacci number. "
    "Include type hints and a docstring explaining the function. "
    "Return only the code without extra explanation."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}]
)

print(response.choices[0].message.content)
output
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number."""
    if n <= 0:
        return 0
    elif n == 1:
        return 1
    else:
        a, b = 0, 1
        for _ in range(2, n + 1):
            a, b = b, a + b
        return b
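Even with "return only the code" in the prompt, models sometimes wrap the result in markdown fences. A small post-processing helper (hypothetical, not part of the openai library) makes the output safe to save or execute directly:

```python
def strip_code_fences(text: str) -> str:
    """Remove a surrounding markdown code fence (```python ... ```) if present."""
    lines = text.strip().splitlines()
    if lines and lines[0].startswith("```"):
        lines = lines[1:]              # drop opening fence (with optional language tag)
        if lines and lines[-1].strip() == "```":
            lines = lines[:-1]         # drop closing fence
    return "\n".join(lines)
```

For example, `strip_code_fences(response.choices[0].message.content)` returns bare code whether or not the model added fences.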

Common variations

You can make async calls, stream responses token by token, or switch to a smaller model such as gpt-4o-mini for faster, cheaper generation. Adjust the prompt to target a different language or coding style.

python
import asyncio
import os
from openai import AsyncOpenAI  # the async client is required for await / async for

async def async_code_generation():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    prompt = (
        "Generate a JavaScript function to reverse a string. "
        "Return only the code."
    )
    # stream=True yields chunks as they arrive instead of one full response
    stream = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    async for chunk in stream:
        print(chunk.choices[0].delta.content or "", end="", flush=True)

if __name__ == "__main__":
    asyncio.run(async_code_generation())
output
function reverseString(str) {
    return str.split('').reverse().join('');
}

Troubleshooting

  • If the generated code is incomplete, increase max_tokens in your request.
  • If the output includes explanations, state explicitly in the prompt that the model should return only code.
  • For syntax errors, specify the programming language explicitly in the prompt.
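The fixes above map to request parameters and prompt wording. Here is a sketch of a helper that assembles both (the `build_codegen_request` function and its defaults are illustrative, not part of the SDK):

```python
def build_codegen_request(prompt: str, language: str,
                          model: str = "gpt-4o-mini",
                          max_tokens: int = 1024) -> dict:
    """Assemble chat-completion kwargs that apply the troubleshooting tips:
    a generous max_tokens, an explicit language, and a code-only instruction."""
    full_prompt = (
        f"Write {language} code for the following task. "
        "Return only the code, with no explanations or markdown fences.\n\n"
        f"{prompt}"
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": full_prompt}],
        "max_tokens": max_tokens,  # raise this if output is truncated
    }
```

You would then call `client.chat.completions.create(**build_codegen_request("reverse a string", "JavaScript"))`, keeping all prompt conventions in one place.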

Key Takeaways

  • Be explicit and detailed in your prompt to get accurate code output.
  • Use examples or specify output format to reduce ambiguity.
  • Leverage streaming and async calls for efficient code generation workflows.
Verified 2026-04 · gpt-4o-mini