How-to · Beginner · 3 min read

How to use Gemini for code generation

Quick answer
Use the OpenAI Python SDK against Google's OpenAI-compatible Gemini endpoint: point the client's base_url at https://generativelanguage.googleapis.com/v1beta/openai/, authenticate with a Gemini API key read from os.environ, and call client.chat.completions.create with a user message describing the code you want. Both gemini-1.5-pro and gemini-2.0-flash work as model names.

PREREQUISITES

  • Python 3.8+
  • Gemini API key (free tier works; create one in Google AI Studio)
  • pip install openai>=1.0

Setup

Install the official OpenAI Python SDK and set your Gemini API key as an environment variable. The SDK talks to Gemini through Google's OpenAI-compatible endpoint, so no Google-specific client library is required.

  • Install SDK: pip install "openai>=1.0"
  • Set environment variable in your shell: export GEMINI_API_KEY='your_api_key_here'
bash
pip install "openai>=1.0"
export GEMINI_API_KEY='your_api_key_here'

Step by step

Use the OpenAI client to call the Gemini model for code generation. Provide a clear prompt describing the code you want.

python
import os
from openai import OpenAI

# Point the OpenAI client at Google's OpenAI-compatible Gemini endpoint
# and authenticate with a Gemini API key.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

response = client.chat.completions.create(
    model="gemini-1.5-pro",
    messages=[
        {"role": "user", "content": "Write a Python function that returns the Fibonacci sequence up to n."}
    ]
)

print(response.choices[0].message.content)
output
def fibonacci(n):
    sequence = []
    a, b = 0, 1
    while a <= n:
        sequence.append(a)
        a, b = b, a + b
    return sequence
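Model replies often wrap the generated code in Markdown fences, which you will want to strip before saving the result to a file. A minimal sketch of such a helper (the extract_code name and the regex are my own, not part of any SDK):

```python
import re

def extract_code(text: str) -> str:
    """Return the body of the first ```-fenced code block in text,
    or the stripped text itself if no fence is present."""
    match = re.search(r"```[\w+-]*\n(.*?)```", text, re.DOTALL)
    return match.group(1).rstrip() if match else text.strip()
```

Call it on response.choices[0].message.content before writing the code to disk.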

Common variations

You can switch to gemini-2.0-flash for faster responses, use streaming for real-time output, or make async calls with Python's asyncio via the SDK's AsyncOpenAI client.

python
import os
import asyncio
from openai import AsyncOpenAI

# In openai>=1.0, async calls use the AsyncOpenAI client and an awaited
# chat.completions.create (the old acreate method was removed).
client = AsyncOpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
)

async def generate_code():
    response = await client.chat.completions.create(
        model="gemini-2.0-flash",
        messages=[{"role": "user", "content": "Generate a JavaScript function to reverse a string."}]
    )
    print(response.choices[0].message.content)

asyncio.run(generate_code())
output
function reverseString(str) {
    return str.split('').reverse().join('');
}

Troubleshooting

  • If you get authentication errors, verify your GEMINI_API_KEY environment variable is set and that the key is valid in Google AI Studio.
  • If the model is not found, confirm you are using a valid Gemini model name like gemini-1.5-pro or gemini-2.0-flash, and that base_url points at the Gemini endpoint.
  • For rate limits, implement exponential backoff or check your usage quota in Google AI Studio.
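For the rate-limit case, a generic retry wrapper with exponential backoff looks roughly like this. The with_backoff name and the broad Exception catch are placeholders; in real code, catch the specific rate-limit exception your SDK raises:

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=1.0):
    """Call fn(), retrying on failure with exponentially growing delays.

    Sketch only: narrow the except clause to your SDK's rate-limit
    error (e.g. openai.RateLimitError) rather than bare Exception.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            # Wait 1s, 2s, 4s, ... plus jitter to avoid retry stampedes.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.25))

# Example: with_backoff(lambda: client.chat.completions.create(...))
```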

Key Takeaways

  • Use the OpenAI Python SDK with gemini-1.5-pro or gemini-2.0-flash via Google's OpenAI-compatible endpoint.
  • Always supply your Gemini API key securely via an environment variable (GEMINI_API_KEY) rather than hard-coding it.
  • Async and streaming calls enable more interactive code generation workflows.
  • Clear, specific prompts yield better code generation results.
  • Check model names and API key if you encounter errors.
Verified 2026-04 · gemini-1.5-pro, gemini-2.0-flash