How to build conditional AI workflows
Quick answer
Build conditional AI workflows by integrating AI API calls with Python control flow structures like if and else. Use the AI model's output to decide subsequent steps, enabling dynamic branching and multi-step interactions within your application.

Prerequisites

- Python 3.8+
- OpenAI API key (free tier works)
- pip install openai>=1.0
Setup
Install the openai Python package and set your API key as an environment variable for secure authentication.
pip install openai

Output:

Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
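Before running the examples, export your key so the client can read it from the environment (replace the placeholder with your own key):

```shell
# Set the key for the current shell session (macOS/Linux)
export OPENAI_API_KEY="sk-..."

# On Windows PowerShell, use:
# $env:OPENAI_API_KEY = "sk-..."
```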
Step by step
This example demonstrates a conditional AI workflow using gpt-4o. The AI response determines the next action: if the response mentions a joke, the code requests one with a second call; otherwise, it prints the model's original informative answer.
import os

from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Initial user prompt
user_input = "Tell me something funny or informative."

# Step 1: Ask AI what the user wants
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": user_input}]
)
answer = response.choices[0].message.content.strip().lower()

# Step 2: Conditional logic based on AI response
if "joke" in answer or "funny" in answer:
    # Request a joke
    joke_response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Tell me a short joke."}]
    )
    print("Joke:", joke_response.choices[0].message.content.strip())
else:
    # Provide informative answer
    print("Answer:", response.choices[0].message.content.strip())

Output:
Answer: Here's an interesting fact about space: The largest volcano in the solar system is Olympus Mons on Mars, which is about 13.6 miles high.
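Keyword checks on free-form output are brittle. One way to tighten the branch is to ask the model for a one-word label via a system instruction and map that label to a branch. A minimal sketch, assuming an OPENAI_API_KEY is set; `classify_intent` and `route` are illustrative helpers, not part of the OpenAI SDK:

```python
def classify_intent(user_input: str) -> str:
    """Ask the model for a one-word label; assumes OPENAI_API_KEY is set."""
    from openai import OpenAI  # imported lazily so route() below works offline
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word: 'joke' or 'fact'."},
            {"role": "user", "content": user_input},
        ],
    )
    return response.choices[0].message.content


def route(classification: str) -> str:
    """Map the model's (possibly noisy) reply onto a workflow branch."""
    return "joke" if "joke" in classification.strip().lower() else "fact"
```

Because `route` tolerates whitespace, casing, and stray punctuation, the branching stays stable even when the model does not follow the one-word instruction exactly.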
Common variations
- Use async calls with asyncio and await for non-blocking workflows.
- Stream responses for real-time output using stream=True in chat.completions.create.
- Switch to models like gpt-4o-mini for faster, cheaper inference.
- Implement multi-step workflows by chaining multiple AI calls with branching based on intermediate outputs.
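The chaining variation in the last bullet stays tidy if you separate the routing table from the API calls. A minimal sketch; the FOLLOW_UP_PROMPTS table and next_prompt helper are illustrative names, not part of the OpenAI SDK:

```python
# Hypothetical routing table: intent label -> the next prompt to send.
FOLLOW_UP_PROMPTS = {
    "joke": "Tell me a short joke.",
    "fact": "Share one surprising fact.",
}


def next_prompt(intent: str) -> str:
    """Pick the follow-up prompt for a detected intent, with a safe default."""
    return FOLLOW_UP_PROMPTS.get(
        intent.strip().lower(),
        "Please answer the original question directly.",
    )
```

Each chained step then passes next_prompt(intent) as the content of the next chat.completions.create call, so adding a new branch means adding one dictionary entry rather than another if/elif arm.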
import asyncio
import os

from openai import AsyncOpenAI  # the synchronous OpenAI client cannot be awaited

async def conditional_workflow():
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    user_input = "Should I hear a joke or a fact?"

    # Async call
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_input}]
    )
    answer = response.choices[0].message.content.lower()

    if "joke" in answer:
        joke_response = await client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": "Tell me a joke."}]
        )
        print("Joke:", joke_response.choices[0].message.content)
    else:
        print("Answer:", response.choices[0].message.content)

asyncio.run(conditional_workflow())

Output:
Answer: Here's a quick fact: Honey never spoils and can last thousands of years.
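The streaming variation mentioned above can be sketched the same way: pass stream=True and print each text delta as it arrives. stream_reply and join_deltas are illustrative helpers (not SDK functions), and the call assumes an OPENAI_API_KEY is set:

```python
def join_deltas(deltas):
    """Accumulate streamed text deltas into the full reply (None deltas skipped)."""
    return "".join(d for d in deltas if d)


def stream_reply(prompt: str) -> str:
    """Print a reply token-by-token as it streams; return the full text."""
    from openai import OpenAI  # lazy import so join_deltas() works offline
    client = OpenAI()
    stream = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    pieces = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
            pieces.append(delta)
    print()
    return join_deltas(pieces)
```

The returned string can feed the same if/else branching as before, so streaming changes how output is displayed without changing the workflow logic.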
Troubleshooting
- If you get authentication errors, verify your OPENAI_API_KEY environment variable is set correctly.
- For unexpected AI outputs, refine your prompts or add system instructions to guide the model.
- Timeouts may occur on slow networks; use async calls or increase timeout settings.
- Check model availability and names as they may update; always verify current model IDs.
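For the timeout point above, the v1 SDK accepts timeout and max_retries arguments on the client constructor (e.g. OpenAI(timeout=30.0, max_retries=3)). If you need custom backoff beyond that, a generic stdlib wrapper is sketched below; call_with_retries is an illustrative helper, not an SDK function:

```python
import time


def call_with_retries(fn, attempts=3, backoff=1.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(backoff * 2 ** attempt)


# Usage sketch: wrap the API call in a zero-argument callable, e.g.
# call_with_retries(lambda: client.chat.completions.create(...))
```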
Key Takeaways
- Use Python control flow to branch AI calls based on model outputs for dynamic workflows.
- Async and streaming APIs enable responsive, scalable conditional AI applications.
- Always validate environment variables and model names to avoid runtime errors.