How to classify support tickets with an LLM
Quick answer
Use a large language model such as gpt-4o via the OpenAI Python SDK to classify support tickets: send the ticket text as a prompt and instruct the model to output only the category. This approach requires a clearly worded prompt and a small amount of parsing to extract the classification label from the model's response.

Prerequisites

- Python 3.8+
- OpenAI API key (free tier works)
- pip install "openai>=1.0"
Setup
Install the official openai Python SDK and set your API key as an environment variable for secure authentication.

```shell
pip install "openai>=1.0"
```

Output:

```
Collecting openai
Downloading openai-1.x.x-py3-none-any.whl
Installing collected packages: openai
Successfully installed openai-1.x.x
```
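Since every example below reads the key from the environment, it can help to fail fast with a clear message when it is missing. This is a minimal sketch; `require_api_key` is an illustrative helper name, not part of the openai SDK.

```python
import os

def require_api_key(env=os.environ):
    """Return the OPENAI_API_KEY value, or raise a clear error if unset."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before running the examples.")
    return key

# Demo with a fake environment dict so the sketch runs anywhere:
print(require_api_key({"OPENAI_API_KEY": "sk-demo"}))  # sk-demo
```

Call `require_api_key()` with no arguments at the top of a script to check the real environment before any API call is made.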
Step by step
This example shows how to classify a support ticket into categories like "Billing", "Technical", or "General" using gpt-4o. The prompt instructs the model to return only the category name.
```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

support_ticket = "My internet connection is slow and keeps dropping intermittently."

prompt = f"Classify the following support ticket into one of these categories: Billing, Technical, General.\nTicket: {support_ticket}\nCategory:"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

category = response.choices[0].message.content.strip()
print(f"Support ticket category: {category}")
```

Output:

```
Support ticket category: Technical
```
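Even with a clear prompt, the model occasionally pads its reply (for example "Category: Technical."), so it is worth normalizing the raw response against the allowed labels before using it downstream. A minimal sketch; `normalize_category` and the fallback label are illustrative choices, not part of the SDK:

```python
ALLOWED = {"billing", "technical", "general"}

def normalize_category(raw: str, fallback: str = "General") -> str:
    """Map the model's raw reply onto a known category, or fall back."""
    cleaned = raw.strip().strip(".").lower()
    for cat in ALLOWED:
        if cat in cleaned:
            return cat.capitalize()
    return fallback

print(normalize_category("Category: Technical."))          # Technical
print(normalize_category("This looks like a billing issue"))  # Billing
print(normalize_category("unsure"))                         # General
```

Feeding `response.choices[0].message.content` through this function makes the pipeline tolerant of minor formatting drift in the model's output.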
Common variations
- Use gpt-4o-mini for faster, cheaper classification with slightly less accuracy.
- Implement async calls with asyncio and await for high-throughput classification.
- Use streaming to process large batches interactively.
```python
import os
import asyncio
from openai import AsyncOpenAI  # the async client is required for await

client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])

async def classify_ticket_async(ticket_text: str) -> str:
    prompt = f"Classify the following support ticket into Billing, Technical, or General.\nTicket: {ticket_text}\nCategory:"
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

async def main():
    ticket = "I need help updating my payment method."
    category = await classify_ticket_async(ticket)
    print(f"Async classified category: {category}")

if __name__ == "__main__":
    asyncio.run(main())
```

Output:

```
Async classified category: Billing
```
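The single-ticket coroutine `classify_ticket_async` extends naturally to batches with asyncio.gather. The sketch below substitutes a hypothetical keyword stub (`classify_stub`) for the real API call so it runs offline; swap the real coroutine in to hit the API.

```python
import asyncio

# Hypothetical stub standing in for classify_ticket_async, so the sketch
# runs without an API key. Replace it with the real coroutine in practice.
async def classify_stub(ticket: str) -> str:
    await asyncio.sleep(0)  # simulate awaiting network I/O
    return "Billing" if "payment" in ticket.lower() else "General"

async def classify_batch(tickets):
    # gather schedules all classifications concurrently on the event loop
    return await asyncio.gather(*(classify_stub(t) for t in tickets))

tickets = [
    "I need help updating my payment method.",
    "What are your support hours?",
]
print(asyncio.run(classify_batch(tickets)))  # ['Billing', 'General']
```

Because gather preserves input order, the returned list lines up index-for-index with the submitted tickets.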
Troubleshooting
- If the model returns unexpected text, ensure your prompt clearly instructs it to output only the category name.
- If you get authentication errors, verify that your OPENAI_API_KEY environment variable is set correctly.
- For rate limits, implement exponential backoff or use a smaller model such as gpt-4o-mini.
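The backoff advice above can be sketched as a small retry wrapper. `with_backoff` and the flaky demo function are illustrative, not part of the openai SDK; real code would catch `openai.RateLimitError` rather than a bare Exception.

```python
import random
import time

def with_backoff(fn, max_retries=5, base_delay=0.1):
    """Call fn, retrying with exponential backoff plus jitter on failure."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # real code would catch openai.RateLimitError
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # wait base_delay, 2x, 4x, ... plus a little random jitter
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.05))

# Demo with a hypothetical flaky call that fails twice, then succeeds.
calls = {"n": 0}

def flaky_classify():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("simulated rate limit")
    return "Technical"

print(with_backoff(flaky_classify))  # Technical
```

Wrapping each `client.chat.completions.create` call this way lets batch jobs ride out transient rate-limit errors instead of failing outright.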
Key Takeaways
- Use clear, concise prompts instructing the LLM to output only the classification label.
- The OpenAI Python SDK with the gpt-4o or gpt-4o-mini models is ideal for support ticket classification.
- Async and streaming calls improve throughput for batch classification tasks.
- Always secure your API key via environment variables to avoid leaks.
- Prompt engineering is critical to get consistent, parseable classification results.