How to classify sentiment with an LLM
Quick answer
Use a large language model such as gpt-4o via the OpenAI Python SDK to classify sentiment by prompting it with a sentiment analysis task. Send the text as a user message and parse the model's response for a sentiment label such as Positive, Negative, or Neutral.

Prerequisites
- Python 3.8+
- An OpenAI API key (free tier works)
- pip install openai>=1.0
Setup
Install the openai Python package and set your API key as an environment variable.
- Install the package:
  pip install openai
- Set the environment variable in your shell:
  export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows)

pip install openai output:
```
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
```
Step by step
This example uses the gpt-4o model to classify sentiment by sending a prompt that instructs the model to label the sentiment of the input text. The response is parsed to extract the sentiment label.
```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

text_to_classify = "I love the new design of your website!"

messages = [
    {"role": "system", "content": "You are a sentiment analysis assistant. Respond with one word: Positive, Negative, or Neutral."},
    {"role": "user", "content": f"Classify the sentiment of this text: '{text_to_classify}'"},
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
)

sentiment = response.choices[0].message.content.strip()
print(f"Sentiment: {sentiment}")
```

Output:
```
Sentiment: Positive
```
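Even with a strict system prompt, the model may occasionally return extra punctuation, different casing, or a short sentence instead of a bare label. A minimal sketch of defensive parsing (the `normalize_sentiment` helper is an illustration, not part of the OpenAI SDK):

```python
def normalize_sentiment(raw: str) -> str:
    """Map a raw model reply onto one canonical label, tolerating noise."""
    cleaned = raw.strip().lower()
    # Substring match handles replies like "Positive." or "It is negative."
    for label in ("positive", "negative", "neutral"):
        if label in cleaned:
            return label.capitalize()
    return "Unknown"  # fall back instead of crashing on unexpected output

print(normalize_sentiment("Positive."))  # Positive
print(normalize_sentiment(" NEUTRAL "))  # Neutral
```

Passing `response.choices[0].message.content` through a helper like this keeps downstream code stable even when the model drifts from the requested format.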
Common variations
You can classify sentiment asynchronously, stream the response for real-time output, or switch to other models such as gpt-4o-mini or Anthropic's claude-3-5-sonnet-20241022. With the Anthropic SDK, pass the system prompt via the system= parameter instead of a system message.
```python
import os
import asyncio
from openai import AsyncOpenAI  # the async client is required for await-able calls

async def classify_sentiment_async(text: str) -> str:
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    messages = [
        {"role": "system", "content": "You are a sentiment analysis assistant. Respond with Positive, Negative, or Neutral."},
        {"role": "user", "content": f"Classify the sentiment of this text: '{text}'"},
    ]
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
    )
    return response.choices[0].message.content.strip()

async def main():
    sentiment = await classify_sentiment_async("The movie was okay, not great but not bad.")
    print(f"Sentiment: {sentiment}")

asyncio.run(main())
```

Output:
```
Sentiment: Neutral
```
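The main payoff of the async variant is classifying many texts concurrently with asyncio.gather rather than awaiting them one at a time. A sketch of the pattern, using a hypothetical classify_stub in place of the real API call so the example is self-contained:

```python
import asyncio

async def classify_stub(text: str) -> str:
    # Stand-in for an async API call; swap in the real classifier in practice.
    await asyncio.sleep(0)
    return "Positive" if "love" in text else "Neutral"

async def classify_batch(texts):
    # gather schedules all coroutines at once and preserves input order
    return await asyncio.gather(*(classify_stub(t) for t in texts))

results = asyncio.run(classify_batch([
    "I love this!",
    "It was fine.",
]))
print(results)  # ['Positive', 'Neutral']
```

With the real client, concurrency is bounded only by your rate limits, so consider an asyncio.Semaphore to cap in-flight requests.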
Troubleshooting
- If you get an authentication error, verify that your OPENAI_API_KEY environment variable is set correctly.
- If the model returns unexpected output, ensure your system prompt clearly instructs the model to respond with a single sentiment label.
- For rate limit errors, consider retrying after a delay or upgrading your API plan.
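The retry-after-a-delay advice above can be sketched as a small exponential-backoff wrapper. The call_model callable is a hypothetical stand-in for your API call; in real code you would catch openai.RateLimitError specifically rather than a bare Exception:

```python
import random
import time

def with_retries(call_model, max_attempts=3, base_delay=1.0):
    """Call call_model, retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return call_model()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Exponential backoff with jitter: base, 2x base, 4x base, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

Usage: with_retries(lambda: client.chat.completions.create(...)). Libraries such as tenacity offer the same pattern with more options if you prefer not to hand-roll it.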
Key Takeaways
- Use the OpenAI SDK with gpt-4o for accurate sentiment classification.
- Provide clear instructions in the system prompt to get consistent sentiment labels.
- Async and streaming calls enable flexible integration in real-time applications.