Earnings call analysis with an LLM
Quick answer
Use a large language model such as gpt-4o to analyze earnings call transcripts: feed the transcript text as input and prompt for a summary, sentiment, or key insights. The OpenAI Python SDK makes it easy to extract financial insights from calls.
Prerequisites
- Python 3.8+
- OpenAI API key (free tier works)
- pip install "openai>=1.0"
Setup
Install the openai Python package and set your API key as an environment variable for secure access.
```shell
pip install "openai>=1.0"
```

output

```
Collecting openai
  Downloading openai-1.x.x-py3-none-any.whl (xx kB)
Installing collected packages: openai
Successfully installed openai-1.x.x
```
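To set the API key as an environment variable, a minimal approach is the one-liner below; the value shown is a placeholder, not a real key.

```shell
# Export the key for the current shell session; add this line to your
# shell profile (e.g. ~/.bashrc) to make it persist across sessions.
export OPENAI_API_KEY="sk-your-key-here"
```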
Step by step
This example loads an earnings call transcript, sends it to gpt-4o with a prompt to summarize key financial highlights, and prints the output.
```python
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

transcript = '''
Good morning everyone, and thank you for joining our Q1 earnings call. This quarter, revenue grew 12% year-over-year driven by strong demand in our cloud services segment. Operating margin improved by 3 percentage points due to cost efficiencies. We raised our full-year guidance reflecting this momentum.
'''

prompt = f"Analyze the following earnings call transcript and summarize the key financial highlights:\n\n{transcript}\n\nSummary:"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

print("Earnings call summary:")
print(response.choices[0].message.content)
```

output

```
Earnings call summary:
The company reported 12% year-over-year revenue growth driven by cloud services, improved operating margin by 3 percentage points due to cost efficiencies, and raised full-year guidance based on strong momentum.
```
Common variations
- Use gpt-4o-mini for faster, lower-cost analysis with slightly less detail.
- Implement async calls with asyncio for concurrent transcript processing.
- Stream partial results with stream=True in chat.completions.create for real-time analysis feedback.
```python
import os
import asyncio
from openai import AsyncOpenAI  # the async client is required for awaitable calls

async def analyze_transcript_async(transcript: str):
    # AsyncOpenAI (not OpenAI) returns awaitable coroutines from .create()
    client = AsyncOpenAI(api_key=os.environ["OPENAI_API_KEY"])
    prompt = f"Summarize key financial points from this earnings call:\n\n{transcript}\n\nSummary:"
    response = await client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print("Async summary:", response.choices[0].message.content)

asyncio.run(analyze_transcript_async("Sample transcript text here."))
```

output

```
Async summary: The earnings call highlighted revenue growth, margin improvement, and raised guidance reflecting positive business trends.
```
Troubleshooting
- If you receive RateLimitError, reduce request frequency or upgrade your API plan.
- For a BadRequestError (named InvalidRequestError before openai 1.0) caused by input length, chunk the transcript into smaller parts before sending.
- Ensure your OPENAI_API_KEY environment variable is set correctly to avoid authentication errors.
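Chunking can be as simple as splitting on paragraph boundaries and packing paragraphs up to a size limit. The `chunk_transcript` helper and the 8,000-character default below are illustrative assumptions, not part of the SDK; a production version might split on token counts instead.

```python
from __future__ import annotations  # allows list[str] annotation on Python 3.8

def chunk_transcript(text: str, max_chars: int = 8000) -> list[str]:
    """Split a long transcript into chunks on paragraph boundaries."""
    paragraphs = text.split("\n\n")
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk when adding this paragraph would exceed the limit.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be summarized separately and the partial summaries combined in a final prompt.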
Key Takeaways
- Use gpt-4o for detailed earnings call analysis and gpt-4o-mini for cost-effective summaries.
- Chunk large transcripts to fit model context windows and avoid input-length errors.
- Async and streaming calls enable scalable, real-time earnings call processing.
- Always load your API key from an environment variable to prevent leaks.