How-to · Beginner · 3 min read

How to configure Semantic Kernel services

Quick answer
Use the semantic_kernel Python package to create a Kernel instance and register AI services such as OpenAIChatCompletion or OpenAITextEmbedding with your API key and model ID. Call add_service() with a unique service_id for each connector, then retrieve and call the services through the kernel (service calls in the Python SDK are async) to run chat completions and embeddings.

PREREQUISITES

  • Python 3.10+ (required by current semantic-kernel releases)
  • OpenAI API key
  • pip install semantic-kernel openai

Setup

Install the semantic-kernel and openai packages and set your OpenAI API key as an environment variable.

  • Install packages: pip install semantic-kernel openai
  • Set environment variable: export OPENAI_API_KEY='your_api_key' (Linux/macOS) or setx OPENAI_API_KEY "your_api_key" (Windows; open a new terminal afterwards, since setx only affects new sessions)
bash
pip install semantic-kernel openai
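Before running any of the examples, it helps to fail fast when the key is missing rather than hit a confusing error deep inside a request. A minimal check along these lines can help (the require_api_key helper is illustrative, not part of Semantic Kernel):

```python
import os


def require_api_key(env=os.environ) -> str:
    """Return the OpenAI API key from an environment mapping, or raise."""
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set; export it before running.")
    return key
```

Calling require_api_key() with no arguments checks the real process environment.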

Step by step

Create a Kernel instance, register chat completion and text embedding services with your OpenAI API key and model IDs via add_service() (each under a unique service_id), then call the services through the kernel. Note that service calls in the Python SDK are asynchronous, so they must be awaited inside a coroutine.

python
import asyncio
import os

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAITextEmbedding,
)


async def main():
    # Initialize the kernel
    kernel = sk.Kernel()

    # Add OpenAI chat completion service
    kernel.add_service(OpenAIChatCompletion(
        service_id="chat",
        api_key=os.environ["OPENAI_API_KEY"],
        ai_model_id="gpt-4o-mini",
    ))

    # Add OpenAI text embedding service
    kernel.add_service(OpenAITextEmbedding(
        service_id="embedding",
        api_key=os.environ["OPENAI_API_KEY"],
        ai_model_id="text-embedding-3-small",
    ))

    # Use the chat service via an ad-hoc prompt (awaited)
    chat_result = await kernel.invoke_prompt(
        prompt="Explain Semantic Kernel in simple terms."
    )
    print("Chat completion:", chat_result)

    # Use the embedding service directly (awaited)
    embedding_service = kernel.get_service("embedding")
    embeddings = await embedding_service.generate_embeddings(
        ["Semantic Kernel is an AI orchestration framework."]
    )
    print("Embedding vector length:", len(embeddings[0]))


asyncio.run(main())
output
Chat completion: Semantic Kernel is an AI orchestration framework that helps developers integrate AI models and tools seamlessly.
Embedding vector length: 1536
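Conceptually, add_service() and get_service() act as a small registry keyed by service_id. The toy sketch below (plain Python, no Semantic Kernel dependency; the class and its internals are illustrative, not the SDK's actual implementation) shows the pattern and why each service_id must be unique:

```python
class ToyServiceRegistry:
    """Illustrative stand-in for the kernel's service registry."""

    def __init__(self):
        self._services = {}

    def add_service(self, service_id: str, service) -> None:
        # Each service_id must be unique within the registry
        if service_id in self._services:
            raise ValueError(f"service_id {service_id!r} is already registered")
        self._services[service_id] = service

    def get_service(self, service_id: str):
        if service_id not in self._services:
            raise KeyError(f"no service registered under {service_id!r}")
        return self._services[service_id]


registry = ToyServiceRegistry()
registry.add_service("chat", "chat-connector")
registry.add_service("embedding", "embedding-connector")
print(registry.get_service("chat"))  # → chat-connector
```

Registering two services under the same ID raises immediately, which is easier to debug than silently overwriting a connector.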

Common variations

You can configure other AI providers by registering their service classes, such as AzureChatCompletion for Azure OpenAI. All service calls in the Python SDK are async, and you can switch models simply by changing the ai_model_id parameter when adding a service. For multi-turn conversations, retrieve a chat service from the kernel and call it with an explicit ChatHistory.

python
import asyncio
import os

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
    OpenAIChatPromptExecutionSettings,
)
from semantic_kernel.contents import ChatHistory


async def async_chat():
    kernel = sk.Kernel()
    kernel.add_service(OpenAIChatCompletion(
        service_id="chat",
        api_key=os.environ["OPENAI_API_KEY"],
        ai_model_id="gpt-4o",
    ))

    # Retrieve the registered service and call it with an explicit history
    chat_service = kernel.get_service("chat")
    history = ChatHistory()
    history.add_user_message("What is the future of AI?")
    response = await chat_service.get_chat_message_content(
        chat_history=history,
        settings=OpenAIChatPromptExecutionSettings(),
    )
    print("Async chat completion:", response)


asyncio.run(async_chat())
output
Async chat completion: The future of AI is promising, with advancements in natural language understanding, automation, and personalized experiences.
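Because every service call is awaitable, independent prompts can be fanned out concurrently with asyncio.gather rather than awaited one at a time. The sketch below uses a stub coroutine in place of the real chat call, so it runs without an API key; fake_complete is a placeholder for an awaited Semantic Kernel request:

```python
import asyncio


async def fake_complete(prompt: str) -> str:
    # Stand-in for an awaited Semantic Kernel chat completion call
    await asyncio.sleep(0.01)
    return f"answer to: {prompt}"


async def main():
    prompts = ["What is a kernel?", "What is a plugin?", "What is a planner?"]
    # Issue all three "requests" concurrently instead of sequentially
    results = await asyncio.gather(*(fake_complete(p) for p in prompts))
    for result in results:
        print(result)


asyncio.run(main())
```

With real network calls, the total latency approaches that of the slowest single request instead of the sum of all of them.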

Troubleshooting

  • If you see KeyError for the API key, ensure OPENAI_API_KEY is set in your environment.
  • If the model is not found, verify the ai_model_id matches a valid OpenAI model name.
  • For network errors, check your internet connection and firewall settings.

Key Takeaways

  • Use Kernel and add_service() to configure AI services in Semantic Kernel.
  • Provide your API key and model ID when adding services like OpenAIChatCompletion or OpenAITextEmbedding.
  • Semantic Kernel supports async calls and multiple AI providers via service configuration.
  • Always verify environment variables and model IDs to avoid common errors.
Verified 2026-04 · gpt-4o-mini, gpt-4o, text-embedding-3-small