How to run a local AI assistant on your PC
Quick answer
Use Ollama to run AI models locally on your PC by installing the Ollama CLI and downloading supported models. You can then start a local AI assistant with `ollama run` and your chosen model, enabling private, offline AI interactions.

Prerequisites

- Windows 10/11 or macOS
- Python 3.8+ (optional, for scripting)
- Ollama CLI installed
- Internet connection for the initial model download
Set up the Ollama CLI
Install the Ollama CLI to manage and run AI models locally on your PC. Ollama supports Windows and macOS. Download the installer from the official Ollama website and follow the installation instructions.
After installation, verify the install by running `ollama --version` in your terminal or command prompt.
```
ollama --version
```

Example output (your version number will differ):

```
ollama version is 1.0.0
```
Run a local AI assistant
Once Ollama CLI is installed, you can run a local AI assistant by pulling a model and starting an interactive session.
Example: run the llama2 model locally with:

```
ollama run llama2
```

Sample session (replies will vary by model and prompt):

```
You: Hello, AI assistant!
AI: Hello! How can I help you today?
```
Step-by-step Python integration
You can also integrate Ollama models into Python scripts using subprocess calls to run the local assistant and capture responses.
```python
import subprocess

# Run a local model once and capture its reply.
# The prompt is passed as a positional argument;
# `ollama run` does not take a --prompt flag.
command = ["ollama", "run", "llama2", "What is the weather today?"]
result = subprocess.run(command, capture_output=True, text=True)
print(result.stdout)
```

Note that the reply will vary, and a local model has no access to live weather data, so expect a generic answer rather than a real forecast.
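If you need the subprocess call to be non-blocking (for example inside an async application), the same idea can be sketched with `asyncio`. This is an illustrative sketch, not part of the Ollama API; the helper name `ask_async` is our own:

```python
import asyncio

async def ask_async(model: str, prompt: str) -> str:
    """Run `ollama run <model> <prompt>` without blocking the event loop."""
    proc = await asyncio.create_subprocess_exec(
        "ollama", "run", model, prompt,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    return out.decode()

# Usage (requires Ollama installed and the model pulled):
# reply = asyncio.run(ask_async("llama2", "Hello!"))
```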
Common variations
- Use different models such as `llama2`, `mistral`, or `falcon` by specifying the model name in `ollama run`.
- Run Ollama in server mode for API access with `ollama serve`.
- Integrate with Python asynchronously using `asyncio.create_subprocess_exec` for non-blocking calls.
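When running in server mode with `ollama serve`, Ollama exposes an HTTP API on localhost port 11434. The sketch below posts a prompt to the `/api/generate` endpoint (endpoint and payload shape per Ollama's documented API; the helper names `build_payload` and `ask` are our own):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default `ollama serve` address

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks the server for a single JSON reply instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """POST a prompt to a locally running `ollama serve` and return the reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires `ollama serve` running and the model pulled):
# print(ask("llama2", "Say hello in one sentence."))
```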
Troubleshooting
- If `ollama run` fails, ensure the model is downloaded by running `ollama pull <model-name>`.
- Check that your PATH environment variable includes the Ollama CLI directory.
- For permission errors, run the terminal as administrator or with elevated privileges.
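To check the PATH point programmatically, Python's standard library can locate the binary. A small diagnostic sketch:

```python
import shutil

# shutil.which returns the executable's full path, or None if it is not on PATH.
path = shutil.which("ollama")
if path is None:
    print("ollama not found on PATH; add its install directory to PATH.")
else:
    print(f"ollama found at {path}")
```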
Key Takeaways
- Install Ollama CLI to run AI models locally on your PC without internet after initial setup.
- Use `ollama run <model>` to start an interactive local AI assistant session.
- Integrate Ollama with Python via subprocess calls for custom local AI workflows.