How-to · Beginner · 3 min read

How to use Ollama in the terminal

Quick answer
Use the Ollama command-line interface (CLI) to run AI models locally. Install Ollama, then run ollama run model_name to chat with a model directly in your terminal.

PREREQUISITES

  • macOS or Linux terminal
  • Ollama installed (https://ollama.com/docs/installation)
  • Basic command line knowledge

Setup

Install Ollama on your system by following the official instructions. For macOS, use the installer or Homebrew. For Linux, use the official install script or download the binary from the releases page. After installation, verify it works by running ollama --version.

bash
brew install ollama
ollama --version
output
ollama version 0.1.0

Step by step

Run an AI model in your terminal using Ollama CLI. For example, to run the llama2 model, use the ollama run command and type your prompt interactively.

bash
ollama run llama2

# Ollama downloads the model on first use, then opens an
# interactive session. Type your prompt at the >>> prompt:
>>> Hello Ollama, how do I use you in terminal?

# Type /bye to exit the session.
output
>>> Hello Ollama, how do I use you in terminal?
You can run me with `ollama run llama2` and type your prompts directly.
(model responses will vary)

Common variations

You can specify different models by changing the model name in ollama run model_name. Use ollama list to see installed models. For scripting, pass the prompt as an argument (ollama run model_name "your prompt") or pipe input via stdin for non-interactive use.

bash
ollama list
ollama run llama2 "Summarize AI in one sentence."
output
NAME            ID              SIZE    MODIFIED
llama2:latest   78e26419b446    3.8 GB  2 weeks ago
mistral:latest  61e88e884507    4.1 GB  3 days ago

AI is the simulation of human intelligence by machines.
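When scripting, the ollama list output can be parsed to iterate over installed models. A minimal sketch, assuming the tabular NAME/ID/SIZE/MODIFIED layout; the sample data is hard-coded here for illustration, where a real script would capture the command's output instead:

```shell
#!/bin/sh
# Illustrative stand-in for `ollama list` output (a real script would
# use: models=$(ollama list)):
models='NAME            ID              SIZE    MODIFIED
llama2:latest   78e26419b446    3.8 GB  2 weeks ago
mistral:latest  61e88e884507    4.1 GB  3 days ago'

# Skip the header row and print just the model names:
echo "$models" | awk 'NR > 1 { print $1 }'

# Each name could then be used non-interactively, e.g.:
#   ollama run "$name" "Summarize AI in one sentence."
```

This prints one model name per line, ready for use in a shell loop.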

Troubleshooting

If the ollama command is not found, ensure the installation directory is on your PATH environment variable. If models fail to load, check your internet connection or reinstall Ollama. Use ollama help for command usage.

bash
ollama help
output
Usage:
  ollama [command]

Available Commands:
  serve     Start ollama
  run       Run a model
  pull      Pull a model from a registry
  list      List models
  help      Help about any command
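The PATH fix above can be checked with a small diagnostic script. This is a sketch; the /usr/local/bin path in the hint is only an example install location, not where Ollama necessarily lives on your system:

```shell
#!/bin/sh
# Report whether the ollama binary is reachable on PATH:
if command -v ollama >/dev/null 2>&1; then
  echo "ollama found at: $(command -v ollama)"
else
  echo "ollama not on PATH"
  # Add the install directory to PATH, e.g. in ~/.bashrc or ~/.zshrc:
  echo '  export PATH="$PATH:/usr/local/bin"'
fi
```

command -v is the portable POSIX way to locate a binary, so this works in both bash and zsh.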

Key Takeaways

  • Use ollama run model_name to interact with AI models in the terminal.
  • Install Ollama and verify with ollama --version before usage.
  • Use ollama list to view available models and switch easily.
  • For scripting, pass the prompt as an argument or pipe input to ollama run for automation.
  • Check ollama help for troubleshooting and command options.
Verified 2026-04