How-to · Beginner · 3 min read

How to use the Hugging Face question-answering pipeline

Quick answer
Use the pipeline function from the Hugging Face transformers library with the task set to "question-answering". Provide a question and a context string, and the pipeline extracts the answer span from the context.

PREREQUISITES

  • Python 3.8+
  • pip install "transformers>=4.30.0" (quoted so the shell doesn't treat >= as a redirect)
  • pip install torch (or tensorflow)
  • Basic Python knowledge

Setup

Install the transformers library plus a deep learning backend such as torch or tensorflow; the pipeline needs one of them to run the model.

bash
pip install transformers torch

Step by step

Create a question-answering pipeline with the pipeline function from transformers, then call it with a question and a context string to get the extracted answer.

python
from transformers import pipeline

# Initialize the question answering pipeline
qa_pipeline = pipeline("question-answering")

# Define the context and question
context = "The Apollo 11 mission was the first to land humans on the Moon in 1969."
question = "When did Apollo 11 land on the Moon?"

# Get the answer
result = qa_pipeline(question=question, context=context)

print(f"Answer: {result['answer']}")
output
Answer: 1969
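The pipeline returns more than the answer text: the result dict also carries a confidence score and the character offsets of the answer within the context. The dict below mirrors that shape for the Apollo 11 example; the score value is illustrative, not a real model output.

```python
# Sample result dict in the shape a question-answering pipeline returns;
# the score here is illustrative, not an actual model output.
context = "The Apollo 11 mission was the first to land humans on the Moon in 1969."
result = {"answer": "1969", "score": 0.97, "start": 66, "end": 70}

# 'start' and 'end' are character offsets that index directly into the context
span = context[result["start"]:result["end"]]
print(f"Answer: {span} (score: {result['score']:.2f})")
```

The offsets are handy when you need to highlight the answer inside the original passage rather than just display the extracted string.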

Common variations

  • Specify a different pretrained model by passing model="deepset/roberta-base-squad2" to pipeline.
  • Use GPU acceleration by installing a CUDA-enabled build of torch and passing device=0 to pipeline.
  • In async applications, run the blocking pipeline call in a thread executor (e.g. via asyncio) so it doesn't stall the event loop.
python
from transformers import pipeline

# Using a specific model
qa_pipeline = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "Python is a popular programming language created by Guido van Rossum."
question = "Who created Python?"

result = qa_pipeline(question=question, context=context)
print(f"Answer: {result['answer']}")
output
Answer: Guido van Rossum

Troubleshooting

  • If the model fails to load (typically an OSError complaining that the repository or files cannot be found), verify the model name and your internet connection.
  • For slow performance, ensure you have a compatible GPU and the correct torch version installed.
  • If answers are incorrect, try a different pretrained model or provide more detailed context.
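One practical guard against incorrect answers is to check the score field before trusting the extraction. A small sketch, where the helper name and the 0.5 threshold are arbitrary choices you should tune for your model and data:

```python
def accept_answer(result, threshold=0.5):
    # Treat low-confidence extractions as "no reliable answer" so they
    # can be flagged for review or re-asked with richer context.
    # The 0.5 threshold is a starting point, not a recommended value.
    if result["score"] < threshold:
        return None
    return result["answer"]

print(accept_answer({"answer": "1969", "score": 0.97}))  # confident: kept
print(accept_answer({"answer": "Mars", "score": 0.04}))  # rejected: None
```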

Key Takeaways

  • Use Hugging Face's pipeline with task "question-answering" for easy QA integration.
  • Provide both question and context strings to extract precise answers.
  • Specify pretrained models to improve accuracy or adapt to domain-specific data.
  • Install transformers and a backend like torch for best performance.
  • Troubleshoot by checking model names, dependencies, and context quality.
Verified 2026-04 · deepset/roberta-base-squad2