How to use the Hugging Face question answering pipeline
Quick answer
Use the Hugging Face transformers library's pipeline with the task set to "question-answering". Provide a question and a context string to get an answer extracted from the context text.

Prerequisites
- Python 3.8+
- pip install transformers>=4.30.0
- pip install torch (or tensorflow)
- Basic Python knowledge
Setup
Install the transformers library and a deep learning backend such as torch or tensorflow into your Python environment.
pip install transformers torch

Step by step
Use the pipeline function from transformers to create a question answering pipeline. Pass a question and context string to get the extracted answer.
from transformers import pipeline
# Initialize the question answering pipeline
qa_pipeline = pipeline("question-answering")
# Define the context and question
context = "The Apollo 11 mission was the first to land humans on the Moon in 1969."
question = "When did Apollo 11 land on the Moon?"
# Get the answer
result = qa_pipeline(question=question, context=context)
print(f"Answer: {result['answer']}")

Output
Answer: 1969
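The pipeline returns more than the answer string: the result is a dict that also carries a confidence score and the character offsets of the answer span inside the context. A quick sketch of inspecting it, reusing the example above:

```python
from transformers import pipeline

# Create the QA pipeline (downloads the default extractive QA model on first run)
qa_pipeline = pipeline("question-answering")

context = "The Apollo 11 mission was the first to land humans on the Moon in 1969."
question = "When did Apollo 11 land on the Moon?"

result = qa_pipeline(question=question, context=context)

# The result dict holds the answer text, a confidence score, and span offsets
print(result["answer"])                        # extracted answer text
print(round(result["score"], 3))               # model confidence in [0, 1]
print(context[result["start"]:result["end"]])  # slicing the context reproduces the answer
```

The start/end offsets are handy when you need to highlight the answer in the original text rather than just display it.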
Common variations
- Specify a different pretrained model by passing model="deepset/roberta-base-squad2" to pipeline.
- Use GPU acceleration by installing torch with CUDA support.
- Run asynchronously with asyncio and transformers if needed.
from transformers import pipeline
# Using a specific model
qa_pipeline = pipeline("question-answering", model="deepset/roberta-base-squad2")
context = "Python is a popular programming language created by Guido van Rossum."
question = "Who created Python?"
result = qa_pipeline(question=question, context=context)
print(f"Answer: {result['answer']}")

Output
Answer: Guido van Rossum
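The GPU bullet above can be made concrete with the device argument of pipeline. A minimal sketch that picks the first CUDA GPU when one is available and falls back to CPU otherwise (device index 0 for the first GPU is the usual convention; -1 means CPU in the pipeline API):

```python
import torch
from transformers import pipeline

# Pick the first CUDA GPU if available; device=-1 means run on CPU
device = 0 if torch.cuda.is_available() else -1

qa_pipeline = pipeline("question-answering", device=device)

context = "Python is a popular programming language created by Guido van Rossum."
result = qa_pipeline(question="Who created Python?", context=context)
print(result["answer"])
```

Because the pipeline is extractive, the answer is always a span taken from the context, regardless of which device ran the model.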
Troubleshooting
- If the model fails to load (typically an OSError saying the model identifier can't be found), verify the model name and your internet connection.
- For slow performance, ensure you have a compatible GPU and the correct torch version installed.
- If answers are incorrect, try a different pretrained model or provide more detailed context.
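When answers look wrong, it can also help to check the confidence score and, with a SQuAD 2.0-style model such as deepset/roberta-base-squad2, to let the pipeline return an empty answer when the context genuinely doesn't contain one. A sketch (the 0.1 score threshold is purely illustrative):

```python
from transformers import pipeline

# A SQuAD 2.0-style model can predict "no answer" for unanswerable questions
qa_pipeline = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "Python is a popular programming language created by Guido van Rossum."
question = "Who created Java?"  # not answerable from this context

result = qa_pipeline(
    question=question,
    context=context,
    handle_impossible_answer=True,  # allow an empty answer instead of a forced guess
)

# An empty answer or a very low score suggests the context lacks the answer
if not result["answer"] or result["score"] < 0.1:  # illustrative threshold
    print("No confident answer found in the context")
else:
    print(result["answer"])
```

Without handle_impossible_answer=True, an extractive pipeline always returns some span from the context, even when the question cannot actually be answered from it.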
Key Takeaways
- Use Hugging Face's pipeline with task "question-answering" for easy QA integration.
- Provide both question and context strings to extract precise answers.
- Specify pretrained models to improve accuracy or adapt to domain-specific data.
- Install transformers and a backend like torch for best performance.
- Troubleshoot by checking model names, dependencies, and context quality.