Last updated
1/23/2025
Query structured documents using a lightweight LLM workflow
This Blueprint demonstrates how to use open-source models and a simple LLM workflow to answer questions based on structured documents.
It showcases a lightweight alternative to more complex or resource-demanding approaches, such as RAG systems that rely on vector databases, or long-context models with large token windows.
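At a high level, the workflow finds the section most relevant to a question, retrieves its text, and answers from that context alone. A minimal sketch of this idea, with a stub standing in for a real local model (function and variable names here are illustrative, not the Blueprint's actual API):

```python
# Minimal sketch of a find-retrieve-answer workflow over pre-extracted
# sections. `ask_llm` is a placeholder for any local model call.
from typing import Callable, Dict

def answer_question(
    question: str,
    sections: Dict[str, str],
    ask_llm: Callable[[str], str],
) -> str:
    # Find: ask the model which section title best matches the question.
    titles = "\n".join(sections)
    chosen = ask_llm(
        f"Pick the section most relevant to the question.\n"
        f"Sections:\n{titles}\nQuestion: {question}\n"
        f"Answer with the title only."
    ).strip()
    # Retrieve: fall back to the first section if the pick is unknown.
    context = sections.get(chosen, next(iter(sections.values())))
    # Answer: query the model again, grounded in the retrieved section.
    return ask_llm(f"Context:\n{context}\n\nQuestion: {question}")

# Usage with a stubbed model for illustration:
sections = {"Setup": "Each player draws 5 cards.", "Scoring": "Aces are high."}
fake_llm = lambda p: "Setup" if "title only" in p else "Draw 5 cards."
print(answer_question("How many cards do I draw?", sections, fake_llm))
# → Draw 5 cards.
```

Because the model only ever sees one section at a time, the required context window stays small regardless of document length.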
If you encounter any issues with the hosted demo below, try the Blueprint in the GPU-enabled Google Colab Notebook available here.
Preview this Blueprint in action
Hosted demo
Step-by-step walkthrough
Tools used to create this Blueprint
Trusted open-source tools used for this Blueprint.
Choices
Insights into our motivations and key technical decisions throughout the development process.
| Focus | Decision | Rationale | Alternatives Considered | Trade-offs |
| --- | --- | --- | --- | --- |
| Overall Motivation | Build a local-friendly Q&A system for structured docs (e.g. rulebooks). | Enables structured-document question answering without relying on closed APIs, embedding generation, or vector-DB setup. | Full-context API calls; standard RAG. | Performance gap compared to full-context API solutions. |
| Document Pre-processing | Used PyMuPDF4LLM for section extraction. | Extracts structured sections for retrieval-based answers. | Docling (lack of heading hierarchy hurt performance); Marker (slower, required an additional model). | Struggles with visually complex layouts, impacting accuracy. |
| Question Answering Workflow | Find-Retrieve-Answer workflow. | Simpler than RAG and requires a smaller context window than full-context methods. | Agentic retrieval (added too much complexity to the workflow). | Relies on preprocessing quality and the quality of section titles. |
| Model Selection | Qwen2.5-7B-Instruct. | Runs on accessible hardware while maintaining reasonable performance. | DeepSeek R1 distilled models (chain-of-thought lowered accuracy); models in the 1.5B-3B range (lowered accuracy). | Slight reduction in accuracy compared to larger models. |
| Deployment Options | Supports Codespaces, local CLI, local Streamlit app, Google Colab, and Hugging Face Spaces. | Flexible for diverse environments and compute needs. | Single-path setups (e.g. local only). | Maintaining multiple pathways adds complexity for updates and consistency. |
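The pre-processing step converts a PDF to markdown-style text (via PyMuPDF4LLM's `to_markdown`) and splits it into titled sections for later retrieval. A minimal sketch of the splitting stage, assuming `#`-prefixed headings in the extracted text (the helper name is illustrative, not the Blueprint's exact implementation):

```python
# Split markdown-style text (as produced by PyMuPDF4LLM) into
# {title: body} sections keyed on '#' headings.
from typing import Dict

def split_sections(markdown_text: str) -> Dict[str, str]:
    sections: Dict[str, str] = {}
    title = "Preamble"  # catch-all for text before the first heading
    body: list[str] = []
    for line in markdown_text.splitlines():
        if line.lstrip().startswith("#"):
            if body:
                sections[title] = "\n".join(body).strip()
            title = line.lstrip("# ").strip()
            body = []
        else:
            body.append(line)
    if body:
        sections[title] = "\n".join(body).strip()
    return sections

doc = "# Setup\nEach player draws 5 cards.\n# Scoring\nAces are high."
print(split_sections(doc))
# → {'Setup': 'Each player draws 5 cards.', 'Scoring': 'Aces are high.'}
```

This is also where the "relies on preprocessing quality" trade-off shows up: if the extractor misses or garbles headings, the resulting section titles degrade and the Find step has less to work with.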
Ready? Try it yourself!
System Requirements
Windows, macOS, or Linux. Python 3.10 or higher. Minimum RAM: 10 GB. Minimum disk space: 32 GB.
Help Documentation
Detailed guidance on GitHub walking you through this project's installation.
Discussion Points
Get involved in improving the Blueprint by visiting the GitHub Blueprint issues.
Explore Blueprints Extensions
See examples of extended Blueprints unlocking new capabilities, and adjusted configurations enabling tailored solutions, or try it yourself.