# ragtable-extract

PDF parsing for RAG — extract tables precisely and convert them to HTML. Fast, local, no GPU.
A lightweight Python library that extracts tables from PDFs and converts them to clean HTML, designed for RAG pipelines and LLM retrieval. Runs entirely on CPU with no external APIs or GPU dependencies.
- Precise extraction — Character-level coordinate extraction for accurate cell boundaries
- ≥/≤ symbols — Correctly repositioned by coordinates (avoids pdfplumber's line-end placement bug)
- Merged cells — Proper `rowspan`/`colspan` output for multi-column tables
- Line-wrapped text — Auto-segments and concatenates text across line breaks within cells (no symbols or text scrambled across cells)
- Fangzheng font — Handles full-width character ordering and decimal point encoding issues
- Adaptive config — Per-page tuning based on character metrics
- Fast & local — Pure Python, pdfplumber-based, no GPU required
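To show the shape of output the merged-cell feature targets, here is a minimal, hypothetical sketch of rendering a span-annotated grid as `rowspan`/`colspan` HTML. This is not the library's internal code; the grid format and `grid_to_html` helper are invented for illustration.

```python
# Hypothetical sketch: render a grid with span annotations as HTML.
# Each cell is (text, rowspan, colspan); None marks a position covered
# by a span originating in a neighboring cell.

def grid_to_html(grid):
    rows = []
    for row in grid:
        cells = []
        for cell in row:
            if cell is None:  # covered by an earlier rowspan/colspan
                continue
            text, rs, cs = cell
            attrs = ""
            if rs > 1:
                attrs += f' rowspan="{rs}"'
            if cs > 1:
                attrs += f' colspan="{cs}"'
            cells.append(f"<td{attrs}>{text}</td>")
        rows.append("<tr>" + "".join(cells) + "</tr>")
    return "<table>" + "".join(rows) + "</table>"

# A header cell spanning two rows, and one spanning two columns:
grid = [
    [("Region", 2, 1), ("2023", 1, 2), None],
    [None, ("Q1", 1, 1), ("Q2", 1, 1)],
]
html = grid_to_html(grid)
```

The key design point is that covered positions are kept in the grid as placeholders, so row and column indices stay aligned even when spans cross them.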
- Python 3.8+
- pdfplumber >= 0.10.0
```bash
pip install ragtable-extract -i https://pypi.org/simple
```

Or from source:

```bash
git clone https://github.com/ZhuJiaxin2/ragtable-extract.git
cd ragtable-extract
pip install -e .
```

```python
import ragtable_extract

# Convert PDF tables to an HTML file
ragtable_extract.convert(
    input_path="document.pdf",
    output_path="tables.html",
)

# Or extract as structured data
tables = ragtable_extract.extract(input_path="document.pdf")
for t in tables:
    print(f"Page {t['page']}: {t['html'][:80]}...")
```

Or via the CLI:

```bash
python -m ragtable_extract document.pdf output.html
```

Run the Flask web app to upload PDFs and preview extraction results in the browser:

```bash
pip install flask
python app.py
```

Then open http://localhost:1965 to upload a PDF and view extracted tables.
Run `python test.py` to generate extraction results.
| Function | Description |
|---|---|
| `convert(input_path, output_path, pages?, config?, use_adaptive_config=True)` | Convert PDF tables to an HTML file |
| `extract(input_path, pages?, config?, use_adaptive_config=True)` | Extract tables as a list of dicts with `page`, `html`, `bbox`, `raw` |
| `build_full_html(pdf_filename, tables)` | Build a full HTML document from extracted tables |
| `Config` | Dataclass for tuning extraction (multiline thresholds, font tolerance, etc.) |
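Given the dict shape `extract()` returns (`page`, `html`, `bbox`, `raw`), a RAG pipeline typically wraps each table into a retrievable chunk with metadata. A hypothetical sketch — the chunk format and `make_chunk` helper are illustrative, not part of the library:

```python
# Hypothetical sketch: turn an extracted-table dict (page, html, bbox, raw)
# into a text chunk with metadata, ready for a RAG vector store.

def make_chunk(pdf_name, table):
    return {
        "id": f"{pdf_name}-p{table['page']}-{table['bbox']}",
        "text": table["html"],  # embed the HTML (or a flattened form of it)
        "metadata": {"source": pdf_name, "page": table["page"]},
    }

# Example with a stand-in table dict:
table = {
    "page": 3,
    "html": "<table><tr><td>42</td></tr></table>",
    "bbox": (72, 100, 520, 300),
    "raw": [["42"]],
}
chunk = make_chunk("document.pdf", table)
```

Keeping `page` and `source` in the metadata lets retrieved chunks be traced back to the exact table in the original PDF.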
```python
import ragtable_extract

# Custom config
config = ragtable_extract.Config(
    multiline_cell_top_range=25,
    multiline_y_tolerance=4,
)
tables = ragtable_extract.extract("doc.pdf", config=config)

# Adaptive config (default) — infers parameters from page character metrics
tables = ragtable_extract.extract("doc.pdf")  # use_adaptive_config=True by default
```

```
ragtable-extract/
├── ragtable_extract/    # Core library
│   ├── __init__.py      # convert(), extract()
│   ├── _core.py         # Table extraction logic
│   ├── _config.py       # Config & adaptive metrics
│   ├── _font.py         # Special font handling
│   └── _html.py         # HTML template
├── pyproject.toml
├── demo.py              # CLI demo
└── app.py               # Optional Flask web API
```
```
PDF → pdfplumber.find_tables()
    → Filter chars by bbox, cluster by top (y)
    → Reorder ≥/≤ symbols, fix Fangzheng font
    → Output <table> HTML
```
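The "cluster by top" step above can be sketched as grouping character dicts whose `top` coordinates fall within a tolerance, then ordering each line by `x0` — which is also what fixes symbols that a naive reading order would misplace. This is a simplified stand-in, not the library's actual logic; pdfplumber char dicts do carry `top` and `x0` keys.

```python
# Simplified sketch: group pdfplumber-style char dicts into visual lines
# by clustering their `top` coordinates, then sort each line by `x0` so
# characters come out in left-to-right order regardless of stream order.

def cluster_by_top(chars, y_tol=3):
    lines = []
    for ch in sorted(chars, key=lambda c: c["top"]):
        if lines and abs(ch["top"] - lines[-1][-1]["top"]) <= y_tol:
            lines[-1].append(ch)   # within tolerance: same visual line
        else:
            lines.append([ch])     # start a new line
    return [sorted(line, key=lambda c: c["x0"]) for line in lines]

chars = [
    {"text": "≥", "top": 100.2, "x0": 50},   # symbol slightly offset in y
    {"text": "5", "top": 100.0, "x0": 58},   # same visual line as ≥
    {"text": "万", "top": 114.5, "x0": 50},  # next line
]
lines = cluster_by_top(chars)
```

Sorting by `x0` within each cluster is what puts the `≥` back in front of the `5`, even though the two characters arrive with slightly different `top` values.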
We compare ragtable-extract with opendataloader-pdf on real government PDF tables. Our extraction:
- Multi-column tables — Correctly recognizes complex layouts with merged cells
- Line-wrapped text — Automatically segments and concatenates text across line breaks within cells
- No scrambled cells — Symbols and text stay in their correct cells (e.g. no `1 2` or `万人 %` wrongly merged)
Run `python test_comparison.py` to generate the report, then open `comparison_report.html` for side-by-side results.
MIT