|
|
--- |
|
|
license: apache-2.0 |
|
|
library_name: PaddleOCR |
|
|
language: |
|
|
- en |
|
|
- zh |
|
|
pipeline_tag: image-to-text |
|
|
tags: |
|
|
- OCR |
|
|
- PaddlePaddle |
|
|
- PaddleOCR |
|
|
- layout_detection |
|
|
--- |
|
|
|
|
|
# PicoDet-L_layout_17cls |
|
|
|
|
|
## Introduction |
|
|
|
|
|
A layout area localization model that balances efficiency and precision, trained with PicoDet-L on a self-built dataset of Chinese and English papers, magazines, and research reports. It covers 17 common layout categories: Paragraph Title, Image, Text, Number, Abstract, Content, Figure Caption, Formula, Table, Table Caption, References, Document Title, Footnote, Header, Algorithm, Footer, and Seal.
|
|
|
|
|
| Model | mAP(0.5) (%) |
|
|
| --- | --- | |
|
|
|PicoDet-L_layout_17cls | 89.0 | |
|
|
|
|
|
**Note**: PaddleOCR's self-built layout detection evaluation dataset contains 892 images of common document types, including Chinese and English papers, magazines, and research reports.
|
|
|
|
|
## Quick Start |
|
|
|
|
|
### Installation |
|
|
|
|
|
1. PaddlePaddle |
|
|
|
|
|
Please refer to the following commands to install PaddlePaddle using pip: |
|
|
|
|
|
```bash |
|
|
# for CUDA11.8 |
|
|
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu118/ |
|
|
|
|
|
# for CUDA12.6 |
|
|
python -m pip install paddlepaddle-gpu==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cu126/ |
|
|
|
|
|
# for CPU |
|
|
python -m pip install paddlepaddle==3.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/ |
|
|
``` |
|
|
|
|
|
For details about PaddlePaddle installation, please refer to the [PaddlePaddle official website](https://www.paddlepaddle.org.cn/en/install/quick). |
|
|
|
|
|
2. PaddleOCR |
|
|
|
|
|
Install the latest version of the PaddleOCR inference package from PyPI: |
|
|
|
|
|
```bash |
|
|
python -m pip install paddleocr |
|
|
``` |
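
As an optional sanity check (not part of the official installation steps), you can confirm that both packages import correctly and that PaddlePaddle can run:

```python
# Optional sanity check: confirm the packages import and PaddlePaddle works.
import paddle
import paddleocr  # noqa: F401  (import check only)

print(paddle.__version__)
paddle.utils.run_check()  # basic compute check provided by PaddlePaddle
```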
|
|
|
|
|
|
|
|
### Model Usage |
|
|
|
|
|
You can quickly experience the functionality with a single command: |
|
|
|
|
|
```bash |
|
|
paddleocr layout_detection \ |
|
|
--model_name PicoDet-L_layout_17cls \ |
|
|
--threshold 0.6 \ |
|
|
-i https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/N5C68HPVAI-xQAWTxpbA6.jpeg |
|
|
``` |
|
|
|
|
|
You can also integrate the model inference of the layout detection module into your project. Before running the following code, please download the sample image to your local machine. |
|
|
|
|
|
```python |
|
|
from paddleocr import LayoutDetection |
|
|
|
|
|
model = LayoutDetection(model_name="PicoDet-L_layout_17cls") |
|
|
output = model.predict("N5C68HPVAI-xQAWTxpbA6.jpeg", batch_size=1, threshold=0.6) |
|
|
for res in output: |
|
|
res.print() |
|
|
res.save_to_img(save_path="./output/") |
|
|
res.save_to_json(save_path="./output/res.json") |
|
|
``` |
|
|
|
|
|
After running, the result is as follows:
|
|
|
|
|
```json |
|
|
{'res': {'input_path': '/root/.paddlex/predict_input/N5C68HPVAI-xQAWTxpbA6.jpeg', 'page_index': None, 'boxes': [{'cls_id': 2, 'label': 'text', 'score': 0.9635457992553711, 'coordinate': [388.77502, 734.85455, 711.20465, 851.34924]}, {'cls_id': 2, 'label': 'text', 'score': 0.9560548067092896, 'coordinate': [35.81832, 645.7974, 361.34726, 849.03186]}, {'cls_id': 8, 'label': 'table', 'score': 0.953098714351654, 'coordinate': [436.3866, 107.82475, 663.87585, 313.641]}, {'cls_id': 8, 'label': 'table', 'score': 0.9468971490859985, 'coordinate': [74.18633, 105.53037, 324.5257, 298.42532]}, {'cls_id': 2, 'label': 'text', 'score': 0.9267896413803101, 'coordinate': [380.26923, 22.14476, 711.83966, 79.47579]}, {'cls_id': 2, 'label': 'text', 'score': 0.9177922606468201, 'coordinate': [386.36615, 498.08298, 713.29956, 698.7275]}, {'cls_id': 2, 'label': 'text', 'score': 0.9103341698646545, 'coordinate': [33.40257, 349.2482, 361.91498, 615.99664]}, {'cls_id': 2, 'label': 'text', 'score': 0.9034966230392456, 'coordinate': [36.185455, 332.4166, 144.946, 345.4999]}, {'cls_id': 2, 'label': 'text', 'score': 0.8902557492256165, 'coordinate': [385.55212, 347.35916, 715.6368, 460.86615]}, {'cls_id': 2, 'label': 'text', 'score': 0.7871182560920715, 'coordinate': [34.448185, 628.47015, 188.9675, 640.77356]}, {'cls_id': 2, 'label': 'text', 'score': 0.7396460771560669, 'coordinate': [428.26053, 477.82913, 692.68634, 490.73227]}, {'cls_id': 2, 'label': 'text', 'score': 0.7116910219192505, 'coordinate': [33.61207, 21.087503, 360.95645, 80.145096]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.6801344752311707, 'coordinate': [394.5478, 716.9035, 526.30695, 730.278]}]}} |
|
|
``` |
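
If you need to consume the saved result programmatically, here is a minimal sketch that filters the detected regions by category. It assumes the `./output/res.json` file written by `save_to_json` above; the saved file may or may not wrap the fields in a top-level `res` key, so the sketch handles both cases.

```python
import json

# Minimal sketch: read the saved detection result and keep high-confidence tables.
# Assumes ./output/res.json was written by res.save_to_json above.
with open("./output/res.json", "r", encoding="utf-8") as f:
    data = json.load(f)
result = data.get("res", data)  # unwrap the optional "res" key

tables = [b for b in result["boxes"] if b["label"] == "table" and b["score"] > 0.9]
for box in tables:
    print(box["label"], round(box["score"], 3), box["coordinate"])
```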
|
|
|
|
|
The visualized image is as follows: |
|
|
|
|
|
 |
|
|
|
|
|
For details about usage commands and parameter descriptions, please refer to the [documentation](https://paddlepaddle.github.io/PaddleOCR/latest/en/version3.x/module_usage/layout_detection.html#iii-quick-integration).
|
|
|
|
|
### Pipeline Usage |
|
|
|
|
|
The capability of a single model is limited, but a pipeline composed of several models can better solve difficult problems in real-world scenarios.
|
|
|
|
|
#### PP-TableMagic (table_recognition_v2) |
|
|
|
|
|
The General Table Recognition v2 pipeline (PP-TableMagic) is designed to tackle table recognition tasks, identifying tables in images and outputting them in HTML format. PP-TableMagic includes the following 8 modules: |
|
|
|
|
|
* Table Structure Recognition Module |
|
|
* Table Classification Module |
|
|
* Table Cell Detection Module |
|
|
* Text Detection Module |
|
|
* Text Recognition Module |
|
|
* Layout Region Detection Module (optional) |
|
|
* Document Image Orientation Classification Module (optional) |
|
|
* Text Image Unwarping Module (optional) |
|
|
|
|
|
You can quickly experience the PP-TableMagic pipeline with a single command. |
|
|
|
|
|
```bash |
|
|
paddleocr table_recognition_v2 -i https://cdn-uploads.huggingface.co/production/uploads/63d7b8ee07cd1aa3c49a2026/tuY1zoUdZsL6-9yGG0MpU.jpeg \ |
|
|
--layout_detection_model_name PicoDet-L_layout_17cls \ |
|
|
--use_doc_orientation_classify False \ |
|
|
--use_doc_unwarping False \ |
|
|
--save_path ./output \ |
|
|
--device gpu:0 |
|
|
|
|
|
``` |
|
|
|
|
|
|
|
|
If `save_path` is specified, the visualization results will be saved under `save_path`.
|
|
|
|
|
The command-line method is for a quick experience. For project integration, only a few lines of code are needed:
|
|
|
|
|
|
|
|
```python |
|
|
from paddleocr import TableRecognitionPipelineV2 |
|
|
|
|
|
pipeline = TableRecognitionPipelineV2( |
|
|
    layout_detection_model_name="PicoDet-L_layout_17cls",
|
|
use_doc_orientation_classify=False, # Use use_doc_orientation_classify to enable/disable document orientation classification model |
|
|
use_doc_unwarping=False, # Use use_doc_unwarping to enable/disable document unwarping module |
|
|
device="gpu:0", # Use device to specify GPU for model inference |
|
|
) |
|
|
|
|
|
output = pipeline.predict("tuY1zoUdZsL6-9yGG0MpU.jpeg") |
|
|
for res in output: |
|
|
res.print() ## Print the predicted structured output |
|
|
res.save_to_img("./output/") |
|
|
res.save_to_xlsx("./output/") |
|
|
res.save_to_html("./output/") |
|
|
res.save_to_json("./output/") |
|
|
``` |
|
|
|
|
|
The default layout detection model used in this pipeline is `PP-DocLayout-L`, so you need to specify `PicoDet-L_layout_17cls` via the `layout_detection_model_name` argument. You can also use a local model directory via the `layout_detection_model_dir` argument. For details about usage commands and parameter descriptions, please refer to the [documentation](https://paddlepaddle.github.io/PaddleOCR/main/en/version3.x/pipeline_usage/table_recognition_v2.html#2-quick-start).
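
For example, here is a minimal sketch of pointing the pipeline at a locally stored copy of the model; the `./PicoDet-L_layout_17cls` path below is a hypothetical local directory, so replace it with wherever you keep the model files.

```python
from paddleocr import TableRecognitionPipelineV2

# Minimal sketch: load the layout detection model from a local directory
# instead of fetching it by name. The path is a hypothetical example.
pipeline = TableRecognitionPipelineV2(
    layout_detection_model_name="PicoDet-L_layout_17cls",
    layout_detection_model_dir="./PicoDet-L_layout_17cls",  # hypothetical local path
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
)
```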
|
|
|
|
|
## Links |
|
|
|
|
|
[PaddleOCR Repo](https://github.com/paddlepaddle/paddleocr) |
|
|
|
|
|
[PaddleOCR Documentation](https://paddlepaddle.github.io/PaddleOCR/latest/en/index.html) |
|
|
|
|
|
|