URGENT QUERY: Run Paddle Models with ONNX #14572
To run PaddleOCR models (PP-OCRv3/v4) with ONNX and achieve live inference for images and videos, here are the steps and answers to your queries: 1. How to run a PaddleOCR model with …
Dear Paddle Community,
I am looking for guidance on running PP-OCRv3/v4 models with ONNX; my goal is live inference on images and videos.
Questions:
Context: I used this guide, https://github.com/PaddlePaddle/PaddleOCR/blob/main/docs/ppocr/infer_deploy/paddle2onnx.en.md, to convert the model to ONNX, but I do not want to run inference through python3 tools/infer/predict_system.py, since that script does not give live results.
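For reference, the conversion command in that guide looks roughly like this (the en_PP-OCRv3 detection model directory and opset version are just from my setup; the recognition and classification models are converted the same way):

paddle2onnx --model_dir ./inference/en_PP-OCRv3_det_infer \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file ./inference/det_onnx/model.onnx \
    --opset_version 11 \
    --enable_onnx_checker True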
I would like to run the ONNX model and get instant results, just like this code that uses PaddleOCR's API:
from paddleocr import PaddleOCR, draw_ocr

ocr = PaddleOCR(use_angle_cls=True, lang='en')  # needs to run only once to download and load the model into memory
img_path = './imgs_en/img_12.jpg'
result = ocr.ocr(img_path, cls=True)
for idx in range(len(result)):
    res = result[idx]
    for line in res:
        print(line)
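For comparison, below is a minimal sketch of the kind of live loop I am hoping to end up with, using onnxruntime directly on the exported detection model. The model path, resize logic, normalisation values, and the 0.3 threshold are my assumptions rather than PaddleOCR's official pipeline, and the DB post-processing that turns the probability map into boxes is only hinted at:

import cv2
import numpy as np
import onnxruntime as ort

# Assumed path to the detection model exported with paddle2onnx above.
sess = ort.InferenceSession("./inference/det_onnx/model.onnx",
                            providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

def preprocess(frame, max_side=960):
    # Resize so both sides are multiples of 32 (the DB detector expects this),
    # then normalise with the ImageNet mean/std that PP-OCR uses.
    h, w = frame.shape[:2]
    scale = min(max_side / max(h, w), 1.0)
    nh = max(int(round(h * scale / 32)) * 32, 32)
    nw = max(int(round(w * scale / 32)) * 32, 32)
    img = cv2.resize(frame, (nw, nh)).astype("float32") / 255.0
    img = (img - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]
    return img.transpose(2, 0, 1)[np.newaxis].astype("float32")

cap = cv2.VideoCapture(0)  # 0 = webcam; pass a video file path for videos
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # The detection model returns a DB probability map of shape (1, 1, H, W).
    prob_map = sess.run(None, {input_name: preprocess(frame)})[0][0, 0]
    # PaddleOCR's DBPostProcess would turn this map into text boxes;
    # here it is only thresholded for display.
    cv2.imshow("text probability", (prob_map > 0.3).astype("uint8") * 255)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()

The recognition model exported the same way would then run on crops of each detected box, but even this detection-only loop is the kind of live behaviour I am after.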
I would appreciate guidance from @GreatV @WenmuZhou @LDOUBLEV @MissPenguin @tink2123 @UserWangZz and others.