When starting the service with web_service.py, can an ONNX model be configured in config.yml? #12937
tigflanker started this conversation in Ideas & Features
Replies: 1 comment
-
Because in real-world use we still want a service-based deployment, but I have never gotten the C++ build to work (using a docker + ubuntu setup). The two approaches I have tested myself are: direct inference with ONNX, and the regular multi-process service "web_service.py" + "python pipeline_http_client.py --image_dir /data/PaddleOCR/doc/imgs/00056221.jpg". So I'd like to ask the experts: is there an example of web_service.py loading and running onnxruntime?
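For reference, the direct-ONNX route I tested is essentially the sketch below, assuming a detection model exported with paddle2onnx to ./det_onnx/model.onnx (that path is a placeholder, and the preprocessing only roughly mirrors PaddleOCR's DB detector transforms):

```python
import cv2
import numpy as np
import onnxruntime as ort

# Hypothetical path to a det model exported with paddle2onnx; adjust to your export.
sess = ort.InferenceSession("./det_onnx/model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Load the test image used above and roughly mirror PaddleOCR's DB preprocessing:
# resize so height and width are multiples of 32, scale to [0, 1],
# then apply ImageNet mean/std normalization.
img = cv2.imread("/data/PaddleOCR/doc/imgs/00056221.jpg")
h, w = img.shape[:2]
resize_h = max(round(h / 32), 1) * 32
resize_w = max(round(w / 32), 1) * 32
img = cv2.resize(img, (resize_w, resize_h)).astype("float32") / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype="float32")
std = np.array([0.229, 0.224, 0.225], dtype="float32")
img = (img - mean) / std
blob = img.transpose(2, 0, 1)[np.newaxis, :].astype("float32")  # HWC -> NCHW

# Run the session; for a DB det model the first output is the text probability map.
outputs = sess.run(None, {input_name: blob})
print("probability map shape:", outputs[0].shape)
```

This works fine as a one-off, but it is exactly the part I don't know how to plug into web_service.py's pipeline, hence the question above.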
0 replies
-
Hi PaddleOCR team,
A quick question: if I start the multi-process service with web_service.py from pdserving:
https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/deploy/pdserving/README_CN.md
can an ONNX model be specified in the corresponding config.yml file?
I have separately compared "python3.7 tools/infer/predict_system.py --use_onnx=True" vs "python3.7 tools/infer/predict_system.py --use_onnx=False", and the ONNX path is roughly twice as fast.
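For context, the detection op in that config.yml looks roughly like this (reconstructed from the pdserving README, so the concrete values are illustrative):

```yaml
# Rough shape of the det section in deploy/pdserving/config.yml;
# the model_config path and fetch_list names vary by model version.
op:
    det:
        concurrency: 4
        local_service_conf:
            client_type: local_predictor          # in-process Paddle inference
            model_config: ./ppocr_det_v3_serving  # a Paddle Serving model dir, not an .onnx file
            fetch_list: ["save_infer_model/scale_0.tmp_1"]
            device_type: 0
            devices: ""
```

As far as I can tell, model_config expects the directory produced by the serving export step, and I haven't found a documented field that accepts an .onnx file, hence the question.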