I just noticed that the quantization guide in tf-mot uses the TFLite converter. Does that mean the TF-MOT library will partially depend on TF-Lite?
Hi @jusonn
In general, TFMOT is working to make our tools backend-agnostic (meaning they support TensorFlow, TFLite, and other backends as well).
As you can see in this issue, we are working to make latency improvements for both TF and TFLite via pruning.
We will be launching a tool for quantization-aware training soon. TensorRT mentions its predecessor, contrib.quantize, in its docs.
For a while, the post-training quantization tools will be available only through TFLite. Again, we are working to generalize them.
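For context on what post-training quantization does, here is a minimal, self-contained sketch of int8 affine (asymmetric) quantization, the scheme TFLite uses. The function names here are illustrative only, not TFMOT or TFLite API:

```python
def choose_qparams(xmin, xmax, qmin=-128, qmax=127):
    # Pick scale/zero-point mapping the float range [xmin, xmax]
    # onto the int8 range [qmin, qmax]. The range must include 0
    # so that exact zero is representable.
    xmin, xmax = min(xmin, 0.0), max(xmax, 0.0)
    scale = (xmax - xmin) / (qmax - qmin)
    zero_point = round(qmin - xmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    # Round to the nearest integer grid point, then clamp to int8.
    return max(qmin, min(qmax, round(x / scale) + zero_point))

def dequantize(q, scale, zero_point):
    # Map an int8 value back to its approximate float value.
    return (q - zero_point) * scale
```

In real post-training quantization, the float range per tensor comes from the weights themselves (or from calibration data for activations); this sketch just shows the arithmetic applied once those ranges are known.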
Closing; feel free to reopen if you don't think your question is fully answered.
alanchiao