
Will TF-MOT quantization keep using the TFLiteConverter? #236

Closed
JuHyung-Son opened this issue Jan 30, 2020 · 2 comments
Labels: question (Further information is requested)
JuHyung-Son commented Jan 30, 2020

I just noticed that the quantization guide in TF-MOT uses the TFLite Converter.
Does this mean the TF-MOT library will partially rely on TFLite?

@JuHyung-Son JuHyung-Son added the bug Something isn't working label Jan 30, 2020
@alanchiao alanchiao added question Further information is requested and removed bug Something isn't working labels Feb 2, 2020
@alanchiao alanchiao self-assigned this Feb 2, 2020
@alanchiao

alanchiao commented Feb 2, 2020

Hi @jusonn

In general, TF-MOT is working to make our tools backend-agnostic (meaning they will support TensorFlow, TFLite, and other backends as well).

As you can see in this issue, we are working to make latency improvements for both TF and TFLite via pruning.

We will be launching a tool for quantization-aware training soon. TensorRT mentions its predecessor, contrib.quantize, in its docs.

@alanchiao

For a while, the post-training quantization tools will be available only through TFLite. Again, we are working to generalize them.
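To make the TFLite-only path concrete, here is a minimal sketch of post-training quantization through the TFLiteConverter; the tiny model is a placeholder, not anything from this thread:

```python
import tensorflow as tf

# Placeholder model standing in for a trained one.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(2)(inputs)
model = tf.keras.Model(inputs, outputs)

# Post-training quantization: the converter quantizes weights while
# producing the TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # serialized, quantized model bytes
```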

Closing; feel free to reopen if you don't think your question has been fully answered.
