Accuracy drop in quantized keras model after hls compile #587
wilfredkisku started this conversation in General
Replies: 2 comments 2 replies
-
Could you try profiling each layer and comparing the expected output (from the QKeras model) to the actual output (from the hls4ml QKeras model)? My suspicion is that the skip connection/merge layers/max pooling may be causing overflows, especially if they're using different input/output data types. An example of how to do this is here: https://github.com/hls4ml-finn-mlperftiny/CIFAR10/blob/main/hls4ml/convert.py#L222-L233
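The comparison step itself can be a small helper. This is a minimal sketch (pure Python; `compare_traces` is a hypothetical name, not from the linked script): it assumes you have already dumped each layer's output on both sides, e.g. the QKeras model's intermediate activations on one side and hls4ml's traced outputs (via the per-layer Trace option and `hls_model.trace()`) on the other, into dicts mapping layer name to a flat list of values.

```python
def compare_traces(expected, actual):
    """Report the max absolute error per layer between two traces.

    expected/actual: dict mapping layer name -> flat list of floats
    (e.g. QKeras intermediate outputs vs. hls4ml traced outputs).
    """
    report = {}
    for name, exp in expected.items():
        act = actual.get(name)
        if act is None:  # layer missing from the hls4ml trace
            continue
        report[name] = max(abs(e - a) for e, a in zip(exp, act))
    return report

# Hypothetical traces for two layers of a model:
expected = {"conv1": [0.5, -1.0], "add": [2.0, 3.0]}
actual = {"conv1": [0.5, -1.0], "add": [2.0, 2.5]}
report = compare_traces(expected, actual)
```

A sudden jump in the reported error at a merge or pooling layer, while earlier layers agree closely, usually points at an overflow in that layer's fixed-point type.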
1 reply
-
Hi, have you solved your problem?
1 reply
-
I am working on a skip connection-based CNN that uses either an Add or a Concatenate layer to merge different paths in the model. I obtain proper accuracy with the baseline (fp32) model, both in Keras and after HLS compilation. But an issue pops up when I use the quantized model and then convert it to HLS: the accuracy of the quantized HLS model drops after it is compiled.
Model Architecture:
HLS configuration code:
Accuracy Report:
You can see that there is a considerable drop in accuracy after HLS conversion. Can anyone point out the mistake on my part?
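One common cause of exactly this symptom is the merge layer's fixed-point output type being too narrow to hold the sum of its inputs, so the value wraps around. As an illustration (pure Python; the bit widths are made up for the example and are not taken from this model), here is how an ap_fixed<8,3>-style Add wraps when the true sum exceeds the representable range:

```python
def to_fixed(x, width=8, int_bits=3, saturate=False):
    """Round a float to a signed fixed-point value (width total bits,
    int_bits integer bits incl. sign), roughly mimicking HLS ap_fixed.
    Wraps on overflow by default (AP_WRAP); saturate=True mimics AP_SAT."""
    scale = 1 << (width - int_bits)          # fractional steps per 1.0
    v = round(x * scale)
    lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
    if saturate:
        v = max(lo, min(hi, v))              # clamp to representable range
    else:
        v = ((v - lo) % (1 << width)) + lo   # two's-complement wrap-around
    return v / scale

a = to_fixed(3.5)                 # representable in [-4, 4)
b = to_fixed(3.5)
wrapped = to_fixed(a + b)         # true sum 7.0 does not fit: wraps negative
saturated = to_fixed(a + b, saturate=True)  # clamps to the max value instead
```

If per-layer profiling shows the error appearing at the merge, giving that layer's output an extra integer bit (or a saturating overflow mode) in the hls4ml precision config typically fixes the accuracy drop, at some cost in resources or precision.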