QKeras model doesn't match the HLS4ML model #674
alexberian asked this question in Q&A · Unanswered
Replies: 1 comment
-
Did you find a workaround for this?
-
I have made a 12-bit QKeras model, and when I create my HLS4ML model from it I get different results.
This code snippet yields the following:
I was under the impression that the point of designing/training a model with QKeras was to ensure that there is no loss in performance when converting into something hardware-compatible. Is there a mismatch between how QKeras and HLS4ML are doing the quantization? What should I do to make the models match?
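Since the original snippet and its output did not survive, here is a minimal sketch of the kind of comparison being described. The toy architecture, layer names, bit widths, and hls4ml settings below are assumptions for illustration, not the model from the question, and assume a recent QKeras/hls4ml API.

```python
import numpy as np
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QDense, QActivation, quantized_bits, quantized_relu
import hls4ml

# Assumed toy model: a single 12-bit QDense layer followed by a 12-bit ReLU.
inputs = Input(shape=(16,))
x = QDense(8,
           kernel_quantizer=quantized_bits(12, 0, alpha=1),
           bias_quantizer=quantized_bits(12, 0, alpha=1),
           name='fc1')(inputs)
x = QActivation(quantized_relu(12), name='relu1')(x)
model = Model(inputs, x)

# Build a per-layer config so hls4ml can pick up the QKeras quantizer
# bit widths, then convert and compile the HLS model for CPU emulation.
config = hls4ml.utils.config_from_keras_model(model, granularity='name')
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='hls4ml_prj')
hls_model.compile()

# Compare predictions of the two models on random inputs.
X = np.random.rand(100, 16).astype(np.float32)
y_qkeras = model.predict(X)
y_hls = hls_model.predict(X)
print('max abs difference:', np.max(np.abs(y_qkeras - y_hls)))
```

Using granularity='name' when building the config is the usual way to let hls4ml read the per-layer QKeras quantizers; with a coarser model-level config, intermediate and accumulator precisions may not reflect the 12-bit quantizers, which is one plausible source of a mismatch like the one described above.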