Replies: 1 comment
-
The pretrained models all used a range of different parameters, as I found there isn't one set that works well across different wakewords. Often the parameters for one wake word actually lead to worse performance for other wake words. Does your model for "Voice Genie" work well in practice? The accuracy/recall and false-positives-per-hour scores are really just there to indicate whether training is going reasonably well, and don't always correlate nicely with real-world performance. If your model has low recall (doesn't activate often enough on the target phrase), decreasing the negative weight scale and increasing the number of generated samples can help. Conversely, if your model activates too often when it shouldn't (false positives), increasing the negative weight scale and adding some adversarial negative generations via the custom training config file can help. None of that is a guarantee, unfortunately, and it often takes some experimentation to figure out which parameters lead to a model that works well in your environment.
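As a rough illustration of those two tuning directions, a custom training config might look something like the sketch below. Note that the key names here are hypothetical, just to show the idea; check the actual training config file shipped with the training notebook for the real schema.

```yaml
# Illustrative sketch only -- key names are hypothetical, not the library's exact schema.
target_phrase: "voice genie"

# If recall is too low (model misses the wake word):
n_samples: 50000            # increase the number of generated samples
negative_class_weight: 1.0  # decrease the negative weight scale

# If false positives are too high (model triggers when it shouldn't):
# negative_class_weight: 5.0   # increase the negative weight scale
# custom_negative_phrases:     # adversarial negatives that sound similar to the target
#   - "voice jeannie"
#   - "voice beanie"
```

The adversarial negatives are phonetically close phrases that teach the model to reject near-misses rather than triggering on anything that vaguely resembles the wake word.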
-
I see the docs mention the number of examples the pretrained models were trained on, but how many timesteps were used, what was the step size, and what was the false activation penalty? I can barely seem to break the 80% performance mark, and neither increasing the number of steps nor increasing the number of generated examples seems to lead to any significant improvement for my wakeword "Voice Genie".