I have a question about using LoRA for fine-tuning #64
Comments
There is no VITS here, just a BigVGAN; after the upsample layers, the speaker info is used to modify x with weights and biases.
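A minimal sketch of what such speaker-conditioned modulation could look like (a FiLM-style affine transform; the module name, shapes, and dimensions here are illustrative assumptions, not this repo's actual code):

```python
import torch
import torch.nn as nn

class SpeakerModulation(nn.Module):
    """Project a speaker embedding to a per-channel scale (weight)
    and shift (bias), then apply both to the feature map x."""
    def __init__(self, speaker_dim: int, channels: int):
        super().__init__()
        self.to_weight = nn.Linear(speaker_dim, channels)
        self.to_bias = nn.Linear(speaker_dim, channels)

    def forward(self, x: torch.Tensor, spk: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); spk: (batch, speaker_dim)
        w = self.to_weight(spk).unsqueeze(-1)  # (batch, channels, 1)
        b = self.to_bias(spk).unsqueeze(-1)    # (batch, channels, 1)
        return x * w + b

# Hypothetical usage after an upsample layer in the vocoder:
x = torch.randn(2, 128, 400)   # upsampled feature map
spk = torch.randn(2, 256)      # speaker embedding
x = SpeakerModulation(256, 128)(x, spk)
```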
Thanks for your reply. Can I ask one more thing?
LoRA is a low-rank adapter. Here are other adapter approaches from Microsoft: AdaSpeech: Adaptive Text to Speech for Custom Voice, and Adapter-Based Extension of Multi-Speaker Text-to-Speech Model for New Speakers.
lora_svc is not real LoRA; the name is just meant to get SVC developers thinking about LoRA.
I have trained a VITS model, and when I apply LoRA to the attention layers, fine-tuning does not work properly. Could you please tell me which layers you applied LoRA to when fine-tuning the VITS model, and what values you used for rank and alpha?
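For reference, a minimal sketch of standard LoRA applied to an attention projection. The rank (r=8) and alpha (16) below are common defaults used purely as assumptions, not values confirmed for this repo, and the layer choice is hypothetical:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Standard LoRA: freeze the base weight W and learn a low-rank
    update B @ A scaled by alpha / r, so y = base(x) + (alpha/r) * x A^T B^T."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter matrices train
        self.scaling = alpha / r
        # A is initialized small and random, B at zero, so the
        # adapted layer starts out identical to the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Hypothetical usage: wrap a q/k/v projection of an attention block.
attn_q = nn.Linear(192, 192)
attn_q = LoRALinear(attn_q, r=8, alpha=16)
```

If fine-tuning diverges, the usual knobs to check are that B starts at zero (so training begins from the pretrained behavior) and that the alpha/r scaling is not too large.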