In your unlabeled pre-training phase, you train on two datasets (STV2 and Kinetics) with a reconstruction loss, and you later fine-tune with a supervised loss computed against their labels. Did you find that pre-training over both datasets during the unsupervised stage helped fine-tuning? Did training over the validation sets help as well?
Also, I'm wondering whether you've experimented with combining the first two stages of training, i.e., training against a summed reconstruction loss and cross-entropy loss. Thanks in advance, and great work on this paper!
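To be concrete about what I mean by "combining the stages," here is a minimal sketch of the kind of joint objective I have in mind (plain NumPy, names like `combined_loss` and `recon_weight` are illustrative, not from your codebase):

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy over a batch of class logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def combined_loss(logits, labels, recon, target, recon_weight=1.0):
    """Supervised cross-entropy plus a weighted pixel-reconstruction (MSE) term."""
    ce = cross_entropy(logits, labels)
    mse = ((recon - target) ** 2).mean()
    return ce + recon_weight * mse
```

Setting `recon_weight=0.0` recovers the plain fine-tuning objective, so one could anneal the weight rather than switching stages abruptly.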