Are you using the test dataset in the training process? #37
I think the author just used the test data to check, at the end of each epoch, whether training should stop. If you don't want that behavior, you can move the test code into a separate test function and call it once after training finishes.
Yes, that's true. But it isn't correct practice in general ML: the only criterion for stopping training should be performance on the validation set.
I think it's better to call this a validation set than test data.
Then where is the evaluation script, if that is just the validation set? Based on how the dataset is split in the code, there is no validation set, if I understand it correctly. Feel free to point me to the code in case I'm missing something critical.
Could you explain why you are using "users_to_test" for early stopping? I'm really puzzled by the code pasted above.
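For reference, the pattern this thread is asking for can be sketched as follows: split the users into train/validation/test groups, drive early stopping only from the validation users, and evaluate the test users exactly once after training. This is a minimal illustration, not code from this repository; `split_users`, `fit`, `train_one_epoch`, `evaluate`, and `save_checkpoint` are hypothetical names, and the split ratios are assumptions.

```python
import random

def split_users(all_users, val_ratio=0.1, test_ratio=0.1, seed=42):
    """Shuffle users once and carve out disjoint train/validation/test groups."""
    users = list(all_users)
    random.Random(seed).shuffle(users)
    n_val = int(len(users) * val_ratio)
    n_test = int(len(users) * test_ratio)
    val, test, train = users[:n_val], users[n_val:n_val + n_test], users[n_val + n_test:]
    return train, val, test

def fit(model, train_users, val_users, train_one_epoch, evaluate,
        max_epochs=500, patience=10):
    """Train with early stopping driven only by validation performance.

    `train_one_epoch(model, users)` and `evaluate(model, users)` are supplied
    by the caller; the training loop never sees the test users.
    """
    best_score, best_epoch, bad_epochs = float("-inf"), -1, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, train_users)
        score = evaluate(model, val_users)   # e.g. recall@20 on validation users
        if score > best_score:
            best_score, best_epoch, bad_epochs = score, epoch, 0
            # save_checkpoint(model)         # hypothetical: keep the best weights
        else:
            bad_epochs += 1
            if bad_epochs >= patience:       # stop when validation stops improving
                break
    return best_epoch, best_score

# The test users are evaluated exactly once, after training has finished:
# train_u, val_u, test_u = split_users(all_users)
# fit(model, train_u, val_u, train_one_epoch, evaluate)
# print("test recall@20:", evaluate(model, test_u))
```

With this structure, the metric used to decide when to stop comes from a set that is never reported as the final result, which is the distinction the comments above are drawing between a validation set and the test set.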