How are submissions judged? #17

Open
Samhori opened this issue Jan 7, 2025 · 0 comments
Samhori commented Jan 7, 2025

I was wondering what criteria submissions are judged by. If it is a metric similar to accuracy, which seems to be what is used at the moment, then the fraction of the test data containing an anomaly matters a lot: scoring well would depend on accurately guessing how many anomalies were placed in the test set. The problem description says we should provide "an array of predicted probabilities in the range [0, 1]", so I suspect a metric like cross-entropy or AUROC is being applied instead, but it would be very helpful to know exactly which metric is used.
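
For concreteness, here is a minimal sketch of why the distinction matters, using scikit-learn with made-up labels and scores (this is just my illustration, not the challenge data or the actual scoring code):

```python
# Illustrative only: synthetic labels/probabilities, not the organizers'
# scoring code. Shows why the choice of metric matters for submissions
# that are arrays of predicted probabilities in [0, 1].
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)  # hypothetical labels: 1 = anomaly
y_prob = np.clip(0.3 + 0.4 * y_true + rng.normal(0, 0.2, 1000), 0, 1)

# Accuracy needs a hard threshold, and the best threshold depends on the
# (unknown) fraction of anomalies in the test set.
for thresh in (0.3, 0.5, 0.7):
    print(f"accuracy @ {thresh}:", accuracy_score(y_true, y_prob > thresh))

# AUROC consumes the raw probabilities directly, so it is threshold-free
# and does not require guessing the anomaly fraction.
print("AUROC:", roc_auc_score(y_true, y_prob))
```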

Also, the description of the Sine-Gaussian data says that "these are generic low-frequency signals used to represent potential gravitational wave sources that do not fit into the well-understood categories like BBH", which seems to imply that the aim of this challenge is to identify sine-Gaussians. My understanding is that this is not the case: the test set will contain unspecified anomalies, so submissions are judged on their ability to identify completely generic anomalies. Could you clarify whether this understanding is correct?
