Expected results for benchmarks #10

Open
5 tasks
sim642 opened this issue Dec 16, 2021 · 0 comments

sim642 commented Dec 16, 2021

Our regression tests mostly have expected results that can be tested against, but none of our benchmarks do. This makes it impossible to tell when Goblint has become better or worse on them, or to track benchmark regressions automatically.

Various degrees of expected results would be possible:

  • Explicit annotations in benchmark sources, using a common mechanism with the regression tests (see the sketch after this list).
  • Expected statistics: race counts, warning counts (by category), etc.
  • Expected resource usage: rough runtime, memory use.
  • Expected comparison outcomes between configurations.
  • Expected comparison with previous runs.
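
For the first option, a minimal sketch of what an annotated benchmark could look like. The `RACE!`/`NORACE` comment markers imitate the comment style of the regression tests; the exact marker syntax is an assumption here, and whatever annotation mechanism the regression test runner already understands should be reused instead.

```c
// Hypothetical annotated benchmark. The RACE!/NORACE markers below are an
// assumption mimicking the regression tests' comment style, not confirmed syntax.
#include <pthread.h>

int unprotected = 0;                       // shared global, no synchronization
int guarded = 0;                           // shared global, protected by m
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
  (void)arg;
  unprotected++;                           // RACE! (a race warning is expected here)
  pthread_mutex_lock(&m);
  guarded++;                               // NORACE (no warning expected: guarded by m)
  pthread_mutex_unlock(&m);
  return NULL;
}

int main(void) {
  pthread_t t1, t2;
  pthread_create(&t1, NULL, worker, NULL);
  pthread_create(&t2, NULL, worker, NULL);
  pthread_join(t1, NULL);
  pthread_join(t2, NULL);
  return 0;
}
```

A benchmark runner could then extract such markers and diff them against Goblint's reported warnings, making pass/fail checks on benchmarks as mechanical as on the regression tests.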