Updating readme to link to OpenAI hosted evals experience (#1572)
To offer greater flexibility, this adds a link to OpenAI's [hosted evals experience](https://platform.openai.com/docs/guides/evals), launched at DevDay this year.
dmitry-openai authored Dec 18, 2024
1 parent a32c982 · commit cdb8ce9
Showing 1 changed file with 2 additions and 0 deletions.
README.md (2 additions, 0 deletions)
@@ -1,5 +1,7 @@
 # OpenAI Evals
 
+> You can now configure and run Evals directly in the OpenAI Dashboard. [Get started →](https://platform.openai.com/docs/guides/evals)
+
 Evals provide a framework for evaluating large language models (LLMs) or systems built using LLMs. We offer an existing registry of evals to test different dimensions of OpenAI models and the ability to write your own custom evals for use cases you care about. You can also use your data to build private evals which represent the common LLMs patterns in your workflow without exposing any of that data publicly.
 
 If you are building with LLMs, creating high quality evals is one of the most impactful things you can do. Without evals, it can be very difficult and time intensive to understand how different model versions might affect your use case. In the words of [OpenAI's President Greg Brockman](https://twitter.com/gdb/status/1733553161884127435):
