diff --git a/README.md b/README.md
index 96729eff3e..37121432fd 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,4 @@
+
 # OpenAI Evals
 
 Evals provide a framework for evaluating large language models (LLMs) or systems built using LLMs. We offer an existing registry of evals to test different dimensions of OpenAI models and the ability to write your own custom evals for use cases you care about. You can also use your data to build private evals which represent the common LLMs patterns in your workflow without exposing any of that data publicly.
@@ -6,6 +7,12 @@ If you are building with LLMs, creating high quality evals is one of the most im
 
 https://x.com/gdb/status/1733553161884127435?s=20
 
+| Eval | Summary of evaluation | Capability targeted |
+| --- | --- | --- |
+| [Identifying Variables](evals/elsuite/identifying_variables) | Identify the correct experimental variables for testing a hypothesis | AI R&D |
+
+---
+
 ## Setup
 
 To run evals, you will need to set up and specify your [OpenAI API key](https://platform.openai.com/account/api-keys). After you obtain an API key, specify it using the [`OPENAI_API_KEY` environment variable](https://platform.openai.com/docs/quickstart/step-2-setup-your-api-key). Please be aware of the [costs](https://openai.com/pricing) associated with using the API when running evals. You can also run and create evals using [Weights & Biases](https://wandb.ai/wandb_fc/openai-evals/reports/OpenAI-Evals-Demo-Using-W-B-Prompts-to-Run-Evaluations--Vmlldzo0MTI4ODA3).