
Commit

untouch readme
thesofakillers committed Mar 19, 2024
1 parent 82b33de commit e6872f1
Showing 1 changed file with 0 additions and 6 deletions.
README.md — 6 changes: 0 additions & 6 deletions
@@ -6,12 +6,6 @@ If you are building with LLMs, creating high quality evals is one of the most im

<img width="596" alt="https://x.com/gdb/status/1733553161884127435?s=20" src="https://github.com/openai/evals/assets/35577566/ce7840ff-43a8-4d88-bb2f-6b207410333b">

-| Eval | Summary of evaluation | Capability targeted |
-| --- | --- | --- |
-| [Track the Stat](evals/elsuite/track_the_stat) | Perform a sequential task by keeping track of state implicitly | AI R&D |
-
----
-
## Setup

To run evals, you will need to set up and specify your [OpenAI API key](https://platform.openai.com/account/api-keys). After you obtain an API key, specify it using the [`OPENAI_API_KEY` environment variable](https://platform.openai.com/docs/quickstart/step-2-setup-your-api-key). Please be aware of the [costs](https://openai.com/pricing) associated with using the API when running evals. You can also run and create evals using [Weights & Biases](https://wandb.ai/wandb_fc/openai-evals/reports/OpenAI-Evals-Demo-Using-W-B-Prompts-to-Run-Evaluations--Vmlldzo0MTI4ODA3).
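The setup paragraph in the diff above expects the key in the `OPENAI_API_KEY` environment variable. As a minimal sketch (assuming a Python environment with the repo's `oaieval` CLI installed and a placeholder key value), the variable can be set before launching an eval:

```python
import os
import subprocess

# Placeholder value; substitute your actual OpenAI API key.
# Setting it here makes it visible to this process and any child process.
os.environ["OPENAI_API_KEY"] = "sk-..."

# Example invocation mirroring the repo's documented usage:
# run the "test-match" eval against gpt-3.5-turbo via the oaieval CLI.
subprocess.run(["oaieval", "gpt-3.5-turbo", "test-match"], check=True)
```

Exporting the key in the shell (rather than in code) works equally well; the sketch above just keeps the whole run reproducible from a single script.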
