From bd1736eaf0065ae25ad5e6c540a6877adae7ea38 Mon Sep 17 00:00:00 2001 From: Andrei Alexandru Date: Tue, 19 Mar 2024 13:57:16 +0000 Subject: [PATCH] Add 20 questions eval (#1499) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit # Thank you for contributing an eval! ♥️ 🚨 Please make sure your PR follows these guidelines, **failure to follow the guidelines below will result in the PR being closed automatically**. Note that even if the criteria are met, that does not guarantee the PR will be merged nor GPT-4 access be granted. 🚨 **PLEASE READ THIS**: In order for a PR to be merged, it must fail on GPT-4. We are aware that right now, users do not have access, so you will not be able to tell if the eval fails or not. Please run your eval with GPT-3.5-Turbo, but keep in mind as we run the eval, if GPT-4 gets higher than 90% on the eval, we will likely reject it since GPT-4 is already capable of completing the task. We plan to roll out a way for users submitting evals to see the eval performance on GPT-4 soon. Stay tuned! Until then, you will not be able to see the eval performance on GPT-4. **Starting April 10, the minimum eval count is 15 samples, we hope this makes it easier to create and contribute evals.** Also, please note that we're using **Git LFS** for storing the JSON files, so please make sure that you move the JSON file to Git LFS before submitting a PR. Details on how to use Git LFS are available [here](https://git-lfs.com). ## Eval details 📑 ### Eval name 20 questions ### Eval description This eval tests models' ability to generate and iterate over hypotheses by playing the game of "20 questions". In 20 questions, one of the players – the "gamemaster" – thinks of a word (in our case a noun) and the other player needs to guess it. To help them guess, the player can ask up to 20 yes-or-no questions, which the gamemaster must answer. ### What makes this a useful eval? - ## Criteria for a good eval ✅ Below are some of the criteria we look for in a good eval. In general, we are seeking cases where the model does not do a good job despite being capable of generating a good response (note that there are some things large language models cannot do, so those would not make good evals). Your eval should be: - [x] Thematically consistent: The eval should be thematically consistent. We'd like to see a number of prompts all demonstrating some particular failure mode. For example, we can create an eval on cases where the model fails to reason about the physical world. - [x] Contains failures where a human can do the task, but either GPT-4 or GPT-3.5-Turbo could not. - [x] Includes good signal around what is the right behavior. This means either a correct answer for `Basic` evals or the `Fact` Model-graded eval, or an exhaustive rubric for evaluating answers for the `Criteria` Model-graded eval. - [x] **Include at least 15 high-quality examples.** If there is anything else that makes your eval worth including, please document it below. ### Unique eval value > Insert what makes your eval high quality that was not mentioned above. (Not required) ## Eval structure 🏗️ Your eval should - [x] Check that your data is in `evals/registry/data/{name}` - [x] Check that your YAML is registered at `evals/registry/evals/{name}.yaml` - [x] Ensure you have the right to use the data you submit via this eval (For now, we will only be approving evals that use one of the existing eval classes. 
You may still write custom eval classes for your own cases, and we may consider merging them in the future.) ## Final checklist 👀 ### Submission agreement By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies (). - [x] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies. ### Email address validation If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request. - [x] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request. ### Limited availability acknowledgment We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR. - [x] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted. ### Submit eval - [x] I have filled out all required fields of this form - [x] I have used **Git LFS** for the Eval JSON data - [x] (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `mypy`, `black`, `isort`, `autoflake` and `ruff` are running when I commit and push Failure to fill out all required fields will result in the PR being closed. ### Eval JSON data Since we are using Git LFS, we are asking eval submitters to add in as many Eval Samples (at least 5) from their contribution here:
View evals in JSON ### Eval ```jsonl INSERT_EVAL_HERE ```
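For orientation only (the eval JSON placeholder above was left unfilled in the PR description, and the dataset itself is tracked with Git LFS, so no rows appear in this diff): the eval code below asserts that every sample carries a `word` and a `difficulty` field. A hypothetical record shaped after those two fields – not taken from the actual dataset – can be sanity-checked like this:

```python
import json

# Hypothetical sample, shaped after the two fields eval.py asserts on
# ("word" and "difficulty"). The real rows live in the LFS-tracked file
# evals/registry/data/twenty_questions/dataset.jsonl and may encode
# difficulty differently.
line = '{"word": "potato", "difficulty": 1}'
sample = json.loads(line)
assert "word" in sample, "Sample must contain 'word' field"
assert "difficulty" in sample, "Sample must contain 'difficulty' field"
```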
--- evals/elsuite/twenty_questions/eval.py | 204 ++++++++++++++++++ evals/elsuite/twenty_questions/readme.md | 82 +++++++ .../twenty_questions/scripts/make_plots.py | 142 ++++++++++++ .../scripts/run_experiments.sh | 60 ++++++ evals/elsuite/twenty_questions/test_utils.py | 27 +++ evals/elsuite/twenty_questions/utils.py | 69 ++++++ evals/registry/data/twenty_questions/LICENSE | 3 + .../data/twenty_questions/dataset.jsonl | 3 + .../data/twenty_questions/lexicon_nouns.jsonl | 3 + evals/registry/evals/twenty_questions.yaml | 60 ++++++ evals/registry/solvers/twenty_questions.yaml | 80 +++++++ 11 files changed, 733 insertions(+) create mode 100644 evals/elsuite/twenty_questions/eval.py create mode 100644 evals/elsuite/twenty_questions/readme.md create mode 100644 evals/elsuite/twenty_questions/scripts/make_plots.py create mode 100644 evals/elsuite/twenty_questions/scripts/run_experiments.sh create mode 100644 evals/elsuite/twenty_questions/test_utils.py create mode 100644 evals/elsuite/twenty_questions/utils.py create mode 100644 evals/registry/data/twenty_questions/LICENSE create mode 100644 evals/registry/data/twenty_questions/dataset.jsonl create mode 100644 evals/registry/data/twenty_questions/lexicon_nouns.jsonl create mode 100644 evals/registry/evals/twenty_questions.yaml create mode 100644 evals/registry/solvers/twenty_questions.yaml diff --git a/evals/elsuite/twenty_questions/eval.py b/evals/elsuite/twenty_questions/eval.py new file mode 100644 index 0000000000..3cb0d5c857 --- /dev/null +++ b/evals/elsuite/twenty_questions/eval.py @@ -0,0 +1,204 @@ +import logging +import random +import re +from typing import Any, Dict, List, Optional, Union + +import evals +import evals.metrics +from evals.api import CompletionFn +from evals.elsuite.twenty_questions.utils import PROMPTS, generate_task_state_for +from evals.eval import SolverEval +from evals.record import Recorder +from evals.registry import registry +from evals.solvers.human_cli_solver import HumanCliSolver +from evals.solvers.solver import Solver +from evals.solvers.utils import maybe_wrap_with_solver +from evals.task_state import Message + +logger = logging.getLogger(__name__) +WORD_PATTERN = r"\[GUESS (.*?)\]" + + +class TwentyQuestions(SolverEval): + def __init__( + self, + completion_fns: List[CompletionFn], + samples_jsonl: str, + gamemaster_spec: str, + max_questions: int = 20, + max_replies: int = 40, + num_shortlist_items: int = 20, + shortlist_variant: bool = False, + seed: int = 222024, + n_samples: Optional[int] = None, + *args, + **kwargs, + ): + super().__init__(completion_fns, seed=seed, *args, **kwargs) + + self.samples_jsonl = samples_jsonl + self.gamemaster_solver = maybe_wrap_with_solver( + registry.make_completion_fn(gamemaster_spec) + ) + self.max_questions = max_questions + + if max_replies < max_questions: + logger.warn( + f"max_replies ({max_replies}) is less than max_questions ({max_questions}). 
Setting max_replies to {max_questions + 20}" + ) + self.max_replies = max_replies if max_replies > max_questions else max_questions + 20 + self.num_shortlist_items = num_shortlist_items + self.shortlist_variant = shortlist_variant + + self.n_samples = n_samples + self.rng = random.Random(seed) + + def eval_sample(self, solver: Solver, sample: Dict, rng: random.Random) -> Dict[str, Any]: + assert "word" in sample, "Sample must contain 'word' field" + assert "difficulty" in sample, "Sample must contain 'difficulty' field" + + if not isinstance(solver, HumanCliSolver): + logging.info(f"Running sample: {sample['word']}") + + # Generate the shortlist for the current sample if applicable. + if self.shortlist_variant: + assert self.num_shortlist_items <= len( + self.shortlist + ), "Number of shortlist items must be less than or equal to the total number of samples." + shortlist_for_sample = rng.sample(self.shortlist, self.num_shortlist_items) + if sample["word"] not in shortlist_for_sample: + random_index = rng.randint(0, len(shortlist_for_sample) - 1) + shortlist_for_sample[random_index] = sample["word"] + else: + shortlist_for_sample = None + response = self._conversation_loop(solver, sample, shortlist_for_sample) + + return response + + def run(self, recorder: Recorder) -> Dict[str, Union[float, int]]: + samples = self.get_samples() + self.rng.shuffle(samples) + samples = samples[: self.n_samples] if self.n_samples else samples + + if self.shortlist_variant: + self.shortlist = [sample["word"] for sample in samples] + + self.eval_all_samples(recorder, samples) + events = recorder.get_events("match") + + scores = [event.data["score"] for event in events] + num_guesses = [event.data["num_guesses"] for event in events] + num_questions = [event.data["num_questions"] for event in events] + num_violations = [event.data["num_violations"] for event in events] + num_gamemaster_refusals = [event.data["num_gamemaster_refusals"] for event in events] + incorrect_guesses = [event.data["incorrect_guesses"] for event in events] + word_difficulties = [event.data["word_difficulty"] for event in events] + + return { + "score": sum(scores) / len(scores), + "accuracy": evals.metrics.get_accuracy(events), + "bootstrap_std": evals.metrics.get_bootstrap_accuracy_std(events), + "average_num_guesses": sum(num_guesses) / len(num_guesses), + "average_num_questions": sum(num_questions) / len(num_questions), + "average_num_violations": sum(num_violations) / len(num_violations), + "average_num_gamemaster_refusals": sum(num_gamemaster_refusals) + / len(num_gamemaster_refusals), + "average_num_incorrect_guesses": sum((len(ig) for ig in incorrect_guesses)) + / len(incorrect_guesses), + "average_word_difficulty": sum(word_difficulties) / len(word_difficulties), + } + + def _conversation_loop( + self, solver: Solver, sample: Dict, shortlist: Optional[List[str]] = None + ) -> Dict[str, Any]: + """Maintains a conversation between the guesser and the gamemaster until the maximum number of questions is reached, or until a correct guess is made. + + Args: + solver (Solver): any compatible solver, instantiated for the current sample. + sample (Dict): current sample – one word to guess, and its associated difficulty. + + Returns: + Dict[str, Any]: a dictionary containing the final result and metrics of the conversation. 
+ """ + + metrics = { + "num_guesses": 0, + "num_questions": 0, + "num_violations": 0, + "num_guesser_replies": 0, # num_guesses + num_questions + num_violations + "num_gamemaster_refusals": 0, + "incorrect_guesses": [], + } + conversation = [] + + # Contains fall-back condition to avoid infinite loops for solvers which never output questions. + while ( + metrics["num_questions"] < self.max_questions + and metrics["num_guesser_replies"] < self.max_replies + ): + task_state = generate_task_state_for( + "guesser", conversation, max_questions=self.max_questions, shortlist=shortlist + ) + guesser_response = solver(task_state) + conversation += [Message(content=guesser_response.output, role="guesser")] + metrics["num_guesser_replies"] += 1 + + # Check if guess made: + match = re.search(WORD_PATTERN, guesser_response.output) + if match is not None: + metrics["num_guesses"] += 1 + guess = match.group(1) + if guess.lower() == sample["word"].lower(): + response = { + "correct": True, + "score": self.max_questions - metrics["num_questions"], + "expected": sample["word"], + "word_difficulty": sample["difficulty"], + "picked": guess, + "num_guesses": metrics["num_guesses"], + "num_questions": metrics["num_questions"], + "num_violations": metrics["num_violations"], + "num_gamemaster_refusals": metrics["num_gamemaster_refusals"], + "incorrect_guesses": metrics["incorrect_guesses"], + } + evals.record.record_match(**response) + return response + else: + metrics["incorrect_guesses"] += [guess] + conversation += [ + Message( + content=PROMPTS["incorrect_guess"].format(guess=guess), role="system" + ) + ] + continue + elif "?" in guesser_response.output.strip(): + metrics["num_questions"] += 1 + else: # Neither guess nor question. + # TODO: Maybe make the guesser retry here? + logger.warn( + f"Rule violation, no guess or question in output: {guesser_response.output}" + ) + metrics["num_violations"] += 1 + conversation += [Message(content=PROMPTS["rule_violation"], role="system")] + continue + + task_state = generate_task_state_for("gamemaster", conversation, sample["word"]) + gamemaster_response = self.gamemaster_solver(task_state) + conversation += [Message(content=gamemaster_response.output, role="gamemaster")] + if gamemaster_response.output.lower() == "skip": + metrics["num_gamemaster_refusals"] += 1 + + logger.info(f"Ran out of questions for word: {sample['word']}") + response = { + "correct": False, + "score": 0, + "expected": sample["word"], + "word_difficulty": sample["difficulty"], + "num_guesses": metrics["num_guesses"], + "num_questions": metrics["num_questions"], + "num_violations": metrics["num_violations"], + "num_gamemaster_refusals": metrics["num_gamemaster_refusals"], + "incorrect_guesses": metrics["incorrect_guesses"], + } + evals.record.record_match(**response) + return response diff --git a/evals/elsuite/twenty_questions/readme.md b/evals/elsuite/twenty_questions/readme.md new file mode 100644 index 0000000000..89f2e3ce6c --- /dev/null +++ b/evals/elsuite/twenty_questions/readme.md @@ -0,0 +1,82 @@ +# 20 Questions + +This eval tests models' ability to generate and iterate over hypotheses by playing the game of "20 questions". In 20 questions, one of the players – the "gamemaster" – thinks of a word (in our case a noun) and the other player needs to guess it. To help them guess, the player can ask up to 20 yes-or-no questions, which the gamemaster must answer. + +## Usage +Run with: +```bash +# Standard variant. +oaieval twenty_questions.full + +# Shortlist variant. 
+oaieval twenty_questions.shortlist.full ``` + +Where the solver, passed to `oaieval` as its first argument (as in `scripts/run_experiments.sh`), can be any generation solver in `evals/registry/solvers/defaults.yaml`, e.g. `generation/cot/gpt-3.5-turbo-16k`, or the chain-of-thought solvers in `evals/registry/solvers/twenty_questions.yaml`. + +## Evaluation process +We run a dialogue loop between two models for each sample: the evaluated model and the "gamemaster". By default, the gamemaster is gpt-4-turbo-preview – but this can be updated by specifying a different solver in `evals/registry/evals/twenty_questions.yaml`. + +The dialogue continues until the word is guessed correctly, or until 20 questions have been asked, whichever comes first. We also terminate conversations that last longer than 40 replies, to ensure that models which do not ask questions don't have infinite conversations. Both the maximum questions and the maximum replies can be controlled from the eval YAML file. + +## Task State +The task state can be found in `twenty_questions/utils.py`; it reads: +``` +You are about to play the game '20 questions'. The other player has thought of a noun which you should try to guess. You can ask 20 yes/no questions, to which they will answer 'yes', 'no', or 'skip' (if they cannot answer your question). You should try to guess the word as soon as possible, using the least amount of questions. To guess a word, type [GUESS ] – for example to guess the word 'sibling', output [GUESS sibling]. Your score will be 0 if you do not guess correctly, and {max_questions} minus the number of questions you asked if you guess correctly. Start by asking your first question. +``` + +## Prompts +See `twenty_questions/utils.py` to review/adjust the prompts used in this eval. + +## Datasets + +We use a dataset of 207 words, 177 of which were from [this lexicon](https://github.com/mounicam/lexical_simplification), annotated by our team with a difficulty category. This dataset comprises: +- 47 words rated “easy”, e.g. ear, father, potato; +- 91 words rated “medium”, e.g. cloth, hike, discount; +- 69 words rated “hard”, e.g. prosperity, gland, philosopher; + +In addition to these common nouns, we include 30 proper nouns such as “Sherlock Holmes,” “The Beatles,” “Titanic,” and “Starbucks”, which span the easy and medium difficulties. + +## Metrics +We measure the score each model achieves, defined as `score = max_questions - questions_asked`. We also track the win-rate, i.e. the % of samples the model guesses correctly. Auxiliary metrics such as the average number of questions asked, the average number of incorrect guesses, and the average number of gamemaster refusals (i.e. situations where the gamemaster says 'skip') are also tracked. + + +## Variants + +We run two main variants of this evaluation: +- **standard**: the main variant +- **shortlist**: an easier variant where the evaluated model sees a shortlist of words in its system prompt. The word the gamemaster has selected is part of the list. In this variant, the evaluated model effectively has to narrow down the pool of candidate words until it finds the answer.
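To make the guess format and the scoring rule from the Task State and Metrics sections above concrete, here is a minimal, illustrative sketch (not the eval's actual code path) that mirrors the `WORD_PATTERN` regex and the `score = max_questions - questions_asked` rule from `eval.py` in this patch:

```python
import re

# Same guess-marker convention as eval.py: a guess is any reply containing
# "[GUESS ...]"; anything else counts as a question or a rule violation.
WORD_PATTERN = r"\[GUESS (.*?)\]"


def score_reply(reply: str, target: str, questions_asked: int, max_questions: int = 20) -> int:
    """Score one guesser reply: max_questions minus questions asked if the guess is correct, else 0."""
    match = re.search(WORD_PATTERN, reply)
    if match is not None and match.group(1).lower() == target.lower():
        return max_questions - questions_asked
    return 0


# A correct guess after 7 questions scores 20 - 7 = 13; a plain question scores nothing.
assert score_reply("[GUESS sibling]", "sibling", questions_asked=7) == 13
assert score_reply("Is it an animal?", "sibling", questions_asked=7) == 0
```

In the eval itself, an incorrect guess additionally appends the `incorrect_guess` prompt to the conversation, and the loop continues until the question or reply budget is exhausted.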
+ +## Token Usage Estimates + +Below is a rough estimate of the total number of tokens consumed by some variations of the eval, including both input and output tokens: + +Variant | Solver | Model | Prompt tokens | Completion tokens | Total tokens +| --- | --- | --- | --- | --- | --- | +standard | direct | gpt-4-turbo-preview | 2,502,067 | 52,879 | 2,554,946 +standard | direct | gpt-4-base | 13,197,212 | 2,814,623 | 16,011,835 +standard | direct | gpt-3.5-turbo | 2,670,866 | 57,917 | 2,728,783 +standard | cot | gpt-4-turbo-preview | 73,765,861 | 1,881,455 | 75,647,316 +standard | cot | gpt-4-base | 51,777,817 | 6,397,472 | 58,175,289 +standard | cot | gpt-3.5-turbo | 38,236,500 | 199,831 | 38,436,331 +standard | cot | llama-2-70b | 6,785,634 | 581,421 | 7,367,055 +standard | cot | mixtral-8x7b-instruct | 175,956,903 | 5,327,393 | 181,284,296 +shortlist | direct | gpt-4-turbo-preview | 1,237,172 | 28,351 | 1,265,523 +shortlist | direct | gpt-4-base | 11,034,903 | 2,133,487 | 13,168,390 +shortlist | direct | gpt-3.5-turbo | 1,704,154 | 36,356 | 1,740,510 +shortlist | cot | gpt-4-turbo-preview | 10,951,215 | 545,945 | 11,497,160 +shortlist | cot | gpt-4-base | 45,591,363 | 596,429 | 46,187,792 +shortlist | cot | gpt-3.5-turbo | 19,798,263 | 165,731 | 19,963,994 +shortlist | cot | llama-2-70b | 5,980,667 | 528,879 | 6,509,546 +shortlist | cot | mixtral-8x7b-instruct | 143,646,924 | 4,315,806 | 147,962,730 + + +## Version History +v0: Initial version released + + +## Contribution statement + +Eval design, implementation, and results evaluation were primarily conducted by Andrei Alexandru with contributions from Dane Sherburn, under the guidance of (alphabetically by last-name) Steven Adler, James Aung, and Chan Jun Shern, who scoped and managed the broader research project, including input on evaluation design, results analysis, and interpretation.
+ + diff --git a/evals/elsuite/twenty_questions/scripts/make_plots.py b/evals/elsuite/twenty_questions/scripts/make_plots.py new file mode 100644 index 0000000000..f07b76da5a --- /dev/null +++ b/evals/elsuite/twenty_questions/scripts/make_plots.py @@ -0,0 +1,142 @@ +import argparse +from pathlib import Path +import os + +import matplotlib.pyplot as plt +import pandas as pd +import seaborn as sns + +from evals.utils import log_utils + +PLOT_TITLES_BY_METRIC = { + "score": "Score", + "winrate": "Win-rate", + "bootstrap_std": "Bootstrapped standard deviation", + "average_num_guesses": "Average guesses per sample", + "average_num_questions": "Average questions per sample", + "average_num_violations": "Average rule violations per sample", + "average_num_gamemaster_refusals": "Average gamemaster refusals per sample", + "average_num_incorrect_guesses": "Average incorrect guesses per sample", + "average_word_difficulty": "Average word difficulty", +} + +HUMAN_BASELINE = { + "standard": { + "winrate": 0.0333, + "score": 0.1333, + "average_num_guesses": 0.3666, + "average_num_questions": 19.8666, + "average_num_violations": 0.62, + "average_num_gamemaster_refusals": 0.28, + "average_num_incorrect_guesses": 0.3333, + "average_word_difficulty": 2.2333, + }, + "shortlist": { + "winrate": 1, + "score": 14.1388, + "average_num_guesses": 1.8611, + "average_num_questions": 5.8611, + "average_num_violations": 0.1944, + "average_num_gamemaster_refusals": 0.1111, + "average_num_incorrect_guesses": 0.8611, + "average_word_difficulty": 2.2777, + } +} + +UNIT_METRICS = ["winrate"] + +def extract_metrics(datadir: Path) -> pd.DataFrame: + df_rows = [] + # There are two eval variants: standard and shortlist. + for variant in os.listdir(datadir): + for path, results in sorted(list(log_utils.get_final_results_from_dir(f"{datadir}/{variant}").items())): + spec = log_utils.extract_spec(path) + solver_path = Path(spec["completion_fns"][0]) + model = solver_path.name + solver = solver_path.parent.name + # Remove root section of path, which is the eval name + solver_path = solver_path.relative_to(solver_path.parts[0]) + df_rows.append({"solver": solver, "model": model, "variant": variant, **results}) + df = pd.DataFrame(df_rows) + df.rename(columns={"accuracy": "winrate"}, inplace=True) + df.sort_values(by=["variant", "model", "solver"], inplace=True) + df.to_csv(datadir / "results.csv", index=False) + + return df + +def make_plot(df: pd.DataFrame, outpath: Path, metric="score", variant="standard"): + df = df.round(2) + plt.figure() + sns.set_theme(style="whitegrid") + + def compute_sem(x): + sem = x.std() / (len(x) ** 0.5) + sem2 = sem * 2 # 95% confidence interval + lower = max(0, (x.mean() - sem2).round(2)) + upper = (x.mean() + sem2).round(2) + return lower, upper + + + # Plotting + sns.set(style="whitegrid") + ax = sns.barplot(x=metric, y="model", hue="solver", data=df, errorbar=compute_sem, capsize=0.1) + for container in ax.containers: + ax.bar_label(container, fmt="{:.2f}", label_type="edge", padding=15) + + ax.axvline(HUMAN_BASELINE[variant][metric], color="red", linestyle="--") + + # A bunch of tweaks to make individual plots look nice. 
+ if variant == "shortlist" and metric == "winrate": + plt.text(HUMAN_BASELINE[variant][metric] - 0.35, .5, "Human baseline", color="red", fontsize=12, ha="left") + elif variant == "standard" and metric == "average_num_questions": + plt.text(HUMAN_BASELINE[variant][metric] - 7, .5, "Human baseline", color="red", fontsize=12, ha="left") + else: + plt.text(HUMAN_BASELINE[variant][metric] + 0.05, .5, "Human baseline", color="red", fontsize=12, ha="left") + + # Some of the metrics are in [0, 1]. + if metric in UNIT_METRICS: + plt.xlim(0, 1.1) + + if metric in ("score", "average_num_questions"): + plt.xlim(0, 20.1) + + if metric == "average_word_difficulty": + plt.xlim(0, 3.1) # 6 is the maximum word difficulty in the dataset. + + if metric in ("score", "winrate"): + plt.legend(loc="lower right") + + plt.title(PLOT_TITLES_BY_METRIC[metric] + f" ({variant} variant)") + plt.xlabel(metric) + plt.tight_layout() + plt.savefig(outpath) + plt.close() + +if __name__ == "__main__": + parser = argparse.ArgumentParser() + parser.add_argument("--log-dir", "-d", type=str, required=True) + parser.add_argument("--out-dir", "-o", type=str, default="./outputs") + args = parser.parse_args() + log_dir = Path(args.log_dir) + out_dir = Path(args.out_dir) + + out_dir.mkdir(exist_ok=True, parents=True) + + df = extract_metrics(log_dir) + + # Rename some of the solver values so they can be represented in the same plot. + df.loc[df['solver'] == 'cot_hhh', 'solver'] = 'cot' + df.loc[df['solver'] == 'hhh', 'solver'] = 'direct' + + for variant in df['variant'].unique(): + df_per_variant = df[df['variant'] == variant] + + print(f"Plotting all metrics for {variant} variant...") + + core_metrics = ["score", "winrate"] + auxiliary_metrics = ["average_num_guesses", "average_num_questions", "average_num_violations", "average_num_gamemaster_refusals", "average_num_incorrect_guesses", "average_word_difficulty"] + for metric in core_metrics + auxiliary_metrics: + make_plot(df_per_variant[["model", "solver", metric]].copy(), + out_dir / f"{variant}_{metric}.png", + metric, + variant) \ No newline at end of file diff --git a/evals/elsuite/twenty_questions/scripts/run_experiments.sh b/evals/elsuite/twenty_questions/scripts/run_experiments.sh new file mode 100644 index 0000000000..4b8718d607 --- /dev/null +++ b/evals/elsuite/twenty_questions/scripts/run_experiments.sh @@ -0,0 +1,60 @@ +logdir=./logs +outputdir=./outputs + +timestamp=$(date +%Y%m%d_%H%M%S) +logpathbase=$logdir/$timestamp + +num_repeats=1 + +# Check for --num_repeats argument +for arg in "$@" +do + if [[ $arg == --num_repeats=* ]]; then + num_repeats="${arg#*=}" + fi +done + +echo Num repeats is: $num_repeats +echo Running experiments and logging to $logpathbase + +declare -a SOLVERS=( + # Solvers for gpt-3.5-turbo + "generation/direct/gpt-3.5-turbo" + "twenty_questions/cot/gpt-3.5-turbo" + + # # Solvers for gpt-4-turbo-preview + "generation/direct/gpt-4-turbo-preview" + "twenty_questions/cot/gpt-4-turbo-preview" + + # # Solvers for gpt-4-base + "generation/hhh/gpt-4-base" + "twenty_questions/cot_hhh/gpt-4-base" +) + +if [ ! -d "$logpathbase/standard" ]; then + mkdir -p "$logpathbase/standard" +fi + +if [ ! -d "$logpathbase/standard" ]; then + mkdir -p "$logpathbase/shortlist" +fi + + for solver in "${SOLVERS[@]}" + do + for ((i=1;i<=num_repeats;i++)) + do + echo "Running $solver, iteration $i, standard variant." + oaieval $solver twenty_questions.full --record_path "$logpathbase/standard/$solver-$i.log" + + echo "Running $solver, iteration $i, shortlist variant." 
+ oaieval $solver twenty_questions.shortlist.full --record_path "$logpathbase/shortlist/$solver-$i.log" + done + done +done + +echo Done running experiments, all logs in $logpathbase + +echo Producing plots, outputs to $outputdir + +# Produce results +python scripts/make_plots.py --log-dir $logpathbase --out-dir $outputdir \ No newline at end of file diff --git a/evals/elsuite/twenty_questions/test_utils.py b/evals/elsuite/twenty_questions/test_utils.py new file mode 100644 index 0000000000..069b86c8ad --- /dev/null +++ b/evals/elsuite/twenty_questions/test_utils.py @@ -0,0 +1,27 @@ +from utils import format_msg, format_msgs +from evals.task_state import Message + +def test_format_msg(): + msg = Message(content="I'm a message", role="guesser") + + assert format_msg(msg, "guesser") == Message(content="I'm a message", role="assistant") + assert format_msg(msg, "gamemaster") == Message(content="I'm a message", role="user") + +def test_format_msgs(): + msgs = [ + Message(content="I'm a guesser message", role="guesser"), + Message(content="I'm a gamemaster message", role="gamemaster"), + Message(content="I'm another guesser message", role="guesser"), + ] + + assert format_msgs(msgs, "guesser") == [ + Message(content="I'm a guesser message", role="assistant"), + Message(content="I'm a gamemaster message", role="user"), + Message(content="I'm another guesser message", role="assistant"), + ] + + assert format_msgs(msgs, "gamemaster") == [ + Message(content="I'm a guesser message", role="user"), + Message(content="I'm a gamemaster message", role="assistant"), + Message(content="I'm another guesser message", role="user"), + ] \ No newline at end of file diff --git a/evals/elsuite/twenty_questions/utils.py b/evals/elsuite/twenty_questions/utils.py new file mode 100644 index 0000000000..169dd0a977 --- /dev/null +++ b/evals/elsuite/twenty_questions/utils.py @@ -0,0 +1,69 @@ +from typing import Literal, List, Optional +from evals.task_state import Message, TaskState + +PROMPTS = { + "guesser": "You are about to play the game '20 questions'. The other player has thought of a noun which you should try to guess. You can ask 20 yes/no questions, to which they will answer 'yes', 'no', or 'skip' (if they cannot answer your question). You should try to guess the word as soon as possible, using the least amount of questions. To guess a word, type [GUESS ] – for example to guess the word 'sibling', output [GUESS sibling]. Your score will be 0 if you do not guess correctly, and {max_questions} minus the number of questions you asked if you guess correctly. Start by asking your first question.", + "guesser_shortlist": "You are about to play the game '20 questions'. The other player has thought of a noun which you should try to guess. The noun is in this list: {shortlist}.\nYou can ask 20 yes/no questions, to which they will answer 'yes', 'no', or 'skip' (if they cannot answer your question). You should try to guess the word as soon as possible, using the least amount of questions. To guess a word, type [GUESS ] – for example to guess the word 'sibling', output [GUESS sibling]. Your score will be 0 if you do not guess correctly, and {max_questions} minus the number of questions you asked if you guess correctly. Start by asking your first question.", + "gamemaster": "You are about to play the game '20 questions'. You'll be given a noun, which the other player will try to guess. They will ask 20 yes/no questions to try to guess the word. You will answer their questions with 'yes', 'no', or 'skip'. 
The word is: {word}.", + "incorrect_guess": "Guess {guess} is incorrect. Ask more questions, or make another guess!", + "rule_violation": "Your output was neither a guess nor a question. Try again! You can ask a yes/no question, or make a guess by outputting [GUESS ]." +} + +def generate_task_state_for(role: Literal["guesser", "gamemaster"], conversation: list[Message], word: Optional[str] = None, max_questions: int = 20, shortlist: Optional[List[str]] = None) -> TaskState: + """Generates a TaskState for the given role and conversation.""" + if role == "guesser": + prompt = PROMPTS["guesser"].format(max_questions=max_questions) if shortlist is None else PROMPTS["guesser_shortlist"].format(max_questions=max_questions, shortlist=shortlist) + elif role == "gamemaster": + prompt = PROMPTS[role].format(word=word) + else: + raise ValueError(f"Invalid role: {role}") + + formatted_conversation = format_msgs(conversation, role) + + return TaskState( + task_description=prompt, + messages=formatted_conversation, + ) + + +def format_msgs( + messages: list[Message], + role: Literal["guesser", "gamemaster"], +) -> list[Message]: + """Format messages from the perspective of the `role`.""" + new_messages = [format_msg(msg, role) for msg in messages] + + # post-conditions + for m in new_messages: + assert m.role in ["user", "assistant", "system"] + + return new_messages + +def format_msg(msg: Message, role: Literal["guesser", "gamemaster"]) -> Message: + """Formats a single message from the perspective of the `role`.""" + + # body + is_others_msg = role not in msg.role + new_content = msg.content + + if is_others_msg: + new_role = "user" + elif is_system_msg(msg): + new_role = "system" + else: + new_role = "assistant" + + new_message = Message(content=new_content, role=new_role) + + # post-conditions + assert isinstance(new_message.content, str) + assert new_message.role in ["user", "assistant", "system"] + + return new_message + +def is_system_msg(m: Message) -> bool: + assert isinstance(m, Message), "Message must be a Message type." + assert hasattr(m, "role"), "Message must have a role." + assert isinstance(m.role, str), "Message role must be a string." 
+ + return m.role == "system" \ No newline at end of file diff --git a/evals/registry/data/twenty_questions/LICENSE b/evals/registry/data/twenty_questions/LICENSE new file mode 100644 index 0000000000..7b971d365d --- /dev/null +++ b/evals/registry/data/twenty_questions/LICENSE @@ -0,0 +1,3 @@ +lexical_simplification: +MIT License: https://opensource.org/licenses/MIT +Source: https://github.com/mounicam/lexical_simplification \ No newline at end of file diff --git a/evals/registry/data/twenty_questions/dataset.jsonl b/evals/registry/data/twenty_questions/dataset.jsonl new file mode 100644 index 0000000000..ea11e6a68e --- /dev/null +++ b/evals/registry/data/twenty_questions/dataset.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:8a8358c42ef70c2c48c6bb2e214787e968cd1b092daeb1dd572f942bd7146bff +size 7664 diff --git a/evals/registry/data/twenty_questions/lexicon_nouns.jsonl b/evals/registry/data/twenty_questions/lexicon_nouns.jsonl new file mode 100644 index 0000000000..869a13feb1 --- /dev/null +++ b/evals/registry/data/twenty_questions/lexicon_nouns.jsonl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:754d1f85637de87dac8aadfa5163f073d65289f27677031e765334c786742171 +size 112218 diff --git a/evals/registry/evals/twenty_questions.yaml b/evals/registry/evals/twenty_questions.yaml new file mode 100644 index 0000000000..af3491ffcf --- /dev/null +++ b/evals/registry/evals/twenty_questions.yaml @@ -0,0 +1,60 @@ +twenty_questions: + id: twenty_questions.full + description: Tests models on the 20 questions game. + metrics: [score, accuracy, average_num_guesses, average_num_questions, average_num_violations, average_num_gamemaster_refusals, average_num_incorrect_guesses, average_word_difficulty] + +twenty_questions.full: + class: evals.elsuite.twenty_questions.eval:TwentyQuestions + args: + samples_jsonl: twenty_questions/dataset.jsonl + gamemaster_spec: twenty_questions/gamemaster/gpt-4-turbo-preview + max_questions: 20 + max_replies: 40 + +twenty_questions.shortlist.full: + class: evals.elsuite.twenty_questions.eval:TwentyQuestions + args: + samples_jsonl: twenty_questions/dataset.jsonl + gamemaster_spec: twenty_questions/gamemaster/gpt-4-turbo-preview + shortlist_variant: True + max_questions: 20 + max_replies: 40 + +twenty_questions.dev5: + class: evals.elsuite.twenty_questions.eval:TwentyQuestions + args: + samples_jsonl: twenty_questions/dataset.jsonl + gamemaster_spec: twenty_questions/gamemaster/gpt-4-turbo-preview + n_samples: 5 + max_questions: 20 + max_replies: 40 + +twenty_questions.shortlist.dev5: + class: evals.elsuite.twenty_questions.eval:TwentyQuestions + args: + samples_jsonl: twenty_questions/dataset.jsonl + gamemaster_spec: twenty_questions/gamemaster/gpt-4-turbo-preview + n_samples: 5 + shortlist_variant: True + num_shortlist_items: 5 + max_questions: 20 + max_replies: 40 + +twenty_questions.dev100: + class: evals.elsuite.twenty_questions.eval:TwentyQuestions + args: + samples_jsonl: twenty_questions/dataset.jsonl + gamemaster_spec: twenty_questions/gamemaster/gpt-4-turbo-preview + n_samples: 100 + max_questions: 20 + max_replies: 40 + +twenty_questions.shortlist.dev100: + class: evals.elsuite.twenty_questions.eval:TwentyQuestions + args: + samples_jsonl: twenty_questions/dataset.jsonl + gamemaster_spec: twenty_questions/gamemaster/gpt-4-turbo-preview + n_samples: 100 + shortlist_variant: True + max_questions: 20 + max_replies: 40 diff --git a/evals/registry/solvers/twenty_questions.yaml 
b/evals/registry/solvers/twenty_questions.yaml new file mode 100644 index 0000000000..81cc65468c --- /dev/null +++ b/evals/registry/solvers/twenty_questions.yaml @@ -0,0 +1,80 @@ +# CoT solvers with a custom extract template. +twenty_questions/cot/gpt-3.5-turbo: + class: evals.solvers.nested.cot_solver:CoTSolver + args: + cot_solver: + class: evals.solvers.openai_solver:OpenAISolver + args: + completion_fn_options: + model: gpt-3.5-turbo + extra_options: + temperature: 1 + max_tokens: 512 + extract_template: &extract_template Given the above reasoning, ask a question or make a guess following the task instructions. + extract_solver: + class: evals.solvers.openai_solver:OpenAISolver + args: + completion_fn_options: + model: gpt-3.5-turbo + extra_options: + temperature: 1 + max_tokens: 512 + +twenty_questions/cot/gpt-4-turbo-preview: + class: evals.solvers.nested.cot_solver:CoTSolver + args: + cot_solver: + class: evals.solvers.openai_solver:OpenAISolver + args: + completion_fn_options: + model: gpt-4-turbo-preview + extra_options: + temperature: 1 + max_tokens: 512 + extract_template: *extract_template + extract_solver: + class: evals.solvers.openai_solver:OpenAISolver + args: + completion_fn_options: + model: gpt-4-turbo-preview + extra_options: + temperature: 1 + max_tokens: 512 + +twenty_questions/cot_hhh/gpt-4-base: + class: evals.solvers.nested.cot_solver:CoTSolver + args: + cot_solver: + class: evals.solvers.nested.hhh_solver:HHHSolver + args: + solver: + class: evals.solvers.openai_solver:OpenAISolver + args: + completion_fn_options: + model: gpt-4-base + extra_options: + temperature: 1 + max_tokens: 512 + extract_template: *extract_template + extract_solver: + class: evals.solvers.nested.hhh_solver:HHHSolver + args: + solver: + class: evals.solvers.openai_solver:OpenAISolver + args: + completion_fn_options: + model: gpt-4-base + extra_options: + temperature: 1 + max_tokens: 512 + +# Game-master uses a fixed solver, currently set to the latest-generation model. +twenty_questions/gamemaster/gpt-4-turbo-preview: + class: evals.solvers.openai_solver:OpenAISolver + args: + completion_fn_options: + model: gpt-4-turbo-preview + extra_options: + temperature: 0 + max_tokens: 1 + valid_answers: ["yes", "no", "skip"] \ No newline at end of file
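Finally, for readers who want to see how the pieces added in this patch fit together, below is an illustrative sketch of a single question/answer turn of the dialogue loop. It assumes the patch is installed as part of the `evals` package; `FakeGuesser` and `FakeGamemaster` are hypothetical stand-ins for real solvers (anything callable that returns an object with an `.output` string, as in `eval.py`), and the hidden word `"potato"` is just an example.

```python
from dataclasses import dataclass

from evals.elsuite.twenty_questions.utils import generate_task_state_for
from evals.task_state import Message


@dataclass
class FakeResponse:
    output: str


class FakeGuesser:
    def __call__(self, task_state):
        # A real solver would condition on task_state; this stand-in does not.
        return FakeResponse("Is it a living thing?")


class FakeGamemaster:
    def __call__(self, task_state):
        return FakeResponse("no")


conversation = []
guesser, gamemaster = FakeGuesser(), FakeGamemaster()

# The guesser sees the conversation from its own perspective and asks a question.
guesser_state = generate_task_state_for("guesser", conversation, max_questions=20)
conversation.append(Message(content=guesser(guesser_state).output, role="guesser"))

# The gamemaster sees the same conversation from its perspective and answers
# 'yes', 'no', or 'skip' about the hidden word.
gamemaster_state = generate_task_state_for("gamemaster", conversation, word="potato")
conversation.append(Message(content=gamemaster(gamemaster_state).output, role="gamemaster"))
```

The real `_conversation_loop` in `eval.py` wraps this exchange with guess detection, rule-violation handling, and the question/reply budgets described in the readme.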