LLM Additional Tests: Long Prompts Repository

Overview

This repository contains the extended versions of prompts used in the research paper titled "Challenging LLMs Beyond Information Retrieval: Reasoning Degradation with Long Context Windows." The study explores how Large Language Models (LLMs) handle reasoning tasks as input size increases, and documents how their performance degrades as the context window lengthens.

The paper presents three additional tests:

  1. Highlight Inefficient Code
  2. Decrypting Cryptography from a Clue
  3. Unlock $100.00

These tests were designed to challenge LLMs on different reasoning tasks, demonstrating that while the models perform well with shorter prompts, their accuracy diminishes as prompt length increases.

Purpose

The primary goal of this repository is to provide transparency and reproducibility for researchers and practitioners interested in the study of LLM performance. It includes the long versions of the prompts used in these additional tests, which were not fully presented in the published paper due to space constraints.

Structure

The repository is organized as follows:

  • /prompts/: Contains a text file for each of the three tests; each file holds the long version of the prompt used in the paper.

How to Use

  1. Clone the Repository:

    git clone https://github.com/natanaelwf/LLM_AdditionalTests_LongPrompts.git
  2. Explore the Prompts: Navigate to the /prompts/ directory to review the different prompts.
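
Since the paper's central variable is prompt length, it can be useful to measure the prompts before feeding them to a model. The sketch below is a minimal, hypothetical helper (not part of the repository) that reports the character and approximate word counts of each file in /prompts/; the `*.txt` glob pattern is an assumption, so adjust it to the actual file names.

```python
from pathlib import Path

def prompt_stats(path):
    """Return character and approximate word counts for a prompt file."""
    text = Path(path).read_text(encoding="utf-8")
    return {"chars": len(text), "words": len(text.split())}

if __name__ == "__main__":
    # Hypothetical layout: one text file per test inside /prompts/.
    for prompt_file in sorted(Path("prompts").glob("*.txt")):
        stats = prompt_stats(prompt_file)
        print(f"{prompt_file.name}: {stats['chars']} chars, {stats['words']} words")
```

Character count is only a rough proxy for context-window usage; for model-specific token counts, a tokenizer matching the model under test would be needed.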

License

This repository is licensed under the MIT License.
