
Add request rate limiting for users on OpenAI Usage tier 1 #31

Open · jamiehaywood opened this issue Jun 13, 2024 · 0 comments
Labels: help wanted (Extra attention is needed)

Comments


jamiehaywood commented Jun 13, 2024

My current usage limits are pretty low (I'm not a prolific user of the OpenAI API). I haven't looked into the source code yet, but I suspect that micro-agent is sending more requests and/or using more tokens than Usage tier 1 allows:


| Model | Token limits | Request and other limits | Batch queue limits |
| --- | --- | --- | --- |
| gpt-4o | 30,000 TPM | 500 RPM | 90,000 TPD |
| gpt-3.5-turbo | 60,000 TPM | 500 RPM · 10,000 RPD | 200,000 TPD |
| gpt-4 | 10,000 TPM | 500 RPM · 10,000 RPD | 100,000 TPD |
| gpt-4-turbo | 30,000 TPM | 500 RPM | 90,000 TPD |
| text-embedding-3-small | 1,000,000 TPM | 3,000 RPM | 3,000,000 TPD |
| dall-e-3 | | 5 images per minute | |
| tts-1 | | 50 RPM | |
| whisper-1 | | 50 RPM | |
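
For illustration only, here is a minimal sketch of what request-level throttling could look like in a TypeScript CLI like micro-agent, assuming the official `openai` Node SDK (`chat.completions.create`). The `RequestRateLimiter` class and `rateLimitedChat` wrapper are hypothetical names, not anything in micro-agent today, and this only caps RPM; staying under the TPM/TPD token limits above would need separate token accounting.

```typescript
import OpenAI from "openai";

// Sliding-window limiter for requests per minute (illustrative sketch only).
class RequestRateLimiter {
  private timestamps: number[] = [];

  constructor(private readonly maxRequestsPerMinute: number) {}

  // Resolves once a request slot is free inside the 60-second window.
  async acquire(): Promise<void> {
    const windowMs = 60_000;
    for (;;) {
      const now = Date.now();
      // Drop requests that have aged out of the sliding window.
      this.timestamps = this.timestamps.filter((t) => now - t < windowMs);
      if (this.timestamps.length < this.maxRequestsPerMinute) {
        this.timestamps.push(now);
        return;
      }
      // Sleep until the oldest request leaves the window, then re-check.
      const waitMs = windowMs - (now - this.timestamps[0]) + 1;
      await new Promise((resolve) => setTimeout(resolve, waitMs));
    }
  }
}

// 500 RPM matches the tier 1 gpt-4o row above; token limits (TPM/TPD) are not
// enforced here and would need their own accounting.
const limiter = new RequestRateLimiter(500);

// Hypothetical wrapper: every chat completion waits for a slot first.
async function rateLimitedChat(
  openai: OpenAI,
  params: OpenAI.Chat.ChatCompletionCreateParams,
) {
  await limiter.acquire();
  return openai.chat.completions.create(params);
}
```

As a complement, 429 responses could be retried with backoff; the openai Node SDK already retries rate-limit errors by default (configurable via `maxRetries`), but that doesn't stop a burst of requests from blowing past the tier 1 TPM limit in the first place.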
@steve8708 added the help wanted label Jun 13, 2024