Seems like we didn't handle the error chunk here? #25
@Nutlope Hello, sorry to bother you. I'm not sure what "fragmented into multiple chunks" means exactly, because when I tested the fetch API locally, I never ran into a situation where the chunks were truncated. Why would it happen after deploying to a Vercel edge function (without using this code)?

```ts
// stream response (SSE) from OpenAI may be fragmented into multiple chunks
// this ensures we properly read chunks and invoke an event for each SSE event stream
const parser = createParser(onParse);

// https://web.dev/streams/#asynchronous-iteration
for await (const chunk of res.body as any) {
  parser.feed(decoder.decode(chunk));
}
```
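For context, the fragmentation being asked about can be reproduced directly. Here is a minimal sketch (assuming the v1-style callback API of `eventsource-parser` that the snippet above uses; the event payload is made up for illustration): a single SSE event may arrive split across two network chunks, neither of which is parseable on its own, and the parser buffers partial input until the event is complete.

```ts
import { createParser, type ParsedEvent, type ReconnectInterval } from "eventsource-parser";

// Log each complete SSE event the parser manages to reassemble.
const parser = createParser((event: ParsedEvent | ReconnectInterval) => {
  if (event.type === "event") {
    console.log("complete event:", event.data);
  }
});

// One logical event, split across two chunks the way a proxy or edge
// runtime might deliver it. Neither half is a complete SSE event.
parser.feed('data: {"choices":[{"te');
parser.feed('xt":"hello"}]}\n\n'); // the callback fires exactly once, here
```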
Hi @shezhangzhang,

Are you saying there may be a case where the completion API returns a JSON payload indicating some sort of internal error in the middle of the SSE stream? Do you have any links to documentation that mentions this? Or, if you're talking about general API errors where the API request as a whole fails with a non-successful status code (e.g., rate limit, 500 error, etc.), then yes, this example is also not checking the response status code, but that can easily be added. As for the second question, that's exactly why we're using the parser.
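For reference, a minimal sketch of that status-code check (the URL, headers, and request body here are illustrative assumptions, not the repo's exact request):

```ts
const payload = {
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Hello" }],
  stream: true,
};

const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify(payload),
});

// Whole-request failures (rate limit, 5xx, etc.) come back as a plain JSON
// error body with a non-2xx status, not as an SSE stream, so check the
// status before trying to read res.body as a stream.
if (!res.ok) {
  const errText = await res.text();
  throw new Error(`OpenAI API error ${res.status}: ${errText}`);
}
```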
@smaeda-ks Thank you very much for clarifying this! I have checked the code in question:

twitterbio/utils/OpenAIStream.ts, line 74 in e1890ba

If the chunk is error data (`{"error":{}}`), the `onParse()` function will NOT be invoked. We need an error handler here.
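One way to handle that case, as a minimal sketch (the surrounding `ReadableStream`/`controller` follows the pattern in `OpenAIStream.ts`, but the error-detection logic itself is an assumption about how the error body arrives, not the repo's code): since the `{"error":{}}` body is plain JSON rather than an SSE `data:` line, `onParse` never fires for it, so the raw chunk has to be inspected before it is fed to the parser.

```ts
// Inside `new ReadableStream({ async start(controller) { ... } })`:
for await (const chunk of res.body as any) {
  const text = decoder.decode(chunk);

  // Assumption: an error arrives as a bare JSON object like {"error": {...}}
  // with no "data:" prefix, so the SSE parser would silently swallow it.
  if (text.trimStart().startsWith("{")) {
    try {
      const maybeError = JSON.parse(text);
      if (maybeError.error) {
        controller.error(new Error(maybeError.error.message ?? "OpenAI stream error"));
        return;
      }
    } catch {
      // Not a complete JSON object; let the parser buffer it as usual.
    }
  }

  parser.feed(text);
}
```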