
Startup after deploying the project #5165

Open
long-hands opened this issue Dec 30, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@long-hands

Error output: INFO: 127.0.0.1:40600 - "POST /chat/chat/completions HTTP/1.1" 200 OK
2024-12-30 13:57:43,327 httpx 36435 INFO HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 200 OK"
2024-12-30 13:57:43,330 openai._base_client 36383 INFO Retrying request to /chat/completions in 0.387410 seconds
2024-12-30 13:57:43,720 openai._base_client 36383 INFO Retrying request to /chat/completions in 0.801640 seconds
2024-12-30 13:57:44.523 | ERROR | chatchat.server.api_server.openai_routes:generator:105 - openai request error: Connection error.

What is going on with that last line? I never called OpenAI; I am using a domestic (Chinese) large model.

@long-hands long-hands added the bug Something isn't working label Dec 30, 2024

dosubot bot commented Dec 30, 2024

Hey @long-hands, I'm here to help you with any bugs, questions, or contributions. Let's tackle this issue together!

I found a similar discussion that might be relevant to your issue: 为什么每次执行agent都会自动调用 这两个接口POST /v1/chat/completions HTTP/1.1 ("Why does every agent run automatically call these two endpoints: POST /v1/chat/completions HTTP/1.1") [1]. That discussion also concerns unexpected calls to the OpenAI-style /chat/completions endpoint, which matches your situation: you see connection errors attributed to "openai" even though you are not using OpenAI directly.
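Note that the two `Retrying request` lines in your log are emitted by the `openai` Python client itself, which Langchain-Chatchat uses to talk to any OpenAI-compatible backend, so they appear even when no OpenAI-hosted model is involved. The growing delays (≈0.39 s, then ≈0.80 s) are consistent with exponential backoff with jitter. A minimal sketch of that pattern, assuming a 0.5 s base delay and 25% jitter (illustrative values for this sketch, not taken from the client's source):

```python
import random

def backoff_delays(max_retries: int = 2, base: float = 0.5, cap: float = 8.0) -> list[float]:
    """Exponential backoff with jitter: base * 2**attempt seconds,
    scaled by a random factor in (0.75, 1.0] and capped at `cap`."""
    return [min(base * 2 ** attempt, cap) * (1 - 0.25 * random.random())
            for attempt in range(max_retries)]
```

For attempt 0 this yields a delay in (0.375, 0.5] s and for attempt 1 a delay in (0.75, 1.0] s, which lines up with the 0.387 s and 0.801 s waits in the log above: the retries are routine client behavior, and the real problem is the final connection error.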

To resolve this, you might want to check your configuration settings to ensure that the correct model is being used. You can configure Langchain-Chatchat to use a specific language model by modifying the model configuration using the chatchat-config command. Make sure that the model inference framework is running and the required models are loaded. You can set the model platforms and specify the models you want to use with a command like:

chatchat-config model --set_model_platforms "[{
    \"platform_name\": \"your_platform_name\",
    \"platform_type\": \"your_platform_type\",
    \"api_base_url\": \"http://your_api_base_url\",
    \"llm_models\": [
        \"your_model_name\"
    ]
}]"

Verify your configuration with:

chatchat-config model --show

Additionally, ensure that the CHATCHAT_PROXY_URL in your .env file is correctly set to point to your intended service URL, not the OpenAI endpoint [2].
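Since the final error is a plain connection failure, it is worth verifying that the configured `api_base_url` actually answers HTTP at all before digging further. A generic stdlib check you can run from the same machine (the helper name is ours for illustration, not part of Langchain-Chatchat):

```python
import urllib.error
import urllib.request

def is_reachable(url: str, timeout: float = 3.0) -> bool:
    """Return True if any HTTP server answers at `url`, even with an
    error status; False only if the connection itself fails."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # server responded (e.g. 404): it is reachable
    except (urllib.error.URLError, OSError):
        return False  # refused, timed out, or DNS failure
```

If this returns False for your platform's `api_base_url`, the model inference service is not running or is listening on a different host/port than the one in your configuration.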


@Woodude97

I am seeing the same thing. Did you manage to solve it?
