feat: implement inference server by using vllm (#624) #225
Triggered via push on October 24, 2024 at 15:01
Status: Cancelled
Total duration: 6d 9h 45m 24s
Jobs:
determine-models (0s)
Matrix: build-models
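The run graph shows a common fan-out pattern: a determine-models job computes a list of models, and a build-models matrix job then runs once per entry. Below is a minimal sketch of how such a workflow is typically wired in GitHub Actions; the models.json path, the runner image, and the build.sh entry point are assumptions for illustration, not this repository's actual workflow.

# Hypothetical sketch of the determine-models -> build-models fan-out.
# Job names mirror the run graph above; file paths and commands are assumed.
name: build-models

on: push

jobs:
  determine-models:
    runs-on: ubuntu-latest
    outputs:
      models: ${{ steps.list.outputs.models }}
    steps:
      - uses: actions/checkout@v4
      - id: list
        # Emit the model list as a JSON array for the matrix job below
        # (assumes a models.json file exists at the repo root).
        run: echo "models=$(cat models.json)" >> "$GITHUB_OUTPUT"

  build-models:
    needs: determine-models
    runs-on: ubuntu-latest
    strategy:
      matrix:
        model: ${{ fromJSON(needs.determine-models.outputs.models) }}
    steps:
      - uses: actions/checkout@v4
      # Assumed build entry point; replace with the real build command.
      - run: ./build.sh "${{ matrix.model }}"

With this wiring, fromJSON turns the JSON string emitted by the first job into the matrix, so adding a model to models.json fans out a new build without editing the workflow itself.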

Annotations
1 error
determine-models: Canceled by the server.