feat: implement inference server by using vllm #221
Triggered via pull request on October 23, 2024 23:18
Status: Cancelled
Total duration: 16m 23s
Artifacts: –
preset-image-build-1ES.yml
on: pull_request
Jobs:
- determine-models (0s)
- Matrix: build-models
Annotations: 1 error
determine-models: Canceling since a higher priority waiting request for 'Build and Push Preset Models 1ES-zhuangqh/support-vllm' exists
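This cancellation message is what GitHub Actions emits when a workflow-level `concurrency` group with `cancel-in-progress` is configured and a newer run for the same group (here, workflow name plus branch) is queued. A minimal sketch of such a configuration follows; the group expression and job body are assumptions for illustration, not the actual contents of `preset-image-build-1ES.yml`:

```yaml
# Hypothetical sketch; the real preset-image-build-1ES.yml may differ.
name: Build and Push Preset Models 1ES

on: pull_request

# All runs with the same group name queue together; when a newer run
# arrives, GitHub cancels the older pending/in-progress run with the
# "Canceling since a higher priority waiting request ... exists" message.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref }}
  cancel-in-progress: true

jobs:
  determine-models:
    runs-on: ubuntu-latest
    steps:
      - run: echo "determine model build matrix"
```

Under this assumption, the group name in the error ('Build and Push Preset Models 1ES-zhuangqh/support-vllm') is the workflow name concatenated with the PR's head branch, so pushing a new commit to the branch cancels the earlier run.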