size mismatch for vision_model.post_layernorm.weight #355
I also have the same question. Could the authors' team help us fix this problem?
I am attempting to load CLIPConfig to resolve an issue in the following code:
I am now encountering only one dimension mismatch error when trying to initialize the weights: `ValueError: Trying to set a tensor of shape torch.Size([729, 1152]) in "weight" (which has shape torch.Size([730, 1152])), this looks incorrect.`
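A quick way to narrow down errors like this is to compare parameter shapes between the model and the checkpoint before loading, so you can see every mismatched tensor at once instead of failing on the first one. The sketch below is a minimal, hypothetical illustration (the parameter name and shapes are taken from the error message in this thread; it compares plain shape dictionaries rather than real `state_dict` objects):

```python
# Hypothetical sketch: diagnose shape mismatches between a model's expected
# parameter shapes and a checkpoint's tensor shapes before load_state_dict.
# With real models you would build these dicts from
# {k: tuple(v.shape) for k, v in model.state_dict().items()} and likewise
# for the loaded checkpoint.

def find_shape_mismatches(model_shapes, ckpt_shapes):
    """Return {name: (model_shape, ckpt_shape)} for every differing parameter."""
    mismatches = {}
    for name, model_shape in model_shapes.items():
        ckpt_shape = ckpt_shapes.get(name)
        if ckpt_shape is not None and ckpt_shape != model_shape:
            mismatches[name] = (model_shape, ckpt_shape)
    return mismatches

# Shapes as reported in the ValueError above: the model expects 730 rows
# but the checkpoint provides 729 (an off-by-one in the position/token count).
model_shapes = {"vision_model.embeddings.position_embedding.weight": (730, 1152)}
ckpt_shapes = {"vision_model.embeddings.position_embedding.weight": (729, 1152)}

print(find_shape_mismatches(model_shapes, ckpt_shapes))
# → {'vision_model.embeddings.position_embedding.weight': ((730, 1152), (729, 1152))}
```

An off-by-one like 729 vs. 730 usually means the model config and the checkpoint disagree about the sequence length of the vision tower (e.g. number of patches, or whether an extra token row is expected), so checking the vision config against the checkpoint is a reasonable first step.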
Hi @Luodian @kcz358 @ZhangYuanhan-AI @ChunyuanLI, could you kindly take a look at this issue? Your input would be greatly appreciated in resolving it.
I got the same size-mismatch problem for `vision_model.embeddings`/`encoder`/`post_layernorm` when running inference with LLaVA-OV.
Hey guys, I just tried the method in #246 (comment), and it works for me! |
When I use LLaVA-Video for inference, loading the model gives this error: