
RuntimeError: Error(s) in loading state_dict for Blip2OPT: size mismatch for opt_proj.weight: copying a param with shape torch.Size([2560, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]). size mismatch for opt_proj.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([768]). #773

Open
chilljudaoren opened this issue Dec 7, 2024 · 0 comments


Load call: `model = load_model("blip2_opt", "caption_coco_opt2.7b", is_eval=True, device=device)`

```
Traceback (most recent call last):
  File "/home/czh/.pycharm_helpers/pydev/pydevd.py", line 1534, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/home/czh/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/czh/UAP_VLP-main/Eval_ImgCap_BLIP.py", line 322, in <module>
    main(args, config)
  File "/home/czh/UAP_VLP-main/Eval_ImgCap_BLIP.py", line 200, in main
    t_model = load_eval_model(args.target_model, args.target_ckpt, device)
  File "/home/czh/UAP_VLP-main/Eval_ImgCap_BLIP.py", line 148, in load_eval_model
    model = load_model("blip2_opt", "caption_coco_opt2.7b", is_eval=True, device=device)
  File "/home/czh/UAP_VLP-main/lavis/models/__init__.py", line 117, in load_model
    model = registry.get_model_class(name).from_pretrained(model_type=model_type)
  File "/home/czh/UAP_VLP-main/lavis/models/base_model.py", line 70, in from_pretrained
    model = cls.from_config(model_cfg)
  File "/home/czh/UAP_VLP-main/lavis/models/blip2_models/blip2_opt.py", line 423, in from_config
    model.load_checkpoint_from_config(cfg)
  File "/home/czh/UAP_VLP-main/lavis/models/base_model.py", line 95, in load_checkpoint_from_config
    self.load_checkpoint(url_or_filename=finetune_path)
  File "/home/czh/UAP_VLP-main/lavis/models/base_model.py", line 51, in load_checkpoint
    msg = self.load_state_dict(state_dict, strict=False)
  File "/home/czh/.conda/envs/sam/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for Blip2OPT:
	size mismatch for opt_proj.weight: copying a param with shape torch.Size([2560, 768]) from checkpoint, the shape in current model is torch.Size([768, 768]).
	size mismatch for opt_proj.bias: copying a param with shape torch.Size([2560]) from checkpoint, the shape in current model is torch.Size([768]).
```

The error originates at this line, where the projection's output size comes from the loaded OPT model's config:

`self.opt_proj = nn.Linear(self.Qformer.config.hidden_size, self.opt_model.config.hidden_size)`
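The mismatch suggests the checkpoint was saved from a model whose OPT hidden size is 2560 (the opt-2.7b value), while the model being instantiated resolved `self.opt_model.config.hidden_size` to 768, i.e. a different (smaller) OPT backbone was built before the weights were loaded. A minimal, framework-agnostic sketch of a pre-load shape check that surfaces exactly this kind of mismatch (the helper `find_shape_mismatches` is hypothetical, not part of LAVIS; the parameter names and shapes below are taken from the traceback):

```python
def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Return params present in both dicts whose shapes disagree.

    Both arguments map parameter names to shape tuples, e.g. the result of
    {k: tuple(v.shape) for k, v in state_dict.items()} on a real state_dict.
    """
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes.keys() & model_shapes.keys()
        if ckpt_shapes[name] != model_shapes[name]
    }

# Shapes from the traceback: checkpoint saved with OPT hidden size 2560,
# current model built with hidden size 768.
ckpt = {"opt_proj.weight": (2560, 768), "opt_proj.bias": (2560,)}
model = {"opt_proj.weight": (768, 768), "opt_proj.bias": (768,)}

for name, (ckpt_shape, model_shape) in sorted(find_shape_mismatches(ckpt, model).items()):
    print(f"{name}: checkpoint {ckpt_shape} vs model {model_shape}")
```

Running such a check on the real state_dicts before `load_state_dict` makes it obvious whether the wrong `model_type`/backbone was instantiated or the wrong `finetune_path` checkpoint is being loaded.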
