
RuntimeError when running main.py #45

Open
JoyeIEC opened this issue Sep 30, 2024 · 1 comment

JoyeIEC commented Sep 30, 2024

Some weights of the model checkpoint at ./model_hub/chinese-bert-wwm-ext/ were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias']

  • This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
  • This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\nn\modules\rnn.py:58: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.1 and num_layers=1
  warnings.warn("dropout option adds dropout after all but last "
D:\ANACONDA\envs\py38nlp\lib\site-packages\transformers\optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
  warnings.warn(
Traceback (most recent call last):
  File "E:/DataProcess/ner/main.py", line 189, in <module>
    main(data_name)
  File "E:/DataProcess/ner/main.py", line 183, in main
    report = train.test()
  File "E:/DataProcess/ner/main.py", line 66, in test
    self.model.load_state_dict(torch.load(os.path.join(self.output_dir, "pytorch_model_ner.bin")))
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 853, in _load
    result = unpickler.load()
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I'm running into this error; could anyone help take a look?
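
The error message itself points at the usual fix: the checkpoint pytorch_model_ner.bin was saved on a CUDA device, so loading it on a CPU-only machine requires an explicit map_location in torch.load. Below is a minimal sketch of that change. The file name and output_dir come from the traceback above; the helper function name, its signature, and the device-selection logic are assumptions for illustration, not the repository's actual code.

import os
import torch

def load_ner_checkpoint(model, output_dir):
    # Hypothetical helper mirroring the failing line in main.py's test().
    # Pick CUDA when available, otherwise fall back to CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    state_dict = torch.load(
        os.path.join(output_dir, "pytorch_model_ner.bin"),
        map_location=device,  # remaps CUDA-saved tensors to CPU on a CPU-only machine
    )
    model.load_state_dict(state_dict)
    return model.to(device)
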

JoyeIEC commented Sep 30, 2024

Resolved.
