Some weights of the model checkpoint at ./model_hub/chinese-bert-wwm-ext/ were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.bias', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias']
This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\nn\modules\rnn.py:58: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.1 and num_layers=1
warnings.warn("dropout option adds dropout after all but last "
D:\ANACONDA\envs\py38nlp\lib\site-packages\transformers\optimization.py:391: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
  warnings.warn(
Traceback (most recent call last):
  File "E:/DataProcess/ner/main.py", line 189, in <module>
    main(data_name)
  File "E:/DataProcess/ner/main.py", line 183, in main
    report = train.test()
  File "E:/DataProcess/ner/main.py", line 66, in test
    self.model.load_state_dict(torch.load(os.path.join(self.output_dir, "pytorch_model_ner.bin")))
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 594, in load
    return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 853, in _load
    result = unpickler.load()
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 845, in persistent_load
    load_tensor(data_type, size, key, _maybe_decode_ascii(location))
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 834, in load_tensor
    loaded_storages[key] = restore_location(storage, location)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 151, in _cuda_deserialize
    device = validate_cuda_device(location)
  File "D:\ANACONDA\envs\py38nlp\lib\site-packages\torch\serialization.py", line 135, in validate_cuda_device
    raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
I'm running into this error. Could someone help me take a look?
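The RuntimeError means the checkpoint was saved on a GPU machine but is being loaded where torch.cuda.is_available() is False. Following the suggestion in the error message itself, passing map_location to torch.load remaps the CUDA-saved storages onto the CPU. A minimal sketch (load_checkpoint is a hypothetical helper; in main.py the equivalent call sits inside test(), and the file name mirrors the traceback):

```python
import os
import torch

def load_checkpoint(model, output_dir):
    # map_location remaps storages that were saved on a CUDA device onto
    # the CPU, so the checkpoint can be deserialized on a machine with
    # no GPU available.
    state_dict = torch.load(
        os.path.join(output_dir, "pytorch_model_ner.bin"),
        map_location=torch.device("cpu"),
    )
    model.load_state_dict(state_dict)
    return model
```

A device-agnostic variant is map_location="cuda" if torch.cuda.is_available() else "cpu", which keeps GPU loading when a GPU is present and falls back to the CPU otherwise.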
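The two warnings earlier in the log are unrelated to the crash but easy to silence. A sketch, assuming the model uses a single-layer LSTM on top of BERT (the sizes and learning rate here are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

# The rnn.py UserWarning fires because dropout is only applied between
# recurrent layers, which needs num_layers > 1; with a single layer,
# pass dropout=0 (or apply an nn.Dropout on the LSTM output instead).
lstm = nn.LSTM(input_size=768, hidden_size=256, num_layers=1,
               batch_first=True, dropout=0.0)

# The transformers FutureWarning asks for the PyTorch AdamW in place of
# the deprecated transformers.optimization.AdamW.
optimizer = torch.optim.AdamW(lstm.parameters(), lr=3e-5)
```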