RuntimeError: grad can be implicitly created only for scalar outputs
"""
By default, backward() in PyTorch expects the output to be a scalar. To
backpropagate from a non-scalar (tensor) output, you must pass a gradient
tensor as an argument, which holds the upstream gradient for each element.
"""
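To illustrate the point above, a minimal sketch: backward() with no argument only works when the output is a scalar, and passing an all-ones gradient tensor to a non-scalar output produces the same gradients as summing first.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

# scalar output: backward() needs no argument
loss = (x ** 2).sum()
loss.backward()
print(x.grad)  # tensor([2., 4., 6.])

# non-scalar output: backward() requires a gradient tensor
x.grad = None
y = x ** 2
y.backward(torch.ones_like(y))  # same gradients as summing first
print(x.grad)  # tensor([2., 4., 6.])
```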
Note: torch.conv2d expects a 4-D input (batch, channel, height, width) by default. Code:

```python
import torch

X = torch.randn(size=(8, 8))
conv2d_layer = Conv2D(kernel_size=(2, 2))

# enable autograd for X
X.requires_grad = True
output = conv2d_layer(X)
print(output)

# compute gradients -- running this raises the error above
output.backward()

# inspect gradients
print("the gradients of weights :", conv2d_layer.weight.grad)
```
Solution: assume output is the result of the convolution, a matrix.

```python
# create a tensor with the same shape as output, with all elements set to 1
grad_output = torch.ones_like(output)

# call backward and pass grad_output as the gradient argument
output.backward(grad_output)

# now the gradients of the weights and bias evaluate correctly
print("the gradients of weights :", conv2d_layer.weight.grad)
print("the gradients of bias :", conv2d_layer.bias.grad)
```
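For completeness, a self-contained sketch of the reproduction and the fix. The Conv2D layer here is a hypothetical minimal re-implementation (plain 2-D cross-correlation plus bias), since the issue does not include its definition.

```python
import torch
from torch import nn

def corr2d(X, K):
    # plain 2-D cross-correlation: no padding, stride 1
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

class Conv2D(nn.Module):
    # hypothetical minimal layer standing in for the one assumed by the issue
    def __init__(self, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(kernel_size))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return corr2d(x, self.weight) + self.bias

X = torch.randn(size=(8, 8))
X.requires_grad = True
conv2d_layer = Conv2D(kernel_size=(2, 2))
output = conv2d_layer(X)  # shape (7, 7): non-scalar

# passing an all-ones gradient is equivalent to output.sum().backward()
grad_output = torch.ones_like(output)
output.backward(grad_output)

print("the gradients of weights :", conv2d_layer.weight.grad)
print("the gradients of bias :", conv2d_layer.bias.grad)
```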