
Exercise 6.2.2: What error message do we get when we try autograd on the Conv2D layer we created? #96

SeeseeYep opened this issue May 8, 2024 · 0 comments

RuntimeError: grad can be implicitly created only for scalar outputs

"""
PyTorch's backward() assumes a scalar output by default. To backpropagate
from a non-scalar output, you must pass a gradient tensor of the same shape
as the output, holding the per-element gradient weights.
"""
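As a minimal illustration of this rule (a toy example, not from the original exercise):

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x * 2                       # non-scalar output, shape (2,)

# y.backward() would raise:
#   RuntimeError: grad can be implicitly created only for scalar outputs
y.backward(torch.ones_like(y))  # pass an explicit per-element gradient
print(x.grad)                   # tensor([2., 2.])
```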

torch.conv2d expects 4-dimensional input by default. The code in question:

```python
X = torch.randn(size=(8, 8))
conv2d_layer = Conv2D(kernel_size=(2, 2))

# enable autograd for X
X.requires_grad = True
output = conv2d_layer(X)

print(output)

# compute gradients -- this line raises the RuntimeError
output.backward()

# inspect gradients
print("the gradients of weights :", conv2d_layer.weight.grad)
```
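As an aside on the 4-D point above: the built-in functional convolution wants batched, channelled input, so a plain 8×8 matrix must be reshaped first. A sketch using `torch.nn.functional.conv2d` (which is distinct from the custom `Conv2D` in the exercise):

```python
import torch
import torch.nn.functional as F

X = torch.randn(8, 8)
K = torch.randn(2, 2)

# F.conv2d expects input (N, C, H, W) and weight (out_C, in_C, kH, kW)
out = F.conv2d(X.reshape(1, 1, 8, 8), K.reshape(1, 1, 2, 2))
print(out.shape)  # torch.Size([1, 1, 7, 7])
```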

# Solution

`output` is the result of the convolution, a matrix rather than a scalar. Create a tensor of the same shape as `output` with all elements set to 1, and pass it to `backward`:

```python
# a tensor of ones with the same shape as output
grad_output = torch.ones_like(output)

# call backward with grad_output as the gradient argument
output.backward(grad_output)

# now the gradients of the weight and bias are computed correctly
print("the gradients of weights :", conv2d_layer.weight.grad)
print("the gradients of bias :", conv2d_layer.bias.grad)
```
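Putting it together, here is a self-contained sketch of the error and the fix. The `corr2d` helper and the `Conv2D` class are assumed to follow the book's implementation; adjust the names if your version differs:

```python
import torch
from torch import nn

def corr2d(X, K):
    """2-D cross-correlation of input X with kernel K."""
    h, w = K.shape
    Y = torch.zeros((X.shape[0] - h + 1, X.shape[1] - w + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = (X[i:i + h, j:j + w] * K).sum()
    return Y

class Conv2D(nn.Module):
    """Minimal 2-D convolution layer, as built in the book's exercise."""
    def __init__(self, kernel_size):
        super().__init__()
        self.weight = nn.Parameter(torch.rand(kernel_size))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return corr2d(x, self.weight) + self.bias

X = torch.randn(size=(8, 8))
X.requires_grad = True
conv2d_layer = Conv2D(kernel_size=(2, 2))
output = conv2d_layer(X)              # shape (7, 7) -- not a scalar

# output.backward() alone would raise the RuntimeError;
# pass an explicit gradient tensor instead:
output.backward(torch.ones_like(output))
print(conv2d_layer.weight.grad.shape)  # torch.Size([2, 2])
print(conv2d_layer.bias.grad)          # tensor([49.])
```

An equivalent shortcut is `output.sum().backward()`, which collapses the output to a scalar first and yields the same gradients.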
