In the usage example I saw:

```python
from SETR.transformer_seg import SETRModel
import torch

if __name__ == "__main__":
    net = SETRModel(img_size=(32, 32),
                    in_channels=3,
                    out_channels=1,
                    hidden_size=1024,
                    num_hidden_layers=8,
                    num_attention_heads=16,
                    decode_features=[512, 256, 128, 64])
    t1 = torch.rand(1, 3, 256, 256)
    print("input: " + str(t1.shape))
    # print(net)
    print("output: " + str(net(t1).shape))
```

The model is constructed with `SETRModel(img_size=(32, 32), ...)`, yet the input tensor below is 256x256. So what does the model's `img_size` parameter actually correspond to?
It's the patch size.
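A minimal sketch of the implied arithmetic, assuming `img_size` is the per-patch size as stated above; the variable names are illustrative, not the library's API:

```python
# Sketch: if img_size=(32, 32) is the patch size, a 256x256 input
# is tiled into an 8x8 grid of patches. Names here are illustrative.
img_h, img_w = 256, 256              # spatial size of the input tensor
patch_h, patch_w = 32, 32            # the img_size argument: one patch
grid_h, grid_w = img_h // patch_h, img_w // patch_w
num_patches = grid_h * grid_w        # 8 * 8 = 64 tokens for the transformer
print(num_patches)                   # -> 64
```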
For input images whose width and height differ, does `img_size` need to be set to non-square values? For example, with a 256x512 input, should the patch size become 16x32, or can it stay unchanged?
Either works~
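Presumably the only hard constraint is the usual one for patch embeddings: each input dimension must be divisible by the corresponding patch dimension. A quick sketch showing that both choices tile a 256x512 input cleanly (illustrative arithmetic, not the library's API):

```python
# Both patch sizes divide a 256x512 input evenly; they just yield
# different patch counts.
img_h, img_w = 256, 512

for patch_h, patch_w in [(16, 32), (32, 32)]:   # non-square vs. square patches
    assert img_h % patch_h == 0 and img_w % patch_w == 0
    n = (img_h // patch_h) * (img_w // patch_w)
    print((patch_h, patch_w), "->", n, "patches")
# (16, 32) -> 256 patches
# (32, 32) -> 128 patches
```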