I am trying to train the SAM model for instance segmentation of multiple cells in an image. I have the ground truth masks and bounding boxes.
I have prepared the training data by creating a binary mask for each cell along with its bounding box:
Source Image 1
  - Ground Truth 1
  - Binary Cell_1 mask + bounding box coordinates
  - Binary Cell_2 mask + bounding box coordinates
Source Image 2
  - Ground Truth 2
  - Binary Cell_1 mask + bounding box coordinates
  - Binary Cell_2 mask + bounding box coordinates
But when training the model I get the following error:
RuntimeError: stack expects each tensor to be equal size, but got [340, 4] at entry 0 and [276, 4] at entry 1
Should I create a single binary mask combining all the cell masks into one image?
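For what it's worth, this error usually comes from PyTorch's default batch collation, not from SAM itself: `torch.stack` cannot batch per-image box tensors when the number of cells differs between images (340 boxes in one image, 276 in the next). Rather than merging the masks, one option is a custom `collate_fn` that keeps the variable-length targets as Python lists. A minimal sketch (the tuple layout `(image, boxes, masks)` is an assumption about your dataset):

```python
import torch

def collate_fn(batch):
    # batch: list of (image, boxes, masks) tuples.
    # boxes has shape [N_i, 4] and masks [N_i, H, W], where N_i
    # (the number of cells) varies per image, so we keep them as
    # lists instead of stacking them into one tensor.
    images = torch.stack([item[0] for item in batch])  # images share a size
    boxes = [item[1] for item in batch]                # list of [N_i, 4]
    masks = [item[2] for item in batch]                # list of [N_i, H, W]
    return images, boxes, masks
```

This would then be passed to the loader as `DataLoader(dataset, batch_size=2, collate_fn=collate_fn)`, and the training loop iterates over the per-image lists.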
[Edit]:
I see that we need to pass a single label (one mask and its box) at a time.
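If the model is trained with one prompt/mask pair per sample, the dataset can be flattened so that each item is a single (image, box, mask) triple; every sample then has a fixed shape and the default collate works. A sketch, assuming a hypothetical `records` list where each entry holds one image with all of its per-cell boxes and masks:

```python
import torch
from torch.utils.data import Dataset

class PerInstanceDataset(Dataset):
    """Flattens (image, N boxes, N masks) records into one sample per cell.

    `records` is a hypothetical list of dicts:
      {"image": [3, H, W] tensor,
       "boxes": [N, 4] tensor,
       "masks": [N, H, W] tensor}
    """

    def __init__(self, records):
        self.records = records
        # Precompute a flat index of (record_idx, instance_idx) pairs.
        self.index = [
            (r_idx, i_idx)
            for r_idx, rec in enumerate(records)
            for i_idx in range(rec["boxes"].shape[0])
        ]

    def __len__(self):
        return len(self.index)

    def __getitem__(self, idx):
        r_idx, i_idx = self.index[idx]
        rec = self.records[r_idx]
        # One cell per sample: shapes are fixed, so default collation works.
        return rec["image"], rec["boxes"][i_idx], rec["masks"][i_idx]
```

The image is repeated across its instances, which costs memory bandwidth but keeps batching trivial; its embedding could also be cached per image if that becomes a bottleneck.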