
Resize onnx operator: Optimization for Compute and Space performance of its linear option. #3773

Open: wants to merge 4 commits into develop

Conversation

lakhinderwalia (Contributor) commented Jan 21, 2025

Optimize the space overhead of the linear Resize operation: its index tensor is now 4x smaller for 2D images. Previously, very large data structures were generated, growing to over 16 times the total input pixels for a 4D tensor; the tensor is now 4x smaller, with fewer reduction steps to follow. (A similar optimization applies to the compute overhead.)

A comparison of parsing test/onnx/upsample_linear_test.onnx:
(Before)
Calculated resize-tensor size:
@4 = @literal{ ... } -> int32_type, {16, 1, 4, 4}, {16, 16, 4, 1}

Reading: ../test/onnx/upsample_linear_test.onnx
module: "main"
@0 = @literal{ ... } -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@1 = @literal{ ... } -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@2 = @literal{ ... } -> float_type, {4, 1, 4, 4}, {16, 16, 4, 1}
@3 = @literal{ ... } -> float_type, {8, 1, 4, 4}, {16, 16, 4, 1}
@4 = @literal{ ... } -> int32_type, {16, 1, 4, 4}, {16, 16, 4, 1}
X = @param:X -> float_type, {1, 1, 2, 2}, {4, 4, 2, 1}
@6 = @literal{1, 1, 2, 2} -> float_type, {4}, {1}
@7 = undefined -> float_type, {}, {}
@8 = reshape[dims={4}](X) -> float_type, {4}, {1}
@9 = gather[axis=0](@8,@4) -> float_type, {16, 1, 4, 4}, {16, 16, 4, 1}
@10 = slice[axes={0},starts={0},ends={8}](@9) -> float_type, {8, 1, 4, 4}, {16, 16, 4, 1}
@11 = slice[axes={0},starts={8},ends={16}](@9) -> float_type, {8, 1, 4, 4}, {16, 16, 4, 1}
@12 = sub(@11,@10) -> float_type, {8, 1, 4, 4}, {16, 16, 4, 1}
@13 = mul(@12,@3) -> float_type, {8, 1, 4, 4}, {16, 16, 4, 1}
@14 = add(@13,@10) -> float_type, {8, 1, 4, 4}, {16, 16, 4, 1}
@15 = slice[axes={0},starts={0},ends={4}](@14) -> float_type, {4, 1, 4, 4}, {16, 16, 4, 1}
@16 = slice[axes={0},starts={4},ends={8}](@14) -> float_type, {4, 1, 4, 4}, {16, 16, 4, 1}
@17 = sub(@16,@15) -> float_type, {4, 1, 4, 4}, {16, 16, 4, 1}
@18 = mul(@17,@2) -> float_type, {4, 1, 4, 4}, {16, 16, 4, 1}
@19 = add(@18,@15) -> float_type, {4, 1, 4, 4}, {16, 16, 4, 1}
@20 = slice[axes={0},starts={0},ends={2}](@19) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@21 = slice[axes={0},starts={2},ends={4}](@19) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@22 = sub(@21,@20) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@23 = mul(@22,@1) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@24 = add(@23,@20) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@25 = slice[axes={0},starts={0},ends={1}](@24) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@26 = slice[axes={0},starts={1},ends={2}](@24) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@27 = sub(@26,@25) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@28 = mul(@27,@0) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@29 = add(@28,@25) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@30 = @return(@29)

With this PR:
Calculated resize-tensor size:
@2 = @literal{ ... } -> int32_type, {4, 1, 4, 4}, {16, 16, 4, 1}

Reading: ../test/onnx/upsample_linear_test.onnx
module: "main"
@0 = @literal{ ... } -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@1 = @literal{ ... } -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@2 = @literal{ ... } -> int32_type, {4, 1, 4, 4}, {16, 16, 4, 1}
X = @param:X -> float_type, {1, 1, 2, 2}, {4, 4, 2, 1}
@4 = @literal{1, 1, 2, 2} -> float_type, {4}, {1}
@5 = undefined -> float_type, {}, {}
@6 = reshape[dims={4}](X) -> float_type, {4}, {1}
@7 = gather[axis=0](@6,@2) -> float_type, {4, 1, 4, 4}, {16, 16, 4, 1}
@8 = slice[axes={0},starts={0},ends={2}](@7) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@9 = slice[axes={0},starts={2},ends={4}](@7) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@10 = sub(@9,@8) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@11 = mul(@10,@1) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@12 = add(@11,@8) -> float_type, {2, 1, 4, 4}, {16, 16, 4, 1}
@13 = slice[axes={0},starts={0},ends={1}](@12) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@14 = slice[axes={0},starts={1},ends={2}](@12) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@15 = sub(@14,@13) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@16 = mul(@15,@0) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@17 = add(@16,@13) -> float_type, {1, 1, 4, 4}, {16, 16, 4, 1}
@18 = @return(@17)

lakhinderwalia self-assigned this Jan 21, 2025
coxuamd commented Jan 22, 2025

Is this sort of dup of #3731?

lakhinderwalia (Contributor, Author) replied:

> Is this sort of dup of #3731?

No. It is orthogonal, and a more fundamental change to Resize parsing.
This PR doesn't change the recursive nature of calc_neighbor_points().

codecov bot commented Jan 22, 2025

Codecov Report

Attention: Patch coverage is 97.14286% with 1 line in your changes missing coverage. Please review.

Project coverage is 92.29%. Comparing base (50c6848) to head (27cd210).
Report is 3 commits behind head on develop.

Files with missing lines Patch % Lines
src/onnx/parse_resize.cpp 97.14% 1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #3773      +/-   ##
===========================================
+ Coverage    92.28%   92.29%   +0.01%     
===========================================
  Files          519      519              
  Lines        22227    22233       +6     
===========================================
+ Hits         20512    20520       +8     
+ Misses        1715     1713       -2     


coxuamd commented Jan 22, 2025

> Is this sort of dup of #3731?
>
> No. Orthogonal and a more fundamental change to Resize parsing. This PR doesn't change the recursive nature of calc_neighbor_points().

Thanks for the explanation.

lakhinderwalia (Contributor, Author) commented:

(Background: going beyond issue #2129, the Resize op could use more optimization in its basic calculations; hence this PR.)

migraphx-bot (Collaborator) commented:

Test  Batch  Rate new (27cd21)  Rate old (43488c)  Diff
torchvision-resnet50 64 3,253.70 3,254.25 -0.02%
torchvision-resnet50_fp16 64 6,929.14 6,916.65 0.18%
torchvision-densenet121 32 2,454.89 2,455.95 -0.04%
torchvision-densenet121_fp16 32 4,180.95 4,183.26 -0.06%
torchvision-inceptionv3 32 1,628.11 1,629.97 -0.11%
torchvision-inceptionv3_fp16 32 2,716.21 2,715.98 0.01%
cadene-inceptionv4 16 763.21 762.93 0.04%
cadene-resnext64x4 16 812.80 812.98 -0.02%
slim-mobilenet 64 7,458.69 7,459.33 -0.01%
slim-nasnetalarge 64 208.64 208.65 -0.01%
slim-resnet50v2 64 3,445.77 3,445.08 0.02%
bert-mrpc-onnx 8 1,148.37 1,147.48 0.08%
bert-mrpc-tf 1 481.23 481.97 -0.15%
pytorch-examples-wlang-gru 1 480.74 478.37 0.49%
pytorch-examples-wlang-lstm 1 441.37 439.41 0.45%
torchvision-resnet50_1 1 806.67 803.28 0.42%
cadene-dpn92_1 1 427.26 427.49 -0.05%
cadene-resnext101_1 1 385.38 376.23 2.43%
onnx-taau-downsample 1 373.47 373.50 -0.01%
dlrm-criteoterabyte 1 33.32 33.31 0.03%
dlrm-criteoterabyte_fp16 1 52.73 52.64 0.17%
agentmodel 1 8,512.96 8,546.35 -0.39%
unet_fp16 2 58.37 58.43 -0.10%
resnet50v1_fp16 1 1,016.96 1,033.48 -1.60%
resnet50v1_int8 1 1,021.04 1,018.13 0.29%
bert_base_cased_fp16 64 1,181.36 1,181.00 0.03%
bert_large_uncased_fp16 32 365.21 365.36 -0.04%
bert_large_fp16 1 198.60 201.35 -1.37%
distilgpt2_fp16 16 2,227.13 2,228.65 -0.07%
yolov5s 1 523.97 527.51 -0.67%
tinyllama 1 43.62 43.62 0.01%
vicuna-fastchat 1 175.99 173.15 1.64%
whisper-tiny-encoder 1 418.15 416.80 0.32%
whisper-tiny-decoder 1 427.54 429.87 -0.54%

This build is OK for merge ✅

migraphx-bot (Collaborator) commented:


✅ bert-mrpc-onnx: PASSED: MIGraphX meets tolerance
✅ bert-mrpc-tf: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-gru: PASSED: MIGraphX meets tolerance
✅ pytorch-examples-wlang-lstm: PASSED: MIGraphX meets tolerance
✅ torchvision-resnet50_1: PASSED: MIGraphX meets tolerance
✅ cadene-dpn92_1: PASSED: MIGraphX meets tolerance
✅ cadene-resnext101_1: PASSED: MIGraphX meets tolerance
✅ dlrm-criteoterabyte: PASSED: MIGraphX meets tolerance
✅ agentmodel: PASSED: MIGraphX meets tolerance
✅ unet: PASSED: MIGraphX meets tolerance
✅ resnet50v1: PASSED: MIGraphX meets tolerance
✅ bert_base_cased_fp16: PASSED: MIGraphX meets tolerance
🔴 bert_large_uncased_fp16: FAILED: MIGraphX is not within tolerance - check verbose output
✅ bert_large: PASSED: MIGraphX meets tolerance
✅ yolov5s: PASSED: MIGraphX meets tolerance
✅ tinyllama: PASSED: MIGraphX meets tolerance
✅ vicuna-fastchat: PASSED: MIGraphX meets tolerance
✅ whisper-tiny-encoder: PASSED: MIGraphX meets tolerance
✅ whisper-tiny-decoder: PASSED: MIGraphX meets tolerance
✅ distilgpt2_fp16: PASSED: MIGraphX meets tolerance

CharlieL7 (Collaborator) commented:

I think these code changes create a merge conflict with the code in #3731, though?

std::vector<std::vector<std::size_t>> vv_ind(2, std::vector<std::size_t>(out_elements));
std::vector<std::vector<std::vector<std::size_t>>> vvv_ind(n_dim, vv_ind);
std::vector<std::vector<float>> delta(n_dim, std::vector<float>(out_elements));
std::vector<std::vector<std::vector<std::size_t>>> vvv_ind(r_dim, vv_ind);
Review comment (Collaborator):

Can we add a brief explanation of what these variables are?

vvv_ind, 0, 0, std::vector<std::vector<std::size_t>>(out_elements), in_s, out_s);

auto dim_lens = out_lens;
dim_lens[0] *= (1u << r_dim);
Review comment (Collaborator):

What does this bit-shift do? A brief explanation would be sufficient.

Reply (Contributor, Author):

The bit-shift calculates the actual size of the final index tensor being generated. In the previous algorithm that tensor was doubled for every lens dimension; now it is doubled only for the modified (resized) lens dimensions, which is the fundamental change in this PR.

Reply (Contributor, Author):

I will add a comment.

// get the number of dimensions
std::size_t n_dim = out_lens.size();
std::size_t r_dim = 0; // count: lens dimensions that are resized
Review comment (Collaborator):

Suggested change:
- std::size_t r_dim = 0; // count: lens dimensions that are resized
+ std::size_t resized_dims = 0; // count: lens dimensions that are resized

@@ -359,41 +374,55 @@ struct parse_resize : op_parser<parse_resize>
auto nearest_floor = op::resize::get_nearest_op("floor");
auto nearest_ceil = op::resize::get_nearest_op("ceil");

// get the number of dimensions
std::size_t n_dim = out_lens.size();
Review comment (Collaborator):

Suggested change:
- std::size_t n_dim = out_lens.size();
+ std::size_t n_dims = out_lens.size();

@@ -35,11 +35,15 @@ namespace onnx {

static std::vector<int>
calc_neighbor_points(const std::vector<std::vector<std::vector<std::size_t>>>& vvv_ind,
- int i_dim,
+ size_t i_dim, // input lens index
Review comment (Collaborator):

Suggested change:
- size_t i_dim, // input lens index
+ size_t input_ind, // input lens index

Follow-up (Collaborator):

Or input_idx; it seems we're not consistent in this file anyway.

@@ -35,11 +35,15 @@ namespace onnx {

static std::vector<int>
calc_neighbor_points(const std::vector<std::vector<std::vector<std::size_t>>>& vvv_ind,
- int i_dim,
+ size_t i_dim, // input lens index
+ size_t r_dim, // resized index
Review comment (Collaborator):

Suggested change:
- size_t r_dim, // resized index
+ size_t resized_ind, // resized index

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

or resized_idx

Labels: enhancement (New feature or request), Perf Improve
Projects: None yet
6 participants