
[ONNX] Add support for asymmetric padding for Onnx.AveragePool op #3923

Open
wants to merge 1 commit into base: main

Conversation

vivekkhandelwal1 (Collaborator)

This commit also refactors the code for the ONNX AveragePool and MaxPool ops by creating a common utility, shared by both op lowerings, to get the pooling op parameters.


Signed-off-by: Vivek Khandelwal <[email protected]>
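
For readers skimming the thread: the refactor described above pulls the attribute handling that AveragePool and MaxPool have in common into one helper. As a rough, library-independent sketch of that idea (plain C++ with hypothetical names, not the actual checkAndGetOnnxPoolingOpParameters signature from the diff):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical illustration only: the set of pooling parameters that both the
// AveragePool and MaxPool lowerings need to read from the ONNX op's attributes.
struct OnnxPoolingParams {
  std::vector<int64_t> kernel;     // kernel_shape
  std::vector<int64_t> strides;    // strides (defaults to all 1s)
  std::vector<int64_t> dilations;  // dilations (defaults to all 1s)
  std::vector<int64_t> padding;    // pads, possibly normalized per auto_pad
  int64_t ceilMode = 0;            // ceil_mode
  int64_t countIncludePad = 0;     // count_include_pad (AveragePool only)
};
```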
binder.getLoc(), rewriter.getI64IntegerAttr(i)));
}
// Onnx pads format: [x1_begin, x2_begin…x1_end, x2_end,…]
// Pytorch pads format: [x1, x2,...] or [x], assume begin==end for all
Collaborator:
Add e2e tests in shark-testsuite if the change works. I don't think torch-to-linalg supports this pattern.
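
For context on the two pad layouts mentioned in the snippet above, here is a minimal standalone sketch (plain C++, hypothetical helper name, not the MLIR lowering itself) of collapsing ONNX pads into the PyTorch form when they are symmetric; when begin != end for some dimension we hit the asymmetric case this PR adds support for, and the difference has to be padded explicitly:

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// ONNX pads:    [x1_begin, x2_begin, ..., x1_end, x2_end, ...]
// PyTorch pads: [x1, x2, ...] with begin == end assumed for every dimension.
// Returns PyTorch-style pads if the ONNX pads are symmetric, nullopt otherwise.
std::optional<std::vector<int64_t>>
collapseSymmetricPads(const std::vector<int64_t> &onnxPads) {
  size_t rank = onnxPads.size() / 2;
  std::vector<int64_t> torchPads(rank);
  for (size_t i = 0; i < rank; ++i) {
    if (onnxPads[i] != onnxPads[i + rank])
      return std::nullopt; // asymmetric: pad explicitly before pooling
    torchPads[i] = onnxPads[i];
  }
  return torchPads;
}
```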

  Torch::ValueTensorType resultType;
  Value operand;
- bool ceilMode, countIncludePad;
+ int64_t ceilMode, countIncludePad;
@tuukkjs commented Dec 20, 2024:

Why change ceilMode and countIncludePad from bool to int64_t?

1) * strides[dimIdx] + dilatedKernelSize - inputShape[dimIdx + 2];
totalPad = totalPad >= 0 ? totalPad : 0;

Is ceilMode used in calculating the padding when using autopad? If not, why? I think the formulas in https://onnx.ai/onnx/operators/onnx__AveragePool.html differ depending on ceilMode.
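
For reference, a minimal standalone sketch of the SAME_UPPER/SAME_LOWER padding arithmetic as I read the linked spec (plain C++, hypothetical function name, not the code in this PR):

```cpp
#include <cstdint>

// Per https://onnx.ai/onnx/operators/onnx__AveragePool.html (as I read it):
//   output   = ceil(input / stride)              when ceil_mode is enabled
//   output   = floor((input - 1) / stride) + 1   when ceil_mode is disabled
//   totalPad = (output - 1) * stride + dilatedKernel - input, clamped at 0
int64_t samePadTotal(int64_t input, int64_t stride, int64_t dilatedKernel,
                     bool ceilMode) {
  int64_t output = ceilMode ? (input + stride - 1) / stride // integer ceil
                            : (input - 1) / stride + 1;     // integer floor + 1
  int64_t totalPad = (output - 1) * stride + dilatedKernel - input;
  return totalPad < 0 ? 0 : totalPad;
}
```

If that reading is right, the two branches produce the same integer for any positive input and stride, so totalPad may come out identical either way for the SAME_* cases; still worth confirming against the spec and covering with a test.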

if (binder.s64IntegerArrayAttr(kernel, "kernel_shape", {}))
SmallVector<int64_t> kernel, padding, strides, dilations, stridesDilations;
if (failed(checkAndGetOnnxPoolingOpParameters(
@tuukkjs commented Dec 20, 2024:

How about countIncludePad = false? If we pad using AtenConstantPadNdOp and after that do AtenAvgPoolOp, don't we lose the ability to support countIncludePad = false?
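
To make the concern concrete with a small worked example (my reading of the semantics, not something tested against this patch): take a 1-D input [2, 4] with kernel 2, stride 1, and pads begin = end = 1. With countIncludePad = false, the first window covers one pad cell plus the value 2, so the divisor is 1 and the output is 2.0. If the pads are instead materialized up front with AtenConstantPadNdOp (giving [0, 2, 4, 0]) and the pooling op then runs with zero padding, the same window averages 0 and 2 over a divisor of 2 and yields 1.0, i.e. the countIncludePad = true behaviour.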

@tuukkjs commented Dec 20, 2024:

Looks very good! I made some comments since I have been working on similar changes. However, I am not very familiar with the project itself, so some of my comments may be off.

In general, I would suggest adding more tests to cover the different cases of auto_pad and of asymmetric and symmetric padding.

Also, please note that I am out of office until January 7th and likely won't respond during that time. Maybe others can chip in. We have previously been discussing some changes along these lines with @zjgarvey.
