
Doubts about the backpropagation of PointNet++ when working through it manually #250

Open
utkarsh0902311047 opened this issue Dec 21, 2023 · 0 comments

Comments

@utkarsh0902311047

In the PointNet++ model there are three SetAbstraction layers (each containing three convolutional, BatchNorm, and ReLU layers), and in the last two the number of input channels is increased by 3 because the xyz coordinates are concatenated onto the point features.
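For reference, the shared pointwise MLP of one SetAbstraction looks roughly like the sketch below (my own reconstruction following the common PyTorch implementation; the class name is mine, and the channel counts are those of the third SetAbstraction):

```python
import torch
import torch.nn as nn

class SAPointwiseMLP(nn.Module):
    """Sketch of one SetAbstraction MLP stack (third SA: 256 features + 3
    concatenated xyz coordinates in, then 256 -> 512 -> 1024 out)."""

    def __init__(self, in_channels=256 + 3, mlp=(256, 512, 1024)):
        super().__init__()
        layers = []
        for out_channels in mlp:
            # A 1x1 Conv2d acts as a per-point fully connected layer, so its
            # weight has shape (out_channels, in_channels, 1, 1), e.g.
            # (256, 259, 1, 1) for the first layer here.
            layers += [
                nn.Conv2d(in_channels, out_channels, kernel_size=1),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(),
            ]
            in_channels = out_channels
        self.mlp = nn.Sequential(*layers)

    def forward(self, grouped_points):
        # grouped_points: (B, C + 3, nsample, npoint)
        x = self.mlp(grouped_points)
        # Max over the nsample dimension pools each local region into one feature vector.
        return torch.max(x, dim=2)[0]  # (B, mlp[-1], npoint)
```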
I am trying to do backpropagation manually to understand how it actually trains.
I am stuck at the first convolutional layer of the third SetAbstraction. The gradient arriving from backpropagation has shape (BatchSize, 256, 128, 1). The input to this convolutional layer is the output of the second SetAbstraction after the max operation and after increasing the channels by 3, so it has shape (BatchSize, 259, 128, 1), and the weights of this convolutional layer have shape (256, 259, 1, 1).

When I compute the gradients for this layer's weights, they come out correctly with shape (256, 259, 1, 1). The gradient with respect to the input comes out with shape (BatchSize, 259, 128, 1). However, the output of the third ReLU of the second SetAbstraction has shape (BatchSize, 256, 64, 128), and its max operation reduces that to (BatchSize, 256, 128). How should I carry the gradient I calculated, which has shape (BatchSize, 259, 128, 1), back through the max operation and then through the ReLU operation?

Please help me with this step. Thank you.
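To make the shapes concrete, here is a minimal NumPy sketch of the split-and-route step I am asking about (assuming the 3 extra channels are the concatenated xyz coordinates and come before the 256 feature channels, as in the common PyTorch implementation, and that the max in the second SetAbstraction is taken over the 64 sampled points per region). Please correct me if this routing is wrong:

```python
import numpy as np

B = 2  # example batch size

# Gradient w.r.t. the input of the third SA's first conv layer, as computed above
grad_input = np.random.randn(B, 259, 128, 1)

# Forward tensors from the second SetAbstraction (shapes as in the question)
pre_max = np.maximum(np.random.randn(B, 256, 64, 128), 0)  # output of the third ReLU
max_out = pre_max.max(axis=2)                               # (B, 256, 128) after max over the 64 points

# 1) Split the 259 gradient channels: the first 3 belong to the concatenated
#    xyz coordinates (assumed concat order), the other 256 to the pooled features.
grad_input = grad_input.squeeze(-1)    # (B, 259, 128)
grad_xyz = grad_input[:, :3, :]        # flows back along the coordinate branch only
grad_feat = grad_input[:, 3:, :]       # (B, 256, 128), flows back through the max

# 2) Backprop through max: each feature's gradient is routed only to the point
#    that achieved the maximum (np.argmax breaks ties by taking the first index).
winners = pre_max.argmax(axis=2)       # (B, 256, 128)
grad_pre_max = np.zeros_like(pre_max)  # (B, 256, 64, 128)
b, c, n = np.meshgrid(np.arange(B), np.arange(256), np.arange(128), indexing="ij")
grad_pre_max[b, c, winners, n] = grad_feat

# 3) Backprop through ReLU: zero the gradient wherever the ReLU output was 0
#    (equivalent to masking on the pre-activation being <= 0).
grad_before_relu = grad_pre_max * (pre_max > 0)
```

If this is right, grad_before_relu is what continues back into the third convolutional layer of the second SetAbstraction, while grad_xyz never passes through the max or the ReLU.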
