Question about compress_rate in VGG-16 #16

Open
wzd-fanshan opened this issue Jan 21, 2021 · 5 comments

@wzd-fanshan

Hello!
For VGG-16, the parameter setting given in your README is compress_rate [0.95]+[0.5]*6+[0.9]*4+[0.8]*2, and the FLOPs and Params computed by your program match the numbers in the README.
However, when I verified the result myself (taking both the convolutional and fully-connected layers into account), the FLOPs I obtained came out much lower (about 42M, roughly 60M less than reported). I am not sure where the discrepancy comes from. Could I trouble you to recompute it by hand or with another program?
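
For reference, that compress_rate expression is ordinary Python list arithmetic, giving one pruning rate per convolutional layer (a quick check, assuming the standard 13 conv layers of VGG-16):

    compress_rate = [0.95] + [0.5]*6 + [0.9]*4 + [0.8]*2
    print(len(compress_rate))  # 13 rates, one per conv layer
    print(compress_rate)
    # [0.95, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.9, 0.9, 0.9, 0.9, 0.8, 0.8]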

@wzd-fanshan
Author

Regarding the FLOPs calculation for VGG-16 only: we stepped through it in a debugger and found that, for every convolutional layer, execution enters lines 73-76 of get_flops.py:

    delta_ops = layer.in_channels * (layer.out_channels - pruned_num) * \
                layer.kernel_size[0] * layer.kernel_size[1] * \
                out_h * out_w / layer.groups * multi_add

Doesn't the first factor on the right-hand side, layer.in_channels, fail to account for the channels removed from the previous layer by pruning? Or am I misunderstanding something?
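
A minimal sketch (not code from the repo) contrasting the quoted formula with a variant that also subtracts the channels pruned from the previous layer; the helper conv_flops and its pruned_in parameter are hypothetical, and the example shapes assume 32x32 CIFAR-10 feature maps:

    def conv_flops(in_c, out_c, k_h, k_w, out_h, out_w,
                   pruned_out, pruned_in=0, groups=1, multi_add=1):
        # pruned_out: filters removed from this layer (as in get_flops.py).
        # pruned_in: channels removed from the previous layer. The snippet
        # quoted above effectively uses pruned_in=0, so it over-counts.
        return ((in_c - pruned_in) * (out_c - pruned_out)
                * k_h * k_w * out_h * out_w / groups * multi_add)

    # Hypothetical conv2 of VGG-16 (64 -> 64 channels, 3x3 kernel, 32x32 output),
    # with rates 0.95 on conv1 and 0.5 on conv2 from the setting above:
    reported = conv_flops(64, 64, 3, 3, 32, 32, pruned_out=32)              # ~18.9M
    actual = conv_flops(64, 64, 3, 3, 32, 32, pruned_out=32, pruned_in=60)  # ~1.2M
    # The gap is consistent with hand-computed FLOPs coming out lower.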

@lmbxmu
Owner

lmbxmu commented Jan 22, 2021

> Regarding the FLOPs calculation for VGG-16 only: we stepped through it in a debugger and found that, for every convolutional layer, execution enters lines 73-76 of get_flops.py [...] Doesn't the first factor on the right-hand side, layer.in_channels, fail to account for the channels removed from the previous layer by pruning? Or am I misunderstanding something?

We have already explained these points clearly in the README; please refer to it.

@wzd-fanshan
Author

Got it, thanks! So HRank actually performs even better than what is reported in the paper.

@lmbxmu
Owner

lmbxmu commented Jan 22, 2021

> Got it, thanks! So HRank actually performs even better than what is reported in the paper.

See https://github.com/lmbxmu/HRankPlus

@lyc728

lyc728 commented Sep 14, 2021

> Got it, thanks! So HRank actually performs even better than what is reported in the paper.
>
> See https://github.com/lmbxmu/HRankPlus

Hello, is it possible to compress an already trained VGG model directly?
