
MADNet Tensorflow 2 Implementation #78

Open
ChristianOrr opened this issue Feb 22, 2022 · 2 comments

Thanks to @AlessioTonioni and @mattpoggi for your amazing work.

I've created an implementation of MADNet in Tensorflow 2 using the Keras Functional API and want to share my results with you. I think you would enjoy seeing your work implemented with a different set of tools. I learned a huge amount while creating it, so it might also give you some new ideas. Here's the link to my repo: madnet-deep-stereo-with-keras.
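
For context, here is a minimal sketch of the Functional API pattern I mean, with two image inputs and a shared feature extractor. The layer sizes and names are illustrative placeholders, not the actual MADNet architecture or the code in my repo:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_tiny_stereo_net(height=320, width=1216):
    # Two rectified RGB inputs, as in the stereo setup.
    left = layers.Input(shape=(height, width, 3), name="left_rgb")
    right = layers.Input(shape=(height, width, 3), name="right_rgb")

    # Shared feature extractor applied to both views.
    extractor = tf.keras.Sequential([
        layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
        layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    ], name="feature_extractor")

    left_feat = extractor(left)
    right_feat = extractor(right)

    merged = layers.Concatenate()([left_feat, right_feat])
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(merged)
    disparity = layers.Conv2D(1, 3, padding="same", name="disparity")(x)

    return Model(inputs=[left, right], outputs=disparity, name="tiny_stereo_net")

model = build_tiny_stereo_net()
model.summary()
```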

Adaptation Disparities

Training on the synthetic and Kitti datasets worked very well. The online adaptation using MAD and Full MAD also worked as expected. My predicted disparities from the different methods are shown below. From left to right, the columns are:

  1. Left rectified RGB images.
  2. Disparity predictions from the pretrained Kitti model (no adaptation).
  3. Disparity predictions from pretrained synthetic weights with full MAD (adapting all 6 modules).
  4. Disparity predictions from pretrained synthetic weights with MAD (adapting 1 module randomly).
  5. Disparity predictions from pretrained synthetic weights (no adaptation).

[Image: adaptation_results]

Inferencing Speed

My inferencing speed results were very interesting. The biggest thing I noticed was that dataset size has a huge impact on speed, which I believe is due to the graph tracing optimization. Did you notice any impact on speed when changing the dataset size?
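
For anyone curious, here is a tiny, illustrative way to see the effect I mean (this is not code from my repo): the one-off tf.function tracing cost gets amortised over more batches, so the average FPS climbs with dataset size even though the per-batch compute is unchanged.

```python
import time
import tensorflow as tf

def forward(x):
    # Stand-in for the model's forward pass.
    return tf.reduce_sum(tf.nn.relu(x))

batch = tf.random.normal([1, 320, 1216, 3])  # Kitti-like resolution, illustrative

for num_batches in (100, 1000):
    traced = tf.function(forward)  # fresh wrapper, so each run pays the tracing cost once
    start = time.perf_counter()
    for _ in range(num_batches):
        traced(batch)
    elapsed = time.perf_counter() - start
    print(f"{num_batches} batches: {num_batches / elapsed:.1f} FPS")
```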

The MAD adaptation modes (adapting 1-5 modules) were also very tricky to make graph traceable. I realized that without graph tracing the MAD modes were about 10x slower than Full MAD, which makes them useless. I pushed through and eventually got them working with graph tracing by using some clever workarounds, but I think those workarounds hurt performance. The random MAD mode (the best performer of the MAD variants) only started showing decent performance gains over Full MAD at the 10,000 batch size, and even then it was nowhere near the gains you achieved in your paper. The sequential MAD actually performed worse than Full MAD at all batch sizes. I didn't implement the standard MAD, because it would require a lot more workarounds to get losses from each module with the Functional API implementation. Judging from the sequential MAD's performance, I don't think it would even be worth trying to make it work.
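
To give an idea of the kind of workaround I mean, here is a simplified sketch (made-up module variables and a plain SGD update, not the actual code in my repo): the random module choice happens inside the traced function via tf.switch_case, so only one branch executes while the whole adaptation step stays a single graph.

```python
import tensorflow as tf

NUM_MODULES = 6
LR = 1e-4

# Stand-ins for each module's trainable variables.
module_vars = [tf.Variable(tf.random.normal([8]), name=f"module_{i}")
               for i in range(NUM_MODULES)]

@tf.function
def mad_adapt_step(target):
    # Graph-compatible random choice of which module to adapt.
    idx = tf.random.uniform([], maxval=NUM_MODULES, dtype=tf.int32)

    def make_branch(i):
        def branch():
            with tf.GradientTape() as tape:
                # Stand-in for running the network and computing the
                # self-supervised adaptation loss.
                loss = tf.reduce_mean((module_vars[i] - target) ** 2)
            grad = tape.gradient(loss, module_vars[i])
            # Manual SGD via assign_sub avoids creating optimizer slot
            # variables inside a conditional branch.
            module_vars[i].assign_sub(LR * grad)
            return loss
        return branch

    # All branches are traced, but only the selected one runs.
    return tf.switch_case(idx, [make_branch(i) for i in range(NUM_MODULES)])

loss = mad_adapt_step(tf.zeros([8]))
```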

I'm not an expert at optimization in Tensorflow, so I'm not sure whether my MAD implementations were just very inefficient or whether the backend code is not well optimized for this type of use case. I would love to hear other people's thoughts on this.

The frame rates for the different inferencing modes are shown below. The results were obtained on my RTX 3050 Ti laptop GPU, with no other programs running while the tests were performed.

| Mode | FPS (100) | FPS (1,000) | FPS (10,000) |
| --- | --- | --- | --- |
| No Adaptation | 11.1 | 31.8 | 37.8 |
| Full MAD | 2.6 | 10.3 | 14.2 |
| MAD 1 Random | 2.4 | 11.1 | 18.6 |
| MAD 2 Random | 2.4 | 10.7 | 18.4 |
| MAD 3 Random | 2.3 | 10.8 | 16.9 |
| MAD 4 Random | 2.2 | 11.0 | 15.1 |
| MAD 5 Random | 2.1 | 9.9 | 14.4 |
| MAD 1 Sequential | 2.1 | 9.6 | 14.5 |
| MAD 2 Sequential | 2.0 | 8.1 | 11.6 |
| MAD 3 Sequential | 1.9 | 6.7 | 8.9 |
| MAD 4 Sequential | 1.8 | 5.7 | 7.2 |
| MAD 5 Sequential | 1.8 | 5.1 | 6.2 |

[Image: frame_rate_line_plot]

Thanks for sharing your code and the huge amount of information around the model architecture, training, and inferencing. I wouldn't have been able to recreate your work without it.

@AlessioTonioni (Member) commented:

Thanks for the great work!
Judging by the results you showed here, it seems that your implementation is working more or less as expected.
If you agree, I can add a reference to your repo in our README to provide a TF2 implementation for whoever might be interested in it.

@ChristianOrr (Author) commented:

Yes, you're welcome to add a link to it in your readme!
