[FR]: Use OpenCL instead of proprietary alternatives (CUDA, Metal) #595

Open
RafaelLinux opened this issue Aug 16, 2019 · 65 comments
Labels
CUDA · do not close (issue that should stay open; avoid automatic close when stale) · feature request (feature request from the community) · wip (work in progress)

Comments

@RafaelLinux

I just reported that I am unable to render with Meshroom, most likely because NVIDIA does not provide a CUDA package for openSUSE 15.1, even though I have an NVIDIA GPU. I use Blender, GIMP and other tools, and all of them use OpenCL. Meshroom is developed for Linux and Windows, and OpenCL is continuously updated on both platforms. OpenCL performance is only slightly below the proprietary NVIDIA and AMD APIs, so why not let Meshroom use the OpenCL GPGPU API? Even Intel GPU users could run Meshroom if it were built on OpenCL.

Could you please consider this suggestion?

Thank you

@natowi
Member

natowi commented Aug 16, 2019

Read alicevision/AliceVision#439
Here is some background on why CUDA is used in so many applications:
https://www.quora.com/Why-cant-a-deep-learning-framework-like-TensorFlow-support-all-GPUs-like-a-game-does-Many-games-in-the-market-support-almost-all-GPUs-from-AMD-and-Nvidia-Even-older-GPUs-are-supported-Why-cant-these-frameworks

@RafaelLinux
Author

I read the thread. Some of the comments are from 2018, when OpenCL 2.2 did not yet exist, and a lot has changed since then. CUDA is used in many applications, but so is OpenCL (); that list includes Darktable, which I use regularly.

Anyway, fabiencastan wrote:

Currently, we have neither the interest nor the resources to do another implementation of the CUDA code to another GPU framework.

That's a pity, because many users cannot try Meshroom even though it is a great piece of software. Right now I'm on the PC with the Intel GPU, so there is no way for me to use Meshroom, and I have tried alternatives like Metashape, which does not necessarily require an NVIDIA GPU.

@skinkie

skinkie commented Aug 20, 2019

That @fabiencastan does not have the time to port an implementation that already works for him does not mean that others cannot implement it in their own time. The big question here is: would you implement it in OpenCL, or in something else? Some good pointers on the wiki about viable alternatives could help people who want to start on this task.

@RafaelLinux
Author

Hi skinkie, I don't have sufficient skills to code in C/C++. I would give it a try if it were Python, PHP or even JS. My point is that "fewer users able to run an application = less interest in the application = less feedback", and in the end a great idea turns into a lost effort. It's true that the CUDA API is easier to work with, but many users in this forum have posted information on how to migrate to, or simplify the switch to, OpenCL. That could be a good starting point. That's only my opinion, of course.

@skinkie

skinkie commented Aug 20, 2019

@RafaelLinux As a user you can use Meshroom without CUDA; the only part of the application that is 'hidden' is the DepthMap stage, and even that allows a preview without CUDA. As a developer, Meshroom is Python + QML, a low entry barrier for making an impact. The first place CUDA is used for acceleration is feature extraction. You could simply try to get this to work: https://github.com/pierrepaleo/sift_pyocl

Personally, my focus for Meshroom is introducing heuristics for matching images and supervised learning, as opposed to the current brute-force approach. Not that I am a photogrammetry specialist, but I can surely try to contribute to this open-source project.

@RafaelLinux
Author

Maybe I'm using Meshroom incorrectly, because if I can only get as far as DepthMap, I only see a point cloud, so I can't see the resulting model.

@skinkie

skinkie commented Aug 21, 2019

@simogasp added the CUDA and feature request labels Aug 21, 2019
@RafaelLinux
Author

Thank you, that's a good workaround; I'll try it. Anyway, remember that users don't mind how long it takes, quality is the priority, so please don't forget this feature request ;)

@aviallon

One could also use hipify from AMD to convert CUDA code to HIP, which can be built to run on either NVIDIA or AMD cards (with very nice performance; I currently use it for TensorFlow, and it works like a charm!)
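
To illustrate what that conversion looks like in practice, here is a minimal sketch (made-up kernel and function names, not AliceVision code): running a CUDA source through hipify-perl or hipify-clang mostly swaps the cuda* runtime calls and the <<<...>>> launch syntax for their HIP equivalents.

```cpp
#include <hip/hip_runtime.h>

// Illustrative kernel; hipify leaves __global__ device code largely untouched.
__global__ void scaleKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// Host code after conversion; the original CUDA calls are kept in comments.
void runScale(float* h_data, int n) {
    size_t bytes = n * sizeof(float);
    float* d_data = nullptr;
    hipMalloc(&d_data, bytes);                                // was: cudaMalloc(...)
    hipMemcpy(d_data, h_data, bytes, hipMemcpyHostToDevice);  // was: cudaMemcpy(..., cudaMemcpyHostToDevice)
    // was: scaleKernel<<<blocks, threads>>>(d_data, n);
    hipLaunchKernelGGL(scaleKernel, dim3((n + 255) / 256), dim3(256), 0, 0, d_data, n);
    hipMemcpy(h_data, d_data, bytes, hipMemcpyDeviceToHost);  // was: cudaMemcpy(..., cudaMemcpyDeviceToHost)
    hipFree(d_data);                                          // was: cudaFree(...)
}
```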

@natowi
Member

natowi commented Aug 25, 2019

@aviallon The last time we checked (2018), HIP did not support some CUDA functions (alicevision/AliceVision#439 (comment)), and there was no full support for Windows or for amdgpu on Linux (alicevision/AliceVision#439 (comment)).

You are welcome to try again using hipify.

@arpu

arpu commented Sep 17, 2019

for reference https://github.com/cpc/hipcl

@pppppppp783

for reference https://github.com/cpc/hipcl

This is interesting, has anyone tried it?

@ShalokShalom

ShalokShalom commented Sep 27, 2019

Nvidia does not provide any CUDA package for OpenSUSE 15.1.

This is simply a packaging issue, since Arch has CUDA despite not being in the list here.

Have you already reported that issue to both the openSUSE packagers and the NVIDIA CUDA team?

And you can probably repackage either the openSUSE 15.0 package or the Arch package, which uses an independent source, as you can see in the link.

@skinkie

skinkie commented Sep 27, 2019

@ShalokShalom the problem with CUDA remains that older hardware simply does not work with newer CUDA versions. This causes problems between nvidia-drivers and CUDA, where one is effectively searching for the 'ideal pair' of the two. I would be very interested to see whether OpenCL could bridge this gap, even by letting you choose the execution pipeline.

@ShalokShalom

ShalokShalom commented Sep 27, 2019

And how is that with HIP? Does NVIDIA hardware run on it as well?

I am considering using a GeForce GT 610 for CUDA; can you tell me how to choose the suitable CUDA version?

Thanks a lot

@natowi
Member

natowi commented Sep 27, 2019

@ShalokShalom

And how is that with HIP? Does NVIDIA hardware run on it as well?

"HIP allows developers to convert CUDA code to portable C++. The same source code can be compiled to run on NVIDIA or AMD GPUs"

I am considering using a GeForce GT 610 for CUDA; can you tell me how to choose the suitable CUDA version?

On Windows, install the latest version; on Linux this might depend on your distro. The GT 610 supports CUDA compute capability 2.1, and Meshroom requires 2.0+.

@ShalokShalom

I am on Linux; what decides which version is optimal? I am on KaOS, which is a rolling distribution.

So does HIP make the version differences between CUDA and the various NVIDIA GPUs negligible?

Could or should we replace CUDA entirely with it, or is the overhead too big?

@natowi
Member

natowi commented Sep 27, 2019

@ShalokShalom With HIP we can compile two versions of Meshroom: one for CUDA and one for AMD GPUs. For CUDA users nothing changes. (https://kaosx.us/docs/nvidia/ But you won't get far with a 1 GB GT 610.)

@natowi
Member

natowi commented Sep 27, 2019

We have to wait for HIP to support cudaMemcpy2DFromArray. Then we can add AMD support for AliceVision/Meshroom and try HIPCL.
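
For reference, this is the kind of 1:1 mapping the port is waiting on; a minimal sketch only (copyDepthTile is a made-up helper name, not AliceVision code), showing the CUDA call in question next to its HIP counterpart once HIP provides it.

```cpp
#include <hip/hip_runtime.h>

// CUDA side (the call the port is blocked on):
//   cudaMemcpy2DFromArray(dst, dpitch, srcArray, wOffset, hOffset,
//                         widthInBytes, height, cudaMemcpyDeviceToDevice);
//
// HIP counterpart a hipified code path would call instead:
hipError_t copyDepthTile(void* dst, size_t dpitch, hipArray_const_t srcArray,
                         size_t widthInBytes, size_t height) {
    return hipMemcpy2DFromArray(dst, dpitch, srcArray,
                                /*wOffset=*/0, /*hOffset=*/0,
                                widthInBytes, height,
                                hipMemcpyDeviceToDevice);
}
```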

@skinkie

skinkie commented Sep 27, 2019

@natowi But you won't get far with a 1 GB GT 610

If Meshroom allowed parallel computation for nodes where both the CPU and GPU could, for example, do feature extraction, any additional computing resource could help. It depends on how much overhead the GPU adds compared to a (faster) decent CPU, but I still see potential for independent computation tasks.

@natowi added the "do not close" label (issue that should stay open; avoid automatic close when stale) Oct 27, 2019
@arpu

arpu commented Nov 16, 2019

Looks like HIP now supports cudaMemcpy2DFromArray; any progress on this?

@natowi
Member

natowi commented Nov 16, 2019

@skinkie see #175

@arpu Yes, all the CUDA functions are now supported by HIP, and I was able to convert the code to HIP using the conversion tool (read here for details). The only thing left is to write a new CMake file that includes HIP and supports both CUDA and AMD compilation on the different platforms. Here is the Meshroom PopSift plugin I used for testing. At the moment I don't have the time to figure out how to rewrite the CMake file, but I think @ShalokShalom wanted to look into this.
You are welcome to do so as well.

@ShalokShalom

One question is very critical, I think: will we ship two versions?

Linux distributions do their packaging themselves, and we could benefit enormously from finding someone willing to maintain AliceVision for their user base, since that could result in new developers and funding.

Two versions, one for CUDA and one for HIP, is something they will never do.

@natowi
Member

natowi commented Nov 19, 2019

@ShalokShalom From the HIP code we can compile both the CUDA and the AMD version. Similar to the target platform/OS parameter in CMake, CUDA or AMD can be selected, so depending on the compiler parameters we can define the builds (OS + CUDA/AMD).
Once we can compile all supported platforms from our hipified code, we can create a PR to use HIP instead of the CUDA code by default in the official repo.
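
To make the "one hipified source, two builds" idea concrete, a minimal sketch (not the actual AliceVision code or build setup): hipcc picks the backend from how HIP is configured (for example the HIP_PLATFORM setting) and defines one of the platform macros below; note that older HIP releases used __HIP_PLATFORM_HCC__/__HIP_PLATFORM_NVCC__ instead.

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// A single hipified translation unit; the vendor backend is fixed at compile time.
#if defined(__HIP_PLATFORM_AMD__)
static const char* kBackend = "AMD (ROCm)";
#elif defined(__HIP_PLATFORM_NVIDIA__)
static const char* kBackend = "NVIDIA (built on top of the CUDA toolkit)";
#else
static const char* kBackend = "unknown";
#endif

__global__ void addOne(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    std::printf("HIP backend: %s\n", kBackend);
    // ... hipMalloc / hipLaunchKernelGGL(addOne, ...) / hipMemcpy as in any HIP program ...
    return 0;
}
```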

@PickUpYaAmmo

Any idea approximately how long that will take? I feel like a child just before Christmas Eve :D

@natowi
Member

natowi commented Nov 25, 2019

@PickUpYaAmmo I will take another look at this over the winter holidays.

@kalidem

kalidem commented May 25, 2020

https://github.com/alicevision/meshroom/wiki/Draft-Meshing

@simogasp pinned this issue May 25, 2020
@Mhowser

Mhowser commented Aug 30, 2020

Hi guys, has there been any progress on this?

@Jimw338

Jimw338 commented Sep 22, 2020

As a thought experiment, if the functionality used by Meshroom were rewritten to use the CPU instead of the GPU (if that is possible), how much slower would it be? My (limited) understanding is that GPGPU basically lets you do massively parallel computation (and of course offload work from the CPU itself). If this were rewritten with, say, plain loops, what would the slowdown be?

@zicklag

zicklag commented Sep 22, 2020

My not-well-informed estimate is that the slowdown could be enormous. I'm pretty sure that many types of computation can run hundreds of times slower on a CPU, and Meshroom really does seem to make full use of my GPU while it works.

I think it's pretty much the perfect kind of work to run on the GPU because it can be massively parallel, which means that losing that massive parallelism would slow it down a lot.

But feel free to correct me if I'm wrong; some of that is guesswork/impression.
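
As a toy illustration of that reasoning (not Meshroom code, just a made-up per-pixel operation): the work below is independent per element, so the GPU version spreads it over thousands of threads while the CPU version walks the pixels one by one; how much that actually buys you depends on the hardware and the kernel.

```cpp
#include <cuda_runtime.h>

// CPU version: one element after another.
void invertCpu(float* img, int n) {
    for (int i = 0; i < n; ++i) img[i] = 1.0f - img[i];
}

// GPU version: one thread per element; the independence is what makes it parallel.
__global__ void invertGpu(float* img, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) img[i] = 1.0f - img[i];
}

void runInvertGpu(float* d_img, int n) {
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    invertGpu<<<blocks, threads>>>(d_img, n);  // launches blocks*threads parallel threads
    cudaDeviceSynchronize();
}
```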

@ShalokShalom

You could potentially use both, as Blender does.

@natowi
Member

natowi commented Nov 10, 2020

We are looking into an alternative to the DepthMap node that runs on the CPU (it is not yet ready to use). It is not as good as the native DepthMap node quality-wise, but it is better than DraftMeshing.
You will be informed once it is ready. Asking for updates does not speed up the process ;)


I did some research and the best solution for porting is still HIP. I'll keep working to see whether I can build a test version, but it will definitely take time, as I am learning by trial and error and success is not guaranteed.
There is the long-term option of trying to crowdfund the development, but at the moment it is hard to guess how many people would actually support this. So let's find out (short survey).

@skinkie

skinkie commented Dec 11, 2020

This looks like an even better argument to switch to HIP: https://www.phoronix.com/scan.php?page=news_item&px=AMD-HIP-CPU-Implementation

@skinkie

skinkie commented Jan 9, 2021

I think we also have to consider that AMD, with respect to compatibility, might be just as bad as NVIDIA. I just noticed that ROCm has started to drop support for GPUs that could still be considered very useful for our computation tasks. This basically means that even if we migrate to OpenCL, the GPU you want to run it on still needs to be "very recent", otherwise you lose compatibility. Feels sad, though.

@aviallon

aviallon commented Jan 9, 2021

@skinkie there is a difference between ROCm and OpenCL. If there were an OpenCL port, there wouldn't be any such issue, since even very old devices receive updates from the open-source community.
ROCm, however, is very close to CUDA in the way it works, and although AMD might drop support for older GPUs in the future, since it is open source, support might very well continue to exist unofficially.


@skinkie

skinkie commented Jan 9, 2021

I just ran into a kernel panic with Tesseract using OpenCL. While I generally agree with your statement, OpenCL may fall back to a CPU implementation, but as I just noticed, that is not a given. And even if it worked gracefully, a CPU computation might render some operations useless or make them cost extreme amounts of time (and therefore power).

@ShalokShalom

Could we change the title?

@acxz

acxz commented Apr 23, 2022

I apologize if this is a bit out of touch with the current direction of the conversation, but I wanted to share it nonetheless:

https://github.com/illuhad/hipSYCL

@skinkie

skinkie commented Apr 23, 2022

I apologize if this is a bit out of touch with the current direction of the conversation, but I wanted to share it nonetheless:

https://github.com/illuhad/hipSYCL

Would that allow the program to use all interfaces simultaneously? (Read: the ability to schedule tasks across multiple targets.)
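
For what it's worth, here is a minimal sketch of what SYCL (and therefore hipSYCL) already lets you express: one queue per device, with independent work submitted to each. The runtime runs the two submissions concurrently, but deciding which device gets which task is still up to the application; there is no automatic scheduling across targets. The selectors and trivial kernels below are illustrative only.

```cpp
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    std::vector<float> a(1 << 20, 1.0f), b(1 << 20, 2.0f);

    // One queue per target. gpu_selector_v throws if no GPU is available.
    sycl::queue cpu_q{sycl::cpu_selector_v};
    sycl::queue gpu_q{sycl::gpu_selector_v};

    {
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(a.size()));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(b.size()));

        // Two independent tasks; the application decides which device gets which.
        cpu_q.submit([&](sycl::handler& h) {
            sycl::accessor acc(buf_a, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(a.size()),
                           [=](sycl::id<1> i) { acc[i] *= 2.0f; });
        });
        gpu_q.submit([&](sycl::handler& h) {
            sycl::accessor acc(buf_b, h, sycl::read_write);
            h.parallel_for(sycl::range<1>(b.size()),
                           [=](sycl::id<1> i) { acc[i] += 1.0f; });
        });
    }   // buffers go out of scope: both queues finish and results are copied back

    return 0;
}
```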

@acxz

This comment was marked as off-topic.

@skinkie

skinkie commented Apr 23, 2022

Would that allow the program to use all interfaces simultaneously?

This would depend on whether HIP or SYCL can do this, which I don't think they can yet (maybe they can; if so, please point me to some resources).

My question was more along the lines of: would the glue code take care of it? ;)

@acxz

This comment was marked as off-topic.

@skinkie

skinkie commented Apr 23, 2022

Considering that HIP and SYCL do not have that feature, hipSYCL cannot implement it even if they wanted to. Given the motivation of hipSYCL, if HIP or SYCL does gain that feature, then yes, hipSYCL will expose it to users.

I don't think HIP or SYCL would need that functionality on their own if the intermediate layer took care of it, e.g. starting a new task on the CPU or GPU, whatever is available.

@acxz

This comment was marked as off-topic.

@skinkie

skinkie commented Apr 23, 2022

This is getting off-topic; if you want to discuss it more, feel free to open an issue over at https://github.com/illuhad/hipSYCL

I really don't think this is off-topic. Meshroom splits a huge task into many smaller steps, but is then limited to a specific backend. Anything that natively allows those tasks to be scheduled transparently to the CPU, GPU, etc. would motivate people to integrate that technology sooner.

@ShalokShalom

Why would this be off-topic? 👀

@michal2229

michal2229 commented May 19, 2022

I think this could be helpful: https://www.phoronix.com/news/Intel-SYCLomatic-20220829 (SYCLomatic on GitHub)

@Nosenzor

SYCL or Vulkan compute shaders could be the open solution (with a preference for SYCL); SYCL's principles are pretty close to CUDA's.
What about the MeshroomCL version that has existed in parallel: https://github.com/openphotogrammetry/meshroomcl ? Is there a way for AliceVision to take that source code and work from there?

@balaclava9

A more practical suggestion here: Regard3D is another OpenMVG-based photogrammetry solution. Its author wrote a densification procedure that doesn't need CUDA and is platform agnostic. He hasn't updated it in a while, but maybe you could add his densification module to Meshroom. It's open source: https://github.com/rhiestan/Regard3D/tree/master

OpenMVS also has a very nice platform-independent densification module that works very well, and which I've been using.

@zicklag

zicklag commented Feb 13, 2024

Something to look into:

https://github.com/vosen/ZLUDA

ZLUDA is currently alpha quality, but it has been confirmed to work with a variety of native CUDA applications: Geekbench, 3DF Zephyr, Blender, Reality Capture, LAMMPS, NAMD, waifu2x, OpenFOAM, Arnold (proof of concept) and more.

@vosen

vosen commented Mar 26, 2024

To those interested in the topic: could you please test the Meshroom-compatible ZLUDA version? More info here: vosen/ZLUDA#79 (comment)

@natowi
Member

natowi commented Mar 30, 2024

For those of you who want to test it now:
Download the official Windows build and replace the AliceVision folder with the one shared by vosen. Then start Meshroom with ZLUDA (download provided by vosen).

You can download a ready to use ZIP here if you prefer. I put it together to simplify testing.

(It includes Meshroom 2023.3.0 and AliceVision+ZLUDA as provided by vosen. I added a Run-Meshroom-ZLUDA.bat that hopefully works, and a ZLUDA-Info.txt with some information on ZLUDA from the Git repo.)

It would be great if you could do some tests with the https://github.com/alicevision/dataset_monstree dataset (mini3 and full) so we can compare the performance.

@polarathene

You can download a ready to use ZIP here if you prefer. I put it together to simplify testing.

This just loads a webpage that says "Not found", response is 404. Perhaps it's only available to you?


I only have a laptop with a 780M APU + RTX 4060 (the laptop part, not desktop, so a weaker part AFAIK), paired with a Ryzen 7940HS (8 cores/16 threads @ 4 GHz) and 32 GB RAM. A 780M probably isn't ideal as an AMD GPU to test with? 🤷‍♂️

I might find time to give it a try with the dataset if you like, although I haven't done photogrammetry in a while. I only have about 30 GB of disk to spare at the moment; if that's sufficient, I can probably tackle it by next weekend 👍

@natowi
Member

natowi commented Apr 1, 2024

@polarathene sorry, my bad. The link is fixed. Just give it a test run on your machine; 30 GB should be more than enough to test with the monstree dataset.
