
tips for getting a better colored mesh model #8

Closed
SpongeGirl opened this issue May 13, 2020 · 16 comments
Labels
good first issue Good for newcomers

Comments

@SpongeGirl

SpongeGirl commented May 13, 2020

I've tested this repo with my own 360° images successfully. To get a better colored mesh model, I suggest:

  1. Use FULL-resolution photos (e.g. 3968×2976) to run COLMAP and imgs2poses.py, and make sure you take the photos horizontally.

  2. If you can't see your center object clearly when you run extract_mesh.py, your poses_bounds.npy file is probably wrong. Check it with np.load('poses_bounds.npy')[:, -2:] to see whether there are many abnormally small values; the near/far bounds should be at a normal level (see the sketch after this list).
    (By the way, if you've trained a good model, you will not have this problem.)

  3. A good model should converge to PSNR 25 within the first 10k steps; if not, something is wrong.

  4. When you tune the xyz ranges and the sigma_threshold parameter to get a better volume box, start with x_range = y_range = (-1.5, 1.5), z_range = (-4, -1), sigma_threshold = 5, because I found the object always sits at a lower place. Make sure you can see your object completely and clearly first; then it will be easy to tune these parameters slightly to get a better result.
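
A minimal sketch of the check in point 2 (the last two columns of poses_bounds.npy are the per-image near/far depth bounds; the 0.1 cutoff below is only an illustrative threshold, not a value from the repo):

```python
import numpy as np

# The last two columns are the per-image near/far depth bounds that
# imgs2poses.py derives from COLMAP's sparse points.
bounds = np.load('poses_bounds.npy')[:, -2:]
print('near/far range:', bounds.min(), '->', bounds.max())

# Many near-zero entries usually mean the COLMAP reconstruction went wrong.
if (bounds < 0.1).mean() > 0.5:  # 0.1 is an illustrative cutoff
    print('Warning: many abnormally small bounds; re-check the COLMAP run.')
```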

Thanks to the author's help I've solved many problems, so I want to post my advice here for people who run into the same problems.

Finally, I can show you my colored mesh model; it turns out this repo is good for 3D reconstruction:

[screenshot: colored mesh model]

@kwea123 kwea123 pinned this issue May 13, 2020
@kwea123
Owner

kwea123 commented May 13, 2020

Clarification for point 4:
z stands for the up/down axis. You need negative values because the origin is the center of all camera positions, so if you took images with the camera facing downwards, your object will be located on the -z axis.
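
To make the range parameters concrete, here is a minimal sketch (not the repo's actual extraction code) of how x/y/z_range and sigma_threshold define the volume box that marching cubes runs on; query_density is a hypothetical stand-in for evaluating the trained NeRF's density MLP:

```python
import numpy as np
import mcubes  # pip install PyMCubes

N = 128  # grid resolution per axis (hypothetical)
x = np.linspace(-1.5, 1.5, N)   # x_range
y = np.linspace(-1.5, 1.5, N)   # y_range
z = np.linspace(-4.0, -1.0, N)  # z_range: the object tends to sit below the origin
pts = np.stack(np.meshgrid(x, y, z, indexing='ij'), -1).reshape(-1, 3)

# sigma = query_density(pts).reshape(N, N, N)  # hypothetical: run the density MLP
sigma = np.zeros((N, N, N))                    # placeholder so the sketch runs

# Extract the surface where density crosses sigma_threshold; lower the threshold
# if the mesh has holes, raise it if there is floating noise.
vertices, triangles = mcubes.marching_cubes(sigma, 5.0)  # sigma_threshold = 5
```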

@stone100010

Docker or Anaconda? @SpongeGirl

@SpongeGirl
Author

SpongeGirl commented May 14, 2020 via email

@stone100010

Thank you! @kwea123 @SpongeGirl

@kwea123 kwea123 added the good first issue Good for newcomers label May 17, 2020
@stone100010

stone100010 commented May 21, 2020

- Blender (Realistic Synthetic 360)
- LLFF (Real Forward-Facing)
- Your own data (Forward-Facing / 360 inward-facing)

Which one do you use? @kwea123

@kwea123
Owner

kwea123 commented May 21, 2020

If it's your own data, then it's the last one! I have a tutorial video; did you watch it?

@kwea123 kwea123 closed this as completed May 22, 2020
@phongnhhn92

Hey @kwea123, I just started working on this repo and I am a bit confused about the training-on-your-own-data part. So the basic idea is to use COLMAP to get poses from a video captured around a single object, right? Then what is the next step to use it with your repo? I see that you suggest training with the LLFF method, which confuses me :)

@kwea123
Owner

kwea123 commented May 29, 2020

I mean that after the COLMAP reconstruction, you run the same command as for LLFF data. My wording was imprecise; I've fixed it. You can also watch my video for a detailed explanation.
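
For anyone landing here, that command looks roughly like the LLFF training command in the README, plus the spherical-poses option for 360 inward-facing captures. The flag names below are assumptions taken from the README and may differ between repo versions, so verify them with `python train.py --help`:

```bash
# Rough shape of the command (verify the flags against your repo version):
python train.py \
  --dataset_name llff --root_dir /path/to/your_scene \
  --img_wh 504 378 --spheric_poses \
  --N_importance 64 --num_epochs 30 --exp_name your_scene
```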

@sixftninja

Thanks @kwea123, this is helpful!

@sixftninja

For x_range, y_range = (-1.5, 1.5) and z_range = (-4, -1), the Colab notebook runtime restarts and nothing happens! Help...

@huying163

Hi, I just started working on this repo and I am a bit confused about training.
The step "Install torchsearchsorted by `cd torchsearchsorted` then `pip install .`" does not work for me.

@Yes-Jumby

Yes-Jumby commented Dec 16, 2021

`cd torchsearchsorted` then `python setup.py install` will work.

@huying163

Yes, that solved it. Thanks!

@huying163

huying163 commented Dec 16, 2021 via email

@povolann

@SpongeGirl Thank you for a very helpful comment. I think I might have a similar problem: I took photos of my flower pot, which is in the centre of the photos, but in the generated images (360 inward-facing) it ends up in the bottom part of the image. I know your comment is more about extracting the mesh, but I was wondering if you have some advice for me :)
Thank you for your reply!

@Holmes-Alan

Another tip: when training the NeRF on your own data, use downscaled images rather than the original size. More specifically, downsampling your images by 8× (to about 400×400) worked; otherwise the model cannot successfully extract the 3D mesh.
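
If your version of the repo exposes an image-size option, the easiest way to apply this downscale is at load time rather than resizing the files by hand. The flag name below is an assumption based on the README, so check it against your checkout:

```bash
# 3968x2976 downsampled 8x is roughly 496x372, in line with "about 400x400" above
python train.py --dataset_name llff --root_dir /path/to/your_scene --img_wh 496 372
```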
