I am trying to create a Gaussian Splatting representation of my render projects so that I can navigate them in 6DoF.
The workflow I envision: take my existing Blender project, render a few hundred frames from it with known camera poses, then sample a point cloud from the project's mesh and use it to seed the Gaussian Splatting optimization. Since both the point cloud and the rendered views come from the same project, the relative positions of everything are known exactly, and running SfM to reconstruct the scene seems wasteful.
All the Gaussian Splatting training implementations I can find only accept COLMAP or similar datasets as input. Is it possible to synthetically construct such a dataset without going through SfM reconstruction, and what would it take to do that?