
Releases: kwea123/CasMVSNet_pl

release full fusion results for blendedmvs

02 Apr 12:59

Fusion results for all scans using

python eval.py \
    --dataset_name blendedmvs \
    --root_dir /home/ubuntu/data/BlendedMVS/dataset_full_res/ \
    --split all \
    --ckpt_path ckpts/exp_g8_blended/epoch.15.ckpt \
    --num_groups 8 --depth_interval 192.0

and other default parameters in eval.py.
The point cloud sizes range from 1M points up to 250M points for the largest scenes. Large scenes require ~10 GB of RAM to open, so make sure to free up memory before opening the files, otherwise your PC will freeze.
Due to the large size of some scenes, I compressed them. Please decompress before visualization.
Put under results/blendedmvs/points.

Use python visualize_ply.py --dataset_name blendedmvs --scan {scan name} to visualize.
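If you want to know how many points a scan has before opening it (so you don't run out of RAM), here is a minimal Python sketch that reads only the PLY header without loading the points; it assumes the files sit at results/blendedmvs/points/{scan name}.ply, which is my guess at the layout:

def ply_vertex_count(path):
    # read only the header lines; the binary payload is never loaded
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":  # no vertex element found
                return None

# "some_scan" is a placeholder; replace it with a released scan name
print(ply_vertex_count("results/blendedmvs/points/some_scan.ply"))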

Take a look at BlendedMVS_scenes to quickly see what the scenes look like.

Note: scenes 5bbb6eb2ea1cfa39f1af7e0c and 5b558a928bbfb62204e77ba2 are still more than 2GB after compression, so I cannot put them here. All other 111 scenes are available.

release blendedmvs trained model

31 Mar 06:02

This release contains the BlendedMVS pretrained model and training logs, trained with --depth_interval 192.0 --num_groups 8!

Note:

  • add --num_groups 8 for DTU evaluation (see the example below)
  • add --depth_interval 192.0 --num_groups 8 for BlendedMVS evaluation
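For example, a DTU evaluation with this model would look roughly like the following; everything in braces is a placeholder for your own paths and split, and only --num_groups 8 is specific to this checkpoint:

python eval.py \
    --dataset_name dtu \
    --root_dir {path to DTU} \
    --split {split} \
    --ckpt_path {path to this release's ckpt} \
    --num_groups 8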

release full fusion results for tanks and temples

06 Mar 02:11

Fusion results for all scans using default parameters in eval.py (except that indoor scenes use --min_geo_consistent=3).
Each point cloud contains 100M~300M points.
Due to the large size of some scenes, I compressed them. Please decompress before visualization.
Put under results/tanks/points.
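As a rough sketch (paths, split and checkpoint are placeholders), evaluating the indoor scenes looks like the default command plus the one extra flag:

python eval.py \
    --dataset_name tanks \
    --root_dir {path to Tanks and Temples} \
    --split {split} \
    --ckpt_path {ckpt path} \
    --min_geo_consistent 3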

Use python visualize_ply.py --dataset_name tanks --scan {scan name} to visualize.

release full fusion results for dtu

01 Mar 06:19

Fusion results for all scans (train, val and test) using default parameters in eval.py.
Each point cloud contains 20M~30M points.
Put under results/dtu/points.

Use python visualize_ply.py --dataset_name dtu --scan {scan name} to visualize.

A viewpoint file viewpoint.json is also provided. Add --use_viewpoint to use the same viewpoint across scans.
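For example, to look at a scan with the provided viewpoint:

python visualize_ply.py --dataset_name dtu --scan {scan name} --use_viewpoint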

release DTU trained model

13 Feb 12:23

This release contains the DTU pretrained model and training logs.