Shape reconstruction from RGBD images of the ShapeNet dataset.
- Created a draft of the report (see Overleaf).
- Generated animations
- Voxel scaling methods and their results are in the src/voxel_grid_scaling.ipynb notebook.
- We created a new dataset: we sampled 50% of the size-32 voxel grids we already had (compared to 20% in the previous experiments). In addition, we took 21 small categories from the original ShapeNet and downsampled their voxel grids from 128 to 32.
- We trained the model on the dataset described above.
- For the best model:
- Papers and resources reviewed:
  - POCO
  - 3D Reconstruction of Novel Object Shapes from Single Images
  - 3D Reconstruction from RGB-D
  - Papers with Code
  - Large-Scale 3D Shape Reconstruction and Segmentation from ShapeNet Core55
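The 128-to-32 downsampling mentioned above can be sketched as block max-pooling over the occupancy grid. This is a minimal sketch assuming cubic numpy occupancy arrays; the actual scaling methods and their comparison are in src/voxel_grid_scaling.ipynb.

```python
import numpy as np

def downsample_voxels(grid: np.ndarray, factor: int = 4) -> np.ndarray:
    """Downsample a cubic occupancy grid by max-pooling over
    factor^3 blocks (e.g. 128^3 -> 32^3 for factor=4)."""
    n = grid.shape[0]
    assert grid.shape == (n, n, n) and n % factor == 0
    m = n // factor
    blocks = grid.reshape(m, factor, m, factor, m, factor)
    # a coarse voxel is occupied if any fine voxel in its block is occupied
    return blocks.max(axis=(1, 3, 5))

# example: a 128^3 grid with a single occupied voxel
g = np.zeros((128, 128, 128), dtype=np.uint8)
g[5, 70, 127] = 1
small = downsample_voxels(g, 4)
print(small.shape)       # (32, 32, 32)
print(small[1, 17, 31])  # 1
```

Max-pooling is conservative for occupancy: thin structures survive downsampling, at the cost of slightly thickening the shape.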
Activities performed:
- implementation of the ShapeNet sampling script (scripts/sample_shapenet.py)
- implementation of RenderBlender(TM), a script rendering meshes into RGB and depth images (scripts/render_blender.py)
- preprocessing of the sampled part of the dataset (10%)
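The sampling step above can be sketched as drawing a fixed fraction of model ids from each category. This is a hypothetical sketch, not the contents of scripts/sample_shapenet.py; the per-category sampling and the minimum of one model per category are assumptions.

```python
import random

def sample_models(model_ids_by_category: dict, fraction: float = 0.1,
                  seed: int = 0) -> dict:
    """Sample a fixed fraction of model ids from each category.
    Per-category sampling keeps small categories represented."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    sampled = {}
    for cat, ids in model_ids_by_category.items():
        k = max(1, round(len(ids) * fraction))  # keep at least one model
        sampled[cat] = rng.sample(sorted(ids), k)
    return sampled

catalog = {"chair": [f"chair_{i}" for i in range(50)],
           "lamp": [f"lamp_{i}" for i in range(8)]}
picked = sample_models(catalog, fraction=0.1)
print({c: len(v) for c, v in picked.items()})  # {'chair': 5, 'lamp': 1}
```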
Example renders were shown here in a two-column table (RGB | DEPTH); the images are not included.
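To display depth renders next to the RGB images, the metric depth maps have to be normalized to 8-bit grayscale. A minimal sketch, assuming zero marks invalid (background) pixels and that near maps to dark, far to bright; the actual visualization pipeline is not specified in the report.

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray, invalid: float = 0.0) -> np.ndarray:
    """Normalize a metric depth map to an 8-bit grayscale image,
    mapping invalid (zero-depth) pixels to black."""
    valid = depth != invalid
    img = np.zeros(depth.shape, dtype=np.uint8)
    if not valid.any():
        return img
    d_min, d_max = depth[valid].min(), depth[valid].max()
    scale = (d_max - d_min) or 1.0  # avoid division by zero on flat maps
    img[valid] = (255 * (depth[valid] - d_min) / scale).astype(np.uint8)
    return img

d = np.array([[0.0, 1.0],
              [2.0, 3.0]])
img = depth_to_uint8(d)
print(img)  # [[  0   0] [127 255]]
```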