🚨 This repository contains the download links to the code of our prototype "Multi-setup depth perception through virtual image hallucination", presented in the ECCV 2024 DEMO track. Our prototype builds on our previous works: "Active Stereo Without Pattern Projector" (ICCV 2023), "Stereo-Depth Fusion through Virtual Pattern Projection" (journal extension of the ICCV paper), "Revisiting Depth Completion from a Stereo Matching Perspective for Cross-domain Generalization" (3DV 2024), and "LiDAR-Event Stereo Fusion with Hallucinations" (ECCV 2024). You can find all links here: Related Papers
by Luca Bartolomei<sup>1,2</sup>, Matteo Poggi<sup>1,2</sup>, Fabio Tosi<sup>2</sup>, Andrea Conti<sup>2</sup>, and Stefano Mattoccia<sup>1,2</sup>
<sup>1</sup>Advanced Research Center on Electronic Systems (ARCES), <sup>2</sup>University of Bologna
Active Stereo Without Pattern Projector (ICCV 2023)
Project Page | Paper | Supplementary | Poster | Code
Stereo-Depth Fusion through Virtual Pattern Projection (Journal Extension)
Project Page | Paper | Code
Revisiting Depth Completion from a Stereo Matching Perspective for Cross-domain Generalization (3DV 2024)
Project Page | Paper | Supplementary | Code
Note: 🚧 This repository is still under active development. We are working to add and refine features and documentation; we apologize for any incomplete or missing elements and appreciate your patience.
The demo showcases a novel matching paradigm, proposed at ICCV 2023, that projects virtual patterns onto conventional stereo pairs according to sparse depth points gathered by a depth sensor, achieving robust and dense depth estimation at the resolution of the input images. We will show the ECCV community how flexible and effective the virtual pattern projection (VPP) paradigm is through a real-time demo based on off-the-shelf cameras and depth sensors.
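As a rough intuition only (this is not the demo code; all names are illustrative), the core of the paradigm can be sketched in a few lines: each sparse depth point is converted to a disparity via d = fx·B/z, and the same virtual intensity is "stamped" at the two corresponding pixels of the rectified stereo pair, giving any stereo matcher distinctive texture to lock onto:

```python
import numpy as np

def hallucinate_pattern(left, right, points, fx, baseline):
    """Stamp a virtual random pattern onto a rectified stereo pair.

    left, right : (H, W) grayscale images (uint8)
    points      : iterable of (u, v, z) sparse depth measurements in the
                  left-camera frame (pixels, pixels, meters)
    fx          : focal length in pixels; baseline in meters
    """
    left, right = left.copy(), right.copy()
    h, w = left.shape
    rng = np.random.default_rng(42)
    for u, v, z in points:
        d = fx * baseline / z                # depth -> disparity
        ur = int(round(u - d))               # corresponding column in the right image
        if not (0 <= ur < w and 0 <= u < w and 0 <= v < h):
            continue
        value = rng.integers(0, 256)         # random, locally distinctive intensity
        # project the same virtual value onto both views:
        left[int(v), int(u)] = value
        right[int(v), ur] = value
    return left, right
```

In practice the papers stamp patches rather than single pixels and handle occlusions, but the principle is the same: consistency between the two hallucinated views encodes the sensed depth.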
🖋️ If you find this code useful in your research, please cite:
@InProceedings{Bartolomei_2023_ICCV,
author = {Bartolomei, Luca and Poggi, Matteo and Tosi, Fabio and Conti, Andrea and Mattoccia, Stefano},
title = {Active Stereo Without Pattern Projector},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2023},
pages = {18470-18482}
}
@misc{bartolomei2024stereodepth,
title={Stereo-Depth Fusion through Virtual Pattern Projection},
author={Luca Bartolomei and Matteo Poggi and Fabio Tosi and Andrea Conti and Stefano Mattoccia},
year={2024},
eprint={2406.04345},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@inproceedings{bartolomei2024revisiting,
title={Revisiting depth completion from a stereo matching perspective for cross-domain generalization},
author={Bartolomei, Luca and Poggi, Matteo and Conti, Andrea and Tosi, Fabio and Mattoccia, Stefano},
booktitle={2024 International Conference on 3D Vision (3DV)},
pages={1360--1370},
year={2024},
organization={IEEE}
}
@inproceedings{bartolomei2024lidar,
title={LiDAR-Event Stereo Fusion with Hallucinations},
author={Bartolomei, Luca and Poggi, Matteo and Conti, Andrea and Mattoccia, Stefano},
booktitle={European Conference on Computer Vision (ECCV)},
year={2024},
}
You can build our prototype from scratch using our code and your own L515 (or MID70) and OAK-D Lite sensors. We tested our code on both a Jetson Nano (arm64) and a standard amd64 Ubuntu PC.
- Dependencies: Ensure that you have installed all the necessary dependencies. The list of dependencies can be found in the `./vpp/requirements.txt` and `./vppdc/requirements.txt` files. The Livox MID70 requires a PTP master running on the same network.
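Even with all sensor clocks PTP-aligned, camera and LiDAR streams arrive at different rates and must be paired by timestamp. A minimal nearest-timestamp matcher (illustrative only; function and parameter names are our own, not part of the demo scripts) could look like:

```python
from bisect import bisect_left

def match_by_timestamp(cam_ts, lidar_ts, max_skew=0.01):
    """Pair each camera timestamp with the nearest LiDAR timestamp.

    cam_ts, lidar_ts : sorted lists of timestamps in seconds
    max_skew         : discard pairs farther apart than this (seconds)
    Returns a list of (cam_index, lidar_index) pairs.
    """
    pairs = []
    for i, t in enumerate(cam_ts):
        j = bisect_left(lidar_ts, t)
        # candidates: the two neighbors around the insertion point
        best = min(
            (k for k in (j - 1, j) if 0 <= k < len(lidar_ts)),
            key=lambda k: abs(lidar_ts[k] - t),
        )
        if abs(lidar_ts[best] - t) <= max_skew:
            pairs.append((i, best))
    return pairs
```

The `max_skew` threshold drops frames for which no sufficiently close LiDAR packet exists, rather than fusing stale depth.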
- Build the Python Livox lib: The demos that use the Livox LiDAR need our Python porting, which can be installed using our script `./mypylivox/compile.sh`. Please install the LivoxSDK, replacing its `CMakeLists.txt` with our custom `./mypylivox/CMakeLists.txt`.
- Calibration (1): Ensure that the LiDAR and the OAK-D Lite are rigidly attached to each other (you can 3D print our supports using the STEP files). Given a chessboard calibration object, record a sequence of frames where the chessboard is visible in both the OAK-D left camera and the L515 IR camera / MID70 reflectivity pseudo-image using our scripts:

```
python calibration_recorder_l515.py --outdir <chessboard_folder>
python calibration_recorder_mid70.py --outdir <chessboard_folder>
```
- Calibration (2): Estimate the rigid transformation between the L515 IR camera (or MID70) and the OAK-D left camera from the previously recorded frames using our scripts (edit the `square_size` and `grid_size` arguments to match your chessboard object):

```
python calibration_l515.py --dataset_dir <chessboard_folder> --square_size 17 --grid_size 9 6
python calibration_mid70.py --dataset_dir <chessboard_folder> --square_size 92 --grid_size 9 6 --binary_threshold 7 --median_kernel_size 3
```
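Under the hood, extrinsic calibration boils down to estimating the rotation R and translation t that map 3D points seen by one sensor into the frame of the other. Our scripts do this from detected chessboard corners; a generic least-squares solver over matched 3D points (the SVD-based Kabsch method, sketched here with illustrative names) conveys the idea:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~= R @ src + t.

    src, dst : (N, 3) arrays of matched 3D points from the two sensors.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Given enough well-spread correspondences (chessboard corners seen by both sensors, back-projected to 3D), this recovers the LiDAR-to-camera pose in closed form.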
- Launch the demo: Run our `demo_*.py` scripts to see our virtual pattern projection (VPP) and VPP for depth completion running in real time.
For questions, please send an email to luca.bartolomei5@unibo.it
We would like to extend our sincere appreciation to Nicole Ferrari, who developed the time-synchronization algorithm, and to the PyRealSense, DepthAI, and LivoxSDK developers.