Real-time visual odometry on DJI Tello Drone

Author: Muye Jia

The pipeline starts with camera calibration for the RGB camera located on the front of the drone. Captured images are converted to grayscale and fed to an ORB or Shi-Tomasi feature extractor. Features extracted from the previous frame are then matched in the current frame, which yields the image coordinates of the same set of features in both frames. These correspondences are used to solve the epipolar constraint equation for the essential matrix, which encodes the camera rotation and translation between the two frames. Finally, the per-frame transformation matrices are chained together to form the full camera trajectory.
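For orientation, the two-frame core of such a pipeline can be sketched with OpenCV as below. This is a minimal illustration, not the repository's exact implementation: the intrinsic matrix `K`, the `frame_pairs` iterable, and the tunables (feature count, match cap, RANSAC threshold) are assumed placeholders.

```python
import cv2
import numpy as np

def relative_pose(prev_gray, curr_gray, K):
    """Estimate rotation R and (unit-scale) translation t between two frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    # Match ORB descriptors with Hamming distance; keep the strongest matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Solve the epipolar constraint for the essential matrix (RANSAC rejects
    # outlier correspondences), then decompose it into R and t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

# Chain the per-pair transforms into a global trajectory. For a monocular
# camera the translation scale is arbitrary, so the trajectory is up to scale.
pose = np.eye(4)
trajectory = [pose[:3, 3].copy()]
for prev_gray, curr_gray in frame_pairs:  # placeholder: consecutive grayscale frames
    R, t = relative_pose(prev_gray, curr_gray, K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t.ravel()   # maps points from frame k-1 to frame k
    pose = pose @ np.linalg.inv(T)       # camera-to-world pose of frame k
    trajectory.append(pose[:3, 3].copy())
```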
Demo video: `draw_points.mp4`
The trajectories shown in the demo video were obtained from real deployment on the DJI Tello drone.
To reproduce them:

- Take calibration photos of a gridded chessboard with the target camera, from at least 10 different angles and views. Run `python3 cam_calibrate.py` with the `take_calibration_photos` function enabled and the host computer connected to the DJI Tello drone's WiFi; the script takes photos with the Tello camera at a frequency of 10 Hz.
- Next, run `python3 cam_calibrate.py` on the photos taken, with the `calibrate_camera` function enabled; the `cam_matrix` variable then contains the intrinsic parameters of the camera (a sketch of this step appears after this list).
- To validate against the KITTI dataset, run `python3 visual_odometry.py` with the `plot_KITTI` function enabled to test the VO pipeline; bundle adjustment can optionally be enabled (a ground-truth plotting sketch also appears after this list).
- To test the VO on the drone, record images with the drone, then perform the odometry offline by running `python3 visual_odometry.py` with the `plot_drone` function enabled.
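For reference, the calibration step boils down to the standard OpenCV chessboard routine sketched below; the pattern size and photo directory are assumptions for illustration, not values taken from `cam_calibrate.py`.

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the chessboard; adjust to the printed board (assumed 9x6 here).
PATTERN = (9, 6)

# 3-D object points for one view: a planar grid at z=0, in units of one square.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration_photos/*.png"):  # assumed photo directory
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# cam_matrix holds the intrinsics (fx, fy, cx, cy); dist holds lens distortion.
ret, cam_matrix, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None
)
print("intrinsics:\n", cam_matrix)
```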
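When validating on KITTI, it helps to plot the estimate against the dataset's ground truth. A small sketch, assuming the standard KITTI odometry pose files (one flattened 3x4 [R|t] matrix per line); the file path and the `vo_xz` array from the VO pipeline are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

# KITTI odometry ground truth: one pose per line, a flattened 3x4 [R|t] matrix.
poses = np.loadtxt("poses/00.txt").reshape(-1, 3, 4)  # path is an assumption
gt_xz = poses[:, [0, 2], 3]  # x (right) and z (forward) translation components

plt.plot(gt_xz[:, 0], gt_xz[:, 1], label="KITTI ground truth")
# plt.plot(vo_xz[:, 0], vo_xz[:, 1], label="VO estimate")  # from the pipeline
plt.axis("equal")
plt.legend()
plt.show()
```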