
Calibration

Calibration Parameters

Each camera was intrinsically calibrated using Kalibr, with the DAVIS images calibrated using the equidistant distortion model and the VI-Sensor images calibrated using the standard radtan distortion model. Two different distortion models are used because the lenses on the DAVIS cameras have a slightly shorter focal length (more fisheye) than the stock VI-Sensor lenses.

To rectify the VI-Sensor images, you can use the standard OpenCV or ROS rectification functions.
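As a rough sketch (in Python with OpenCV; the function and variable names are placeholders, and the calibration values are assumed to have been read from the calibration file described below as numpy arrays), full-image rectification looks like:

import cv2

def rectify_visensor_image(image, K, dist, R, P, size):
    # K: 3x3 intrinsics, dist: radtan distortion coefficients,
    # R: rectification matrix, P: projection matrix, size: (width, height),
    # all taken from the corresponding camera entry in the calibration yaml.
    map_x, map_y = cv2.initUndistortRectifyMap(K, dist, R, P, size, cv2.CV_32FC1)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)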

To rectify the DAVIS images and events, you will need to use the OpenCV fisheye rectification functions. This amounts to simply adding the fisheye namespace in front of the usual function (e.g. cv::fisheye::undistortPoints vs cv::undistortPoints). Note that the same cv::remap function works on both sets of images (no fisheye namespace needed). ROS does not currently support the equidistant distortion model. However, you can look at these pull requests (one, two) against the common_msgs and vision_opencv repos to find the changes to the ROS image processing pipeline that allow for this model. Once these pull requests are merged in, this workaround will no longer be necessary.
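For sparse data such as events, cv::fisheye::undistortPoints (cv2.fisheye.undistortPoints in Python) can be applied directly to the event coordinates. A minimal sketch, assuming the DAVIS intrinsics, equidistant distortion coefficients, rectification matrix, and projection matrix have been read from the calibration yaml as numpy arrays (names are placeholders):

import cv2
import numpy as np

def rectify_davis_events(events_xy, K, dist, R, P):
    # events_xy: Nx2 array of raw (x, y) event coordinates.
    # K, dist, R, P: the DAVIS camera's intrinsics, equidistant distortion
    # coefficients, rectification matrix, and projection matrix.
    pts = events_xy.reshape(-1, 1, 2).astype(np.float64)
    rect = cv2.fisheye.undistortPoints(pts, K, dist, R=R, P=P)
    return rect.reshape(-1, 2)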

For convenience, the mapping between each pixel in the distorted image and the corresponding pixel in the rectified image is stored for each camera as $SEQUENCE_(left/right)_(x/y)_map.txt. For example, to rectify an event (or any point) (x, y) in the left DAVIS camera for outdoor_day:

x_rect = outdoor_day_left_x(y, x)
y_rect = outdoor_day_left_y(y, x)
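A minimal Python sketch of this lookup, assuming the maps are whitespace-separated text files that numpy can read directly (the exact file names below are hypothetical, following the pattern above):

import numpy as np

left_x_map = np.loadtxt('outdoor_day_left_x_map.txt')
left_y_map = np.loadtxt('outdoor_day_left_y_map.txt')

def rectify_left_event(x, y):
    # The maps are indexed by (row, column), i.e. (y, x), of the raw event.
    return left_x_map[y, x], left_y_map[y, x]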

Extrinsics are provided between the lidar and the left DAVIS camera, between all cameras, and between each camera and its own IMU. In addition, the ground truth pose has been transformed into the left DAVIS camera frame.

All intrinsic and extrinsic calibrations are stored in yaml format, roughly following the calibration yaml files output from Kalibr.
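The files can be parsed with any yaml library. A minimal sketch using PyYAML (the file name is hypothetical, and the exact key names should be checked against the files themselves; they roughly follow the Kalibr conventions described in the next section):

import yaml

with open('outdoor_day_calib.yaml') as f:  # hypothetical file name
    calib = yaml.safe_load(f)

cam0 = calib['cam0']
intrinsics = cam0['intrinsics']          # Kalibr-style [fx, fy, cx, cy]
distortion = cam0['distortion_coeffs']   # equidistant or radtan coefficients
T_cam_imu = cam0['T_cam_imu']            # 4x4 IMU-to-camera transform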

Calibration File Format

Each scene (corresponding to a single day of recording) has its own calibration file. Each file consists of:

  • T_cam0_lidar: The 4x4 transformation that takes a point from the Velodyne frame to the left DAVIS camera frame.
  • For each camera (0-3):
    • Distortion model and coefficients
    • Intrinsics
    • Rectification matrix
    • Projection matrix
    • Resolution
    • The ROS topic corresponding to this camera
    • T_cam_imu: The 4x4 transformation that takes a point from this camera’s IMU frame (where applicable) to this camera’s camera frame.
    • T_cn_cnm1: The 4x4 transformation that takes a point in the previous camera’s camera frame to this camera’s camera frame (e.g. cam0->cam1, cam1->cam2). A short sketch of composing these transforms is given after this list.
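As a rough illustration of how these transforms can be used (a sketch with hypothetical helper names; the 4x4 matrices are assumed to have been read from the calibration yaml as numpy arrays):

import numpy as np

def transform_points(T, points):
    # Apply a 4x4 homogeneous transform T to an Nx3 array of points.
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homog.T).T[:, :3]

def compose(*transforms):
    # Chain 4x4 transforms in the order they are applied,
    # e.g. compose(T_c1_c0, T_c2_c1) maps points from cam0 to cam2.
    T = np.eye(4)
    for t in transforms:
        T = t @ T
    return T

For instance, transform_points(T_cam0_lidar, velodyne_points) takes Velodyne points into the left DAVIS camera frame, and composing the cam0->cam1 and cam1->cam2 transforms gives a cam0-to-cam2 transform.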