
Download

HDF5 Files

The data streams from the individual sensors have been combined into HDF5 files that mirror the ROS bag structure. HDF5 is a standard format with support in almost any language, and should enable easier development for non-ROS users.

The timestamps for each topic are embedded as an additional array with the suffix '_ts'. For example, in Python, loading a file and reading the left grayscale images and their timestamps involves the following lines of code:

import h5py

# Open the file read-only; h5py loads datasets lazily, so this is cheap.
data = h5py.File('outdoor_day2_data.hdf5', 'r')
images = data['davis']['left']['image_raw']
image_ts = data['davis']['left']['image_raw_ts']

In addition, we provide a mapping from each DAVIS image to the event nearest to it in time. For example,

image_raw_event_inds = data['davis']['left']['image_raw_event_inds']

where image_raw_event_inds[image_ind] is the index of the event corresponding to image image_ind.
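One common use of this mapping is to gather all events that arrived between two consecutive frames. The sketch below uses small synthetic stand-ins for the events array and the index mapping (in practice both come from the hdf5 file as shown above), purely to illustrate the indexing pattern:

```python
import numpy as np

# Synthetic stand-ins; in practice these come from the hdf5 file.
events = np.arange(100)                        # placeholder event array
image_raw_event_inds = np.array([10, 35, 80])  # nearest event per image

def events_between_frames(image_ind):
    """Events that arrived between image image_ind and image image_ind + 1."""
    start = image_raw_event_inds[image_ind]
    stop = image_raw_event_inds[image_ind + 1]
    return events[start:stop]

window = events_between_frames(0)  # events indexed 10 through 34
```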

Note that the events are concatenated into a single array, and as such do not have the associated ROS message timestamps. However, each individual event retains its timestamp.
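Because the per-event timestamps are sorted, a time window of events can be extracted with a binary search rather than a linear scan. The example below uses a tiny synthetic events array and assumes each row stores [x, y, timestamp, polarity]; check the layout in your own file before relying on this:

```python
import numpy as np

# Synthetic stand-in for data['davis']['left']['events']; each row is
# assumed to be [x, y, timestamp, polarity].
events = np.array([[5, 7, 0.01,  1],
                   [6, 7, 0.02, -1],
                   [5, 8, 0.05,  1],
                   [9, 2, 0.09, -1]])

t = events[:, 2]  # per-event timestamps, monotonically increasing
# All events with timestamps in the half-open window [0.015, 0.06):
lo, hi = np.searchsorted(t, [0.015, 0.06])
window = events[lo:hi]
```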

The files can be found in the Google Drive folder here: https://drive.google.com/open?id=1rwyRk26wtWeRgrAx_fgPc-ubUzTFThkV

ROS Bags

To process the bag files, you will need the rpg_dvs_ros package to read the events (in particular dvs_msgs). You may also optionally install the visensor_node to have access to the /cust_imu0 topic, which includes the magnetometer, pressure and temperature outputs of the VI-Sensor.

Sequences will be added to this page on a rolling basis. We also plan to include videos for each sequence.

Note that these bags are large (up to 27G).

If the server is down (links no longer work), the individual files can be found here.

For each scene we list its calibration files, then each sequence with its data and ground truth downloads (and the bag name, where available):

Indoor flying (Note: No VI-Sensor data is available for this scene)
  Calibration
  Indoor Flying 1: Data (1.2G), Ground truth (2.6G)
  Indoor Flying 2: Data (1.7G), Ground truth (3.2G)
  Indoor Flying 3: Data (1.8G), Ground truth (3.5G)
  Indoor Flying 4: Data (419M), Ground truth (738M)

Outdoor Driving Day (Note: A hardware failure caused the grayscale images on the right DAVIS to be corrupted for this scene. However, VI-Sensor grayscale images are available.)
  Calibration
  Outdoor Day 1: Data (9.7G), Ground truth (9.5G), outdoor_day1
  Outdoor Day 2: Data (27G), Ground truth (23G), outdoor_day2

Outdoor Driving Night
  Calibration
  Outdoor Night 1: Data (8.1G), Ground truth (9.5G), outdoor_night1
  Outdoor Night 2: Data (11G), Ground truth (11G), outdoor_night2
  Outdoor Night 3: Data (9G), Ground truth (11G), outdoor_night3

Motorcycle (Note: No lidar)
  Calibration
  Highway 1: Data (42G), Ground truth (659K)


Ground Truth Optical Flow Generation

In addition to the ground truth provided by the original dataset, we provide code to generate dense ground truth optical flow for each sequence with ground truth poses and depths. For storage and bandwidth reasons, we do not provide the optical flow directly, but instead provide the code to generate it from the ground truth provided here. The method for this is outlined in the paper:
EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras.

The processed optical flow ground truth has been saved in numpy format (.npz), and can be found here. The ground truth flow for each sequence has a suffix of _gt_flow_dist. Each npz file contains a dictionary with keys: 'timestamps', 'x_flow_dist', 'y_flow_dist'.
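Loading the flow ground truth is a plain np.load call. The sketch below writes a tiny synthetic file mimicking the *_gt_flow_dist.npz layout and reads it back; the array shapes here are illustrative stand-ins, not taken from the dataset:

```python
import os
import tempfile
import numpy as np

# Synthetic file with the documented keys; real files are much larger.
path = os.path.join(tempfile.mkdtemp(), 'demo_gt_flow_dist.npz')
np.savez(path,
         timestamps=np.array([0.0, 0.1]),
         x_flow_dist=np.zeros((2, 4, 6)),
         y_flow_dist=np.zeros((2, 4, 6)))

gt = np.load(path)
ts = gt['timestamps']
# Stack the per-axis flow fields into a single (T, 2, H, W) array.
flow = np.stack([gt['x_flow_dist'], gt['y_flow_dist']], axis=1)
```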

We also provide files with the suffix _odom, which contain a dictionary with keys: 'timestamps', 'lin_vel', 'ang_vel', 'pos', 'quat'.
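A typical task with the odometry files is sampling a quantity at an arbitrary query time. The sketch below builds a tiny synthetic _odom.npz and linearly interpolates the linear velocity; the shapes assumed here (lin_vel, ang_vel, pos as (T, 3) and quat as (T, 4)) should be verified against the real files:

```python
import os
import tempfile
import numpy as np

# Synthetic file with the documented keys and assumed shapes.
path = os.path.join(tempfile.mkdtemp(), 'demo_odom.npz')
np.savez(path,
         timestamps=np.array([0.0, 1.0]),
         lin_vel=np.array([[0., 0., 0.], [2., 0., 0.]]),
         ang_vel=np.zeros((2, 3)),
         pos=np.zeros((2, 3)),
         quat=np.tile([0., 0., 0., 1.], (2, 1)))

odom = np.load(path)
# Linearly interpolate each velocity axis to the query time.
t_query = 0.25
v = np.array([np.interp(t_query, odom['timestamps'], odom['lin_vel'][:, i])
              for i in range(3)])
```

Note that this simple per-axis interpolation is fine for velocities and positions, but orientations ('quat') should be interpolated with slerp rather than componentwise.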

The git repo for this ground truth can be found here: https://github.com/daniilidis-group/mvsec/tree/master/tools/gt_flow.

If you use this optical flow dataset, please cite:

Zhu, A. Z., Yuan, L., Chaney, K., & Daniilidis, K. (2018). EV-FlowNet: Self-Supervised Optical Flow Estimation for Event-based Cameras. Robotics: Science and Systems 2018.