
The PennCOSYVIO Data Set

[Animation: ground truth from markers along the trajectory]

What is PennCOSYVIO?

The PennCOSYVIO data set is a collection of synchronized video and IMU data recorded at the University of Pennsylvania’s Singh Center in April 2016. It is geared towards benchmarking Visual Inertial Odometry (VIO) algorithms on hand-held devices, but can also be used for other platforms such as micro aerial vehicles or ground robots.

Trajectory

What sets this benchmark apart from previous ones is that it goes from outdoors to indoors:
[Photos: the Singh Center from the outside; inside the Singh Center]

The total path length is about 150 m. We localize the sensors optically via fiducial markers (AprilTags) to within about 10 cm. The animation at the top of the page shows which markers along the path are visible. These “ground truth” positions can be used to benchmark the results of VIO algorithms such as the Google Tango’s built-in estimator:
[Plot: estimated trajectory for sequence AS]
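At its core, the fiducial-based ground truth is a change of reference frames: the tag detector reports each tag's pose in the camera frame, the tag's pose in the world frame is surveyed beforehand, and the camera's world pose follows by composing homogeneous transforms. A minimal numpy sketch with made-up poses (not actual dataset values):

```python
import numpy as np

def to_hom(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def camera_pose_from_tag(T_world_tag, T_cam_tag):
    """Camera pose in the world frame: T_world_cam = T_world_tag * inv(T_cam_tag)."""
    return T_world_tag @ np.linalg.inv(T_cam_tag)

# Hypothetical example: tag detected 1 m ahead of the camera along the optical
# axis, with the surveyed tag sitting at x = 5 m in the world frame.
T_cam_tag = to_hom(np.eye(3), [0.0, 0.0, 1.0])
T_world_tag = to_hom(np.eye(3), [5.0, 0.0, 0.0])
T_world_cam = camera_pose_from_tag(T_world_tag, T_cam_tag)
print(T_world_cam[:3, 3])  # camera position in the world frame
```

In practice the detections come from an AprilTag detector and the per-tag world poses from a survey of the building; averaging over all tags visible in a frame improves the estimate.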

Sensors

We loaded seven cameras and three IMUs onto the rig: two Google Project Tango tablets, three GoPro Hero 4 cameras (C1, C2, C3), and a VI (Visual-Inertial) sensor. Nitin then hauled the rig into and through the Singh Center:

Here are the sensor characteristics:

C1, C2, C3
  • GoPro Hero 4 Black
  • rolling shutter
  • FOV: 69.5deg vert., 118.2deg horiz.
VI-Sensor
  • Skybotix integrated VI-sensor
  • stereo camera: 2 x Aptina MT9V034
  • gray 2x752x480 at 20fps (rectified), global shutter
  • FOV: 57deg vert., 2 x 80deg horiz.
  • IMU: ADIS16488 at 200Hz
Tango Bottom
  • Google Project Tango ‘Yellowstone’ 7in tablet
  • RGB 1920x1080 at 30fps, rolling shutter
  • FOV: 31deg vert., 52deg horiz.
  • proprietary VIO pose estimation
  • accelerometer at 128Hz
  • gyroscope at 100Hz
Tango Top
  • Google Project Tango ‘Yellowstone’ 7in tablet
  • gray 640x480 at 30fps, global shutter
  • FOV: 100deg vert., 132deg horiz.
  • accelerometer at 128Hz
  • gyroscope at 100Hz
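Note that the Tango tablets report the accelerometer at 128 Hz and the gyroscope at 100 Hz, so before feeding the data to a VIO pipeline one typically resamples one stream onto the other's timestamps. A sketch using linear interpolation on synthetic signals (the resampling strategy is our assumption, not part of the data set):

```python
import numpy as np

def resample(t_src, x_src, t_dst):
    """Linearly interpolate each column of x_src (N x 3) onto timestamps t_dst."""
    return np.column_stack(
        [np.interp(t_dst, t_src, x_src[:, k]) for k in range(x_src.shape[1])]
    )

# Synthetic 1-second streams at the Tango rates: 128 Hz accel, 100 Hz gyro.
t_acc = np.arange(0.0, 1.0, 1.0 / 128.0)
t_gyr = np.arange(0.0, 1.0, 1.0 / 100.0)
acc = np.column_stack([np.sin(2 * np.pi * t_acc)] * 3)

# Interpolate the accelerometer onto the gyro timestamps -> one 100 Hz stream.
acc_on_gyr = resample(t_acc, acc, t_gyr)
print(acc_on_gyr.shape)  # (100, 3)
```

Linear interpolation is adequate at these rates; higher-order schemes matter only for much sparser or noisier streams.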

Citations

If you use this data set, please cite the following publication:

@inproceedings{DBLP:conf/icra/PfrommerSDC17,
  author    = {Bernd Pfrommer and
               Nitin Sanket and
               Kostas Daniilidis and
               Jonas Cleveland},
  title     = {PennCOSYVIO: {A} challenging Visual Inertial Odometry benchmark},
  booktitle = {2017 {IEEE} International Conference on Robotics and Automation, {ICRA}
               2017, Singapore, Singapore, May 29 - June 3, 2017},
  pages     = {3847--3854},
  year      = {2017},
  crossref  = {DBLP:conf/icra/2017},
  url       = {https://doi.org/10.1109/ICRA.2017.7989443},
  doi       = {10.1109/ICRA.2017.7989443},
  timestamp = {Wed, 26 Jul 2017 15:17:30 +0200},
  biburl    = {http://dblp.uni-trier.de/rec/bib/conf/icra/PfrommerSDC17},
  bibsource = {dblp computer science bibliography, http://dblp.org}
}