Mid-Air

Mid-Air, the Montefiore Institute Dataset of Aerial Images and Records, is a multi-purpose synthetic dataset for low-altitude drone flights. It provides a large amount of synchronized flight data for multi-modal vision sensors and navigation sensors mounted on board a flying quadcopter. Our multi-modal vision sensors capture RGB pictures, relative surface normal orientation, depth, object semantics, and stereo disparity.

Additionally, each flight trajectory was recorded several times in the same place but under different climate conditions in order to change the visuals of the scene. This offers the opportunity to train algorithms for robustness to visual changes. A test set for benchmarking this particular criterion is provided alongside the training data.

Features





Large training set

Our dataset contains 79 minutes of drone flight records extracted from more than 5 hours of flight records. Records were captured by manually flying the drone in a virtual environment using an RC controller connected to the computer. The 79 minutes of flight are divided into 54 individual trajectories of equal length.

Since each trajectory is rendered several times for different climate scenarios, Mid-Air offers more than 420,000 individual training frames.


Multi-modal sensors

One of the key features of Mid-Air is the variety of data it provides. Our drone is equipped with 3 RGB cameras and records several ground-truth visual maps, such as relative surface normal orientation, depth, object semantics, and stereo disparity.

On top of that, our dataset provides drone positioning information. In addition to ground truths, flight records also contain data logs for several simulated positioning sensors, i.e. an accelerometer, a gyroscope, and a GPS.

Complete information related to sensor features can be found on the Technical specifications page.
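
As an illustration, the sensor logs of one trajectory could be read with h5py along the following lines. This is a minimal sketch: the file name and all group paths used below are assumptions made for illustration, and the actual layout is documented on the Data organization page.

# Minimal sketch of reading one trajectory's sensor logs with h5py.
# The file name and every group path are assumed for illustration;
# see the Data organization page for the actual layout.
import h5py

with h5py.File("sensor_records.hdf5", "r") as f:   # assumed file name
    traj = f["trajectory_0000"]                    # assumed group name
    accel = traj["imu/accelerometer"][:]           # assumed path, shape (N, 3)
    gyro = traj["imu/gyroscope"][:]                # assumed path
    gps = traj["gps/position"][:]                  # assumed path
    print(accel.shape, gyro.shape, gps.shape)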





Train for robustness

With its 4 weather setups, 3 different seasons, 3 environment maps, and high-quality visuals, our dataset should give a good indication of the performance that can be expected from algorithms in real-world scenarios.

Additionally, since each trajectory is recorded several times in different climate conditions, Mid-Air can be used to test the robustness of vision algorithms to visual changes. We also propose a benchmark designed to assess this robustness; a sketch of how such an evaluation could be organized is given below.
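
As a hedged sketch, one way to quantify robustness is to run the same algorithm on a trajectory rendered under each climate and measure how much its predictions drift across renderings. The climate folder names and the predict() function below are hypothetical placeholders, not part of the dataset API.

# Sketch: measure prediction drift of a model across climate renderings
# of the same trajectory. Folder names and predict() are hypothetical.
from pathlib import Path
import numpy as np

CLIMATES = ["sunny", "cloudy", "foggy", "sunset"]      # assumed climate names

def predict(frame_path: Path) -> np.ndarray:
    # Placeholder for the vision algorithm under test.
    raise NotImplementedError

def robustness_gap(root: Path, trajectory: str) -> float:
    preds = []
    for climate in CLIMATES:
        frames = sorted((root / climate / trajectory).glob("*.jpeg"))
        preds.append(np.stack([predict(p) for p in frames]))
    stacked = np.stack(preds)                # (climates, frames, ...)
    # Spread of predictions across climates, averaged over frames:
    return float(np.mean(np.std(stacked, axis=0)))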

All the considerations taken into account when designing the dataset and the benchmark are explained in detail in our paper.


Ease of use

We wanted to reduce the time required to set up our dataset as much as possible. We did this by choosing a straightforward data organization, and we provide simple example scripts (available on GitHub) that show how to use it; see also the sketch below.
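
For instance, once the archives are extracted, trajectories could be enumerated along these lines. The folder hierarchy shown here is an assumption made for illustration; the authoritative layout is given on the Data organization page.

# Sketch: count the left-camera RGB frames of every extracted trajectory.
# The environment/climate/sensor folder hierarchy is assumed here.
from pathlib import Path

root = Path("MidAir")                                  # assumed extraction folder
for env in sorted(p for p in root.iterdir() if p.is_dir()):
    for climate in sorted(p for p in env.iterdir() if p.is_dir()):
        for traj in sorted((climate / "color_left").glob("trajectory_*")):
            n = len(list(traj.glob("*.jpeg")))
            print(f"{env.name}/{climate.name}/{traj.name}: {n} frames")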

Additionally, we have an automated download procedure that lets you select precisely what you want to download. Our Download page also contains a small preview for each trajectory so that you can get an idea of its content without having to download it completely.


Terms of Use

Copyright


All datasets and benchmarks on this website are copyrighted and published under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

This means that you must attribute the work in the manner specified by the authors, that you may not use this work for commercial purposes, and that if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.

If you need a commercial license, please contact us.

Citation
When using this dataset in your research, we would appreciate it if you cited our CVPRW paper:

@INPROCEEDINGS{Fonder2019MidAir,
  author    = {Michael Fonder and Marc Van Droogenbroeck},
  title     = {Mid-Air: A multi-modal dataset for extremely low altitude drone flights},
  booktitle = {Conference on Computer Vision and Pattern Recognition Workshop (CVPRW)},
  year      = {2019},
  month     = {June}
}