During the review of our paper, and even during the development phase of the dataset, we received some questions which could not be answered in the paper due to the lack of space. We nevertheless felt that they deserved answers, which we provide here.
Developing such a dataset is time-consuming and the possibilities are endless. We therefore had to choose which features to prioritize. We did so based on a series of constraints which are our own and may unfortunately differ from yours.
In general, we tried to be complementary to existing datasets in terms of proposed features. We believe that the performance of robust algorithms for tasks such as ego-motion or depth estimation should be similar across different datasets. We therefore think that training simultaneously on several datasets, to increase the diversity of the training content, should be considered while waiting for an all-in-one, ultra-complete dataset... if it ever comes.
For example, we received questions about the lack of moving and man-made objects. Our dataset features only a few moving and/or man-made objects simply because there are not many of them in typical unstructured environments. There are however several other datasets featuring such objects (the KITTI dataset, for example). These datasets can be merged together (and with ours if you need some particular feature of Mid-Air) to combine all of their features.
Finally, we want to emphasize that we will maintain, and possibly continue to develop, our dataset. A feature that is missing now may therefore be added later on.
Most of these imperfections can easily be simulated in postprocessing on the existing images. Since there are so many different camera models, each with its own imperfections, we did not want to restrict everyone to a single imperfection model. Instead, we prefer to leave users free to implement the imperfection model which suits their use case.
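As an illustration, a simple vignetting and sensor-noise model can be layered onto the clean renders in a few lines of numpy. This is only a sketch of one possible imperfection model among many; the function name and parameter values below are ours, not part of any released tooling.

```python
import numpy as np

def add_sensor_imperfections(image, vignette_strength=0.4, noise_std=0.02, seed=None):
    """Apply a toy vignetting + Gaussian-noise model to a float image in [0, 1].

    Both the radial falloff profile and the noise model are illustrative
    choices; swap in whatever imperfection model matches your target camera.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Radial falloff: 1.0 at the image center, (1 - vignette_strength) in the corners.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)
    falloff = 1.0 - vignette_strength * (r / r.max()) ** 2
    if image.ndim == 3:
        falloff = falloff[..., None]
    out = image * falloff
    # Additive zero-mean Gaussian noise, clipped back to the valid range.
    out = out + rng.normal(0.0, noise_std, size=out.shape)
    return np.clip(out, 0.0, 1.0)
```

The same pattern extends to other imperfections (chromatic aberration, lens distortion, rolling shutter, ...) by composing additional per-pixel transforms.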
There are two main reasons for this. The first is that drones are not supposed to fly in such severe weather, which strongly lessens the need for such additions in the dataset. The second is technical: with our current render pipeline, it is difficult to guarantee that rainfall and snowfall will be consistent between two consecutive frames.
Unreal Engine 4 (UE4) does support motion blur, but not for all renderings. We use the AirSim simulator as an API to UE4, and AirSim uses RenderTargets to simulate the different onboard cameras. Unfortunately, the motion blur feature is not supported by RenderTargets.
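If motion blur is needed, one rough workaround is to approximate a finite exposure in postprocessing by accumulating several closely spaced sharp frames. The sketch below assumes such frames are available and is a crude stand-in for true per-pixel motion blur, not a feature of our pipeline.

```python
import numpy as np

def accumulation_motion_blur(frames, weights=None):
    """Approximate motion blur by a weighted average of consecutive sharp frames.

    `frames` is a sequence of float images rendered closely in time; averaging
    them mimics the light accumulated during a finite exposure window.
    """
    frames = np.asarray(frames, dtype=np.float64)
    if weights is None:
        weights = np.ones(len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    # Contract the weight vector against the frame axis: sum_i w_i * frame_i.
    return np.tensordot(weights, frames, axes=1)
```

Non-uniform weights can emulate shutter profiles; the quality of the approximation depends on how densely the frames sample the motion.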
We agree that, in some software (such as Blender), it is trivial to get optical flow. However, as with motion blur, optical flow is not available for Unreal Engine 4 RenderTargets (which are the simulated cameras).
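That said, for static scenes, dense flow can be derived in postprocessing from a depth map and the relative camera pose between two frames. A minimal numpy sketch under a pinhole model follows; the intrinsics `K` and the frame-1-to-frame-2 transform `T_21` are assumed inputs, and the function name is ours, not part of any release.

```python
import numpy as np

def flow_from_depth_and_pose(depth, K, T_21):
    """Derive dense optical flow for a static scene from depth and relative pose.

    depth : (H, W) depth along the optical axis for frame 1
    K     : (3, 3) pinhole intrinsics
    T_21  : (4, 4) rigid transform mapping frame-1 camera coordinates
            into frame-2 camera coordinates
    Returns an (H, W, 2) array of (du, dv) pixel displacements.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([us, vs, np.ones_like(us)], axis=-1).astype(np.float64)
    # Back-project every pixel of frame 1 to a 3-D point in camera 1.
    rays = pix @ np.linalg.inv(K).T
    pts1 = rays * depth[..., None]
    # Move the points into the frame-2 camera and reproject to pixels.
    pts2 = pts1 @ T_21[:3, :3].T + T_21[:3, 3]
    proj = pts2 @ K.T
    uv2 = proj[..., :2] / proj[..., 2:3]
    return uv2 - pix[..., :2]
```

This only holds where the scene is static and visible in both frames; moving objects and occlusions need separate handling.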