Datasets R12-E, R12-ES, and R12-ELP20

Datasets containing ground-truth data on complex real-world 3D scenes, acquired with a Leica ScanStation P20 (LP20) 3D lidar scanner and a multi-focus plenoptic camera! See https://github.com/comsee-research/plenoptic-datasets!



Details

The datasets can be downloaded from https://github.com/comsee-research/plenoptic-datasets.

Experimental setup

For our experiments, we used a Raytrix R12 color 3D light-field camera with an MLA of f/2.4 aperture. The camera operates in the Galilean internal configuration. The mounted main lens is a Nikon AF Nikkor 50 mm f/1.8D. The MLA has a hexagonal row-aligned layout and is composed of 176 x 152 (width x height) micro-lenses of I = 3 different types. The sensor is a Basler beA4000-62KC with a pixel size of s = 0.0055 mm. The raw image resolution is 4080 x 3068 pixels.
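
As a quick reference, here is a minimal Python sketch that records the setup described above and derives the physical sensor size from the pixel pitch and raw resolution; the variable and function names are illustrative, not part of any released tool.

    # Minimal record of the experimental setup described above (illustrative names).
    R12_SETUP = {
        "main_lens_focal_length_mm": 50.0,   # Nikon AF Nikkor 50 mm f/1.8D
        "mla_aperture_fnumber": 2.4,         # micro-lens aperture
        "mla_grid": (176, 152),              # micro-lenses, width x height
        "mla_types": 3,                      # I = 3 micro-lens types
        "pixel_size_mm": 0.0055,             # Basler beA4000-62KC pixel pitch
        "raw_resolution_px": (4080, 3068),   # width x height
    }

    def sensor_size_mm(setup):
        """Physical sensor size implied by the pixel pitch and raw resolution."""
        w_px, h_px = setup["raw_resolution_px"]
        s = setup["pixel_size_mm"]
        return w_px * s, h_px * s            # ~22.44 mm x ~16.87 mm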

All images were acquired using the free MultiCamStudio software (v6.15.1.3573) from Euresys. The shutter speed was set to 5 ms.

3D scenes setup. We use a 3D lidar scanner, a Leica ScanStation P20 (LP20), to capture high-precision color point clouds that serve as ground-truth data. The LP20 is configured without HDR and with a resolution of 1.6 mm @ 10 m.
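
For intuition, the stated resolution of 1.6 mm @ 10 m corresponds to an angular sampling of roughly 0.16 mrad; a back-of-the-envelope check in Python:

    import math

    # Angular sampling implied by a point spacing of 1.6 mm at a range of 10 m.
    spacing_mm, range_mm = 1.6, 10_000.0
    angle_rad = spacing_mm / range_mm             # small-angle approximation
    print(f"{angle_rad * 1e3:.2f} mrad")          # 0.16 mrad
    print(f"{math.degrees(angle_rad):.4f} deg")   # ~0.0092 deg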

Datasets

The configuration corresponds to a focus distance h = 2133 mm. We built a calibration dataset, named R12-E, using a 6 x 4 checkerboard with 30 mm squares, which is composed of:

  • white raw plenoptic images acquired at different apertures (N ∈ {2.8, 4, 5.66, 8, 11.31, 16}) using a light diffuser mounted on the main objective, for pre-calibration,
  • free-hand calibration target images acquired at various poses (in distance and orientation) for the calibration process (31 images),
  • a white raw plenoptic image acquired in the same luminosity conditions and with the same aperture as the calibration-target images, for devignetting (a rough sketch is given after this list).
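
As a rough illustration of how the white image can be used for devignetting, here is a minimal Python sketch; the file names are hypothetical, and the simple ratio-based correction is only an approximation, not the method implemented in the COMPOTE tools.

    import numpy as np
    import imageio.v3 as iio

    # Devignetting sketch: divide the raw image by the normalised white image.
    # File names are hypothetical; adapt them to the actual dataset layout.
    raw = iio.imread("checkerboard_raw.png").astype(np.float64)
    white = iio.imread("white_raw.png").astype(np.float64)

    gain = np.clip(white / white.max(), 1e-3, None)   # normalise, avoid division by zero
    corrected = np.clip(raw / gain, 0, 255).astype(np.uint8)

    iio.imwrite("checkerboard_devignetted.png", corrected)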

With this configuration, we created two sub-datasets:

  • a simulated dataset, named R12-ES, built with our own raytracing-based simulator PRISM to generate images (1500 rays/pixel) with known absolute positions, for quantitative evaluation (15 images, from 500 mm to 1900 mm with a step of 100 mm);
  • a dataset composed of several 3D scenes with ground truth acquired with the LP20, for object distances ranging from 400 mm to 1500 mm.

The latter dataset, named R12-ELP20, includes five scenes:

  • one scene for extrinsic parameters calibration, containing checker corner targets, named Calib;
  • two scenes containing textured planar objects, named Plane-1 and Plane-2;
  • and two more complex scenes containing various figurines, named Figurines-1 and Figurines-2.

Each scene is composed of:

  • a colored point cloud (spatial (x, y, z), color (r, g, b), and intensity information) in .ptx, .pts, and .xyz formats;
  • the 3D positions of the targets in the lidar reference frame;
  • two raw plenoptic images in RGB and two raw plenoptic images in Bayer format;
  • photos and labels of the scene.
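
As a loose illustration of how the ASCII point clouds could be loaded, here is a minimal Python sketch; the file name and the assumed column order (x, y, z, intensity, r, g, b) are assumptions to be checked against the actual files.

    import numpy as np

    # Point-cloud loading sketch. The .pts header line (point count) is skipped;
    # the column order assumed here must be verified against the actual files.
    data = np.loadtxt("Figurines-1.pts", skiprows=1)

    xyz = data[:, 0:3]        # spatial coordinates, in the lidar reference frame
    intensity = data[:, 3]    # intensity information
    rgb = data[:, 4:7]        # color information

    print(f"{xyz.shape[0]} points, extent (x, y, z) = {xyz.max(axis=0) - xyz.min(axis=0)}")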

Applications

For instance, the datasets can be used with the following applications:

  • COMPOTE (Calibration Of Multi-focus PlenOpTic camEra), a collection of tools to pre-calibrate and calibrate (multifocus) plenoptic cameras.
  • PRISM (Plenoptic Raw Image Simulator), a collection of tools to generate and simulate raw images from (multifocus) plenoptic cameras.
  • BLADE (BLur Aware Depth Estimation with a plenoptic camera), a collection of tools to estimate depth map from raw images obtained by (multifocus) plenoptic cameras.

All these tools are based on libpleno, an open-source C++ computer-vision library for plenoptic camera modeling and processing.

Citing

If you use our datasets, libpleno, or the COMPOTE, PRISM, or BLADE tools in an academic context, please cite one of the following publications:

@inproceedings{labussiere2020blur,
  title 	=	{Blur Aware Calibration of Multi-Focus Plenoptic Camera},
  author	=	{Labussi{\`e}re, Mathieu and Teuli{\`e}re, C{\'e}line and Bernardin, Fr{\'e}d{\'e}ric and Ait-Aider, Omar},
  booktitle	=	{Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages		=	{2545--2554},
  year		=	{2020}
}

or

@article{labussiere2022calibration,
  title		=	{Leveraging blur information for plenoptic camera calibration},
  author	=	{Labussi{\`{e}}re, Mathieu and Teuli{\`{e}}re, C{\'{e}}line and Bernardin, Fr{\'{e}}d{\'{e}}ric and Ait-Aider, Omar},
  doi		=	{10.1007/s11263-022-01582-z},
  journal	=	{International Journal of Computer Vision},
  year		=	{2022},
  month		=	{may},
  number	=	{2012},
  pages		=	{1--23}
}

License

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License. Enjoy!

Note: if download links are broken, don’t hesitate to contact me!