ITLCampus-SM

The dataset was recorded with the Husky robotics platform on a university campus and consists of 5 tracks captured at different times of day (day/dusk/night) and in different seasons (winter/spring).

Data

| Track | Season | Time of day | Frames, pcs | Front cam, res | Back cam, res | LiDAR, rays | 6 DoF pose | Semantic masks |
|---|---|---|---|---|---|---|---|---|
| 00_2023-02-21 | winter | day | $620$ | $1920\times 1080$ | $1920\times 1080$ | 16 | front + back | $1920\times 1080 \times 65$ classes |
| 01_2023-03-15 | winter | night | $626$ | $1920\times 1080$ | $1920\times 1080$ | 16 | front + back | $1920\times 1080 \times 65$ classes |
| 02_2023-02-10 | winter | twilight | $609$ | $1920\times 1080$ | $1920\times 1080$ | 16 | front + back | $1920\times 1080 \times 65$ classes |
| 03_2023-04-11 | spring | day | $638$ | $1920\times 1080$ | $1920\times 1080$ | 16 | front + back | $1920\times 1080 \times 65$ classes |
| 11_2023-04-13 | spring | night | $631$ | $1920\times 1080$ | $1920\times 1080$ | 16 | front + back | $1920\times 1080 \times 65$ classes |

6 DoF poses were obtained with the ALeGO-LOAM localization method and refined with Interactive SLAM.
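The exact storage format of the poses in track.csv is not spelled out above. As an illustration only, the sketch below assumes a pose given as a translation plus a unit quaternion and turns it into a 4x4 homogeneous transform; the function name `pose_to_matrix` and the argument order are placeholders for the example, not the documented layout.

```python
import numpy as np
from scipy.spatial.transform import Rotation


def pose_to_matrix(tx, ty, tz, qx, qy, qz, qw):
    """Build a 4x4 homogeneous transform from a 6 DoF pose.

    Assumes the pose is a translation plus a unit quaternion;
    the actual column layout of track.csv may differ.
    """
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()
    T[:3, 3] = [tx, ty, tz]
    return T


# Example: move a LiDAR point from the frame's local coordinates
# into the map frame defined by the 6 DoF pose.
T = pose_to_matrix(1.0, 2.0, 0.0, 0.0, 0.0, 0.0, 1.0)
point_local = np.array([0.5, 0.0, 0.2, 1.0])  # homogeneous coordinates
point_map = T @ point_local
```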

Sensors

| Sensor | Model | Resolution |
|---|---|---|
| Front cam | ZED (stereo) | $1920\times 1080$ |
| Back cam | RealSense D435 | $1920\times 1080$ |
| LiDAR | VLP-16 | $16\times 1824$ |

Semantics

Semantic masks are obtained using the OneFormer model pre-trained on the Mapillary dataset.

The masks are stored as single-channel images; each pixel stores a semantic label. Examples of semantic labels are shown in the table below:

| Label | Semantic class | Color, [r, g, b] |
|---|---|---|
| ... | ... | ... |
| 10 | Parking | [250, 170, 160] |
| 11 | Pedestrian Area | [96, 96, 96] |
| 12 | Rail Track | [230, 150, 140] |
| 13 | Road | [128, 64, 128] |
| ... | ... | ... |

The complete list of semantic labels and their colors is provided in the file anno_config.json.
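As an illustration only, the following sketch colorizes one single-channel mask using such a label-to-color mapping. The assumed schema of anno_config.json (a list of entries with `label` and `color` keys) and the chosen track and file are placeholders; adapt them to the real file layout.

```python
import json
from pathlib import Path

import numpy as np
from PIL import Image

# Hypothetical schema: a list of {"label": int, "name": str, "color": [r, g, b]}
# entries; the actual layout of anno_config.json may differ.
with open("anno_config.json") as f:
    classes = json.load(f)

# Build a label -> color lookup table (labels fit into one byte).
palette = np.zeros((256, 3), dtype=np.uint8)
for entry in classes:
    palette[entry["label"]] = entry["color"]

# Take the first front-camera mask of a track; file names are timestamps.
mask_path = sorted(Path("00_2023-02-21/labels/front_cam").glob("*.png"))[0]
mask = np.array(Image.open(mask_path))  # single channel, one label per pixel

# Map labels to colors to get an RGB visualization of the mask.
color_mask = palette[mask]
Image.fromarray(color_mask).save("mask_colored.png")
```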

An example of a semantic mask overlaid on an image:

Structure

The data are organized by tracks. Each track is about 3 km long and contains about 600 frames; the distance between adjacent frames is ~5 m.

The structure of track data storage is as follows:

```
00_2023-02-21
├── back_cam
│   ├── ####.png
│   └── ####.png
├── demo.mp4
├── front_cam
│   ├── ####.png
│   └── ####.png
├── labels
│   ├── back_cam
│   │   ├── ####.png
│   │   └── ####.png
│   └── front_cam
│       ├── ####.png
│       └── ####.png
├── lidar
│   ├── ####.bin
│   └── ####.bin
├── test.png
├── track.csv
└── track_map.png
```

where

- `####` is the file name, i.e. the timestamp of the image/scan (a virtual timestamp of the moment when the image/scan was taken)
- `.bin` files are LiDAR scans in binary format (see the loading sketch below)
- `.png` files are images and semantic masks
- `.csv` is the timestamp mapping for all data together with the 6 DoF robot poses
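To give a sense of how the pieces fit together, here is a minimal loading sketch for one track. The track.csv column names and the float32 (x, y, z, intensity) layout of the `.bin` scans are assumptions made for illustration; check them against the actual files.

```python
from pathlib import Path

import numpy as np
import pandas as pd

track_dir = Path("00_2023-02-21")

# track.csv maps the timestamps of all sensors to the 6 DoF robot poses.
# Inspect the header to learn the actual column names before using them.
track = pd.read_csv(track_dir / "track.csv")
print(track.columns.tolist())
row = track.iloc[0]  # first frame of the track

# Load a LiDAR scan; file names are (virtual) timestamps.
# Assumption: each scan is a flat float32 array of (x, y, z, intensity)
# points, KITTI-style; adjust the reshape if the layout differs.
scan_path = sorted((track_dir / "lidar").glob("*.bin"))[0]
points = np.fromfile(scan_path, dtype=np.float32).reshape(-1, 4)
print(points.shape)
```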

An example of a track trajectory (track_map.png):
