
VVSim Dataset

VVSim is a large-scale dataset created for aerial–ground cooperative perception (AGCP). It integrates synchronized multimodal sensing data and state information collected simultaneously from vehicles and UAVs. The dataset contains 61K fully annotated frames covering 19 interaction scenarios (e.g., cut-in and lane change), 5 weather conditions (sunny, foggy, rainy, cloudy, snowy), and 11 scene types such as city, town, university, highway, and mountain environments. Beyond these frames, VVSim provides 255K LiDAR sweeps and 3.5M images (1.2M RGB, 1.2M semantic segmentation, and 1.1M depth images), accompanied by detailed annotations for 2D and 3D bounding boxes, object trajectories, and agent states.

Data Annotation

VVSim provides comprehensive annotations for both 3D and 2D perception tasks. Each object in the scene is annotated with a 3D bounding box (986K boxes in total) capturing its spatial extent and orientation in the world coordinate system. For camera-based tasks, 2D bounding boxes are generated by projecting the corresponding 3D boxes onto the image plane.
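The 3D-to-2D projection step can be sketched as follows, assuming a standard pinhole camera model. The helper names, the FRD-to-optical axis swap, and the intrinsic values used below are illustrative assumptions, not values taken from the dataset's own tooling.

```python
import numpy as np

def frd_to_optical(pts_frd):
    # The card lists the camera frame as FRD (x forward, y right, z down);
    # the conventional pinhole frame is (x right, y down, z forward),
    # so we permute axes: x_opt = y, y_opt = z, z_opt = x.
    x, y, z = pts_frd[:, 0], pts_frd[:, 1], pts_frd[:, 2]
    return np.stack([y, z, x], axis=1)

def project_box(corners_frd, K):
    """Project 3D box corners (camera FRD frame) to an axis-aligned 2D box.

    corners_frd: (N, 3) array of corner points in front of the camera.
    K: (3, 3) pinhole intrinsic matrix.
    Returns (u_min, v_min, u_max, v_max) in pixels.
    """
    pts = frd_to_optical(np.asarray(corners_frd, dtype=float))
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective divide by depth
    u_min, v_min = uv.min(axis=0)
    u_max, v_max = uv.max(axis=0)
    return u_min, v_min, u_max, v_max
```

In practice the corners would also be clipped to the image bounds and boxes behind the camera discarded; this sketch omits both for brevity.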

Beyond geometric information, VVSim records detailed object state attributes, including precise position (x, y, z), rotation angles (roll, pitch, yaw), and velocity vectors. All annotated objects are categorized into one of 7 classes: car, pedestrian, van, trailer, truck, traffic cone, and speed limit sign; detailed size information for each class is given in the table below. These rich annotations enable comprehensive evaluation of both spatial localization and motion prediction, supporting a wide range of cooperative perception tasks across aerial and ground agents. The class distribution is illustrated in the figure below.
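The per-object state described above could be held in a record like the following. The field names and layout are a hypothetical illustration of the listed attributes, not the dataset's actual annotation schema.

```python
from dataclasses import dataclass
from typing import Tuple

# The 7 object classes listed in the card.
CLASSES = ("car", "pedestrian", "van", "trailer", "truck",
           "traffic cone", "speed limit sign")

@dataclass
class ObjectState:
    # Hypothetical record mirroring the attributes described above.
    cls: str                         # one of CLASSES
    position: Tuple[float, float, float]  # (x, y, z), world frame
    rotation: Tuple[float, float, float]  # (roll, pitch, yaw)
    velocity: Tuple[float, float, float]  # (vx, vy, vz)
```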

VVSim further provides 30 semantic segmentation categories that span road structures, dynamic agents, traffic infrastructure, vegetation, buildings, and diverse environmental objects.

Coordinate Systems

VVSim employs several coordinate systems to represent sensor data:

  • LiDAR coordinate system (FLU): x points forward, y points left, z points upward.
  • Ego-vehicle coordinate system (FRD): x points forward, y points right, z points downward. All vehicle-mounted sensors are defined relative to this frame.
  • Global/world coordinate system (RFU): x points right, y points forward, z points upward.
  • Camera coordinate system (FRD): x points forward, y points right, z points downward.
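Points can be re-expressed between these axis conventions with simple permutation matrices. The sketch below is rotation-only and illustrative; real sensor extrinsics would compose these with the mounting translations and rotations listed under Sensor Parameters.

```python
import numpy as np

# FLU (x fwd, y left, z up) -> FRD (x fwd, y right, z down): flip y and z.
FLU_TO_FRD = np.diag([1.0, -1.0, -1.0])

# FRD (x fwd, y right, z down) -> RFU (x right, y fwd, z up).
FRD_TO_RFU = np.array([[0.0, 1.0,  0.0],   # x_rfu = y_frd  (right)
                       [1.0, 0.0,  0.0],   # y_rfu = x_frd  (forward)
                       [0.0, 0.0, -1.0]])  # z_rfu = -z_frd (up)

def lidar_to_world_axes(p_flu):
    """Re-express a LiDAR-frame (FLU) vector in world (RFU) axis conventions."""
    return FRD_TO_RFU @ (FLU_TO_FRD @ np.asarray(p_flu, dtype=float))
```

Both matrices have determinant +1, so they are proper rotations; for example, a unit vector pointing forward in the LiDAR frame maps to +y (forward) in the world convention.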

Sensor Parameters

In VVSim, both vehicle and aerial platforms are equipped with sensors to support diverse perception tasks. All sensors operate at a frame rate of 25 Hz.

Vehicle sensors:

  • Camera: Four sets (front, rear, left, right), each consisting of RGB, depth, and semantic segmentation cameras. Resolution: 1600 × 900 px, FOV 90°. Approximate translations (m) relative to the vehicle origin: front (0.5, 0.0, -2.0), rear (-0.65, 0.0, -2.0), left (0.0, -0.5, -2.0), right (0.0, 0.5, -2.0). Camera orientations (yaw, pitch, roll, deg): front (0,0,0), rear (180,0,0), left (-90,0,0), right (90,0,0).
  • LiDAR: One 64-beam LiDAR on top at (0.0, 0.0, -1.0) m, orientation (0,0,0), providing high-density 3D point clouds.

UAV sensors:

  • Camera: Two downward-facing cameras providing RGB and semantic segmentation images. Resolution: 3840 × 2160 px, FOV 90°. Translation relative to drone origin: (0.0, 0.0, 0.2) m, orientation (0, -90, 0) deg (yaw, pitch, roll).
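From the resolution and FOV listed above, a pinhole intrinsic matrix can be derived via fx = W / (2·tan(hfov/2)). The square-pixel and centered-principal-point assumptions below are mine, not stated by the dataset.

```python
import math

def intrinsics_from_fov(width, height, hfov_deg):
    """Derive pinhole intrinsics from image size and horizontal FOV.

    Assumes square pixels (fy = fx) and a principal point at the
    image center; returns (fx, fy, cx, cy) in pixels.
    """
    fx = width / (2.0 * math.tan(math.radians(hfov_deg) / 2.0))
    return fx, fx, width / 2.0, height / 2.0
```

With a 90° FOV, tan(45°) = 1, so the focal length is simply half the image width: 800 px for the 1600 × 900 vehicle cameras and 1920 px for the 3840 × 2160 UAV cameras.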