Teaser image

Unlike humans, who can effortlessly estimate the entirety of objects even when they are partially occluded, modern computer vision algorithms still find this task extremely challenging. Leveraging amodal perception for autonomous driving remains largely untapped due to the lack of suitable datasets. The curation of such datasets is primarily hindered by significant annotation costs and by the difficulty of mitigating annotator subjectivity when labeling occluded regions. To address these limitations, we introduce AmodalSynthDrive, a synthetic multi-task, multi-modal amodal perception dataset. The dataset provides multi-view camera images, 3D bounding boxes, LiDAR data, and odometry for 150 driving sequences with over 1M object annotations in diverse traffic, weather, and lighting conditions. AmodalSynthDrive supports multiple amodal scene understanding tasks, including the newly introduced amodal depth estimation task for enhanced spatial understanding. We evaluate several baselines for each of these tasks to illustrate the challenges and set up public benchmarking servers. The dataset is available here.

AmodalSynthDrive Benchmarking

Explore each benchmarking task page to learn more about its specific challenges and to gain insight into our evaluation methodologies and criteria.

Dataset Download

The dataset is organized into three splits: training, validation, and testing. For each split, every data type can be downloaded individually.
  1. Click the button corresponding to the desired data type of a particular split.

  2. This action will initiate the download of a bash script.

  3. Once the script is downloaded, provide it with execution permissions by running the following command in your terminal:
    chmod +x x_bash_script.sh

  4. Now, you can execute the bash script using the command:
    ./x_bash_script.sh path_to_folder_where_amodalSynthDrive_root_folder_will_be_created
    This will create the AmodalSynthDrive root folder in the specified path.
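    The steps above can be sketched end to end as follows. The script name `train_images.sh` and the destination path are placeholders, not the actual file names served by the download buttons; to keep the snippet runnable as-is, a stand-in script is fabricated here in place of the one you would download from this page.

    ```shell
    # Stand-in for the downloaded script (placeholder); it only creates the
    # AmodalSynthDrive root folder, so this snippet runs end to end as-is.
    printf '#!/usr/bin/env bash\nmkdir -p "$1/AmodalSynthDrive"\n' > train_images.sh

    chmod +x train_images.sh     # step 3: grant execute permission
    ./train_images.sh ./data     # step 4: pass the destination path
    ls ./data                    # the AmodalSynthDrive root folder now exists
    ```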

Optional: Scripts for Visualization
The provided scripts facilitate the conversion of raw ground truths into colored representations for visualization purposes.
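A minimal sketch of such a conversion (not the dataset's official script): each pixel of a raw semantic ground-truth map stores an integer class ID, which is mapped through a color palette to produce an RGB image. The class IDs and palette below are illustrative placeholders, not AmodalSynthDrive's actual label mapping.

```python
import numpy as np

# Placeholder palette: row i is the RGB color for class ID i.
PALETTE = np.array([
    [  0,   0,   0],   # 0: void       (placeholder)
    [128,  64, 128],   # 1: road       (placeholder)
    [  0,   0, 142],   # 2: vehicle    (placeholder)
    [220,  20,  60],   # 3: pedestrian (placeholder)
], dtype=np.uint8)

def colorize(label_map: np.ndarray) -> np.ndarray:
    """Map an (H, W) array of class IDs to an (H, W, 3) RGB image."""
    return PALETTE[label_map]

# Example: a tiny 2x2 ground-truth map.
labels = np.array([[1, 2], [3, 0]], dtype=np.uint8)
rgb = colorize(labels)
print(rgb.shape)  # (2, 2, 3)
```

The same indexing trick works for any dense per-pixel ground truth (e.g. amodal instance masks), as long as the palette has one row per class ID.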

License Agreement

The data is provided for non-commercial use only. By downloading the data, you accept the license agreement which can be downloaded here.

Training Set

Validation Set

Testing Set


Coming soon...


If you find our work useful, please consider citing our paper:

Ahmed Rida Sekkat, Rohit Mohan, Oliver Sawade, Elmar Matthes, and Abhinav Valada
AmodalSynthDrive: A Synthetic Amodal Perception Dataset for Autonomous Driving

(PDF) (BibTeX)



This work was funded by the German Research Foundation Emmy Noether Program grant number 468878300 and an academic grant from NVIDIA.