The challenge comprises five benchmarks, detailed below:
Important note: We have made the code for the Domain Adaptation baselines public here; anyone interested is free to use it. For the Domain Adaptation challenges, participants may be asked to submit their code along with a requirements.txt listing all required packages.
This challenge involves domain adaptation from around 20k samples of Mapillary, Cityscapes (fine annotations only), Berkeley Deep Drive, and GTA as the source dataset (S) to IDD as the target dataset (T). For the IDD dataset, participants must submit results for the Level-3 hierarchy (26 classes).
python preperation/createLabels.py --datadir $ANUE --id-type level3Id --num-workers $C
./domain_adaptation/source/prep_all.sh
This will create the folder public-code/domain_adaptation/source/source_datasets_dir/, where you will find the images and annotations of the source dataset to be used for this challenge.
python evaluate/evaluate_mIoU.py --gts $GT --preds $PRED --num-workers $C
The output format is a PNG image with the same resolution as the input image, in which every pixel value is an integer in {0, 1, ..., 26}: values 0-25 correspond to the level 3 ids (see Overview for details of the level 3 ids) and value 26 is a miscellaneous class.
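As a sketch of this output format, the snippet below writes a dummy level-3 prediction map as a single-channel PNG. The array contents and the filename are illustrative assumptions, not part of the official submission tooling:

```python
import numpy as np
from PIL import Image

# Hypothetical prediction: one level-3 id (0-26) per pixel, at the
# same height/width as the input image (here assumed 1080x1920).
pred = np.full((1080, 1920), 26, dtype=np.uint8)  # everything "miscellaneous"
pred[:540, :] = 0  # pretend the top half was predicted as class 0

# Values must stay in {0, ..., 26}; 26 marks unclassifiable pixels.
assert pred.min() >= 0 and pred.max() <= 26

# Save as a single-channel (mode "L") PNG at the input resolution.
Image.fromarray(pred, mode="L").save("frame_0001_level3Id.png")
```

Any 8-bit single-channel PNG writer works equally well; the essential constraints are the input resolution and the value range.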
We will be using the mean Intersection over Union (mIoU) metric. All ground-truth and prediction maps will be resized to 720p (nearest-neighbor interpolation), and true positives (TP), false negatives (FN), and false positives (FP) will be computed for each class (except 26) over the entire test split of the dataset. The IoU for each class is computed as TP/(TP+FN+FP), and the mean over classes (mIoU) is taken as the metric for the segmentation challenge.
Additionally, we will report the mIoU for level 2 and level 1 ids at 720p resolution in the leaderboard. Evaluation scripts are available here: https://github.com/AutoNUE/public-code
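For intuition, here is a minimal numpy sketch of the metric described above. It assumes the ground-truth and prediction maps are already at a common resolution (the official script in the repository above handles the nearest-neighbor resize to 720p):

```python
import numpy as np

def mean_iou(gts, preds, num_classes=26):
    """mIoU over classes 0..num_classes-1 (class 26, miscellaneous, ignored).
    TP/FP/FN are accumulated over the whole split before the per-class
    IoU = TP / (TP + FN + FP) is taken, as described above."""
    tp = np.zeros(num_classes)
    fp = np.zeros(num_classes)
    fn = np.zeros(num_classes)
    for gt, pred in zip(gts, preds):
        for c in range(num_classes):
            gt_c, pred_c = (gt == c), (pred == c)
            tp[c] += np.sum(gt_c & pred_c)
            fp[c] += np.sum(~gt_c & pred_c)
            fn[c] += np.sum(gt_c & ~pred_c)
    denom = tp + fn + fp
    present = denom > 0  # average only over classes that occur at all
    return float(np.mean(tp[present] / denom[present]))
```

Whether classes absent from both ground truth and predictions are skipped or counted as zero is a detail of the official script; this sketch skips them.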
Team/Uploader Name | Method Name | mIoU for L3 IDs at 720p | mIoU for L2 IDs at 720p | mIoU for L1 IDs at 720p |
---|---|---|---|---|
Anonymous | Anonymous | 0.7538 | 0.7816 | 0.8953 |
BASELINE | DRND 38 | 0.5615 | 0.6489 | 0.8026 |
This challenge involves domain adaptation from around 20k samples of Mapillary, Cityscapes (fine annotations only), Berkeley Deep Drive, and GTA as the source dataset (S) to IDD as the target dataset (T). For the IDD dataset, participants must submit results for the Level-3 hierarchy (26 classes).
python preperation/createLabels.py --datadir $ANUE --id-type level3Id --num-workers $C --semisup_da True
Note that for the training stage only the selected train masks permitted for this challenge will be generated; all validation masks will be generated for the evaluation stage (refer to step 10 below).
./domain_adaptation/source/prep_all.sh
This will create the folder public-code/domain_adaptation/source/source_datasets_dir/, where you will find the images and annotations of the source dataset to be used for this challenge.
python evaluate/evaluate_mIoU.py --gts $GT --preds $PRED --num-workers $C
The output format is a PNG image with the same resolution as the input image, in which every pixel value is an integer in {0, 1, ..., 26}: values 0-25 correspond to the level 3 ids (see Overview for details of the level 3 ids) and value 26 is a miscellaneous class.
We will be using the mean Intersection over Union (mIoU) metric. All ground-truth and prediction maps will be resized to 720p (nearest-neighbor interpolation), and true positives (TP), false negatives (FN), and false positives (FP) will be computed for each class (except 26) over the entire test split of the dataset. The IoU for each class is computed as TP/(TP+FN+FP), and the mean over classes (mIoU) is taken as the metric for the segmentation challenge.
Additionally, we will report the mIoU for level 2 and level 1 ids at 720p resolution in the leaderboard. Evaluation scripts are available here: https://github.com/AutoNUE/public-code
Team/Uploader Name | Method Name | mIoU for L3 IDs at 720p | mIoU for L2 IDs at 720p | mIoU for L1 IDs at 720p |
---|---|---|---|---|
Anonymous | kl loss | 0.6974 | 0.752 | 0.877 |
Anonymous | Anonymous | 0.6954 | 0.7349 | 0.8623 |
BASELINE | USSS | 0.3044 | 0.4031 | 0.5173 |
This challenge involves domain adaptation from around 20k samples of Mapillary, Cityscapes (fine annotations only), Berkeley Deep Drive, and GTA as the source dataset (S) to IDD as the target dataset (T). For the IDD dataset, participants must submit results for the Level-3 hierarchy (26 classes).
python preperation/createLabels.py --datadir $ANUE --id-type level3Id --num-workers $C --weaksup_da True
Note that only validation masks will be generated for this challenge, for the evaluation stage (refer to step 11 below). The bounding box annotations to be used for training in this challenge are available here: https://github.com/AutoNUE/public-code/tree/master/domain_adaptation/target/weakly-supervised
./domain_adaptation/source/prep_all.sh
This will create the folder public-code/domain_adaptation/source/source_datasets_dir/, where you will find the images and annotations of the source dataset to be used for this challenge.
python evaluate/evaluate_mIoU.py --gts $GT --preds $PRED --num-workers $C
The output format is a PNG image with the same resolution as the input image, in which every pixel value is an integer in {0, 1, ..., 26}: values 0-25 correspond to the level 3 ids (see Overview for details of the level 3 ids) and value 26 is a miscellaneous class.
We will be using the mean Intersection over Union (mIoU) metric. All ground-truth and prediction maps will be resized to 720p (nearest-neighbor interpolation), and true positives (TP), false negatives (FN), and false positives (FP) will be computed for each class (except 26) over the entire test split of the dataset. The IoU for each class is computed as TP/(TP+FN+FP), and the mean over classes (mIoU) is taken as the metric for the segmentation challenge.
Additionally, we will report the mIoU for level 2 and level 1 ids at 720p resolution in the leaderboard. Evaluation scripts are available here: https://github.com/AutoNUE/public-code
Team/Uploader Name | Method Name | mIoU for L3 IDs at 720p | mIoU for L2 IDs at 720p | mIoU for L1 IDs at 720p |
---|---|---|---|---|
Anonymous | Anonymous | 0.5973 | 0.6332 | 0.8224 |
BASELINE | DRND 22 | 0.2551 | 0.3522 | 0.492 |
This challenge involves domain adaptation from around 20k samples of Mapillary, Cityscapes (fine annotations only), Berkeley Deep Drive, and GTA as the source dataset (S) to IDD as the target dataset (T). For the IDD dataset, participants must submit results for the Level-3 hierarchy (26 classes).
python preperation/createLabels.py --datadir $ANUE --id-type level3Id --num-workers $C --unsup_da True
Note that only validation masks will be generated for this challenge, for the evaluation stage (refer to step 11 below). IDD training labels cannot be used for this challenge; the training images themselves can.
./domain_adaptation/source/prep_all.sh
This will create the folder public-code/domain_adaptation/source/source_datasets_dir/, where you will find the images and annotations of the source dataset to be used for this challenge.
python evaluate/evaluate_mIoU.py --gts $GT --preds $PRED --num-workers $C
The output format is a PNG image with the same resolution as the input image, in which every pixel value is an integer in {0, 1, ..., 26}: values 0-25 correspond to the level 3 ids (see Overview for details of the level 3 ids) and value 26 is a miscellaneous class.
We will be using the mean Intersection over Union (mIoU) metric. All ground-truth and prediction maps will be resized to 720p (nearest-neighbor interpolation), and true positives (TP), false negatives (FN), and false positives (FP) will be computed for each class (except 26) over the entire test split of the dataset. The IoU for each class is computed as TP/(TP+FN+FP), and the mean over classes (mIoU) is taken as the metric for the segmentation challenge.
Additionally, we will report the mIoU for level 2 and level 1 ids at 720p resolution in the leaderboard. Evaluation scripts are available here: https://github.com/AutoNUE/public-code
Team/Uploader Name | Method Name | mIoU for L3 IDs at 720p | mIoU for L2 IDs at 720p | mIoU for L1 IDs at 720p |
---|---|---|---|---|
Tencent YouTu Lab | Fix | 0.3627 | 0.4948 | 0.611 |
BASELINE | DRND 22 | 0.2551 | 0.3522 | 0.492 |
The segmentation challenge involves pixel-level predictions for all 26 classes at level 3 of the label hierarchy (see Overview for details of the level 3 ids).
python preperation/createLabels.py --datadir $ANUE --id-type level3Id --num-workers $C
python evaluate/evaluate_mIoU.py --gts $GT --preds $PRED --num-workers $C
The output format is a PNG image with the same resolution as the input image, in which every pixel value is an integer in {0, 1, ..., 26}: values 0-25 correspond to the level 3 ids (see Overview for details of the level 3 ids) and value 26 is a miscellaneous class.
We will be using the mean Intersection over Union (mIoU) metric. All ground-truth and prediction maps will be resized to 720p (nearest-neighbor interpolation), and true positives (TP), false negatives (FN), and false positives (FP) will be computed for each class (except 26) over the entire test split of the dataset. The IoU for each class is computed as TP/(TP+FN+FP), and the mean over classes (mIoU) is taken as the metric for the segmentation challenge.
Additionally, we will report the mIoU for level 2 and level 1 ids at 720p resolution in the leaderboard. Evaluation scripts are available here: https://github.com/AutoNUE/public-code
Team/Uploader Name | Method Name | mIoU for L3 IDs at 720p | mIoU for L2 IDs at 720p | mIoU for L1 IDs at 720p |
---|---|---|---|---|
PaddleSeg | PaddleSeg | 0.7862 | 0.8046 | 0.9099 |
YTSeg | YTSeg | 0.7845 | 0.8028 | 0.9081 |
Prabahkar | HR | 0.769 | 0.7929 | 0.9044 |
Александр | SENet | 0.767 | 0.7914 | 0.9035 |
Tsubasa | infomer_40 | 0.7655 | 0.7904 | 0.9022 |
OCRNet | final ocr | 0.7649 | 0.7887 | 0.9006 |
lovasz loss | Lovasz | 0.7637 | 0.7857 | 0.899 |
SKK.AL | HRNet | 0.7621 | 0.786 | 0.898 |
zzzzz | ocr | 0.7617 | 0.7889 | 0.902 |
CitySpace | Hierarchical | 0.7602 | 0.789 | 0.8978 |
CityCase | HRNET | 0.7596 | 0.7877 | 0.8973 |
Anonymous | OCRNet | 0.7596 | 0.7839 | 0.8967 |
swin | swin | 0.7592 | 0.7836 | 0.8952 |
ttttt | hrnet | 0.7581 | 0.7857 | 0.8983 |
Julius Zhang | hr | 0.756 | 0.7805 | 0.8944 |
first commit | try | 0.7504 | 0.7767 | 0.8925 |
ocr | ocr | 0.7494 | 0.7812 | 0.8968 |
Transformer | ATrans++ | 0.7451 | 0.7729 | 0.8901 |
Anonymous | RMI + abn | 0.7337 | 0.767 | 0.8868 |
Anonymous | HRNet | 0.6928 | 0.7213 | 0.8909 |
bupt noob boy | swin | 0.6705 | 0.748 | 0.8676 |