STS 2024 : 2nd Semi-supervised Teeth Segmentation MICCAI Challenge

MICCAI 2024, MARRAKESH, MOROCCO

October 6-10, 2024

You should ⬇️download the form, sign it, and then send it to our contact email. Once submitted, you will receive the download link for our challenge dataset. For more details, please check below.

🔢 Evaluation Metrics

Based on the results of the 🤓STS 2023 Challenge, we found that segmentation models could not achieve a good trade-off between segmentation accuracy and efficiency. Thus, in STS 2024, the evaluation criteria are not limited to segmentation accuracy but also include runtime and GPU memory consumption, providing a comprehensive assessment of segmentation efficiency.

Specifically, the segmentation inference results are evaluated using the Dice Similarity Coefficient (DSC), Normalized Surface Dice (NSD), Intersection over Union (IoU), Identification Accuracy (IA), running time (RT), and the area under the GPU memory-time curve (GPU-area). DSC and IoU measure the region error, while NSD assesses the boundary error. The IA metric evaluates the object-level localization (detection) performance for teeth. RT measures the inference speed. Moreover, we include GPU-area to measure GPU memory consumption over the course of inference. You can find the details in the Python 3 code for calculating the performance metrics that we have released.
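As an informal illustration only (the released Python 3 evaluation code is authoritative), the sketch below shows how region-overlap metrics such as DSC and IoU, and an area-under-curve term like GPU-area, could be computed from binary masks and sampled memory readings. The function names, argument layout, and units here are our own assumptions.

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice Similarity Coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou_score(pred, gt, eps=1e-8):
    """Intersection over Union (IoU) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

def gpu_area(timestamps_s, memory_mb):
    """Area under a sampled GPU memory-time curve (trapezoidal rule)."""
    t = np.asarray(timestamps_s, dtype=float)
    m = np.asarray(memory_mb, dtype=float)
    return float(np.sum(0.5 * (m[1:] + m[:-1]) * np.diff(t)))
```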

Special Explanation: IA is defined as #{D ∩ G} / #{D ∪ G}, where G is the set of all teeth in the ground truth and D is the set of predicted teeth. #{D ∩ G} is the size of the intersection between D and G, i.e., the number of tooth instances correctly detected and labeled by the algorithm, and #{D ∪ G} is the size of the union between the prediction and the ground truth. The localization criterion is the Mask IoU score. In addition, we use a greedy strategy to match reference and predicted objects: only a pair of objects with a consistent predicted category and a Mask IoU greater than 0.5 increments the #{D ∩ G} count by 1.
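To make the matching rule concrete, here is a minimal sketch (our own simplification, not the released evaluation code) of how IA could be computed with greedy, category-consistent matching at a Mask IoU threshold of 0.5. The data layout (lists of label/mask pairs) and function names are assumptions.

```python
import numpy as np

def mask_iou(a, b):
    """Mask IoU between two binary instance masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def identification_accuracy(pred, gt, iou_thr=0.5):
    """
    pred / gt: lists of (tooth_label, binary_mask) tuples.
    Greedily matches same-label pairs in decreasing Mask IoU order;
    a pair counts toward #{D ∩ G} only if its Mask IoU exceeds iou_thr.
    Returns #{D ∩ G} / #{D ∪ G}.
    """
    # Candidate pairs: only objects with a consistent predicted category.
    pairs = [(mask_iou(pm, gm), i, j)
             for i, (pl, pm) in enumerate(pred)
             for j, (gl, gm) in enumerate(gt) if pl == gl]
    used_p, used_g, matched = set(), set(), 0
    for iou, i, j in sorted(pairs, reverse=True):
        if iou <= iou_thr:
            break
        if i in used_p or j in used_g:
            continue
        used_p.add(i)
        used_g.add(j)
        matched += 1
    union = len(pred) + len(gt) - matched  # instance-level #{D ∪ G}
    return matched / union if union else 0.0
```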

🔔 We have now updated and released the evaluation code and submission instructions. If you encounter a bug or have questions about the evaluation metrics, please check our GitHub: https://github.com/STS-challenge/STS.

🏆 Ranking Scheme

All metrics will be used to compute the ranking. However, considering the computing resource limitations of CodeBench, RT and GPU-area will not be calculated there; these two metrics will be included in the ranking in the final testing phase. The ranking scheme includes the following five steps:

🎁 Challenge Awards

To determine the winners, we will rank the participants based on their final algorithms on the independent test set. We have planned a range of comprehensive awards for both tracks, which we hope will attract and motivate you to participate. We look forward to your involvement and to seeing the outstanding work that will emerge from this competition.