Shape 2014 - Statistical Shape Model Challenge

Information about the evaluation is now available. Scroll down to read the instructions. For now, you can download the challenge set consisting of 59 liver segmentations.

Acknowledgments

The provided liver segmentations are part of the training data from the "VISCERAL Organ Segmentation and Landmark Detection Challenge" held at the IEEE International Symposium on Biomedical Imaging (ISBI) on May 1st, 2014 in Beijing, China. For more information on the ongoing VISCERAL Anatomy Benchmark series, please visit www.visceral.eu, which should also be acknowledged (by citation, footnote, or other suitable means) in any publication using any part of its dataset.

Please reference the Virtual Skeleton Database in your work as:

Registration

  • You must use your institutional email address for registration.
  • Select the Shape2014 Research Unit during registration.
  • If you already have an account, go to MySMIR/Settings/Group Membership and apply to the Shape2014 group.

Download

Please register; after approval, you will receive an email with a login link to activate your account.

Challenge Data

59 liver segmentations are available.

You need to log in to download the challenge data!

Challenge Rules and Information

All participants obtain a set of training data as binary images, which they use to build a model of the liver. Participants are free to choose their registration and modeling algorithms. To make the evaluation possible, the final model must be a parametric, Gaussian model (which includes many linear models, such as the standard PCA models). The quality of the model is evaluated once it is uploaded to the Virtual Skeleton Database.
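
As an illustration, a minimal Python sketch of such a parametric, Gaussian (PCA) shape model is given below. It assumes the training segmentations have already been brought into correspondence, so that each shape is a flattened vector of 3*N vertex coordinates; the function names and the 95% variance cut-off are illustrative choices, not part of the challenge specification.

    import numpy as np

    def build_pca_model(shapes, variance_kept=0.95):
        """shapes: (n_shapes, 3*n_points) array of corresponding shape vectors."""
        mean = shapes.mean(axis=0)
        centered = shapes - mean
        # SVD of the centered data matrix yields the PCA basis and variances.
        U, s, Vt = np.linalg.svd(centered, full_matrices=False)
        variances = s ** 2 / (shapes.shape[0] - 1)
        # Keep the smallest number of modes explaining the requested variance.
        cum = np.cumsum(variances) / variances.sum()
        k = int(np.searchsorted(cum, variance_kept)) + 1
        return mean, Vt[:k], variances[:k]

    def sample_shape(mean, basis, variances, rng):
        """Draw one random shape vector from the Gaussian model."""
        coeffs = rng.standard_normal(len(variances)) * np.sqrt(variances)
        return mean + coeffs @ basis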

The model will be evaluated according to the following criteria:

Evaluation

Specificity:

Description
We will draw 1000 samples from each model and compare each sample to the nearest member of the training set. The average of these distances is the score (a sketch of this computation is given below).
Goal
The lower the specificity value, the better the ranking.
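
The following sketch illustrates the specificity computation under the same assumptions as the model sketch above (it reuses sample_shape from there). The average per-vertex Euclidean distance serves as a stand-in for the surface distance; the metric actually used in the evaluation may differ.

    import numpy as np

    def specificity(mean, basis, variances, training_shapes, n_samples=1000, seed=0):
        """Average distance of random model samples to their nearest training shape."""
        rng = np.random.default_rng(seed)
        n_points = mean.size // 3
        total = 0.0
        for _ in range(n_samples):
            sample = sample_shape(mean, basis, variances, rng).reshape(n_points, 3)
            # Distance of this sample to every training shape (mean over vertices).
            dists = [np.linalg.norm(t.reshape(n_points, 3) - sample, axis=1).mean()
                     for t in training_shapes]
            total += min(dists)
        return total / n_samples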

Compactness:

Description
The number of parameters (modes) of the model.
Goal
The lower the number of dimensions, the better the ranking.

Generalization ability:

Description
We will fit the models to a set of test images of segmented livers and compute the following measures to assess generalization ability (both are sketched in the code below):
- Symmetric average distance between the surfaces
- Hausdorff distance
Goal
The smaller the distances, the better the ranking.
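
A hedged sketch of both distance measures, computed on point sets (e.g. mesh vertices) rather than on continuous surfaces; the evaluation itself may use a different surface discretisation.

    from scipy.spatial import cKDTree
    from scipy.spatial.distance import directed_hausdorff

    def symmetric_average_distance(points_a, points_b):
        """Mean of nearest-neighbour distances, averaged over both directions."""
        d_ab = cKDTree(points_b).query(points_a)[0]
        d_ba = cKDTree(points_a).query(points_b)[0]
        return 0.5 * (d_ab.mean() + d_ba.mean())

    def hausdorff_distance(points_a, points_b):
        """Symmetric Hausdorff distance between two point sets."""
        return max(directed_hausdorff(points_a, points_b)[0],
                   directed_hausdorff(points_b, points_a)[0])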

Scoring

For each of our evaluation criteria:

  • Generalization (average distance)
  • Generalization (Hausdorff distance)
  • Specificity
  • Compactness

we compute a ranking of the participants according to their performance (e.g. if there are 10 participants, the best-performing algorithm receives 1 point and the worst-performing algorithm receives 10 points). The final ranking is the average of the individual rankings.
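
A minimal sketch of this scoring scheme, assuming a dictionary that maps each participant to their four criterion values (lower is better for every criterion); the names and data layout are illustrative.

    import numpy as np
    from scipy.stats import rankdata

    def final_ranking(scores):
        """scores: {participant: [gen_avg, gen_hausdorff, specificity, compactness]}."""
        names = list(scores)
        values = np.array([scores[n] for n in names])  # shape: (participants, criteria)
        # Rank participants per criterion (1 = best, i.e. lowest value; ties share ranks).
        ranks = np.column_stack([rankdata(values[:, j]) for j in range(values.shape[1])])
        # The final score is the average of the four per-criterion ranks.
        mean_rank = ranks.mean(axis=1)
        return sorted(zip(names, mean_rank), key=lambda pair: pair[1])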

Upload

Model format

The model has to be provided in statismo format, which is described here: statismo format document

A first version of the validation procedure can be found on github: validation code repo
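
As an illustration only, a Gaussian model could be written to an HDF5 file with h5py as sketched below. The dataset names used here (model/mean, model/pcaBasis, model/pcaVariance, model/noiseVariance) are an assumption about the statismo layout; the statismo format document and the validation code linked above are the authoritative references.

    import h5py
    import numpy as np

    def save_model_h5(path, mean, basis, variances, noise_variance=0.0):
        """Write mean, PCA basis and variances to an HDF5 file (assumed layout)."""
        with h5py.File(path, "w") as f:
            model = f.create_group("model")
            model.create_dataset("mean", data=np.asarray(mean, dtype=np.float32))
            model.create_dataset("pcaBasis", data=np.asarray(basis, dtype=np.float32))
            model.create_dataset("pcaVariance", data=np.asarray(variances, dtype=np.float32))
            model.create_dataset("noiseVariance", data=np.float32(noise_variance))

    # Example call following the naming template from the section below:
    # save_model_h5("VSD.my_pca_model.26857.h5", mean, basis, variances)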

Model naming

Name your model according to this template: VSD.your_description.26857.h5

This allows the system to identify your model as part of the challenge.

The description must not include any spaces or dots.

Evaluation Results: Challenge