A complete and successful submission to this challenge requires the following within the stated deadlines:
- A successful run of your algorithm container in the final testing phase
- A detailed methodology report covering all the requirements (e.g., a GitHub repo link with documentation on how to run and reproduce the algorithm container, links to any private datasets used, etc.), sent to isi.challenges@intusurg.com
- A recorded talk of at most 3 minutes detailing your methodology (sent with the report)
Please see additional details below.
Please note that these algorithm submission instructions are the same as those for category 2 of the 2022 SurgToolLoc challenge.
The challenge submission process will be based on the grand-challenge automated Docker submission and evaluation protocol. Each team will need to produce an “algorithm” container (different for each category) and submit it on the challenge website. Please check out the official grand-challenge documentation on building your own algorithm at https://grand-challenge.org/documentation/create-your-own-algorithm/.
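For a rough sense of what such a container does at runtime, the sketch below (in Python) shows an entrypoint that loops over the mounted test videos, runs a placeholder model, and writes one prediction file per video. The `/input` and `/output` paths, the `*.mp4` pattern, the output filenames, and the `run_model` helper are assumptions made for illustration only; the sample repository linked below defines the actual interface expected by grand-challenge.

```python
# Illustrative container entrypoint sketch. Paths, filename patterns and the
# run_model() helper are assumptions -- consult the sample repository for the
# authoritative interface expected by the grand-challenge evaluation system.
import json
from pathlib import Path

INPUT_DIR = Path("/input")    # assumed mount point for the test videos
OUTPUT_DIR = Path("/output")  # assumed mount point for the prediction files


def run_model(video_path: Path) -> dict:
    """Placeholder: run inference on one video and return the prediction
    dictionary in the format described under 'Prediction format' below."""
    raise NotImplementedError


if __name__ == "__main__":
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    for video in sorted(INPUT_DIR.glob("*.mp4")):   # assumed file pattern
        predictions = run_model(video)
        with (OUTPUT_DIR / f"{video.stem}.json").open("w") as f:
            json.dump(predictions, f)
```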
GitHub repo with sample submission algorithm containers:
The following link points to a GitHub repo that contains an example submission container along with detailed instructions for the algorithm submission:
https://github.com/aneeqzia-isi/surgtoolloc2022-category-2
Challenge Phases:
Submission to the challenge is divided into two phases, which are detailed below.
Preliminary testing phase:
This phase allows teams to test their algorithm containers on a small dataset (~10 videos from the actual test set) and debug any issues with their submissions. Each team will be allowed a maximum of 10 tries to get their algorithm container working within the grand-challenge environment. Since teams may not be able to see the logs from their algorithm submissions, teams are requested to post a question about a failed submission on the forum, and the organizing team will post the logs back as a reply to that thread. The aim of this phase is simply for teams to get used to the grand-challenge submission process and to end up with a working algorithm container.
Final testing phase:
In this phase, all teams that successfully generated a working algorithm container in the preliminary phase will submit their final algorithm container, which will be run on the complete testing dataset. Only 3 submissions will be allowed per team in this phase, so teams should make sure they have working code and a working container by the end of the preliminary phase.
Prediction format:
For bounding box detection, the model (packaged into the algorithm container) will need to generate a dictionary (as a JSON file) containing predictions for each frame in the videos. The specific format of the JSON file is given below:
**Surgical tool classification and localization:**
The output JSON file needs to be a dictionary containing the set of tools detected in each frame together with the corresponding bounding box corners (x, y). A single JSON file should be generated for each video, as shown below:
{
    "type": "Multiple 2D bounding boxes",
    "boxes": [
        {
            "corners": [
                [54.7, 95.5, 0.5],
                [92.6, 95.5, 0.5],
                [92.6, 136.1, 0.5],
                [54.7, 136.1, 0.5]
            ],
            "name": "slice_nr_1_needle_driver",
            "probability": 0.452
        },
        {
            "corners": [
                [54.7, 95.5, 0.5],
                [92.6, 95.5, 0.5],
                [92.6, 136.1, 0.5],
                [54.7, 136.1, 0.5]
            ],
            "name": "slice_nr_2_monopolar_curved_scissor",
            "probability": 0.783
        }
    ],
    "version": {
        "major": 1,
        "minor": 0
    }
}
Please note that the third value of each corner coordinate is not used for the predictions, but it must always be kept at 0.5 to comply with the Grand Challenge automated evaluation system (which was built to also handle 3D image datasets). To standardize the submissions, the first corner should be the top-left corner of the bounding box, with the subsequent corners following in clockwise order. The “type” and “version” entries are likewise required by the grand-challenge automated evaluation system. Please use the "probability" entry to report the confidence score for each detected bounding box.
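To make the corner convention concrete, the following Python sketch converts an axis-aligned box given as (x_min, y_min, x_max, y_max) into the required four clockwise corners starting at the top left, with the third value fixed at 0.5, and assembles the per-video dictionary. The `box_entry` helper, its argument convention, and the output filename are our own illustrative choices, not part of the challenge interface.

```python
import json


def box_entry(slice_nr, tool_name, x_min, y_min, x_max, y_max, score):
    """Format one detection: corners start at the top-left corner and proceed
    clockwise, and the third coordinate is always kept at 0.5."""
    return {
        "corners": [
            [x_min, y_min, 0.5],  # top left
            [x_max, y_min, 0.5],  # top right
            [x_max, y_max, 0.5],  # bottom right
            [x_min, y_max, 0.5],  # bottom left
        ],
        "name": f"slice_nr_{slice_nr}_{tool_name}",
        "probability": score,
    }


# Two example detections, followed by the full per-video dictionary.
boxes = [
    box_entry(1, "needle_driver", 54.7, 95.5, 92.6, 136.1, 0.452),
    box_entry(2, "monopolar_curved_scissor", 54.7, 95.5, 92.6, 136.1, 0.783),
]
prediction = {
    "type": "Multiple 2D bounding boxes",
    "boxes": boxes,
    "version": {"major": 1, "minor": 0},
}

with open("video_1_predictions.json", "w") as f:  # hypothetical output name
    json.dump(prediction, f, indent=2)
```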
Final Report:
Along with your algorithm submission, all teams are asked to submit a final report explaining their chosen methodology and the results they obtained. In the interest of transparency, this report must also contain a link to your code and to any data beyond what was made available through this challenge (e.g., public datasets, or additional labels you created for the training data) that was used to train your model. This information may be used to verify model results. Your report will be especially important if your team is one of the finalists, in which case your model will be presented at the MICCAI read-out and in our subsequent publication on the results of the challenge. Team submissions will not be eligible for cash prizes without a suitable final report.
To help you with the report, we've created a rough guide for you to follow, available here.