PANORAMA: AI Study - Phases & Rules


The PANORAMA: AI Study (grand challenge) takes place in two phases, plus a parallel debugging phase:

  • Open Development Phase (Duration: 4-6 months): Anyone can participate in this phase of the challenge. Interested teams must join the PANORAMA24 challenge at https://panorama.grand-challenge.org/. Afterwards, they will be granted access to download the Public Training and Development Dataset, and in turn, they can start developing and training AI models using their private or public compute resources (e.g., Google Colaboratory, Kaggle). Each team can submit a single trained algorithm (in a Docker container) for evaluation every week (similar to the AIROGS, MIDOG2021, and CoNIC2022 challenges). During evaluation, algorithms are executed on the grand-challenge.org platform, their performance is estimated on the Hidden Tuning Cohort, and team rankings are updated accordingly on a live, public leaderboard. Facilitating validation in this manner ensures that every image used for evaluation remains truly unseen and that AI predictions cannot be tampered with.
  • Open Development Phase Dummy (Duration: 4-6 months): This phase will remain open for the same duration as the Open Development Phase and will have an identical structure regarding scoring metrics, but it consists of only 6 sample cases (drawn from the training dataset). Its goal is to facilitate algorithm debugging, as participants will have access to their submission logs. This phase will not be used to rank teams based on performance or to select challenge winners in any way.
  • Closed Testing Phase (Duration: 1 month): During this phase, each team can choose to submit a single AI algorithm (presumably their top-performing model) for evaluation on the Hidden Testing Cohort. Based on their performance on this cohort, new rankings will be drawn and the top 5 AI algorithms of the PANORAMA24 challenge will be determined. To qualify as one of these top teams, participants must also submit a short paper on their methodology (2-3 pages) and a public/private URL of their source code on GitHub, to ensure fairness, traceability, and reproducibility of all proposed solutions.

Rules:

  • All participants must form teams (even if the team is composed of a single participant), and each participant can only be a member of a single team.

  • Any individual participating with multiple or duplicate Grand Challenge profiles will be disqualified.

  • Anonymous participation is not allowed. To qualify for ranking on the validation/testing leaderboards, true names and affiliations [university, institute, or company (if any), country] must be displayed accurately on verified Grand Challenge profiles for all participants.

  • Members of sponsoring or organizing centers (Radboud University Medical Center, Ziekenhuis Groep Twente, University Medical Center Groningen, Karolinska Institute, Haukeland University Hospital Bergen) may participate in the challenge but are not eligible for prizes or the final ranking in the testing phase.

  • This challenge only supports the submission of fully automated methods in Docker containers. It is not possible to submit semi-automated or interactive methods.

  • All Docker containers submitted to the challenge will be run offline (i.e., they will not have access to the internet and cannot download/upload any resources). All necessary resources (e.g., pre-trained weights) must be encapsulated in the submitted containers a priori.
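    As an illustration, the offline requirement means that model weights and dependencies must be baked into the image at build time rather than fetched at run time. The sketch below is hypothetical: the base image, file paths, and entry point (`weights/model.pth`, `inference.py`) are placeholders, and the actual structure should follow the algorithm template provided on grand-challenge.org.

    ```dockerfile
    # Hypothetical sketch: everything the algorithm needs is copied into the image,
    # because the container will have no internet access during evaluation.
    FROM python:3.10-slim

    # Install dependencies at build time (no downloads possible at run time)
    COPY requirements.txt /opt/algorithm/requirements.txt
    RUN pip install --no-cache-dir -r /opt/algorithm/requirements.txt

    # Encapsulate pre-trained weights and inference code in the image itself
    COPY weights/model.pth /opt/algorithm/weights/model.pth
    COPY inference.py /opt/algorithm/inference.py

    # Fully automated entry point: reads inputs, writes predictions, no interaction
    ENTRYPOINT ["python", "/opt/algorithm/inference.py"]
    ```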

  • Participants competing for prizes can use pre-trained AI models based on computer vision and/or medical imaging datasets (e.g., ImageNet, Medical Segmentation Decathlon). They can also use external datasets to train their AI algorithms. However, such data and/or models must be published under a permissive license (within 3 months of the Open Development Phase deadline) to give all other participants a fair chance at competing on equal footing. They must also clearly state the use of external data in their submission, using the algorithm name [e.g., "PDAC Detection Model (trained w/ external data)"], algorithm page, and/or a supporting publication/URL.

  • Researchers and companies interested in benchmarking their institutional AI models or products but not competing for prizes can freely use private or unpublished external datasets to train their AI algorithms. They must clearly state the use of external data in their submission, using the algorithm name [e.g., "PDAC Detection AI Model (trained w/ private data)"], algorithm page, and/or a supporting publication/URL. They are not obligated to publish their AI models and/or datasets before or anytime after the submission.

  • To participate in the Testing Phase, participants must submit a short arXiv paper on their methodology (2–3 pages) and a public/private URL to their source code on GitHub (hosted with a permissive license). We take these measures to ensure the credibility and reproducibility of all proposed solutions and to promote open-source AI development.

  • The top 5 winning algorithms of the PANORAMA challenge, as trained on the Public Training and Development Dataset and evaluated on the Hidden Testing Cohort in the Testing Phase, will be made publicly available as Grand Challenge Algorithms once the challenge has officially concluded.

  • Participants of the PANORAMA challenge and non-participating researchers using the PANORAMA public training dataset can publish their own results at any time, separately. They do not have to adhere to any embargo period. While doing so, they are requested to cite this document (BIAS preregistration form for the PANORAMA challenge). Once a challenge paper has been published, they are requested to refer to that publication instead.

  • Organizers of the PANORAMA challenge reserve the right to disqualify any participant or participating team at any time on grounds of unfair or dishonest practices.

  • All participants reserve the right to withdraw from the PANORAMA challenge and forego any further participation. However, they will not be able to retract their prior submissions or any results published up to that point.