
Rules

Registration

Teams who want to join the challenge can register by sending an email to salvatore.calcagno@phd.unict.it or simone.carnemolla@phd.unict.it, indicating the team name and, for each participant:

Evaluation Criteria and Methodology

The performance of the submitted models will be evaluated using the following criteria:

  1. For task 1 (subject identification), we will compute the balanced accuracy score on the held-out-trials test set.
  2. For task 2 (emotion recognition), we will compute the balanced accuracy score separately on the held-out-trials and held-out-subjects test sets; these two scores will be averaged to obtain the final score (see the scoring sketch after this list).
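For reference, this scoring corresponds to scikit-learn's balanced_accuracy_score. The sketch below is illustrative only; the ground-truth file names are hypothetical, and only the submission files follow the format described later in this page.

# Illustrative scoring sketch using scikit-learn's balanced_accuracy_score.
# Ground-truth file names below are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import balanced_accuracy_score

def score_task(pred_csv: str, truth_csv: str) -> float:
    """Balanced accuracy of a submission against a ground-truth file
    that shares the same id/prediction column layout."""
    pred = pd.read_csv(pred_csv).set_index("id")["prediction"]
    truth = pd.read_csv(truth_csv).set_index("id")["prediction"]
    pred = pred.reindex(truth.index)  # align predictions on trial ids
    return balanced_accuracy_score(truth.values, pred.values)

# Task 1: a single held-out-trials score.
# task1 = score_task("results_subject_identification_test_trial.csv", "truth_task1.csv")

# Task 2: average of the held-out-trials and held-out-subjects scores.
# task2 = 0.5 * (score_task("pred_trial.csv", "truth_trial.csv")
#                + score_task("pred_subject.csv", "truth_subject.csv"))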

In the event of a tie, the originality of the approach will be used as an additional criterion, with the organizers taking responsibility for the final decision.

Participants may submit results for one or both tasks. We will select the top-5 performing teams, taking into account the distribution of participants across the tasks.

Please note that the held-out-trials test sets for task 1 and task 2 are not completely identical. Please use only the IDs specified in the JSON files.
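As an illustration, predictions can be restricted to the provided IDs as in the sketch below. It assumes the JSON file simply contains a flat list of trial IDs; the file name is a placeholder, so adapt the path and parsing to the files actually distributed with the challenge data.

# Hedged sketch: assumes the JSON file holds a flat list of trial IDs.
import json
import pandas as pd

with open("test_trial_ids_task1.json") as f:  # hypothetical file name
    allowed_ids = set(json.load(f))

preds = pd.read_csv("results_subject_identification_test_trial.csv")
preds = preds[preds["id"].isin(allowed_ids)]  # keep only the specified IDs
assert set(preds["id"]) == allowed_ids, "every listed ID needs exactly one prediction"
preds.to_csv("results_subject_identification_test_trial.csv", index=False)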

Guidelines for participants

Submission guidelines

Each team should submit their predictions by e-mailing salvatore.calcagno@phd.unict.it and simone.carnemolla@phd.unict.it.

Submission e-mail Structure

Each submission e-mail MUST include the following:

Each participating team is permitted up to five submissions per task and may choose to submit for one or both tasks. Any team member can submit the final results on behalf of the team. Each e-mail containing result files will count as a single submission.

Please provide a self-contained explanation of the methodology in the .pdf file. Teams participating in both tasks should submit a single e-mail that includes results for both tasks along with a single .pdf describing the methodology used.

Submission File Structure

For subject identification, we expect a single .csv file named results_subject_identification_test_trial.csv for the held-out-trials test set.

For emotion recognition, we expect two .csv files:

Each .csv file should contain only two columns: id and prediction.

Here is an example of the structure:

id,prediction
3784258358,12
1378746257,19
2395445698,8

Ensure all files adhere to this format to meet the submission requirements.
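As an illustration, a file in this format can be written as follows; the IDs and predicted labels below are placeholders for your own model output.

# Minimal sketch of writing a submission file in the required format.
import pandas as pd

test_ids = [3784258358, 1378746257, 2395445698]   # IDs from the provided JSON files
predictions = [12, 19, 8]                          # your model's predicted labels

submission = pd.DataFrame({"id": test_ids, "prediction": predictions})
submission.to_csv("results_subject_identification_test_trial.csv", index=False)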

FAQ

Is it allowed to download music using the Spotify ID and encode audio information as features for training?
Yes. Stimulus information can be used for training, so downloading the music via its Spotify ID and encoding audio information as features is allowed.
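For instance, a team might encode each downloaded stimulus as a fixed-size feature vector with a library such as librosa. This is only one possible choice, not part of any official pipeline, and the audio path below is a placeholder.

# Hedged sketch: encode a downloaded stimulus as a mean-MFCC feature vector.
import numpy as np
import librosa

def stimulus_features(path: str, sr: int = 22050) -> np.ndarray:
    """Mean MFCC vector of an audio file, usable as an auxiliary training input."""
    audio, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)  # shape (20, n_frames)
    return mfcc.mean(axis=1)                                 # shape (20,)

# features = stimulus_features("stimuli/<spotify_id>.mp3")  # placeholder path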

When will the leaderboard be available?
The leaderboard will be released after the challenge concludes. No real-time evaluation will be conducted. We encourage participants to use validation sets to assess model generalization, as this approach is better aligned with the competition’s objectives.
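One way to estimate held-out-subjects generalization locally is a subject-grouped split, for example with scikit-learn's GroupKFold. The sketch below uses placeholder arrays in place of the challenge data.

# Sketch of a subject-wise validation split mimicking the held-out-subjects setting.
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.random.randn(100, 32)                  # placeholder features (trials x channels)
y = np.random.randint(0, 4, 100)              # placeholder emotion labels
subject_ids = np.repeat(np.arange(10), 10)    # placeholder subject label per trial

gkf = GroupKFold(n_splits=5)
for train_idx, val_idx in gkf.split(X, y, groups=subject_ids):
    # Train on train_idx, evaluate on val_idx: no subject appears in both folds.
    assert set(subject_ids[train_idx]).isdisjoint(subject_ids[val_idx])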