Multimodal Emotion Recognition Challenge (MEC 2017)

The Multimodal Emotion Recognition Challenge (MEC 2017) is the second competition event aimed at comparing multimedia processing and machine learning methods for automatic audio and visual emotion analysis, with all participants competing under strictly the same conditions. The goals of the Challenge are to provide a common benchmark data set, to bring together the audio and video emotion recognition communities, and to promote research in multimodal emotion recognition. For information about the first MEC 2016, please visit the MEC 2016 website.

    The Challenge is jointly organized by the Institute of Automation, Chinese Academy of Sciences (CASIA), the Technical Committee of Artificial Psychology and Artificial Emotion of the Chinese Association for Artificial Intelligence (CAAI), the Technical Committee of Human-Computer Interaction of the China Computer Federation (CCF), and the Association for the Advancement of Affective Computing (AAAC). Every participating team may present its paper at the 2018 Asian Affective Computing and Intelligent Interaction (AACII) conference.

Task Description

The second Multimodal Emotion Recognition Challenge (MEC 2017) consists of three sub-challenges, namely the Audio-based emotion recognition sub-challenge, the Video-based facial expression recognition sub-challenge, and the Audiovisual emotion recognition sub-challenge.

    The data used in this challenge come from the Chinese Natural Audio-Visual Emotion Database (CHEAVD) 2.0, which is selected from Chinese movies and TV programs. Discrete emotion labels (Angry, Disgust, Happiness, Sad, Surprise, Anxious, Worried, and Neutral) are provided for this challenge. The dataset is split into training, validation, and test sets. Participating teams are required to train their models on the training and validation sets, and the ranking is determined by performance on the test set.
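As a minimal sketch of working with the eight discrete labels above, the snippet below maps label names to integer ids and scores predictions against reference labels. Note that the label-file format and the official ranking metric are defined by the organizers; plain accuracy is used here only for illustration.

```python
# The eight discrete emotion classes as listed in the challenge description.
EMOTIONS = ["Angry", "Disgust", "Happiness", "Sad",
            "Surprise", "Anxious", "Worried", "Neutral"]
LABEL_TO_ID = {name: i for i, name in enumerate(EMOTIONS)}

def accuracy(predicted, reference):
    """Fraction of test samples whose predicted label matches the reference.

    Illustrative only: the official MEC 2017 ranking metric is specified
    by the organizers.
    """
    assert len(predicted) == len(reference)
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

print(accuracy(["Happiness", "Sad", "Angry"],
               ["Happiness", "Sad", "Neutral"]))  # two of three correct
```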

Organization Committee

    Jianhua Tao, Institute of Automation, Chinese Academy of Sciences
    Björn Schuller, University of Passau, Imperial College London
Program Committee:
    Shiguang Shan, Institute of Computing Technology, Chinese Academy of Sciences
    Dongmei Jiang, Northwestern Polytechnical University
    Jia Jia, Tsinghua University
    Ya Li, Institute of Automation, Chinese Academy of Sciences


Registration

Please fill in the registration form (RegistrationForm) and send it to the organizer by email. Registration is accepted from research institutions and companies only; individual participation is not accepted. If you have any questions, please contact the organizer (Please change # to @).

Results Submission

1) Participants are free to use additional data and/or to discard instances from the data set for training. Note, however, that a description of any additional datasets is mandatory in your paper.
2) Each participant has up to five submission attempts; the best result will be used to determine the winner of the challenge.
3) Please keep the same format and order of the test samples when submitting results. Results should be uploaded to the result folder on the FTP server. The organizer will not count badly formatted results as valid submissions.
4) Since there are three sub-challenges in MEC 2017, please name your submissions “test_audio_1.txt”, “test_audio_2.txt”, …, “test_video_1.txt”, …, “test_multimodal_1.txt”, … to distinguish them.
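The naming and ordering rules above can be checked before uploading. The sketch below is hypothetical: it assumes one "&lt;sample_id&gt; &lt;label&gt;" pair per line, which is an assumption, not the organizers' specified format; only the file-name pattern follows the rule stated above.

```python
import re

# File names must follow "test_<modality>_<attempt>.txt", with at most
# five submission attempts per sub-challenge.
NAME_PATTERN = re.compile(r"^test_(audio|video|multimodal)_[1-5]\.txt$")

def check_submission(filename, lines, expected_ids):
    """Return True if the file name is valid and the test-sample order is kept.

    Assumes each line is "<sample_id> <label>" (a hypothetical format).
    """
    if not NAME_PATTERN.match(filename):
        return False
    ids = [line.split()[0] for line in lines]
    return ids == list(expected_ids)  # same samples, same order

print(check_submission("test_audio_1.txt",
                       ["001 Happiness", "002 Neutral"],
                       ["001", "002"]))  # True
```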

Important Dates

2017-03-20:         Challenge opens; registration begins;
2017-04-20:         Data released;
2017-06-10:         Deadline for registration;
2017-07-20:         Test data released;
2017-08-10:         Submit results;
2017-08-25:         Challenge results announced;
2017-09-15:         Submit paper;
2017-10-25:         Notification of the accepted papers;
2017-12-15:         Submit camera-ready paper;
2018-04-09/10:     AACII, Beijing, China.
