Fusion of Electroencephalography and Facial Expression

Article title (Persian): Fusion of Electroencephalography and Facial Expression for Continuous Emotion Recognition
Article title (English): The Fusion of Electroencephalography and Facial Expression for Continuous Emotion Recognition
Journal/Conference: IEEE Access
Related fields of study: Biomedical Engineering
Related specializations: Bioelectrics
Keywords (Persian, translated): continuous emotion recognition, electroencephalography, facial expressions, signal processing, decision-level fusion, temporal dynamics
Keywords (English): Continuous emotion recognition, EEG, facial expressions, signal processing, decision level fusion, temporal dynamics
Article type: Research Article
Digital Object Identifier (DOI): https://doi.org/10.1109/ACCESS.2019.2949707
Affiliation: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, Tianjin University of Technology, Tianjin 300384, China
Pages (English article): 13
Publisher: IEEE
Presentation type: Journal
Article category: ISI
Publication year: 2019
Impact factor: 4.641 (2018)
H-index: 56 (2019)
SJR: 0.609 (2018)
ISSN: 2169-3536
Quartile: Q2 (2018)
English article format: PDF
Translation status: Not translated
English article price: Free
Is this a base (foundational) article: No
Does this article include a conceptual model: No
Does this article include a questionnaire: No
Does this article include variables: No
Product code: E13914
References: cited in the text and listed at the end of the article
Table of Contents (English)

Abstract

I. Introduction

II. Related Work

III. Materials and Methods

IV. Temporal Dynamics of Emotions

V. Results

Authors

Figures

References

Excerpt from the Article (English)

Abstract

Recently, the study of emotion recognition has received increasing attention, driven by the rapid development of noninvasive sensor technologies, machine learning algorithms, and the computing capability of computers. Compared with single-modal emotion recognition, the multimodal paradigm introduces complementary information for emotion recognition. Hence, in this work, we present a decision-level fusion framework for detecting emotions continuously by fusing electroencephalography (EEG) and facial expressions. Three types of movie clips (positive, negative, and neutral) were utilized to elicit specific emotions in subjects, while the EEG and facial expression signals were recorded simultaneously. The power spectral density (PSD) features of the EEG were extracted by time-frequency analysis, and the EEG features were then selected for regression. For the facial expression, facial geometric features were calculated by facial landmark localization. Long short-term memory (LSTM) networks were utilized to accomplish the decision-level fusion and capture the temporal dynamics of emotions. The results show that the proposed method achieves outstanding performance for continuous emotion recognition, yielding a concordance correlation coefficient (CCC) of 0.625±0.029. The fusion of the two modalities outperformed EEG and facial expression used separately. Furthermore, different numbers of LSTM time-steps were applied to analyze the capture of temporal dynamics.
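The abstract reports performance as a concordance correlation coefficient (CCC), which measures how well a predicted continuous emotion trace agrees with the ground-truth annotation in both correlation and scale. As a rough reference for readers, the standard CCC formula can be computed as in this minimal NumPy sketch (an illustration of the metric, not the authors' code):

```python
import numpy as np

def concordance_cc(y_true, y_pred):
    """Concordance correlation coefficient between two 1-D signals:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2.0 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)
```

Unlike plain Pearson correlation, the CCC also penalizes mean and variance mismatch, so a prediction that tracks the annotation's shape but is biased or rescaled scores below 1.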

Introduction

Emotion is a psychophysiological process of perception and cognition toward an object or situation, and it plays an important role in natural human-human communication. However, emotion recognition has long been neglected in the field of human-computer interaction (HCI). With the explosion of machine learning in cognitive science, affective computing has emerged to integrate emotion recognition into HCI. Today, emotion recognition systems aim to establish harmonious HCI by endowing computers with the ability to recognize, understand, express, and adapt to human emotions [1]. This creates potential applications for emotion recognition in many fields, such as human-robot interaction (HRI) [2], safe driving [3], social networking [4], and distance education [5]. These applications involve interaction across different modalities, which provides complementary information to improve the precision and robustness of an emotion recognition system. To represent emotions, psychologists have proposed discrete models and dimensional models [6]. Discrete emotion models turn emotion recognition into a classification problem: six basic emotions (happiness, anger, sadness, surprise, fear, and disgust) can be recognized as prototypes from which other emotions are derived [7]. However, the emotions expressed in communication are complex, and a single basic emotion can hardly describe a human feeling in a given situation. Alternatively, emotions can be mapped into multi-dimensional spaces that capture the largest variance across all possible emotions [8]. The valence-arousal plane is one of the best-known dimensional models of emotion, mapping emotions into a two-dimensional circular space.
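To make the two representations concrete, the discrete and dimensional views can be connected by placing the basic-emotion prototypes as points in the valence-arousal plane and mapping any continuous (valence, arousal) estimate to its nearest prototype. The coordinates below are illustrative assumptions for the sketch (placements vary across studies), not values from the paper:

```python
import math

# Hypothetical prototype coordinates (valence, arousal) in [-1, 1];
# e.g. happiness = pleasant and moderately activated.
EMOTION_PROTOTYPES = {
    "happiness": (0.8, 0.5),
    "anger": (-0.6, 0.7),
    "sadness": (-0.7, -0.4),
    "surprise": (0.2, 0.8),
    "fear": (-0.7, 0.6),
    "disgust": (-0.6, 0.2),
}

def nearest_basic_emotion(valence, arousal):
    """Map a point in the valence-arousal plane to the closest
    basic-emotion prototype by Euclidean distance."""
    return min(EMOTION_PROTOTYPES,
               key=lambda e: math.dist(EMOTION_PROTOTYPES[e], (valence, arousal)))
```

This illustrates why continuous (dimensional) recognition is richer: the regression target is the full (valence, arousal) trajectory over time, while a discrete label is only its quantization onto a handful of prototypes.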