Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.12590/17035
Title: A multi-modal visual emotion recognition method to instantiate an ontology
Authors: Heredia A., Juan Pablo
Cardinale, Yudith
Dongo, Irvin
Díaz-Amado, Jose
Keywords: Emotion Ontology; Emotion Recognition; Multi-modal Method; Visual Expressions
Issue Date: 2021
Publisher: SciTePress
Relation URI: https://www.scopus.com/record/display.uri?eid=2-s2.0-85111776744&origin=resultslist&sort=plf-f&src=s&nlo=&nlr=&nls=&sid=388854f699364393473c7d2625e8af59&sot=aff&sdt=cl&cluster=scopubyr%2c%222021%22%2ct&sl=48&s=AF-ID%28%22Universidad+Cat%c3%b3lica+San+Pablo%22+60105300%29&relpos=60&citeCnt=0&searchTerm=&featureToggles=FEATURE_NEW_DOC_DETAILS_EXPORT:1
Abstract: "Human emotion recognition from visual expressions is an important research area in computer vision and machine learning owing to its significant scientific and commercial potential. Since visual expressions can be captured from different modalities (e.g., face expressions, body posture, hands pose), multi-modal methods are becoming popular for analyzing human reactions. In contexts in which human emotion detection is performed to associate emotions with certain events or objects to support decision making or for further analysis, it is useful to keep this information in semantic repositories, which offer a wide range of possibilities for implementing smart applications. We propose a multi-modal method for human emotion recognition and an ontology-based approach to store the classification results in EMONTO, an extensible ontology to model emotions. The multi-modal method analyzes facial expressions, body gestures, and features from the body and the environment to determine an emotional state; it processes each modality with a specialized deep learning model and applies a fusion method. Our fusion method, called EmbraceNet+, consists of a branched architecture that integrates the EmbraceNet fusion method with other fusion methods. We experimentally evaluate our multi-modal method on an adaptation of the EMOTIC dataset. Results show that our method outperforms the single-modal methods."
URI: http://hdl.handle.net/20.500.12590/17035
ISBN: 978-989-758-523-4
Appears in Collections: Artículos - Ciencia de la computación

Files in This Item:
There are no files associated with this item.
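As an illustration of the branched fusion idea described in the abstract, the sketch below combines an EmbraceNet-style branch with a simple concatenation branch ahead of an emotion classifier. This is not the authors' EmbraceNet+ implementation: the framework (PyTorch), encoder feature dimensions, the uniform modality-selection probabilities, and the 26-way output (matching EMOTIC's discrete categories) are assumptions made only for the sketch.

    # Minimal sketch (assumed PyTorch; hypothetical names and dimensions),
    # not the authors' EmbraceNet+ code.
    import torch
    import torch.nn as nn

    class EmbraceFusion(nn.Module):
        """EmbraceNet-style fusion: dock each modality to a common size, then
        pick, per feature position, which modality contributes that feature."""
        def __init__(self, in_dims, embrace_dim):
            super().__init__()
            self.docking = nn.ModuleList([nn.Linear(d, embrace_dim) for d in in_dims])

        def forward(self, features):
            # features: list of (batch, in_dim_i) tensors, one per modality
            docked = torch.stack([dock(f) for dock, f in zip(self.docking, features)], dim=1)
            batch, n_mod, dim = docked.shape
            # Sample one contributing modality per feature position (uniform here).
            probs = torch.full((batch, n_mod), 1.0 / n_mod, device=docked.device)
            choice = torch.multinomial(probs, num_samples=dim, replacement=True)
            mask = nn.functional.one_hot(choice, num_classes=n_mod).permute(0, 2, 1).float()
            return (docked * mask).sum(dim=1)  # (batch, embrace_dim)

    class BranchedEmotionClassifier(nn.Module):
        """Two fusion branches (EmbraceNet-style + concatenation) merged
        before an emotion classifier over 26 assumed categories."""
        def __init__(self, in_dims=(512, 256, 256), embrace_dim=256, n_emotions=26):
            super().__init__()
            self.embrace = EmbraceFusion(in_dims, embrace_dim)
            self.concat_branch = nn.Linear(sum(in_dims), embrace_dim)
            self.classifier = nn.Linear(2 * embrace_dim, n_emotions)

        def forward(self, face_feat, body_feat, context_feat):
            feats = [face_feat, body_feat, context_feat]
            branch_a = self.embrace(feats)
            branch_b = torch.relu(self.concat_branch(torch.cat(feats, dim=1)))
            return self.classifier(torch.cat([branch_a, branch_b], dim=1))

    # Usage with random stand-ins for per-modality backbone features:
    model = BranchedEmotionClassifier()
    logits = model(torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 256))
    print(logits.shape)  # torch.Size([4, 26])

In this kind of design, the per-feature modality selection is what gives the EmbraceNet-style branch its robustness: a modality can be suppressed at inference time by zeroing its selection probability, while the concatenation branch preserves a fixed joint view of all inputs.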
