Please use this identifier to cite or link to this item:
http://hdl.handle.net/20.500.12590/17053
Title: | Emotion detection for social robots based on NLP transformers and an emotion ontology |
Authors: | Graterol, Wilfredo; Diaz-Amado, Jose; Cardinale, Yudith; Dongo, Irvin; Lopes-Silva, Edmundo; Santos-Libarino, Cleia |
Keywords: | Emotion detection;Natural language processing;Ontology;Social robots;Text classification |
Issue Date: | 2021 |
Publisher: | MDPI AG |
metadata.dc.relation.uri: | https://www.scopus.com/record/display.uri?eid=2-s2.0-85100779547&origin=resultslist&sort=plf-f&src=s&nlo=&nlr=&nls=&sid=dd875f57cfc47807be0ca661a187cd3a&sot=aff&sdt=cl&cluster=scopubyr%2c%222021%22%2ct&sl=48&s=AF-ID%28%22Universidad+Cat%c3%b3lica+San+Pablo%22+60105300%29&relpos=28&citeCnt=5&searchTerm=&featureToggles=FEATURE_NEW_DOC_DETAILS_EXPORT:1 |
Abstract: | For social robots, knowledge regarding human emotional states is an essential part of adapting their behavior or associating emotions with other entities. Robots gather the information from which emotions are detected through different media, such as text, speech, images, or videos. The multimedia content is then properly processed to recognize emotions/sentiments, for example, by analyzing faces and postures in images/videos based on machine learning techniques or by converting speech into text to perform emotion detection with natural language processing (NLP) techniques. Keeping this information in semantic repositories offers a wide range of possibilities for implementing smart applications. We propose a framework to allow social robots to detect emotions and to store this information in a semantic repository, based on EMONTO (an EMotion ONTOlogy), an ontology to represent emotions. As a proof-of-concept, we develop a first version of this framework focused on emotion detection in text, which can be obtained directly as text or by converting speech to text. We tested the implementation with a case study of tour-guide robots for museums that rely on a speech-to-text converter based on the Google Application Programming Interface (API) and a Python library, a neural network to label the emotions in texts based on NLP transformers, and EMONTO integrated with an ontology for museums; thus, it is possible to register the emotions that artworks produce in visitors. We evaluate the classification model, obtaining results equivalent to a state-of-the-art transformer-based model and a clear roadmap for improvement. |
URI: | http://hdl.handle.net/20.500.12590/17053 |
ISSN: | 1424-8220 |
Appears in Collections: | Artículos - Ingeniería Electrónica y de Telecomunicaciones |
Files in This Item:
There are no files associated with this item.
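The abstract outlines a concrete pipeline: speech is transcribed to text with the Google API through a Python library, the text is labeled by a transformer-based classifier, and the detected emotion is then registered in the EMONTO ontology. The following sketch is a rough illustration only: it assumes the SpeechRecognition library for the Google speech-to-text step and an off-the-shelf Hugging Face emotion model as a stand-in for the authors' own classifier, and it omits the ontology-storage step.

```python
# Minimal sketch of a speech-to-text + transformer emotion-labeling pipeline.
# Assumptions (not the paper's exact setup): the SpeechRecognition library for
# the Google Web Speech API, and a publicly available emotion classifier as a
# placeholder for the network trained by the authors.

import speech_recognition as sr
from transformers import pipeline

# Placeholder transformer-based emotion classifier.
emotion_classifier = pipeline(
    "text-classification",
    model="j-hartmann/emotion-english-distilroberta-base",
)

def transcribe(audio_path: str) -> str:
    """Convert an audio file to text using the Google Web Speech API."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_google(audio)

def detect_emotion(audio_path: str) -> dict:
    """Return the transcribed text and the predicted emotion label/score."""
    text = transcribe(audio_path)
    prediction = emotion_classifier(text)[0]  # e.g. {"label": "joy", "score": 0.93}
    return {"text": text, "emotion": prediction["label"], "score": prediction["score"]}

if __name__ == "__main__":
    # Hypothetical recording of a visitor's comment in front of an artwork.
    print(detect_emotion("visitor_comment.wav"))
```

In the framework described by the paper, the resulting emotion label would additionally be asserted in EMONTO (linked to the museum ontology) so that the emotions artworks produce in visitors can be queried later; that step depends on the ontology schema and is not shown here.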