Please use this identifier to cite or link to this item: http://hdl.handle.net/UCSP/16008
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Cámara Chávez, Guillermo | -
dc.contributor.author | Guzmán Zenteno, Leonardo Braulio | -
dc.date.accessioned | 2019-07-09T16:15:56Z | -
dc.date.available | 2019-07-09T16:15:56Z | -
dc.date.issued | 2018 | -
dc.identifier.other | 1070065 | -
dc.identifier.uri | http://repositorio.ucsp.edu.pe/handle/UCSP/16008 | -
dc.description.abstract | In recent years, there has been increasing interest in developing automatic Sign Language Recognition (SLR) systems, because Sign Language (SL) is the main mode of communication among deaf people all over the world. However, most people outside the deaf community do not understand SL, creating a communication problem between the two communities. Recognizing signs is challenging because manual signing (leaving aside facial gestures) has four components that must be recognized: handshape, movement, location, and palm orientation. Even though the appearance and meaning of basic signs are well defined in sign language dictionaries, in practice many variations arise due to factors such as gender, age, education, and regional, social, and ethnic background, making it hard to develop a robust SL recognition system. This project introduces video alignment into isolated SLR, an approach that has not been studied in depth despite its potential for correctly recognizing isolated gestures. We also aim for user-independent recognition, meaning the system should achieve good recognition accuracy for signers not represented in the data set. The main features used for alignment are the wrist coordinates extracted from the videos using OpenPose. These features are aligned using Generalized Canonical Time Warping, and the resulting videos are classified with a 3D CNN. Our experimental results show that the proposed method obtains 65.02% accuracy, placing 5th in the 2017 ChaLearn LAP isolated gesture recognition challenge, only 2.69% below first place. | es_PE
dc.description.uri | Trabajo de investigación | es_PE
dc.format | application/pdf | es_PE
dc.language.iso | eng | es_PE
dc.publisher | Universidad Católica San Pablo | es_PE
dc.rights | info:eu-repo/semantics/openAccess | es_PE
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | es_PE
dc.source | Universidad Católica San Pablo | es_PE
dc.source | Repositorio Institucional - UCSP | es_PE
dc.subject | Artificial Intelligence | es_PE
dc.subject | Video Processing | es_PE
dc.subject | Alignment of Multiple Sequences | es_PE
dc.title | GCTW Alignment for isolated gesture recognition | es_PE
dc.type | info:eu-repo/semantics/masterThesis | es_PE
thesis.degree.name | Maestro en Ciencia de la Computación | es_PE
thesis.degree.grantor | Universidad Católica San Pablo. Facultad de Ingeniería y Computación | es_PE
thesis.degree.level | Maestría | es_PE
thesis.degree.discipline | Ciencia de la Computación | es_PE
thesis.degree.program | Escuela Profesional de Ciencia de la Computación | es_PE
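
The abstract above describes a three-stage pipeline: extract wrist coordinates with OpenPose, align the sequences with Generalized Canonical Time Warping (GCTW), and classify the aligned videos with a 3D CNN. The sketch below is a minimal illustration of the first two stages only, not the thesis's actual code: it reads OpenPose's per-frame JSON output (the directory layout, the missing-detection handling, and the wrist_trajectory/dtw_path names are assumptions), and, since GCTW has no standard off-the-shelf implementation, plain pairwise dynamic time warping stands in for the alignment step.

    import glob
    import json

    import numpy as np

    # OpenPose BODY_25/COCO body-keypoint indices for the wrists.
    R_WRIST, L_WRIST = 4, 7

    def wrist_trajectory(keypoint_dir):
        """Collect per-frame wrist coordinates from OpenPose's JSON output
        (one *_keypoints.json file per frame) into a (T, 4) array."""
        frames = []
        for path in sorted(glob.glob(f"{keypoint_dir}/*_keypoints.json")):
            with open(path) as f:
                people = json.load(f)["people"]
            if not people:
                # No person detected in this frame: repeat the last known pose.
                frames.append(frames[-1] if frames else [0.0] * 4)
                continue
            kp = people[0]["pose_keypoints_2d"]  # flat [x0, y0, c0, x1, y1, c1, ...]
            frames.append([kp[3 * R_WRIST], kp[3 * R_WRIST + 1],
                           kp[3 * L_WRIST], kp[3 * L_WRIST + 1]])
        return np.asarray(frames)

    def dtw_path(a, b):
        """Dynamic time warping between two (T, d) trajectories, a simple
        stand-in for GCTW; returns the list of aligned frame-index pairs."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
        # Backtrack from (n, m) to (0, 0) to recover the warping path.
        path, i, j = [], n, m
        while i > 0 and j > 0:
            path.append((i - 1, j - 1))
            step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
            if step == 0:
                i, j = i - 1, j - 1
            elif step == 1:
                i -= 1
            else:
                j -= 1
        return path[::-1]

    # Hypothetical usage: warp one signer's video onto another's timeline.
    # traj_a = wrist_trajectory("signer_a_keypoints/")
    # traj_b = wrist_trajectory("signer_b_keypoints/")
    # for ia, ib in dtw_path(traj_a, traj_b):
    #     ...  # pair frame ia of video A with frame ib of video B

GCTW itself generalizes this idea: rather than warping one pair of sequences in their raw coordinates, it jointly warps multiple sequences while projecting them into a shared low-dimensional subspace, which is what makes it suited to aligning videos from many signers at once.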
Appears in Collections: Tesis Postgrado - Maestría en Ciencia de la Computación

Files in This Item:
File | Description | Size | Format
GUZMAN_ZENTENO_LEO_GCT.pdf |  | 4.1 MB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.