This project aims to develop an intelligent agent able to translate a sign language into a spoken or written language in real time. We hope this project brings deaf communities closer to communities that do not use a sign language.
A sign language can be decomposed into five parameters: hand configuration, orientation, articulation point, movement, and facial or body expression. Each parameter contributes an element of a sign's meaning, such as punctuation or intensity. We are combining Deep Learning, parallel processing on GPUs, Big Data, Software Engineering, and Linguistics to reach this goal.
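To make the five-parameter decomposition concrete, here is a minimal sketch that models a single sign as a record of those parameters. The class name, field names, and label values are illustrative placeholders, not the project's actual representation; a real-time recognizer would more likely extract numeric features (e.g. hand keypoint coordinates) for each parameter rather than symbolic labels.

```python
from dataclasses import dataclass


@dataclass
class SignParameters:
    """One sign decomposed into the five parameters described above.

    Field types are illustrative: a deployed pipeline would likely
    replace these string labels with learned feature vectors.
    """
    hand_configuration: str   # shape of the hand(s)
    orientation: str          # direction the palm/hand is facing
    articulation_point: str   # location on or near the body where the sign is made
    movement: str             # trajectory of the hands
    expression: str           # facial or body expression (carries e.g. intensity, punctuation)


# Example: a hypothetical sign annotated with placeholder labels.
sign = SignParameters(
    hand_configuration="flat_hand",
    orientation="palm_inward",
    articulation_point="chin",
    movement="outward_arc",
    expression="neutral",
)
print(sign)
```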