A Mobile Application That Allows People Who Do Not Know Sign Language to Teach Hearing-Impaired People by Using Speech-to-Text Procedures

Authors

  • Emre BIÇEK, Van Yüzüncü Yıl University, Rectorate, Department of Informatics, 0000-0001-6061-9372
  • M. Nuri ALMALI, Van Yüzüncü Yıl University, Faculty of Engineering, Department of Electrical and Electronics Engineering, 0000-0003-2763-4452

DOI:

https://doi.org/10.18100/ijamec.682806

Keywords:

Hearing impaired, Mobile application, Speech-to-text, Course management system

Abstract

Hearing-impaired people use sign language to communicate with each other, but it is difficult for people who do not know sign language to communicate with them. In this study, this problem is addressed with a speech-to-text infrastructure. A software project has been developed to enable people who do not know sign language to communicate with hearing-impaired people. In the system, named the "Subtitles Course Tracking System (SCTS)", an application running on the Android operating system converts the user's speech into text (speech-to-text) and instantly transfers it to a remote server. The speech texts recorded in the remote server's database can be followed in real time on mobile phones and web pages by software capable of asynchronous data exchange over the Internet (AJAX). Within the scope of this study, a web-based course management system has also been developed, making all courses in the system accessible at any time.
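The follow-up flow the abstract describes (speech segments stored on a server, then polled asynchronously by viewers) can be sketched in miniature. This is an illustrative sketch only, not the paper's implementation: the names CaptionStore, append, and poll_since are hypothetical, and a real deployment would use an actual database and HTTP endpoints rather than an in-memory list.

```python
class CaptionStore:
    """In-memory stand-in for the remote server's caption database."""

    def __init__(self):
        self._segments = []  # list of (sequence_number, text) pairs

    def append(self, text):
        """Store one speech-to-text result; return its sequence number."""
        seq = len(self._segments)
        self._segments.append((seq, text))
        return seq

    def poll_since(self, last_seq):
        """Return every segment newer than last_seq, as an AJAX poll would.

        A viewer repeats this call periodically, passing the highest
        sequence number it has already displayed, so only new text
        travels over the network on each request.
        """
        return [(s, t) for s, t in self._segments if s > last_seq]


if __name__ == "__main__":
    store = CaptionStore()
    store.append("Welcome to today's lecture.")
    store.append("Open your books to page ten.")
    # A viewer who has already seen segment 0 receives only segment 1.
    print(store.poll_since(0))
```

The incremental poll is the key design point: each client tracks only a single sequence number, so the server stays stateless with respect to viewers, which suits many simultaneous followers of the same course.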


References

A. Ataş, A. Genç and E. Belgin, Odyoloji’de Kullanılan Temel Kavramlar, Pediatrik Kulak Burun Boğaz Hastalıkları, U. Akyol, Ed. Ankara, Güneş Kitapevi, 2003, pp. 35-50.

World Health Organization, Deafness and Hearing Loss. http://www.who.int/mediacentre/factsheets/fs300/en/, Accessed on: Oct. 10, 2016

Ç. Gürboğa and T. Kargın, “İşitme engelli yetişkinlerin farklı ortamlarda kullandıkları iletişim yöntemlerinin/becerilerinin incelenmesi,” JFES, vol. 36, no. 1, pp. 51-64, May 2003. https://doi.org/10.1501/Egifak_0000000074

The Liberated Learning Consortium, Increasing Access to Speech Recognition. http://www.transcribeyourclass.ca/projectdescription.html/, Accessed on: Dec. 07, 2015

Ava, Group Conversations Made Accessible. https://www.indiegogo.com/projects/ava-group-conversations-made-accessible/, Accessed on: Oct. 12, 2014

Nuance Communications, Inc., Speech Recognition. http://research.nuance.com/category/speech-recognition/, Accessed on: Nov. 21, 2015

K. Ryba, T. McIvor, M. Shakir and D. Paez, “Liberated Learning: Analysis of University Students’ Perceptions and Experiences with Continuous Automated Speech Recognition,” E-Journal of Instructional Science and Technology, vol. 9, no. 1, Mar. 2006.

M. Wald, “Synote: Accessible and Assistive Technology Enhancing Learning for All Students,” in International Conference on Computers for Handicapped Persons, Berlin, Heidelberg, 2010, pp. 177-184. https://doi.org/10.1007/978-3-642-14100-3_27

S. Kafle and M. Huenerfauth, “Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing,” in Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, Maryland, USA, 2017, pp. 165-174. https://doi.org/10.1145/3132525.3132542

A. A. Mirza, C. Mirza and D. Rhodes, “Conference and Call Center Speech to Text Machine Translation Engine,” Patent US20190121860A1, Apr. 25, 2019. https://patents.google.com/patent/US20190121860A1/en

S. U. Upase, “Speech recognition based robotic system of wheelchair for disable people,” in International Conference on Communication and Electronics Systems (ICCES), Coimbatore, 2016, pp. 1-5. https://doi.org/10.1109/CESYS.2016.7889851

K. Lunuwilage, S. Abeysekara, L. Witharama, S. Mendis and S. Thelijjagoda, "Web based programming tool with speech recognition for visually impaired users," in 11th International Conference on Software, Knowledge, Information Management and Applications (SKIMA), Malabe, 2017, pp. 1-6. https://doi.org/10.1109/SKIMA.2017.8294132

M. A. Noakes, A. J. Schmitt, E. McCallum and K. Schutte, “Speech-to-text assistive technology for the written expression of students with traumatic brain injuries: A single case experimental study,” School Psychology, vol. 34, no. 6, pp. 656-664, Nov. 2019. https://doi.org/10.1037/spq0000316

E. Biçek, “Real-time inscriptive follow up system of audible lecture with an Android based and web based application,” M.S. thesis, Dept. Electrical and Electronics Engineering., Van Yüzüncü Yıl Univ., Van, Turkey, 2016.

J. Tebelskis, “Speech recognition using neural networks,” Ph.D. dissertation, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, 1995. https://www.examinations-hub.com/download-file/104644743220180811135756.pdf

A. Mesbah and A. van Deursen, “A component- and push-based architectural style for AJAX applications,” The Journal of Systems & Software, vol. 81, no. 12, pp. 2194-2209, 2008. https://doi.org/10.1016/j.jss.2008.04.005

J. J. Garrett, “Ajax: A New Approach to Web Applications,” Adaptive Path. http://adaptivepath.org/ideas/ajax-new-approach-web-applications/, Accessed on: Oct. 14, 2015

J. Yang, Z. Liao and F. Liu, “The impact of AJAX on network performance,” The Journal of China Universities of Posts and Telecommunications, vol. 14, no. 1, pp. 32-34, 2008. https://doi.org/10.1016/S1005-8885(08)60007-2


Published

31-03-2020

Issue

Section

Research Articles

How to Cite

[1]
“A Mobile Application That Allows People Who Do Not Know Sign Language to Teach Hearing-Impaired People by Using Speech-to-Text Procedures”, J. Appl. Methods Electron. Comput., vol. 8, no. 1, pp. 27–33, Mar. 2020, doi: 10.18100/ijamec.682806.