
Adaptive Technology System for Hearing-Impaired Individuals Using OpenCV

| Author(s) | Bandaru Sakthilaya, Sharon Katty Rose, Dr. G. Mahalakshmi |
|---|---|
| Country | India |
| Abstract | The Speech-to-Sign Language Translator is an application that converts spoken words into translated text, audio output, and sign language representations. Built in Python with several supporting libraries, the project captures speech input, translates it into a choice of languages, and provides both a textual and an audio translation. It also displays sign language images to communicate the translated message visually to individuals with hearing impairments. The system uses speech recognition to capture spoken words, the Google Translate API for multi-language support, gTTS for audio output, and a pre-loaded set of sign language images for visual representation. |
| Keywords | Speech-to-Text, Sign Recognition, Machine Learning, Natural Language Processing, Gesture Detection, Accessibility, Artificial Intelligence |
| Field | Engineering |
| Published In | Volume 16, Issue 1, January-March 2025 |
| Published On | 2025-03-28 |
| Cite This | Adaptive Technology System for Hearing-Impaired Individuals Using OpenCV - Bandaru Sakthilaya, Sharon Katty Rose, Dr. G. Mahalakshmi - IJSAT Volume 16, Issue 1, January-March 2025. DOI 10.71097/IJSAT.v16.i1.2918 |
| DOI | https://doi.org/10.71097/IJSAT.v16.i1.2918 |
| Short DOI | https://doi.org/g896fk |
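
As a rough illustration of the pipeline the abstract describes, the following is a minimal sketch, assuming the SpeechRecognition (with PyAudio for microphone access), googletrans (a synchronous release such as 4.0.0rc1), gTTS, and OpenCV packages, plus a hypothetical `signs/` directory holding one image per letter. The directory layout, function names, and target language are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the speech-to-sign pipeline described in the abstract.
# Assumes: SpeechRecognition (+ PyAudio for microphone access), a synchronous
# googletrans release (e.g. 4.0.0rc1), gTTS, and OpenCV are installed, and
# that ./signs/ holds one image per letter (a.png, b.png, ...). The directory
# layout, file names, and target language are illustrative assumptions.
import os

import cv2
import speech_recognition as sr
from googletrans import Translator
from gtts import gTTS

SIGN_DIR = "signs"  # hypothetical folder of pre-loaded sign-language images


def capture_speech() -> str:
    """Record one utterance from the microphone and return the recognized text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # Google's free speech recognizer


def translate_text(text: str, dest: str = "hi") -> str:
    """Translate the recognized text into the chosen target language."""
    return Translator().translate(text, dest=dest).text


def speak(text: str, lang: str = "hi", path: str = "translation.mp3") -> str:
    """Synthesize the translated text to an MP3 file with gTTS."""
    gTTS(text=text, lang=lang).save(path)
    return path


def show_signs(text: str) -> None:
    """Fingerspell the text by showing one pre-loaded sign image per letter."""
    for ch in text.lower():
        img_path = os.path.join(SIGN_DIR, f"{ch}.png")
        if not os.path.exists(img_path):
            continue  # skip characters with no sign image (spaces, digits, ...)
        cv2.imshow("Sign Language", cv2.imread(img_path))
        cv2.waitKey(800)  # hold each sign on screen briefly
    cv2.destroyAllWindows()


if __name__ == "__main__":
    spoken = capture_speech()
    print("Heard:", spoken)
    translated = translate_text(spoken)
    print("Translated:", translated)
    print("Audio saved to:", speak(translated))
    # The letter images here map Latin characters, so the sketch fingerspells
    # the English transcript rather than the translated string.
    show_signs(spoken)
```

Each stage is isolated in its own function, so any component (the recognizer, the translator, or the TTS engine) could be swapped out without touching the rest of the pipeline.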