Raspberry Pi Translates Speech to Sign Language with Robotic Hand
This project bridges a critical communication gap.


The Raspberry Pi has been used for countless innovative projects, but today we’re highlighting one that bridges a human connection gap. Maker and developer Prabhjot Singh has created "Deaf Link," a project that uses a Raspberry Pi as a translation hub to convert sign language into audible speech and, with the help of a robotic hand, spoken words back into sign language.
The first mode is sign to speech, where sign language gestures are captured using a camera module. The Raspberry Pi processes the video with OpenCV and MediaPipe, feeding a Google TensorFlow model trained on hundreds of sign language images. Once a sign is recognized, it's converted into audio and played through a speaker.
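To give a feel for the recognition step, here's a minimal sketch of the kind of landmark preprocessing commonly done before a sign classifier. It assumes MediaPipe-style hand landmarks as (x, y) pairs (MediaPipe's hand tracker reports 21 of them); the TensorFlow classifier itself, and the exact preprocessing Singh uses, are not shown in the project write-up, so treat this as illustrative only.

```python
# Illustrative preprocessing for hand-sign classification:
# make landmarks translation- and scale-invariant before they
# reach a classifier. Not taken from the Deaf Link source.

def normalize_landmarks(landmarks):
    """Translate landmarks so the wrist (index 0) sits at the origin,
    then scale so the largest coordinate magnitude is 1.0."""
    wrist_x, wrist_y = landmarks[0]
    relative = [(x - wrist_x, y - wrist_y) for x, y in landmarks]
    max_mag = max(max(abs(x), abs(y)) for x, y in relative) or 1.0
    return [(x / max_mag, y / max_mag) for x, y in relative]

# Example: three landmarks in image coordinates (a real hand has 21).
sample = [(0.5, 0.5), (0.6, 0.4), (0.7, 0.3)]
flat = normalize_landmarks(sample)
print(flat[0])  # wrist maps to the origin
```

Normalizing like this lets the model recognize the same sign regardless of where the hand appears in the frame or how close it is to the camera.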
The second mode, speech to sign, is where things get even more exciting. Spoken words are picked up by a microphone and transcribed with Google’s Speech-to-Text API. The resulting text is then sent via an MQTT broker to an Arduino, which drives the servos in a robotic hand to reproduce the corresponding sign language gestures.
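The text-to-gesture step on the receiving end boils down to a lookup from letters to servo positions. Here's a hypothetical sketch of that mapping: the gesture table, the angle values, and the six-servo ordering (five fingers plus wrist) are all illustrative assumptions, not details from Singh's build, and the real project runs this logic on the Arduino after the text arrives over MQTT.

```python
# Hypothetical letter-to-gesture table for a six-servo hand.
# Angles (0-180) are illustrative, not from the Deaf Link project.
FINGERSPELL = {
    "a": [0, 180, 180, 180, 180, 90],  # thumb out, fingers curled
    "b": [180, 0, 0, 0, 0, 90],        # fingers extended, thumb tucked
    "l": [0, 0, 180, 180, 180, 90],    # thumb and index extended
}

def text_to_frames(text):
    """Return one servo-angle frame per recognized letter,
    skipping characters with no defined gesture."""
    return [FINGERSPELL[c] for c in text.lower() if c in FINGERSPELL]

frames = text_to_frames("Lab")
print(len(frames))  # 3 frames: l, a, b
```

Each frame would then be written to the servos in sequence, with a short pause between letters so the gestures are readable.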
The project is powered by a Raspberry Pi 4 B and an Arduino Nano 33 IoT. The Pi is connected to a Raspberry Pi Camera Module 3 and a Razer Seiren Mini microphone, while the Arduino controls the six servo motors in the robotic hand. All components are housed in a custom enclosure. On the software side, Singh uses tools like MQTT, OpenCV, and TensorFlow.
For those interested in the technical details, Singh has made the project open-source, sharing a guide on Hackster so others can see how it all comes together.
If you want to see this impressive Raspberry Pi project in action, check out Singh’s demo video on YouTube. Follow him for more innovative projects and updates on Deaf Link.