This project uses the Microsoft Cognitive Services Speech SDK, Blender animations, the Unity game engine, OpenCV, and Android Studio to create an application that translates speech input into American Sign Language (ASL) in real time through an Augmented Reality (AR) approach.
- interpretAR won 'Most Innovative Design' at McMaster's 2019 ECE Expo!
- interpretAR Version 1.0 is COMPLETE! Download the current version for multi-language speech-to-text integrated with your mobile camera.
- ASL voice translation is functional! Take a gander at the vocab page to see all the words you can translate into ASL.
- Over 100 words of sign language and growing.
- Demo version 1.0 release date: April 9, 2019 - DONE
This repository hosts the interpretAR Android Studio project files:
- A plethora of ASL vocabulary for the best user experience
- Face recognition for a responsive AR environment
- Multi-language speech input / output capabilities
- Simple and sleek UI design
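To illustrate how speech-to-text output could drive sign playback, here is a minimal sketch of a word-to-animation lookup. All class, method, and clip names below are hypothetical and do not come from the interpretAR source; the sample vocabulary is only a stand-in for the 100+ word library.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: map recognized words to ASL animation clip names.
public class AslVocabulary {
    private final Map<String, String> clips = new HashMap<>();

    public AslVocabulary() {
        // A tiny sample standing in for the full vocabulary.
        clips.put("hello", "anim_hello");
        clips.put("thank", "anim_thank_you");
        clips.put("you", "anim_you");
    }

    /** Turn a recognized sentence into an ordered queue of animation clips,
     *  skipping words that are not yet in the vocabulary. */
    public List<String> toClipQueue(String recognizedText) {
        List<String> queue = new ArrayList<>();
        for (String word : recognizedText.toLowerCase().split("\\s+")) {
            String clip = clips.get(word);
            if (clip != null) {
                queue.add(clip);
            }
        }
        return queue;
    }

    public static void main(String[] args) {
        AslVocabulary vocab = new AslVocabulary();
        // Prints [anim_hello, anim_thank_you, anim_you]
        System.out.println(vocab.toClipQueue("Hello thank you"));
    }
}
```

A real implementation would also need to handle signs that span multiple words and words with no matching sign, but the core lookup-and-queue idea is the same.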
We tested the project on Brandon's Samsung Galaxy S7 Edge running Android 8.0.0. Visit Brandon's Website.
Note: THIS PROJECT IS CONTINUALLY BEING UPDATED. Read the revision comments for the current status of the application.
- A PC (Windows, Linux, Mac) capable of running Android Studio.
- Version 3.3 of Android Studio.
- An ARM-based Android device (API 23: Android 6.0 Marshmallow or higher) enabled for development with a working microphone.
- By building this project you will download the Microsoft Cognitive Services Speech SDK. By downloading it you acknowledge its license; see the Speech SDK license agreement.
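If you want to pull the Speech SDK into a fresh module rather than building this project, the dependency is typically declared in the module-level build.gradle roughly as below. The version number is illustrative only; check Microsoft's documentation for the current release and for whether your SDK version is served from Maven Central or Microsoft's own Maven repository.

```groovy
// Module-level build.gradle (version number is illustrative, not pinned by this project)
repositories {
    mavenCentral()
}
dependencies {
    implementation 'com.microsoft.cognitiveservices.speech:client-sdk:1.24.0'
}
```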
- Clone/download the GitHub repository.
- Open the interpretAR folder as a project in Android Studio.
- Press Ctrl+F9, or select Build > Make Project.
- Connect your Android device to your development PC.
- Press Shift+F10, or select Run > Run 'app'.
- In the deployment target window that comes up, pick your Android device.
- On your Android device, play with different settings to grasp the full experience of interpretAR.