## An automated home application that allows multimodal interaction
A home automation application that supports multimodal interaction, using the Kinect's gesture and speech recognition capabilities to control devices in a living space. A gesture identifies the target device, while a speech command changes that device's state.

Built with: C#, Kinect 2.0, Kinect SDK, MS Speech SDK, Arduino.
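The division of labor between the two modalities can be sketched as follows. This is a simplified illustration rather than the project's actual code: the device names, command strings, and class names below are hypothetical placeholders, and the real application would wire these handlers to the Kinect SDK's gesture and speech events and drive the hardware through an Arduino.

```csharp
using System;
using System.Collections.Generic;

public class DeviceController
{
    // Current on/off state of each device (in the real system,
    // state changes would be sent to relays via an Arduino).
    public Dictionary<string, bool> States = new Dictionary<string, bool>
    {
        { "lamp", false },
        { "fan", false },
    };

    private string _selectedDevice;

    // Called when gesture recognition identifies a device:
    // the gesture only *selects* the target, it does not act on it.
    public void OnGesture(string device)
    {
        if (States.ContainsKey(device))
            _selectedDevice = device;
    }

    // Called when speech recognition yields a command:
    // the command changes the state of the selected device.
    public void OnSpeech(string command)
    {
        if (_selectedDevice == null) return;
        if (command == "turn on") States[_selectedDevice] = true;
        else if (command == "turn off") States[_selectedDevice] = false;
    }
}

public class Demo
{
    public static void Main()
    {
        var controller = new DeviceController();
        controller.OnGesture("lamp");   // point at the lamp
        controller.OnSpeech("turn on"); // say "turn on"
        Console.WriteLine(controller.States["lamp"]); // True
    }
}
```

Keeping the gesture and speech handlers decoupled in this way means either modality can be re-recognized independently: pointing at a different device simply retargets the next spoken command.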
Siddharth Gupta - siddg@ufl.edu
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.