Integration of Voice and Gesture

Metadata Updated: May 2, 2019

Speech recognition technology is relatively mature, but it is not always accurate. Accuracy improves when the speech is constrained by the context of the application: when using Siri to interact with an iPhone, for instance, only a limited number of topics and interactions are available. Gesture recognition is a newer technology but is becoming viable, particularly with the advent of the Microsoft Kinect. Gesture recognition provides additional context for speech recognition, and speech provides context for gestures; integrating the two would provide a much more robust environment for human-computer interaction. A survey of the technical field was completed. Basic software and hardware, including microphones, cameras, and 3D depth sensors, were procured. Basic equipment functionality has been demonstrated, and groundwork has been laid for continued efforts.
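The mutual-context idea above can be illustrated with a minimal late-fusion sketch. This is not the project's actual implementation; the `Hypothesis` type, the `fuse` function, and the weighting scheme are all hypothetical, showing only how a gesture recognizer's output could re-score ambiguous speech hypotheses.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    label: str         # candidate command, e.g. "move" or "remove"
    confidence: float  # recognizer confidence in [0, 1]

def fuse(speech: list, gesture: list, speech_weight: float = 0.6) -> Hypothesis:
    """Late fusion: re-score each speech candidate with the matching
    gesture confidence, so a gesture can disambiguate an uncertain
    utterance (and vice versa if the roles are swapped)."""
    gesture_scores = {h.label: h.confidence for h in gesture}
    best = None
    for h in speech:
        # Weighted sum of modality confidences; contributes 0.0 when the
        # gesture recognizer produced no hypothesis for this command.
        score = (speech_weight * h.confidence
                 + (1 - speech_weight) * gesture_scores.get(h.label, 0.0))
        if best is None or score > best.confidence:
            best = Hypothesis(h.label, score)
    return best

# An acoustically ambiguous utterance ("move" vs. "remove") is resolved
# by a confident pointing gesture associated with "move".
speech = [Hypothesis("move", 0.48), Hypothesis("remove", 0.46)]
gesture = [Hypothesis("move", 0.90)]
print(fuse(speech, gesture).label)  # -> move
```

With speech alone the two candidates are nearly tied; the gesture evidence tips the combined score decisively toward "move", which is the kind of robustness gain the integration aims for.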

Access & Use Information

Public: This dataset is intended for public access and use.
License: U.S. Government Work


Metadata Source

Harvested from NASA Data.json

Additional Metadata

Resource Type Dataset
Metadata Created Date August 1, 2018
Metadata Updated Date May 2, 2019
Publisher Space Technology Mission Directorate
Unique Identifier TECHPORT_12412
Public Access Level public
Bureau Code 026:00
Datagov Dedupe Retained 20190501230127
Harvest Object Id accc5ef6-3877-46e0-8ff8-73a6e5b58de4
Harvest Source Id 39e4ad2a-47ca-4507-8258-852babd0fd99
Harvest Source Title NASA Data.json
Data First Published 2012-08-01
Data Last Modified 2018-07-19
Program Code 026:027
Source Datajson Identifier True
Source Hash 268c7ef81b990fad3839841ffe0f74e64a317d1b
Source Schema Version 1.1
