I recently built a project that allows users to send voice commands to their Yun using an HTML5 website. You can find a full tutorial and demo video for it here (http://arduinomeetslinux.com/beyond.php?p=4).
The project allows users to browse to a web page hosted by the Yun from any device, then speak commands to change the color of an RGB LED connected to the Yun.
As the user speaks commands, the API translates them into a string. This string is then passed to the Arduino sketch using the Yun's Mailbox functionality: the string is appended to the URL http://arduino.local/mailbox/ and a request is made to it using jQuery.
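For illustration, here's roughly how the browser side can turn a recognized transcript into a Mailbox request - the function name and the use of encodeURIComponent are my own choices here, not the tutorial's exact code:

```javascript
// Build the Mailbox URL for a spoken command. Spaces and other
// special characters must be percent-encoded before they can be
// appended to the URL path.
function mailboxUrl(command) {
  return 'http://arduino.local/mailbox/' + encodeURIComponent(command);
}

// In the page itself this would run inside the speech recognition
// result handler, e.g. (browser only, so commented out here):
//
//   recognition.onresult = function (event) {
//     var command = event.results[0][0].transcript;
//     $.get(mailboxUrl(command));  // fire the request at the Yun
//   };
```

So a command like "turn red" ends up as a GET request to http://arduino.local/mailbox/turn%20red, and the sketch sees the decoded string on the other end.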
A sketch running on the Yun listens for Mailbox messages. As they come in, the messages are parsed; in this case, the sketch looks for the names of a few colors in the command string. When it recognizes one of those colors, it changes the RGB LED to that color.
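The matching step is simple: scan the incoming string for known color names and map each to an RGB triple. On the Yun this happens in the Arduino sketch in C++, but the logic can be sketched in JavaScript like this (the color table is just an assumed example, not the sketch's actual list):

```javascript
// Map a handful of color names to RGB values. This table is an
// assumed example; the real sketch defines its own colors.
var COLORS = {
  red:   [255, 0, 0],
  green: [0, 255, 0],
  blue:  [0, 0, 255],
  white: [255, 255, 255]
};

// Look for any known color name inside the command string.
// Returns an [r, g, b] array, or null if no color was recognized.
function parseColor(command) {
  var lowered = command.toLowerCase();
  for (var name in COLORS) {
    if (lowered.indexOf(name) !== -1) {
      return COLORS[name];
    }
  }
  return null;
}
```

Because the whole sentence is searched, the user doesn't have to say an exact phrase - "make it blue please" and "blue" both match.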
This method of voice recognition could be useful for many applications because it allows any spoken words to be translated and sent to the Yun - it's just up to the sketch to parse the string and determine what action to take. The translation is often very accurate as well.
Very nice! Thanks for sharing.
So, you're offloading the speech processing from the Yun, putting that responsibility on the computer running the web browser. Is that happening locally on that computer, or is the speech data being sent across the Internet to some server to do the actual recognition processing? (In other words, must you have an active Internet connection?)
The speech is being sent across the Internet to a server that does the processing - this is how the SpeechRecognition API works in HTML5. So yes, the Yun has to have an active Internet connection for this to work; it can't just happen locally.