I recently read this article on SoundWave, a Kinect-like system that uses the Doppler effect to detect user gestures via an ultrasonic tone emitted by your laptop's stock speaker and picked up by its stock microphone.
In essence, it's active sonar.
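To make that concrete, here's a toy simulation of the basic trick: emit an inaudible pilot tone, then look for the frequency of the echo drifting up or down as something moves toward or away from the device. This is my own rough sketch, not the researchers' actual pipeline; the 20 kHz tone, sample rate, search band, and hand velocity are all assumptions I've picked for illustration.

```python
import numpy as np

C = 343.0  # approximate speed of sound in air, m/s

def doppler_shift(f_emitted, velocity):
    """Observed frequency for a reflector moving toward the device at
    `velocity` m/s, using the two-way Doppler relation f' = f * (c + v) / (c - v)."""
    return f_emitted * (C + velocity) / (C - velocity)

def detect_shift(signal, sample_rate, f_emitted):
    """Estimate the dominant frequency near the pilot tone by FFT peak-picking,
    and return its offset from the emitted frequency."""
    windowed = signal * np.hanning(len(signal))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    # Only search a +/- 500 Hz band around the emitted tone.
    band = (freqs > f_emitted - 500) & (freqs < f_emitted + 500)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak - f_emitted

# Simulate a 20 kHz pilot tone reflected off a hand approaching at 0.5 m/s.
fs, f0 = 96_000, 20_000.0
t = np.arange(0, 0.1, 1.0 / fs)
f_obs = doppler_shift(f0, 0.5)
echo = np.sin(2 * np.pi * f_obs * t)

shift = detect_shift(echo, fs, f0)
print(f"detected shift: {shift:.1f} Hz")  # positive shift = motion toward the device
```

A real implementation would of course read microphone frames continuously and contend with the direct speaker-to-mic path swamping the faint echo, but the sign of the shift is the gesture signal: positive means approach, negative means retreat.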
Now, while the article does note that having the keyboard in such close proximity makes the technology kinda superfluous, it occurs to me that gesture input is not the only use case for this software. It could, in fact, be the least useful one. That's not to detract from the excellent work done by the researchers and developers of this technology in any way at all. I simply think there may be alternative uses for it. Outside of the military, that is, where sonar and radar have been used extensively since around World War I.
At first I considered using Kinect in conjunction with SoundWave - would it potentially improve the accuracy and user experience of Kinect? Possibly. Probably not. I also considered how one might use this for improving accessibility (could it?). Then it occurred to me - maybe we're looking at this from the wrong direction? Instead of using the SoundWave/Doppler shift as a means to provide input (me --> device), what if we used it as a means of keeping us aware of our ambient surroundings (device --> me)?
Consider how one could incorporate this with a HUD or Project Glass. While it could prove to be more of a distraction than a help, I can't help but think there is an alternative use case here. Perhaps as a means of alerting us when people invade our personal space? Not likely to be helpful on a crowded bus or street, admittedly.
I'm almost certainly barking up the wrong tree, but I just can't shake the feeling that we're missing something important here. Not in what it currently is, but in what it could be.