But this may be the first true watershed year for gesture-based computing. At the E3 conference earlier this summer, both Sony and Microsoft showed off gesture-based interfaces for their next-generation gaming systems.
As with many other tech advances, I expect gaming to be one of the first areas to feel a big impact from gesture-based computing: moving your hands and head to control a game just makes sense for many (but not all) games.
But games won't be the only place gesture-based computing makes a difference. Many companies are exploring real-world uses as well.
I recently spoke with GestureTek, which has built a very interesting implementation of gesture-based computing for a Japanese amusement park. In the staging room visitors pass through before boarding one of the park's rides, GestureTek's system recognizes each visitor and lets them interact with the images displayed there. I was intrigued by this, especially by its ability to identify and separate multiple users within a single room.
While this is yet another entertainment use of gesture-based computing, it points to many other possibilities for the technology, such as retail environments, security systems, and enhanced virtual conferencing.
I also expect the cost and complexity of gesture-based computing to keep falling. The gaming-console implementations are one example of a lower barrier to entry, and an MIT researcher recently demonstrated a form of gesture-based computing that used nothing more than a standard webcam and a $1 multi-colored glove.
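The colored-glove approach works because a distinctively colored glove is easy to pick out of a webcam frame by color alone. As a rough illustration (not the MIT system's actual method, and with the glove color, threshold, and frame all invented for the example), here is a minimal sketch that classifies pixels by their distance to a reference glove color and locates the glove at the centroid of the matching pixels:

```python
# Hypothetical sketch of color-based glove tracking: find pixels close
# to a known glove color, then report their centroid as the hand position.
# The color values and threshold below are placeholders, not real data.

GLOVE_COLOR = (220, 40, 40)   # assumed bright-red glove patch (RGB)
THRESHOLD = 60                # max summed per-channel distance to count as glove

def color_distance(a, b):
    """Manhattan distance between two RGB colors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def find_glove(frame):
    """Return the (row, col) centroid of glove-colored pixels, or None."""
    hits = [(r, c)
            for r, row in enumerate(frame)
            for c, px in enumerate(row)
            if color_distance(px, GLOVE_COLOR) < THRESHOLD]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# Tiny synthetic 4x4 "webcam frame": grey background with two
# glove-colored pixels in column 2.
BG = (128, 128, 128)
frame = [[BG] * 4 for _ in range(4)]
frame[1][2] = (215, 45, 38)
frame[2][2] = (225, 35, 42)

print(find_glove(frame))  # → (1.5, 2.0)
```

A real system would run this per video frame at camera resolution (and the MIT glove used many colored patches so the full hand pose, not just position, could be recovered), but the core idea is this kind of cheap color segmentation.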
Of course, we aren't going to control everything on a computer with our hands and faces. I still expect touch interfaces, keyboards, and mice to be around for a while. But gesture-based interfaces will open up a whole new frontier in computing.