One of the moments in recent gaming history that made the biggest impression on me was Heavy Rain's ARI, the Added Reality Interface.
When I read this Penny Arcade Report article about how the developers of City Quest implemented Oculus Rift support for their 2D game by creating a virtual basement for you to play it in, I was immediately impressed. Lots of developers are coming up with clever ways to use the Rift to create completely new and unique experiences, but I haven't seen much in the way of using the Rift to augment traditional gaming experiences; the Ibex Virtual Reality Desktop project is one of the few examples. In City Quest's Rift edition, you can look around and see the computer the game is running on, decorations from 80s nerd culture, the doorway leading out of the room, and even hands (though lacking virtual arms) using the mouse and keyboard:
While the dev's reddit thread announcing this feature calls it the "silliest way they could think of" to implement support for the Rift, I believe that the concept of using the Rift to create virtual spaces where we can interact with augmented reality-style interfaces has a lot of potential.
With only a Rift, you're limited to a traditional form of input like keyboard/mouse or a game controller. While wearing the Rift, when you look down you see what's in the game, not what's on your desk: you're totally blind to where the real-world keyboard is, or where the keys are on it. If you want to move your hand from the mouse to the keyboard or press a hard-to-reach key like F10, you have to grope around and rely on your proprioceptive memory. However, if there were a way to project your real keyboard, mouse, and hands into the virtual space, those clumsy moments would be no worse than glancing down in real life.
3Gear Systems and Leap Motion are two of several companies already working on machine vision-based products that could be used to provide this solution. A camera looking down at your hands could conceivably track not only the position of your hands, but also the devices they interface with. I believe keyboard layouts are standardized enough (a main large block of letters and numbers, function keys clustered in fours, and so on) that the positions of the printed letters, plus some intelligence, could automatically (or with some guided manual calibration) produce a virtual representation of any given keyboard. Mice would be harder to represent accurately, since there are so many variations of mouse contours, button sizes, and side buttons that would be hard to precisely identify with an overhead camera. However, I believe the main difficulty with using a mouse with the Rift would be simply locating it with your hand, so a simple representation (or perhaps one chosen from a selection of popular models) should suffice.
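To make the keyboard-calibration idea concrete, here is a minimal sketch of how it might work. Assume a camera pipeline has already recognized a few printed key labels and their pixel positions (the `detected` list below is entirely hypothetical data, not output from any real product). Because a keyboard's layout is standardized, fitting a similarity transform (scale, rotation, translation) from a canonical layout to those few detections lets you predict where every other key sits in camera space:

```python
import math

# Canonical key centers in "key units" (x = column, y = row) -- a
# simplified QWERTY grid; real keyboards stagger rows slightly.
CANONICAL = {
    "Q": (0.0, 0.0), "W": (1.0, 0.0), "E": (2.0, 0.0), "R": (3.0, 0.0),
    "A": (0.25, 1.0), "S": (1.25, 1.0), "D": (2.25, 1.0),
    "Z": (0.75, 2.0), "X": (1.75, 2.0), "C": (2.75, 2.0),
}

def fit_similarity(pairs):
    """Least-squares similarity transform (scale, rotation, translation)
    from canonical coords to camera coords: a 2D Procrustes fit."""
    src = [CANONICAL[k] for k, _ in pairs]
    dst = [p for _, p in pairs]
    n = len(pairs)
    # Centroids of both point sets.
    mx = sum(x for x, _ in src) / n; my = sum(y for _, y in src) / n
    ux = sum(x for x, _ in dst) / n; uy = sum(y for _, y in dst) / n
    # Cross-covariance terms of the centered point sets.
    a = b = var = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx, sy, dx, dy = sx - mx, sy - my, dx - ux, dy - uy
        a += sx * dx + sy * dy      # cosine-aligned component
        b += sx * dy - sy * dx      # sine-aligned component
        var += sx * sx + sy * sy
    s = math.hypot(a, b) / var      # best-fit scale
    theta = math.atan2(b, a)        # best-fit rotation
    c, si = math.cos(theta), math.sin(theta)
    tx = ux - s * (c * mx - si * my)
    ty = uy - s * (si * mx + c * my)
    def transform(p):
        x, y = p
        return (s * (c * x - si * y) + tx, s * (si * x + c * y) + ty)
    return transform

# Hypothetical detections: three key labels found by the overhead camera.
detected = [("Q", (100.0, 50.0)), ("R", (160.0, 50.0)), ("Z", (115.0, 90.0))]
to_camera = fit_similarity(detected)
# Predict the camera-space position of any undetected key, e.g. "C":
print(to_camera(CANONICAL["C"]))  # approximately (155.0, 90.0)
```

In practice you'd need more than three detections to be robust against OCR errors, and per-keyboard quirks (split layouts, compact boards) are exactly where the "guided manual calibration" fallback would come in.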
Once you're set up in your augmented reality Rift office, how could you take advantage of working in virtual reality? I have more ideas about that which this blog post is too narrow to contain.