Hands are, indeed, amazingly good at manipulating real-life objects. Your tactile sense is a highly evolved skill that took millions of collective years to develop and perfect. But tactile input works best when manipulating real-life objects, and most computing tasks are all about manipulating representations of objects. Data, thoughts, symbols — abstractions. The human body is an incredible system that works wonders in a variety of settings, but it isn’t particularly conducive to working in abstractions. (Why do you think charades is so hard?)
Fortunately, your body isn’t the only highly evolved system we have. In fact, we have one that was explicitly developed to deal with these abstractions: language. Hopefully you see where I’m going with this. Siri is a step, of course, but others have been at it for a long time and we still have a long way to go. And it doesn’t stop with voice.
We’re aiming for the future, right? Well, the future is all about removing layers. Touch replaces the mouse, speech will replace touch, and your thoughts will replace speech. (See the progress in the fields of EEG and especially prosthetics if you don’t believe me.)
I don’t want to overstate the case, so maybe “replace” is too strong a word. But each subsequent mode of interaction will certainly reduce the significance of the previous generation. So I’m not discounting touch entirely. There are surely a huge number of applications where we can evolve the Pictures Under Glass interface to gain the advantages of tactile input while maintaining the flexibility and portability of a chip behind a glass screen. But physical objects always have a physical cost, and millions of people are thrilled with the trade-offs we’ve made so far. I just don’t want to sell ourselves short when thinking about the future of interaction design.