Several years ago Microsoft ramped up its recruitment of researchers in voice recognition and other multimodal interactions (touch, vision, voice, position/motion) for human-computer interaction (HCI). Only now are those efforts becoming commercial and reaching the market. Of note is its research in computer vision, which will only trend larger unless limited by privacy issues (Big Brother watching, or Minority Report iris/retinal scans). Microsoft's Surface uses cameras to track interaction with the table interface and is an example of how refined computer vision has become.
It's not much of a stretch to see a future payoff from Microsoft's investment in robotics; I'd say robotics is essentially computing empowered with motion. The payoff (reaching consumers) comes from applying that computer vision to hand-eye coordination and then layering on AI. This is not to say consumer robots don't already exist (the Roomba, for example), but the category has yet to hit critical mass.
Not only are our senses interacting with computing more, but computing devices are increasingly sensing us. For example, xuuk makes a camera with an infrared eye tracker that detects whether you are looking at it. The company markets it as a way for advertisements (billboards, according to Wired magazine) to tell if you are looking at them. Not too long ago, a Wired article described military research on computers scanning our brains to monitor which sensory channel was overloaded or underutilized; the computer would then select the optimal mode of communication with the user (image, text, or audio).
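To give a rough sense of how a camera can gauge attention, here is a minimal sketch of my own (not xuuk's method) that uses OpenCV's stock frontal-face detector as a crude proxy for "someone is facing the display." Real gaze tracking like xuuk's relies on infrared corneal reflection, which this does not attempt.

    import cv2

    # Frontal-face detector bundled with OpenCV (a stand-in for real gaze tracking).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        # A detected frontal face is a crude signal that a viewer is facing the screen.
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("attention", frame)
        if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()

An advertiser's version would simply count frames in which at least one face is present to estimate how long people looked at the display.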