Handpose - Look Ma, No Keyboard!

Discussion in 'Live RSS Feeds' started by News, Apr 28, 2015.

  1. News

    Today's inspirational post shows something that could become awesome... We all gesture at our PCs; it would be awesome if our PCs understood those gestures (then again, maybe not... lol :)

    All hands, no keyboard: New technology can track detailed hand motion



    Let’s say you speak sign language and are trying to communicate with someone who doesn’t. Imagine a world in which a computer could track your hand motions to such a detailed degree that it could translate your sign language into the spoken word, breaking down a substantial communication barrier.

    Researchers at Microsoft have developed a system that can track – in real time – all the sophisticated and nuanced hand motions that people make in their everyday lives.

    The Handpose system could eventually be used by everyone from law enforcement officials directing robots into dangerous situations to office workers who want to sort through e-mail or read documents with a few flips of the wrist instead of taps on a keyboard.

    It also opens up vast possibilities for the world of virtual reality video gaming, said Lucas Bordeaux, a senior research software development engineer with Microsoft Research, which developed Handpose. For one thing, it stands to resolve the disorienting feeling people get when they’re exploring virtual reality and stick their own hand in the frame, but see nothing.

    Microsoft researchers will present the Handpose paper at this year’s CHI conference on human-computer interaction in Seoul, where it has received a Best of CHI Honorable Mention Award.

    Handpose uses a camera to track a person’s hand movements. The system is different from previous hand-tracking technology in that it has been designed to accommodate much more flexible setups. That lets the user do things like get up and move around a room while the camera follows everything from zig-zag motions to thumbs-up signs, in real time.

    The system can use a basic Kinect system, just like many people have on their own Xbox game console at home. But unlike the current home model, which tracks whole body movements, this system is designed to recognize the smaller and more subtle movements of the hand and fingers.

    It turns out, it’s a lot more difficult for the computer to figure out what a hand is doing than to follow the whole body.
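    The difficulty shows up at the data level: the depth camera hands the system one depth image per frame, and the hand occupies only a small patch of it. Below is a minimal sketch (Python, not taken from the Handpose code) of how one might carve that patch out of a Kinect-style depth frame around a rough wrist position before any pose estimation runs; the function name, window size, and depth band are illustrative assumptions.

import numpy as np

def crop_hand_region(depth_mm, wrist_xy, window=96, depth_band_mm=150):
    """Cut a small window around a rough wrist estimate and keep only
    pixels within a narrow depth band, so a hand-pose estimator sees the
    hand rather than the whole body or the background."""
    x, y = wrist_xy
    h, w = depth_mm.shape
    x0, x1 = max(0, x - window), min(w, x + window)
    y0, y1 = max(0, y - window), min(h, y + window)
    patch = depth_mm[y0:y1, x0:x1].astype(np.float32)

    wrist_depth = float(depth_mm[y, x])
    # Zero out anything outside a band around the wrist depth
    # (e.g. the torso behind the hand, or objects in front of it).
    mask = np.abs(patch - wrist_depth) <= depth_band_mm
    return np.where(mask, patch, 0.0)

# Example: a synthetic 424x512 depth frame (Kinect v2 depth resolution)
# with a "hand" blob assumed near pixel (x=300, y=200), ~800 mm away.
frame = np.full((424, 512), 2000, dtype=np.uint16)   # background ~2 m away
frame[180:230, 280:330] = 800                        # synthetic hand blob
hand_patch = crop_hand_region(frame, wrist_xy=(300, 200))
print(hand_patch.shape, int((hand_patch > 0).sum()), "hand pixels kept")

    In practice the rough wrist estimate could plausibly come from the full-body skeleton that Kinect already tracks, which is one way the whole-body and hand-level systems could complement each other.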

    ...

    In the long run, the ability for computers to understand hand motions also will have important implications for the future of artificial intelligence, said Jamie Shotton, a principal researcher in computer vision who worked on the project.

    That’s because it provides another step toward helping computers interpret our body language, including everything from what kind of mood we are in to what we want them to do when we point at something.

    In addition, the ability for computers to understand more nuanced hand motions could make it easier for us to teach robots how to do certain things, like open a jar.

    “The whole artificial intelligence space gets lit up by this,” Shotton said.


    Project Information URL: http://blogs.microsoft.com/next/2015/04/17/all-hands-no-keyboard-new-technology-can-track-detailed-hand-motion/

    Accurate, Robust, and Flexible Real-time Hand Tracking

    Abstract


    We present a new real-time hand tracking system based on a single depth camera. The system can accurately reconstruct complex hand poses across a variety of subjects. It also allows for robust tracking, rapidly recovering from any temporary failures. Most uniquely, our tracker is highly flexible, dramatically improving upon previous approaches which have focused on front-facing close-range scenarios. This flexibility opens up new possibilities for human-computer interaction with examples including tracking at distances from tens of centimeters through to several meters (for controlling the TV at a distance), supporting tracking using a moving depth camera (for mobile scenarios), and arbitrary camera placements (for VR headsets). These features are achieved through a new pipeline that combines a multi-layered discriminative reinitialization strategy for per-frame pose estimation, followed by a generative model-fitting stage. We provide extensive technical details and a detailed qualitative and quantitative analysis.
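    To make the abstract's two-stage pipeline more concrete, here is a rough structural sketch, not the published method: a discriminative "reinitializer" proposes candidate pose vectors fresh from each depth frame, and a generative model-fitting stage scores and refines the best one. Every function body, parameter count, and name below is a placeholder assumption; only the two-stage shape is taken from the abstract.

import numpy as np

NUM_POSE_PARAMS = 28          # assumption: global hand pose + joint angles

def propose_candidate_poses(depth, prev_pose, n=32):
    """Discriminative stage (placeholder): predict a set of plausible pose
    vectors from the current depth frame, keeping the previous frame's pose
    as one candidate so smooth tracking continues when it was already good."""
    candidates = [prev_pose]
    for _ in range(n - 1):
        # Stand-in for a learned predictor: perturb around the prior pose.
        candidates.append(prev_pose + np.random.normal(0, 0.05, NUM_POSE_PARAMS))
    return candidates

def model_fit_energy(depth, pose):
    """Generative stage (placeholder): would render a hand model at `pose`
    and score how well it explains the observed depth image (lower = better).
    Here a dummy energy is used purely so the sketch runs."""
    return float(np.sum(pose ** 2))

def refine(depth, pose, steps=10, lr=0.1):
    """Toy local optimization of the energy by finite differences."""
    for _ in range(steps):
        grad = np.zeros_like(pose)
        for i in range(len(pose)):
            delta = np.zeros_like(pose)
            delta[i] = 1e-3
            grad[i] = (model_fit_energy(depth, pose + delta) -
                       model_fit_energy(depth, pose - delta)) / 2e-3
        pose = pose - lr * grad
    return pose

def track_frame(depth, prev_pose):
    """One frame of tracking: reinitialize, pick the best candidate,
    then refine it with the generative model fit."""
    candidates = propose_candidate_poses(depth, prev_pose)
    best = min(candidates, key=lambda p: model_fit_energy(depth, p))
    return refine(depth, best)

pose = np.zeros(NUM_POSE_PARAMS)          # neutral starting pose
depth_frame = np.zeros((424, 512))        # stand-in depth frame
pose = track_frame(depth_frame, pose)
print("tracked pose vector:", pose.shape)

    The per-frame reinitialization is what buys the robustness the abstract claims: because candidate poses are proposed anew from each image rather than relying only on the previous frame, a few bad frames cannot permanently derail the tracker, matching the stated rapid recovery from temporary failures.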


    Project Information URL: http://research.microsoft.com/apps/pubs/default.aspx?id=238453

    Follow @Coding4Fun
    Follow @KinectWindows
    Follow @gduncan411


