
Thursday, December 9, 2010

Pranav Mistry Evolution

'SixthSense' is a wearable gestural interface that augments the physical world around us with digital information and lets us use natural hand gestures to interact with that information.


We've evolved over millions of years to sense the world around us. When we encounter something, someone or some place, we use our five natural senses to perceive information about it; that information helps us make decisions and choose the right actions to take. But arguably the most useful information for making the right decision is not naturally perceivable with our five senses: the data, information and knowledge that mankind has accumulated about everything, which is increasingly available online. Although the miniaturization of computing devices allows us to carry computers in our pockets, keeping us continually connected to the digital world, there is no link between our digital devices and our interactions with the physical world. Information is traditionally confined to paper or to a screen. SixthSense bridges this gap, bringing intangible digital information out into the tangible world and allowing us to interact with it via natural hand gestures. 'SixthSense' frees information from its confines by seamlessly integrating it with reality, making the entire world your computer.

The SixthSense prototype comprises a pocket projector, a mirror and a camera, coupled in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) on the tips of the user's fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is constrained only by the number of unique fiducials, so SixthSense also supports multi-touch and multi-user interaction.
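As a rough illustration of this tracking step, here is a minimal sketch in Python using OpenCV (version 4 API). It is not the SixthSense code: the HSV color ranges, the camera index and the marker names are all assumptions made for illustration.

    import cv2
    import numpy as np

    # Illustrative HSV ranges for four marker colors, one per tracked fingertip.
    MARKER_RANGES = {
        "red":    ((0, 120, 70),   (10, 255, 255)),
        "green":  ((40, 80, 70),   (80, 255, 255)),
        "blue":   ((100, 120, 70), (130, 255, 255)),
        "yellow": ((20, 120, 70),  (35, 255, 255)),
    }

    def track_fiducials(frame):
        """Return the (x, y) centroid of each visible colored marker."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        positions = {}
        for name, (lo, hi) in MARKER_RANGES.items():
            mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                c = max(contours, key=cv2.contourArea)   # largest blob = marker
                m = cv2.moments(c)
                if m["m00"] > 0:
                    positions[name] = (int(m["m10"] / m["m00"]),
                                       int(m["m01"] / m["m00"]))
        return positions

    cap = cv2.VideoCapture(0)   # the pendant camera; device index is an assumption
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Per-frame marker positions would feed the gesture interpreter here.
        print(track_fiducials(frame))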

The SixthSense prototype implements several applications that demonstrate the usefulness, viability and flexibility of the system. The map application lets the user navigate a map displayed on a nearby surface using hand gestures similar to those supported by multi-touch systems, zooming in, zooming out or panning with intuitive hand movements. The drawing application lets the user draw on any surface by tracking the fingertip movements of the user's index finger. SixthSense also recognizes the user's freehand gestures (postures). For example, the system implements a gestural camera that photographs the scene the user is looking at by detecting the 'framing' gesture; the user can then stop at any surface or wall and flick through the photos he or she has taken. SixthSense also lets the user draw icons or symbols in the air with the index finger and recognizes those symbols as interaction instructions: drawing a magnifying-glass symbol takes the user to the map application, while drawing an '@' symbol lets the user check mail. The system also augments the physical objects the user is interacting with by projecting additional information onto them. For example, a newspaper can show live video news, dynamic information can be provided on a regular piece of paper, and drawing a circle on the user's wrist projects an analog watch.
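How fiducial movements might map onto the map application's zoom and pan gestures can be sketched in a few lines. This toy classifier is one plausible interpretation scheme, not the prototype's actual logic; the function name and threshold are assumptions.

    import math

    def interpret(prev, curr, threshold=5.0):
        """Classify the motion of two tracked fingertips between frames."""
        spread_change = math.dist(curr[0], curr[1]) - math.dist(prev[0], prev[1])
        if spread_change > threshold:
            return "zoom in"      # fingertips moving apart
        if spread_change < -threshold:
            return "zoom out"     # fingertips moving together
        # Both fingertips translating in the same direction reads as a pan.
        dx = sum(c[0] - p[0] for p, c in zip(prev, curr)) / 2
        dy = sum(c[1] - p[1] for p, c in zip(prev, curr)) / 2
        if math.hypot(dx, dy) > threshold:
            return ("pan", dx, dy)
        return None

    # Two fingertip markers move apart between frames -> "zoom in".
    print(interpret([(100, 100), (200, 100)], [(90, 100), (210, 100)]))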





SPARSH (स्पर्श) lets you conceptually transfer media from one digital device to your body and pass it to another digital device through simple touch gestures.

Our digital devices – laptop, TV, smartphone, e-book reader and more – now rely on the cloud, the cloud of information. SPARSH explores a novel interaction method to seamlessly transfer content between these devices in a fun way using the underlying cloud. Here is how it works: touch whatever you want to copy, and it is conceptually saved in you. Next, touch the device where you want to paste or pass the saved content.



So, what can you do with SPARSH? Imagine you receive a text message from a friend with his address. You touch the message and it is conceptually copied into you – your body. Now you pass that address to the search bar of Google Maps in your laptop's web browser by simply touching it. Want to see some pictures from your digital camera on your tablet? Select the pictures you want to copy by touching them on the camera's display screen, then pass them to your tablet by touching its screen. Or watch a video from your Facebook wall on your TV by copying it from your phone. SPARSH uses touch-based interactions only as indications of what to copy, from where, and where to pass it; technically, the actual magic (the transfer of media) happens in the cloud.
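Conceptually, the cloud acts as a per-user clipboard. The toy sketch below captures that idea in Python; the class, method names and sample data are placeholders, and the real SPARSH service, its protocol and its storage are not described in this post.

    class CloudClipboard:
        """Per-user clipboard living 'in the cloud' (here, just a dict)."""
        def __init__(self):
            self._store = {}

        def copy(self, user_id, payload):
            # Touching content on device A saves it against the user's identity.
            self._store[user_id] = payload

        def paste(self, user_id):
            # Touching device B retrieves whatever that user last copied.
            return self._store.get(user_id)

    cloud = CloudClipboard()
    # Phone: the user touches the SMS containing the address.
    cloud.copy("user-123", {"type": "text", "data": "221B Baker Street"})
    # Laptop: the user touches the Google Maps search bar.
    print(cloud.paste("user-123"))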










Mouseless is an invisible computer mouse that provides the familiar interaction of a physical mouse without requiring any actual mouse hardware.

As the computer mouse has remained largely unchanged over the last few decades, we have become increasingly proficient at operating the two-button mouse. Recently, various multitouch and gestural interaction technologies have been explored as alternative means of interacting with a computer. Despite these advances in computing hardware, the two-button computer mouse has remained the predominant means of interaction. Mouseless removes the requirement of a physical mouse altogether while still providing the intuitive interaction of a physical mouse that we are familiar with. Mouseless consists of an infrared (IR) laser beam (with a line cap) and an IR camera, both embedded in the computer. The laser module is placed such that its line cap creates a plane of IR light just above the surface the computer sits on. The user cups their hand as if a physical mouse were present underneath, and the laser beam lights up the parts of the hand in contact with the surface. The IR camera detects these bright IR blobs using computer vision. Changes in the position and arrangement of the blobs are interpreted as mouse cursor movement and mouse clicks. As the user moves their hand, the cursor on screen moves accordingly; when the user taps their index finger, the size of the corresponding blob changes and the camera recognizes the intended mouse click.
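A rough sketch of that detection loop in Python with OpenCV follows. The brightness threshold, camera index and the area-jump click heuristic are illustrative assumptions, not details of the actual prototype.

    import cv2

    cap = cv2.VideoCapture(0)   # the IR camera; device index is an assumption
    prev_centroid, prev_area = None, None

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Fingers intersecting the laser plane appear as bright IR blobs.
        _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        c = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(c)
        m = cv2.moments(c)
        if m["m00"] == 0:
            continue
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        if prev_centroid is not None:
            dx, dy = cx - prev_centroid[0], cy - prev_centroid[1]
            print("move cursor by", dx, dy)   # hand motion -> cursor motion
            # A sudden jump in blob size reads as an index-finger tap (a click).
            if prev_area and area > 1.5 * prev_area:
                print("click")
        prev_centroid, prev_area = (cx, cy), area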

As we improve our computer vision algorithms, an extensive library of gestures could be implemented in addition to mouse movement and clicks. Typical multitouch gestures, such as zooming in and out, as well as novel gestures, such as balling one's fist, are all possible. In addition, the use of multiple laser beams would allow recognition of a wider range of free-hand motions, enabling novel gestures that a hardware mouse cannot support.

We implemented a fully functional prototype of 'Mouseless' that costs approximately $20 to build.





