Wednesday, October 1, 2014

Two Extremes of Touch Interaction

Microsoft Research Redmond researchers Hrvoje Benko and Scott Saponas have been investigating the use of touch interaction in computing devices since the mid-’00s. Now, two sharply different yet related projects demonstrate novel approaches to the world of touch and gestures.
Wearable Multitouch Interaction gives users the ability to make an entire wall a touch surface, while PocketTouch enables users to interact with a smartphone while it stays inside a pocket or purse, the opposite extreme in touch surface area. Both projects will be unveiled during UIST 2011, the Association for Computing Machinery's 24th Symposium on User Interface Software and Technology, being held Oct. 16-19 in Santa Barbara, Calif.

Make Every Surface a Touch Screen

Wearable Multitouch Interaction turns any surface in the user's environment into a touch interface. A paper co-authored by Chris Harrison, a Ph.D. student at Carnegie Mellon University and a former Microsoft Research intern; Benko; and Andy Wilson describes a wearable system that enables graphical, interactive, multitouch input on arbitrary, everyday surfaces.
The Wearable Multitouch Interaction prototype is built to be wearable: a novel combination of a laser-based pico projector and a depth-sensing camera. The camera is an advanced, custom prototype provided by PrimeSense. Once the camera and projector are calibrated to each other, the user can don the system and begin using it.

"We wanted to capitalize on the tremendous surface area the real world provides," explains Benko, of the Natural Interaction Research group. "The surface area of one hand alone exceeds that of typical smartphones. Tables are an order of magnitude larger than a tablet computer. If we could appropriate these ad hoc surfaces in an on-demand way, we could deliver all of the benefits of mobility while expanding the user's interactive capability."
"This custom camera works on a similar principle to Kinect," Benko says, "but it is modified to work at short range. This camera and projector combination simplified our work because the camera reports depth in world coordinates, which are used when modeling a particular graphical world; the laser-based projector delivers an image that is always in focus, so we didn't need to calibrate for focus."
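To make that remark concrete, here is a minimal sketch, not from the project's code, of why world-coordinate depth simplifies the pipeline: each depth pixel back-projects to a 3D point through pinhole intrinsics, and a calibrated projector matrix then maps that point to projector pixels. The intrinsic values, function names, and the 3x4 projector matrix are illustrative assumptions, not details from the paper.

    # Illustrative sketch only: assumed pinhole intrinsics for a short-range depth camera.
    import numpy as np

    FX, FY, CX, CY = 285.0, 285.0, 160.0, 120.0   # assumed focal lengths and principal point

    def depth_pixel_to_point(u, v, depth_m):
        """Back-project a depth pixel (u, v) with depth in meters to a 3D point."""
        x = (u - CX) * depth_m / FX
        y = (v - CY) * depth_m / FY
        return np.array([x, y, depth_m])

    def point_to_projector_pixel(point_3d, projector_matrix):
        """Map a 3D point into projector pixels via an assumed calibrated 3x4 projection matrix."""
        homogeneous = np.append(point_3d, 1.0)
        px, py, pw = projector_matrix @ homogeneous
        return px / pw, py / pw

Because both devices are expressed in the same metric frame, projected graphics can be aligned with whatever the camera sees without per-surface focus or scale calibration.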
The early phases of this work raised some metaphysical questions: if any surface can act as an interactive surface, then what does the user interact with, and what is the user interacting on? The team also debated the notion of turning everything in the environment into a touch surface. Sensing touch on an arbitrary, deformable surface was a difficult problem that no one had tackled before: touch surfaces are usually highly engineered devices, yet the team wanted to turn walls, notepads, and hands into interactive surfaces while enabling the user to move about. The researchers agree that the first three weeks of the project were the most challenging.
Harrison recalls their early brainstorming sessions. "We had to assume it was possible," he recalls, "then go about defining the system and its interactions, then conduct initial tests with different technologies to see how we could implement the concept. It was during those initial weeks that we achieved the biggest breakthroughs in our thinking. That was a really exciting stage of research."

OmniTouch on a variety of surfaces

One of the key decisions for Wearable Multitouch Interaction was that the system would interact with fingers. This raised the challenge of finger segmentation: teaching the system what fingers look like so that it can identify fingers, or finger-shaped objects, in the depth data. From that decision followed the notion that any surface underneath those fingers is potentially a projected surface for touch interaction.
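As a rough illustration of what finger segmentation on a depth map can look like, the sketch below scans one row of depth values for short runs bounded on both sides by sharp depth discontinuities whose metric width is plausible for a finger. This is not the authors' algorithm; the thresholds and the caller-supplied millimeters-per-pixel scale are assumptions made for the example.

    import numpy as np

    FINGER_MIN_MM, FINGER_MAX_MM = 5.0, 25.0   # assumed plausible finger widths
    EDGE_JUMP_MM = 20.0                        # assumed depth jump at a finger's silhouette edge

    def finger_slices_in_row(depth_row_mm, mm_per_pixel):
        """Return (start, end) pixel runs in one depth-map row that look finger-like:
        short runs of near depth bounded by sharp discontinuities on both sides.
        mm_per_pixel is an approximate scale at the hand's depth (varies with range)."""
        slices = []
        diffs = np.diff(depth_row_mm)
        falling = np.where(diffs < -EDGE_JUMP_MM)[0]   # background -> finger edge
        rising = np.where(diffs > EDGE_JUMP_MM)[0]     # finger -> background edge
        for start in falling:
            ends = rising[rising > start]
            if len(ends) == 0:
                continue
            end = ends[0]
            width_mm = (end - start) * mm_per_pixel
            if FINGER_MIN_MM <= width_mm <= FINGER_MAX_MM:
                slices.append((start + 1, end))
        return slices

Grouping such slices across neighboring rows would yield candidate fingers that the system can then track from frame to frame.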
Then came the next problem: click detection. How can the system detect a touch when the surface being touched contains no sensors?

"In this case, we're detecting proximity at a very fine level," Benko explains. "The system decides the finger is touching the surface if it's close enough to constitute making contact. This was fairly tricky, and we used a depth map to determine proximity. In practice, a finger is seen as 'clicked' when its hover distance drops to one centimeter or less above a surface, and we even manage to maintain the clicked state for dragging operations."
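Benko's description maps naturally onto a small per-finger state machine: a finger becomes clicked when its hover distance above the sensed surface falls to about one centimeter, and a slightly higher release threshold keeps the clicked state stable while dragging. The sketch below is an assumption-laden illustration, not the prototype's code; the hysteresis values are invented for the example.

    CLICK_DOWN_CM = 1.0   # from the quote: contact when within ~1 cm of the surface
    CLICK_UP_CM = 2.0     # assumed release threshold, higher than the press threshold (hysteresis)

    class TouchState:
        """Track one finger's click state from its hover distance above the surface."""
        def __init__(self):
            self.is_down = False

        def update(self, hover_distance_cm):
            if not self.is_down and hover_distance_cm <= CLICK_DOWN_CM:
                self.is_down = True        # finger "clicked" the surface
            elif self.is_down and hover_distance_cm > CLICK_UP_CM:
                self.is_down = False       # finger lifted; any drag ends
            return self.is_down

The gap between the press and release thresholds is what lets a drag survive small, noisy fluctuations in the measured hover distance.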

The Wearable Multitouch Interaction system enables any surface to be used as a touch screen.
One of the more interesting discussions during this project was how to determine where to place the interface surface. The team explored two approaches. The first was a classification-driven model in which the system classified specific objects that could be used as a surface: a hand, an arm, a notepad, or a wall. This required creating a machine-learning classifier to learn these objects.
The second approach took a completely user-driven model, enabling the user to finger-draw a working area on any surface in front of the camera/projector system.
"We wanted the ability to use any surface," Benko says. "Let the user define the area where they want the interface to be, and have the system do its best to track it frame to frame. This creates a highly flexible, on-demand user interface. You can tap on your hand or drag your interface out to specify the top-left and bottom-right borders. All this stems from the main idea that if everything around you is a potential interface, then the first action has to be defining an interface area."
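One way to picture the user-driven model is a rectangle dragged out between two touch points and then shifted each frame as the underlying surface moves. The sketch below is purely illustrative; the class name, the drag convention, and the per-frame translation are assumptions for the example, not details from the paper.

    from dataclasses import dataclass

    @dataclass
    class InterfaceRegion:
        """Axis-aligned interface rectangle the user drags out on an arbitrary surface."""
        left: float
        top: float
        right: float
        bottom: float

        @classmethod
        def from_drag(cls, first_touch, last_touch):
            """Build the region from the drag's start and end points (any two opposite corners)."""
            (x0, y0), (x1, y1) = first_touch, last_touch
            return cls(min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))

        def translate(self, dx, dy):
            """Shift the region by the surface's estimated frame-to-frame motion."""
            return InterfaceRegion(self.left + dx, self.top + dy,
                                   self.right + dx, self.bottom + dy)

In practice the tracked region would also need to follow rotation and deformation of the surface, but the on-demand idea is the same: the user's first gesture defines where the interface lives.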

The team stresses that, although the prototype is not as small as they would like it to be, there are no significant barriers to miniaturization and that it is entirely possible that a future version of Wearable Multitouch Interaction could be the size of a matchbox and as easy to wear as a pendant or a watch.
