An Imaginary Interface

Today, Technology Review is running a story of mine about a type of interface for mobile phones in which a person gestures with her hands instead of tapping buttons or icons. The gestures, which could trace simple line drawings such as a graph or directions, are detected by a small camera and sent to the screen of the person on the other end of the call. It’s essentially a way of supplementing a phone conversation with gestures, much as you would in a face-to-face conversation. The project is pretty cool, and it may get more people thinking about how we will interact with devices when they shrink down to almost nothing (and lose their displays as a result).

The TR story is here. But I also wanted to include a video that illustrates the technology, as well as a longer version of the written piece that I think provides a little more context for the reader.

The longer version of the TR story is below:

An Invisible Touch for a Screenless Device

“Imaginary Interfaces” make it possible to interact with a mobile phone that doesn’t have a screen.

Kate Greene

For better or for worse, the main way to interact with a mobile phone is to look at the phone and tap it with your fingers. This isn’t always the most practical approach, however, as it requires focused visual attention. In addition, it limits the size of the device: it must be at least big enough for a person to see without strain and to press buttons and icons without ambiguity. But some researchers are exploring new ways to interact with mobile devices that could allow the gadgets to continue to shrink to the point where they become a part of our clothing or accessories.

One project, called Imaginary Interfaces, has been proposed by Patrick Baudisch, professor of computer science at the Hasso-Plattner Institute in Potsdam, Germany, and his research student, Sean Gustafson. The main idea is that a person can augment a phone conversation with someone (providing a rudimentary drawing of a chart or map, for instance) by simply tracing lines and curves with her fingers in the air. It’s similar to what a person might do in a face-to-face conversation, but the difference is that a small video camera and a microprocessor capture and analyze the hand gestures and, via a wireless connection, send a replicated drawing to the recipient’s computer display.
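To make that pipeline concrete, here is a minimal sketch of what the capture-and-send side might look like, written in Python. It is entirely illustrative: the prototype’s actual code isn’t published, and the recipient address, message format, and the `track_fingertip` stub are all assumptions. The idea is simply that a tracker pulls a fingertip position out of each camera frame and ships it to the far end, which redraws the stream of points as a stroke.

```python
import json
import socket

# Illustrative sketch only: all names and formats here are assumptions,
# not the Imaginary Interfaces prototype's actual implementation.

RECIPIENT = ("192.0.2.10", 9999)  # placeholder address for the far end


def track_fingertip(frame):
    """Stand-in for the camera-based tracker: return an (x, y) fingertip
    position in image coordinates, or None if no finger is visible.
    A real system would do the computer vision here."""
    raise NotImplementedError


def stream_trace(frames):
    """Send each detected fingertip position to the recipient as a small
    JSON datagram; the receiving side replays the points as a drawing."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for frame in frames:
        point = track_fingertip(frame)
        if point is not None:
            message = json.dumps({"x": point[0], "y": point[1]})
            sock.sendto(message.encode(), RECIPIENT)
```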

“Screen devices require a certain minimum size,” says Baudisch, with the limiting factor being human eyesight. But unlike a device with a screen or even a touch screen, a device that runs an imaginary interface could, Baudisch says, fit into the button of a shirt and be embedded invisibly into clothing. He and Gustafson have built a prototype of an Imaginary Interface device in which the camera is about the size of a large brooch, but he predicts that within a few years components will shrink enough to make such a system significantly smaller.

The idea of doing away with a touch screen, keyboard, or physical input device of any kind has fascinated user-interface designers and scientists for years. In fact, Sony currently offers EyeToy, a video camera that captures gestures for its PlayStation game consoles, and Microsoft is promising its own gesture-based system for the Xbox 360 called Project Natal. Another gesture-based project, SixthSense, has attracted significant attention recently. The SixthSense prototype, developed by Pattie Maes and Pranav Mistry, uses a wearable camera to record a person’s gestures and a small projector to create an ad hoc display on any surface.

The Imaginary Interfaces prototype, in contrast, is designed to be used without the immediate feedback of a display. Instead, it relies on the short-term visual memory of the person using it. To use it, a person “opens up” an interface by making an “L” shape with either hand. This creates a two-dimensional spatial surface, a boundary for the forthcoming finger traces.
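That hand-defined surface is the technically interesting part, and it can be sketched in a few lines. The following Python is a toy illustration, not the authors’ code: it treats the corner of the “L” as the origin, the thumb as one axis, and the forefinger as the other, then expresses each traced point in hand-relative coordinates. It assumes the tracker supplies (x, y) positions and that the thumb and forefinger are roughly perpendicular, so a simple dot-product projection is an adequate approximation.

```python
def unit(v):
    """Scale a 2-D vector to unit length."""
    length = (v[0] ** 2 + v[1] ** 2) ** 0.5
    return (v[0] / length, v[1] / length)


def l_frame(origin, thumb_tip, finger_tip):
    """Build the axes of the space framed by the L-shaped hand.
    All three inputs are (x, y) positions from a hypothetical tracker."""
    x_axis = unit((thumb_tip[0] - origin[0], thumb_tip[1] - origin[1]))
    y_axis = unit((finger_tip[0] - origin[0], finger_tip[1] - origin[1]))
    return x_axis, y_axis


def to_hand_coords(point, origin, x_axis, y_axis):
    """Express a traced point relative to the hand's axes, so strokes stay
    anchored to the hand even if the hand or the camera drifts."""
    d = (point[0] - origin[0], point[1] - origin[1])
    return (d[0] * x_axis[0] + d[1] * x_axis[1],
            d[0] * y_axis[0] + d[1] * y_axis[1])
```

One plausible reason for a hand-relative frame is exactly the reliance on short-term visual memory described above: if earlier strokes are stored relative to the hand, they stay where the user remembers them even as her hand moves.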

Baudisch says that a person could use this space to clarify spatial situations such as where a player was when she scored a goal, how a person gets to a building, or how an article should be laid out. “Users start drawing in midair,” he says. “There is no setup effort here, no need to whip out a mobile device or stylus.” A paper detailing the setup and user studies will be presented at the 2010 Symposium on User Interface Software and Technology in New York in October.

There is, of course, the requirement that at least one person must have a screen available in order to see the midair finger trace, says Andy Wilson, a senior researcher at Microsoft who led the development of Surface, the touch-screen tabletop. Wilson and Baudisch have collaborated on projects before, but they are not working together on the current research. “I think it’s quite interesting in the sense that it really is the ultimate in thinking about when devices shrink down to nothing, when you don’t even have a display,” says Wilson.

The approach can draw on people’s natural inclination to use their hands to frame space and set context for an interaction, he says. “That’s a quite powerful concept and it hasn’t been explored much,” Wilson adds. “I think they’re on to something.”
