Thursday, April 19, 2007

Escalating from small screens to large screens: Patrick Baudisch talk

Today Patrick Baudisch from Microsoft Research gave a talk in DCS on escalating from small screens to large screens. The abstract of his talk is below.

Abstract

The range of available computing devices continues to grow.
One way of classifying these devices is by their display size,
ranging from smart personal objects, such as phones and watches, to
wall-size displays. The new devices have the potential to radically
reshape the way we think and work. At this point, however, it is
still an open question how these devices will come together to create
that ubiquitous infrastructure.

In my vision, devices of different screen sizes will be joined into a
single "escalation" hierarchy: Whenever possible, users will use the
most available, least encumbering device. Only when the most
available device fails will users escalate to the next larger and
more powerful device. I envision a future in which many users will
conduct the vast majority of their information access on a device the
size of a mobile phone. On the other hand, users will escalate to
very large and powerful devices, such as multi-display systems or
wall displays, if complex sensemaking activities require them to.

In this talk, I present a framework of visualization and interaction
techniques that combines devices into a single integrated device
hierarchy. Its ultimate goal is to enable users to begin a thought
process on one device and to continue it on another one using a
single unified type of interaction.

Here are the notes I took during the talk:

He mentioned that more cell phones are now sold than PCs. The mantra is to use the most available device, which is what ubiquitous computing is all about. The challenge, however, is porting from a small screen to a large screen. Sometimes you want information displayed on a wall display instead of on the tiny screen of a cell phone. In fact, many people, software developers among them, have more than one monitor, and he cited an article showing the performance benefits of a larger screen. You want a consistent experience when moving from a small screen to a large screen.

This type of research is, in my opinion, ambient computing. According to Patrick, what we want is high resolution at the focus of attention and low resolution in peripheral vision. Only content physically close to the user can be perceived at full resolution, so the human, not the hardware, is the limiting factor. Patrick showed a demo called drag-and-pop: when the user starts dragging a file, likely target folders are highlighted and temporary copies of them pop up close to the cursor, so the user can complete the drop without physically dragging the mouse all the way to the actual folder. This is focus + context interaction: bring the content to the user, let the user interact with it, and send the content back. It matters especially on large displays, where you don't want users fatigued from stretching their arms or walking across the display.
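To make the drag-and-pop idea concrete, here is a minimal sketch of how proxy placement could work. This is my own reading of the technique, not Baudisch's actual implementation; the function names, the candidate cone angle, and the reach distance are all assumptions.

```python
import math

# Hypothetical sketch of drag-and-pop style proxy placement.
# The 30-degree cone and the 80 px reach are my own assumptions,
# not values from the talk or the published paper.

def unit(vx, vy):
    """Normalize a 2D vector; return (0, 0) for the zero vector."""
    n = math.hypot(vx, vy)
    return (vx / n, vy / n) if n else (0.0, 0.0)

def pick_candidates(cursor, drag_dir, targets, max_angle_deg=30.0):
    """Keep targets that lie roughly in the direction of the drag."""
    dx, dy = unit(*drag_dir)
    min_cos = math.cos(math.radians(max_angle_deg))
    chosen = []
    for tx, ty in targets:
        ux, uy = unit(tx - cursor[0], ty - cursor[1])
        if ux * dx + uy * dy >= min_cos:
            chosen.append((tx, ty))
    return chosen

def proxy_positions(cursor, candidates, reach=80.0):
    """For each candidate target, compute a temporary proxy position
    within easy reach of the cursor, along the line to the real target."""
    proxies = []
    for i, (tx, ty) in enumerate(candidates):
        ux, uy = unit(tx - cursor[0], ty - cursor[1])
        r = reach + 30.0 * i  # stagger proxies so they don't overlap
        proxies.append(((tx, ty), (cursor[0] + ux * r, cursor[1] + uy * r)))
    return proxies

# Example: dragging up and to the right on a 1920x1080 desktop
# (y grows downward, as in most windowing systems).
cursor = (200.0, 900.0)
targets = [(1800.0, 100.0), (1700.0, 950.0), (150.0, 100.0)]
candidates = pick_candidates(cursor, (1.0, -1.0), targets)
for real, proxy in proxy_positions(cursor, candidates):
    print(f"target {real} -> proxy at ({proxy[0]:.0f}, {proxy[1]:.0f})")
```

The staggering here is only to keep proxies from piling onto one spot; a real implementation would preserve the targets' relative layout so the user can tell the proxies apart.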

This raises the question: will this make users lazier? Witness the success of the Nintendo Wii, which owes part of its appeal to making users move and get some exercise while playing. For small displays we also want focus + context, but the context is imaginary: it lies off screen. For example, he showed a demo of a mapping application on a Compaq iPaq handheld, where arcs at the screen edge (called halo rings) indicate off-screen locations of interest; each arc is part of a ring centered on the location, so its curvature tells the user roughly how far away it is.
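The curvature cue works because each arc is a slice of a ring centered on the off-screen location. Here is a small sketch of the radius computation, assuming a (left, top, right, bottom) viewport and a 20-pixel intrusion; both are my assumptions, not values from the demo.

```python
import math

# Sketch of the halo geometry as I understand it (not the published
# implementation): each off-screen point of interest becomes the center
# of a ring whose radius is chosen so the ring just intrudes a fixed
# number of pixels into the screen. The visible arc's curvature then
# encodes distance: tight arc = near, nearly flat arc = far.

def halo_radius(point, viewport, intrusion=20.0):
    """Radius of a ring centered at the off-screen `point` (x, y) that
    reaches `intrusion` pixels past the nearest edge of `viewport`,
    given as (left, top, right, bottom)."""
    x, y = point
    left, top, right, bottom = viewport
    dx = max(left - x, 0.0, x - right)   # horizontal distance to viewport
    dy = max(top - y, 0.0, y - bottom)   # vertical distance to viewport
    return math.hypot(dx, dy) + intrusion

viewport = (0.0, 0.0, 240.0, 320.0)   # roughly iPaq-sized screen
print(halo_radius((300.0, 160.0), viewport))   # near point -> 80.0 (tight arc)
print(halo_radius((2000.0, 160.0), viewport))  # far point -> 1780.0 (flat arc)
```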

In summary, human factors cause large screens to fall apart into a focus region and a context region, and this split shapes both visualization and interaction. By the way, Patrick's talk in PPT form is right here.
