Tom Rodden from the University of Nottingham in the UK, who spoke on Interacting in Ubiquitous Computing Environments at the Pervasive conference in Toronto a couple of weeks ago, is speaking at U of T today. So I'm in his talk right now, which is on Ubiquitous Computing in the Real World. There are many synonyms for ubiquitous computing, such as pervasive computing, smart artefacts, and smart spaces. A lot of the vision of ubiquitous computing is technical, but according to Mark Weiser and John Seely Brown of PARC, "what matters is not technology, but its relationship to us." However, since ubiquitous computing involves embedding technology in the environment, does the future of ubiquitous computing mean that every device comes with a user manual? How many of you read the user manual for every new gadget that you get? (Well, I usually don't, 'cause I just want to use it, and I feel stupid if I have to look at a manual.) He showed a picture of a faucet in a US airport accompanied by instructions explaining how to operate it: you go near the faucet to turn it on, and you walk away to turn it off. If you wear black clothing, the faucet does not work.
Tom is the director of the Equator project. One of the research projects he described is Can You See Me Now, an online game that incorporates location and mixed reality. In this game, players carrying PDAs run through the streets of different cities to catch other players, and online players can watch the interactions as they happen. Visualizations of Wi-Fi and GPS coverage expose the gaps in the technology's ability to integrate seamlessly with the real world. What needs to happen is for the infrastructure to be revealed and made a resource for interaction. One of the games they created does exactly that, exploiting wireless access points as resources within the game.
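To make that idea of exposing the infrastructure a bit more concrete, here is a minimal sketch in Python, my own illustration rather than anything from the Equator project. It logs, for each moment of a run, whether a GPS fix was available and which Wi-Fi access points were visible, and then reports the stretches where either one disappeared. The `Sample` structure, field names, and example data are all assumptions made up for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    t: float                                       # seconds since the run started
    gps_fix: bool                                  # did the PDA have a GPS fix here?
    visible_aps: set = field(default_factory=set)  # access points seen in a Wi-Fi scan

def coverage_gaps(samples):
    """Return (start, end, reason) tuples for stretches with no GPS or no Wi-Fi."""
    gaps = []
    gap_start, reason = None, None
    for s in samples:
        missing = None
        if not s.gps_fix:
            missing = "no GPS fix"
        elif not s.visible_aps:
            missing = "no Wi-Fi access points"
        if missing and gap_start is None:
            gap_start, reason = s.t, missing       # a gap begins here
        elif not missing and gap_start is not None:
            gaps.append((gap_start, s.t, reason))  # the gap just ended
            gap_start = None
    if gap_start is not None:                      # run ended while still in a gap
        gaps.append((gap_start, samples[-1].t, reason))
    return gaps

# Example: GPS drops out between t=10 and t=20, and Wi-Fi is gone at the end.
run = [
    Sample(0, True, {"ap-cafe"}),
    Sample(10, False, {"ap-cafe"}),
    Sample(20, True, {"ap-library"}),
    Sample(30, True, set()),
]
print(coverage_gaps(run))  # [(10, 20, 'no GPS fix'), (30, 30, 'no Wi-Fi access points')]
```

Plotting those gap segments along the players' routes is essentially the kind of coverage visualization the talk described: the seams in the infrastructure become something you can see and design around.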
In order to have ubiquitous computing in the real world, Tom says we need to expose the invisible. For developers, what gets in the way of creating location-based applications is the infrastructure: content triggering depends not just on the location but also on the infrastructure. Another neat application they created is Savannah, which turns a playing field into a virtual savannah. Children carry PDAs and see parts of the savannah depending on where they are positioned on the field. But a problem emerged: the children would stop and form a circle. The hidden features of the technology need to be revealed to designers, researchers, and individuals.
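Here is a small sketch of what "content triggering depends on the infrastructure, not just the location" might look like in practice. This is my own illustration, not Savannah's actual code: the zone layout, the `gps_error_m` threshold, and the content names are invented for the example. The point is the second condition in `trigger_content`: even when a player is standing in the right spot, a poor GPS fix silently suppresses the trigger, which is exactly the kind of hidden behaviour that needs to be revealed to designers.

```python
import math

def inside(pos, zone):
    """Is the (x, y) position within `radius` metres of the zone centre?"""
    dx, dy = pos[0] - zone["x"], pos[1] - zone["y"]
    return math.hypot(dx, dy) <= zone["radius"]

def trigger_content(pos, gps_error_m, zones, max_error_m=5.0):
    """Return the content to present, or None.

    The trigger fires only if the reported GPS error is small enough to
    trust the fix AND the position falls inside a content zone.
    """
    if gps_error_m > max_error_m:
        return None                      # infrastructure says: don't trust this fix
    for zone in zones:
        if inside(pos, zone):
            return zone["content"]
    return None

# Hypothetical savannah layout on a playing field (coordinates in metres).
savannah_zones = [
    {"x": 20.0, "y": 35.0, "radius": 10.0, "content": "lion roar"},
    {"x": 60.0, "y": 12.0, "radius": 8.0,  "content": "zebra herd"},
]

print(trigger_content((22.0, 33.0), gps_error_m=3.0, zones=savannah_zones))   # 'lion roar'
print(trigger_content((22.0, 33.0), gps_error_m=12.0, zones=savannah_zones))  # None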
One thing that really struck me, and rings true, is that the more intelligent the technologies, the more they will frustrate. If we look at the original definition of ubiquitous computing, it is not the technology that is important, but its social effect on people. I think this is where social context needs to be exploited.
After he finished his talk, I asked Tom about the option of opting out of a ubiquitous computing system. This comes from Adam Greenfield's keynote at the Pervasive conference, about letting users abort or step back from the system, because sometimes the user may not want to conform to or follow the ubiquitous computing system. Tom's answer was yes, for sure, there needs to be a way for the user to opt out. He even elaborated further that there should be a method for a user to counteract what the ubiquitous computing system does. This reminded me of hacking, 'cause isn't that what hacking is, to counteract the system and break in?
Nonetheless, it was a great and inspiring talk, a great perspective on ubiquitous computing from someone who is not building ubiquitous computing systems just because they're cool, but for a purpose, and who demonstrates that in real life.
On Technorati: pervasive computing, Tom Rodden, ubiquitous computing