Wednesday, August 30, 2006

Back to school

I went back to school today at U of T. I had to buy a GO train ticket, and there was a lineup, and I didn't want to miss the train that was coming. Finally, I saw a sign saying that to avoid lineups, you can use the ticket vending machine. I was like, oh that's cool, the Oakville GO train station finally has that! I used vending machines all the time when I was in Menlo Park and had to commute to San Francisco; there were no actual ticket agents. So I used the ticket vending machine. The user interface needs a little improvement: once you select the zone, you insert your credit card, debit card, or Interac card, but it doesn't prompt you for it or have an OK button. I kind of knew to put my card in after selecting the zone, but some people probably wouldn't know without a prompt. Eventually, I did get my ticket and went on the train. If you want to find out whether your GO train station has a ticket vending machine, check here.

I have to finish up course work; only 2 more courses left to complete my course requirements. I also need to work on the PhD depth oral and thesis proposal, so I'm going to be super busy this fall term. As if I'm not busy any term! I had a great summer, and now it's really back to work!

Saturday, August 26, 2006

Photos and podcast from Copenhagen available!

I went on a city bus tour and a boat tour today, as well as walking around the city of Copenhagen with a friend of mine from the conference. Here are the photos that I took (this includes all the photos from Hypertext and Odense as well). This doesn't include all the photos from today, since I've run out of upload space on Flickr (you only get a certain amount every month, so I'll have to wait till next month to upload the rest, unless I want to pay to go professional, which I don't). I also podcasted, and split the recording into 4 podcasts covering 4 different events.

Podcast of city bus tour
Podcast of canal boat tour
Podcast of tour of National Museum
Podcast of after National Museum

Enjoy! Can you tell that I had a great time in Copenhagen?

Friday, August 25, 2006

Podcasts are now split

I've decided to split my podcasts into two: one for personal stuff and one for my talks. I just felt it was time; not everyone who wants to listen to my research talks also wants to hear my personal ramblings about what I did on trips and so forth.

Session 1 of the last day of Hypertext

The first session right after the keynote is on Hypermedia Application Design. The first speaker is Frank Allan Hansen (fah AT daimi.au.dk), who also presented HyCon yesterday. His talk is on Ubiquitous Annotation Systems: Technologies and Challenges. Digital textual annotations are well understood, but what about digital annotations in the physical world? The issues involve what technologies are required and what input and output devices are needed. There are already anchors for hypermedia resources, like XPath and XPointer, and there are anchors for physical resources, like positioning sensors and RFID tags, for determining the IDs of devices attached to physical resources. Anchor-based models can be used to integrate with open hypermedia systems. This is a survey talk, looking at existing annotation systems. The annotations can be presented using mashups like annotated Google maps, and they can be attached to or detached from the object, and viewed on or off location.

The second speaker is Paul De Bra, and he is talking about The Design of AHA! I wondered what AHA means, like some kind of eureka moment. But AHA stands for adaptive hypermedia applications. His presentation is being done with hypertext, using a browser, where the agenda items are hyperlinks. None of the other talks are done in hypertext; he says we are doing things wrong, that we should publish in hypertext, so hypertext papers should be done in hypertext. Adaptive hypermedia is more than just e-learning and recommender systems; it can be used for adaptive hypermedia research papers and talks. This is an interesting talk, since it is presented based on which agenda item the audience picks first. So, now we are in the Stability part of the talk. Too much adaptation in hypermedia documents is not good and makes them unstable, because the document starts to feel like an adventure game. The next item we are going to is link adaptation. The AHA system suggests the next items to go to depending on where you are in the talk or paper. Users can create concepts and concept relationships using a graphical tool, and user model updates are done using event-condition-action rules. Depending on the user model, the system selects the link destination, and stylesheets are generated to colour the links appropriately. If you share a URL, the page another person sees will be adapted according to his or her user model. Wikipedia doesn't do adaptive hypermedia because of performance: if they did, it would require more resources, they couldn't do server-side caching, and they would need to add more servers.
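The event-condition-action idea for link adaptation can be pictured in a few lines. This is my own illustrative toy, not AHA!'s actual API; the event names, rule format, and colour labels are all made up:

```python
# Toy event-condition-action (ECA) link adaptation, loosely inspired by
# the AHA! description above. All names here are illustrative.

user_model = {"intro_read": False, "stability_read": False}

# Each rule: (event, condition over the model, action updating the model).
rules = [
    ("visit:intro",     lambda m: True,            lambda m: m.update(intro_read=True)),
    ("visit:stability", lambda m: m["intro_read"], lambda m: m.update(stability_read=True)),
]

def fire(event):
    """Run every rule whose event matches and whose condition holds."""
    for ev, cond, action in rules:
        if ev == event and cond(user_model):
            action(user_model)

def link_colour(target):
    """Colour a link by suitability: flagged until its prerequisite is met."""
    prereq = {"stability": "intro_read"}.get(target)
    if prereq and not user_model[prereq]:
        return "not-ready"
    return "good"

before = link_colour("stability")  # "not-ready": prerequisite unmet
fire("visit:intro")                # the visit event updates the user model
after = link_colour("stability")   # "good": prerequisite now satisfied
```

A real system like AHA! keeps the user model server-side and generates stylesheets per request; this sketch only captures the rule-firing flavour.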

The third presentation is Journey to the Past: Proposal of a Framework for a Past Web Browser, presented by Adam Jatowt. This is an interesting talk, because it's nice to see what happened in the past, and history is an important part of human studies. How many times have you wanted to find some past content, only to discover it doesn't exist anymore because it was changed or removed from the web site? Of course, Google and other archiving sites do archive pages, but there is no way to navigate a past web site, for example. I had a previous web site a year ago, but it would be difficult to browse it now as it existed then. So this is the motivation for a past web browser. One could also combine this with the current web browser to make a mixed web browser.


Last day of Hypertext

Today is the last day of Hypertext. Right now is the keynote speech on Ubiquitous Hypermedia and Social Computing, right up my alley, by Kaj Gronbaek (kgronbak AT interactivespaces.net). The problem that he is addressing is that it is hard or impossible to link and annotate documents that you don't own, which is what led to the open hypermedia work. What is open hypermedia? It separates structure from content; ok, that makes sense. This leads to the concept of ubiquitous hypermedia: using hypermedia to enable ubiquitous computing, in my opinion, because hypermedia and the web are ubiquitous. This is part of the vision Mark Weiser set out at PARC, where I just finished my summer internship.

Ubiquitous hypermedia links objects, people, and places. This sounds a lot like HP Labs' Cooltown, where objects, people, and places are associated with a URL. Kaj's research group created a joint research center for interactive spaces, in conjunction with design companies and architects.

Social Computing in Cyberspace

Social computing, according to Wikipedia, encompasses a list of social computing software and environments. MySpace is the second most popular property on the web; I can't believe that! Besides chat, messaging, dating, and relationship sites, reputation and recommendation systems are also part of social computing, like eBay, Epinions, etc. What's getting popular now is MMOGs, or massively multiplayer online games, where you are immersed in a virtual environment with other people. We can see that social computing is a really important thing, as there are dedicated research groups in this area, like Microsoft's Social Computing Group and PARC's Socio-Technical and Human Computer Interaction group. Howard Rheingold, who coined the term virtual community, talks in his book Smart Mobs about the next social revolution, which involves social computing with pervasive physical objects. Social computing is widespread in cyberspace, but Kaj says it's not prevalent in physical space, akin to Mark Weiser's vision. To quote him: "We should co-evolve social computing for physical spaces and take advantage of the full faculty of our bodies". I couldn't agree with him more! Yes, that's what we need!

Social Computing in the Physical World

One example of this is social computing in public spaces. His group created iFloor, an interactive library floor where people can debate and interact with the other people who show up on the floor. Another example is a collaborative library search for children, using the floor and pressing buttons with your feet, plus an interactive table on which the children can use a pen to select items and books. For the implementation of the interactive floors, they've put in debate and blog structures, spatial hypermedia, and metadata presentation. They also created an eBag (electronic schoolbag) that links pupils to their digital portfolios. How it works is that students carry a mobile phone with Bluetooth, and Bluetooth sensors detect when a student is within proximity and display the student's profile and content on an electronic whiteboard, where they can collaborate with other students. Students can drag and drop onto their eBag so they can carry digital resources, just like a student carries a school bag of books and physical objects. Another application is shared mobile annotations, one example being HyCon, a context-aware hypermedia framework, for location-based moblogging with a mobile phone and GPS. One cool application I saw is BibPhone, in which an RFID reader is put on a book and you can then record a voice annotation of the book by speaking into the loudspeaker.

One of the things that Kaj is saying is that it is not easy for librarians to put digital material in a physical environment with the same ease as they put books on shelves. So they created InfoColumn/InfoGallery, which exhibits digital subscriptions in the physical library space. These digital subscriptions can come through RSS feeds, and users pick up links using Bluetooth phones (just like the squirting mechanism of HP Labs' Cooltown). They've installed this system in 2 of Denmark's largest libraries. I think this is really cool; it really immerses hypermedia in physical spaces and exposes it to the real people, the real users, not just geeks like us!

Besides hypermedia in libraries and for students, they've also done spatial physical hypermedia in the home, with an interactive table where members of the household can interact, show photos and content, and then redirect them to other places in the home. One example is a context-aware remote control: you can watch a TV show in one place in the house and then continue the show on the same channel when you go into another room. With the remote control, you just use motion gestures and the system knows that you want to continue.

A lot of these applications use Bluetooth for proximity sensing. Bluetooth is certainly pervasive in Europe, that's for sure! Not quite yet in Canada and the US, where so far Bluetooth is mostly used for connecting a headset to a mobile phone. They're also doing applications for collaborative gaming in a physical setting.

Now, he's talking about where hypermedia skills fit in. He's mentioning that hypermedia principles are powerful for integrating physical entities. When you have a physical resource, it has to be resolved and integrated with the web and services on the web, and this requires hypermedia. So what are the research issues in ubiquitous hypermedia? There need to be rich structures for physical entities and multimedia content, rich presentation specs are needed, and there is a need to understand behaviour, which is equally important as structure. In addition, there are research issues in user interaction, user experience, context-awareness and adaptivity, and making hidden actions understandable.

In conclusion, he's saying that the next major steps in ICT development will take place outside the traditional PC and traditional web browser. The research group web site is www.interactivespaces.net. This was a great keynote showing great applications; I thoroughly enjoyed it. I asked about examples where a project didn't work and why, because he mentioned so many examples with positive feedback and support. One of the unifying themes I can see when working on ubiquitous hypermedia in physical spaces is that it adds the element of reliability. I'm not saying that hypermedia cannot be reliable, but reliability is extremely important in a physical environment: things can't crash, and since users depend on these systems, the device and software can't be unreliable.

Thursday, August 24, 2006

Slides, paper and podcast for my Hypertext paper now available

I've made available the podcast of the recording of my talk yesterday, the slides to my presentation, and a copy of the Hypertext paper. If you have any comments on my work, or don't feel like making a comment on my blog, you can send me e-mail (achin AT cs DOT toronto DOT edu).

Last session of Day 1 of Hypertext

The last session for today is Novel Systems and Models. Jean-Yves Delort is presenting Identifying Commented Passages of Documents Using Implicit Hyperlinks. The technique he is presenting selects passages from documents, with blog comments used as implicit hyperlinks. Building prototypes of relevant comments is difficult because there are many different types of comments, and comments can target different parts of a document. He automatically extracts features from the comments: the nouns, verbs, and prepositions. Through the conversation graph, he can then select passages by analyzing parts of the conversation. He then did a study asking students to identify the relevant parts of the comments. I asked what he used for extracting the nouns, verbs, and prepositions from the comments, and he mentioned that he used lexical analyzers. Of course, this leads to the issue of spam comments, because some spam comments would pass the lexical analyzer.

The second presentation is Templates and Queries in Contextual Hypermedia, presented by Frank Allan Hansen. To support contextual augmentation, there is a need to represent context in hypermedia; here it is represented in the HyCon implementation and described using a UML diagram. Tagged objects can represent both data and context information, so they need to look into structural computing to determine the structure behind the context. Structural templates, borrowed from structural and object-oriented computing, are used to model the context and data. Contexts are modeled as queries in HyCon. This is just a proof-of-concept model right now, so they haven't done an evaluation of the contextual model.

The third presentation is Harvesting Social Knowledge from Folksonomies by Harris Wu. The research problem is whether social tagging can be used as part of an IT architecture. They are interested in determining how social tagging can be used for navigational links. This is a short paper, so there is a review of what is being done to find social knowledge, like link analysis and data reduction using singular value decomposition. It's interesting that he is drawing on other people's talks at the conference as examples of how people are harvesting social knowledge.

The fourth and last presentation is Supporting the Design of Behaviours in Callimachus, presented by Manolis Tzagarakis. They view behaviours from a systems point of view. Within structures, they can determine propagation phenomena of operations; phenomena-in-the-large are expressed in terms of interactions-in-the-small. They use a pattern-based approach with propagation templates.

This concludes the end of today!


Second part of Day 1 of Hypertext conference

The second part of the Hypertext conference is happening now. The third session is on Education and Evaluation.

Education and Evaluation

The first presentation is on Hyperstories and Social Interaction in 2D and 3D Edutainment Spaces for Children, presented by Franca Garzotto. They first did a study with kids about their cooperative behaviour in the lab and in class, to understand children's requirements. The kids can go into a hyperspace or a hyperlab: the hyperspace is where the hyperstories can be created and navigated, whereas the hyperlab is where the kids actually perform particular tasks or activities. One of the points about design is that we need to look at the system in its context, not just the structuring of content and navigation but the structuring of activities. From their evaluation, the kids were very enthusiastic about this and didn't want to stop afterwards.

The second presentation is on The Evolution of Metadata from Standards to Semantics in e-Learning Applications by Hend Al-Khalifa, and is being presented by Hugh Davis. Metadata is still needed because we still can't find everything with Google (sorry, Google, there is still room for improvement in search). We have to manually and tediously enter all this metadata into electronic forms, which is certainly a pain (I'm sure everyone can agree with this). Erik Duval's work looks into automating metadata by inferring context. Hend and Hugh's work compares the tags in del.icio.us with the keywords generated by an automatic keyword generator. From their study, they found that the folksonomy somewhat matched the keywords from the documents. The next step was to extract semantic metadata by mapping the folksonomy to domain ontologies. From tags, they can get metadata for the learning object model.
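A crude way to quantify "the folksonomy somewhat matched the keywords" is a set-overlap score. This is my own illustration (the paper may well use a different measure), and the example tags are made up:

```python
# Sketch: how closely do folksonomy tags match auto-extracted keywords?
# Jaccard overlap is my own illustrative choice, not necessarily the
# metric used in the paper.

def normalise(terms):
    """Lowercase and strip, so 'Metadata' and 'metadata' compare equal."""
    return {t.strip().lower() for t in terms}

def tag_keyword_overlap(tags, keywords):
    tags, keywords = normalise(tags), normalise(keywords)
    if not (tags or keywords):
        return 0.0
    return len(tags & keywords) / len(tags | keywords)

# Hypothetical data: tags from a social bookmarking site vs. keywords
# pulled out of the document by an automatic extractor.
tags = ["Metadata", "elearning", "semantic-web"]
keywords = ["metadata", "elearning", "ontology", "standards"]

score = tag_keyword_overlap(tags, keywords)  # 2 shared / 5 distinct = 0.4
```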

The third presentation is Implementation and Evaluation of a Quality Based Search Engine by Thomas Mandl. The motivation behind this work is the lack of quality control on the web, which raises the question of whether we can automatically assess the quality of web pages. Link analysis can be used to determine quality, and this is what is used on the web in the form of PageRank, used by Google for ranking searches. Link analysis is based on bibliometrics and citations. There is a Matthew effect in this analysis, named after Matthew 25:29, where Jesus said:


"For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath." (Matthew XXV:29, KJV).


Then, he created his AQUAINT system, which is an improvement over other algorithms and systems in terms of the number of parameters in the model. The most important part is his quality model, which draws on various sources. Of course, the data fed into the model will determine how accurate the quality model really is.
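For anyone who hasn't seen it, the PageRank-style link analysis this work builds on can be sketched with a toy power iteration (a standard textbook simplification, not Mandl's AQUAINT model):

```python
# Toy PageRank by power iteration over a tiny link graph.
# This illustrates the link-analysis baseline, not the AQUAINT system.

def pagerank(links, d=0.85, iters=50):
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Everyone gets the teleport share; link mass is added below.
        new = {p: (1 - d) / n for p in pages}
        for p, outs in links.items():
            targets = outs if outs else pages  # dangling page spreads evenly
            for q in targets:
                new[q] += d * rank[p] / len(targets)
        rank = new
    return rank

# a links to b and c; b links to c; c links back to a.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
```

Pages with more in-links accumulate rank, which is exactly the Matthew effect mentioned in the talk: here "c", with two in-links, ends up ranked above "b" with one.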

The fourth and last presentation from this session is Hyperlink Assessment Based on Web Usage Mining, presented by Przemyslaw Kazienko and Marcin Pilarczyk. There are positive association rules and negative association rules based on page visits: given that a user visits certain pages, what is the probability that he also visits other pages? These association rules then determine which links get added or removed. A very interesting talk. I asked whether he had done an evaluation, with the users who visit the pages, of the modified content, because I think that would be essential to evaluating this system and whether it will be adopted.
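The rule mining can be pictured with a minimal confidence calculation over visit sessions. This is my own toy version of the general idea, not the authors' algorithm, and the page names are invented:

```python
# Sketch: confidence of a page-to-page association rule, estimated
# from visit sessions. Illustrative only, not the paper's method.

def rule_confidence(sessions, a, b):
    """Estimate P(b visited | a visited) from session data."""
    with_a = [s for s in sessions if a in s]
    if not with_a:
        return 0.0
    return sum(b in s for s in with_a) / len(with_a)

# Hypothetical sessions: the set of pages each visitor touched.
sessions = [
    {"home", "news", "sports"},
    {"home", "news"},
    {"home", "shop"},
    {"news", "sports"},
]

pos = rule_confidence(sessions, "news", "sports")  # 2/3: link-add candidate
neg = rule_confidence(sessions, "shop", "news")    # 0.0: link-removal candidate
```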

Day 1 of Hypertext conference

I'm in Day 1 of the Hypertext conference. The first session is on Blogs, Wikis and RSS, and I'm the second speaker in this session. The session is about to start any minute now.

Session 1: Blogs, Wikis & RSS

The first talk in the first session is CUTS: CUrvature-Based Development Pattern Analysis and Segmentation for Blogs and Other Text Streams by Yan Qi and K. Selcuk Candan. Their challenge is to extract information to enable indexing, mining, and ease of navigation. They visualize a blog archive using the length and gradient of segments to indicate long vs. short, more vs. less change, and the degree of concentration (high vs. low). Their claim is that topic-based segments can enable blog search. They use curve segmentation to detect topic shifts and measure the similarity between consecutive entries; a topic shift is assumed to occur at a local minimum of the curve. The method is to take the blog entries and represent them as vectors, calculate similarity values among the entry vectors, and then construct a dissimilarity matrix. The entries in the matrix are then mapped to a 2-D curve using multidimensional scaling based on entry dissimilarities. From here, they can figure out topic development patterns; to determine these automatically, they use an adaptive curve segmentation algorithm. They tested their algorithm with a blog archive and a book.
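A heavily simplified version of the pipeline (skipping the MDS and adaptive curve segmentation steps, which are the paper's actual contribution) is to compute the dissimilarity between consecutive entry vectors and flag large jumps as candidate topic shifts:

```python
# Much-simplified sketch of the CUTS flavour: term-count vectors,
# cosine dissimilarity between consecutive entries, threshold peaks.
# The threshold and toy entries below are my own assumptions.
import math
from collections import Counter

def cosine_dissimilarity(a, b):
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return 1.0 - (dot / (na * nb) if na and nb else 0.0)

def topic_shift_candidates(entries, threshold=0.8):
    """Indices where an entry looks very unlike its predecessor."""
    gaps = [cosine_dissimilarity(x, y) for x, y in zip(entries, entries[1:])]
    return [i + 1 for i, g in enumerate(gaps) if g >= threshold]

entries = [
    "wiki design principles open incremental",
    "wiki community principles design",
    "copenhagen canal boat tour photos",
]
shifts = topic_shift_candidates(entries)  # entry 2 starts a new topic
```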

The next speaker is me, and I'm going to present now. ... I just finished presenting. I think it went well, and I got lots of questions, which I recorded so I can use them to improve my work. Thanks to this guy from the conference for taking pictures of me; he has a much better camera than mine!


I'm presenting


Marketing myself with URLs (hey, it's a Hypertext conference!)


My talk

The fourth talk is by David Millard, called Web 2.0: Hypertext by Any Other Name? Apparently there is no third speaker. This is based on his Masters project. The original hypertext pioneers had objectives: 1) Halasz's seven issues, 2) open hypermedia systems, 3) adaptive hypermedia systems. In this work, he selected example Web 2.0 systems to determine whether they conform to the original hypertext objectives. From the analysis, they created a table comparing the various sites on features like content search, context search, structural search, dynamic content, links, versioning, annotation, personalization, and extensibility, to see whether the sites support them. It is important to note that none of the Web 2.0 systems support typed n-ary links, but the research systems do. Dynamic structures are also not prevalent, and he mentioned that (that's why my work could be applied: the dynamic social structures can be used to find community). Trails are also not supported, that is, figuring out what the user did to navigate (we do have trackbacks in blogs, and search engines do crawling and caching, so it DEFINITELY should be possible to do this, but it's not being done automatically). That would be cool, to find a social trail behind the hypertext. In summary, most of the important aspirations of the hypertext community have been fulfilled in Web 2.0. One of the points I raised is that it's not really a fair assessment to compare against the early hypertext pioneers, because they didn't take into account the social collaboration, open communication, and user-driven interface of the web today. So I proposed a new term: instead of hypertext, it should be called hyperspace.

Yahoo tags:
blogs, wikis, RSS, web2.0
Social Networks, Networking and Virtual Communities

It's after the coffee break, and the second session is on Social Networks, Networking and Virtual Communities. The first presenter is Cameron Marlow from Yahoo Research, and their paper is HT06, Tagging Paper, Taxonomy, Flickr, Academic Article, ToRead. This is a position paper trying to classify the space of tagging. Tagging systems bring users into the resources, whereas before, the users were experts and implicit in the web resources. They are creating a tagging model, building a taxonomy based on it, and following up with a preliminary study. The systems taxonomy looks into the structure of tags. Tags can be distinguished by permissions as to who can tag, as well as by recommendation of tags. Other classifications are tag aggregation, like the concept of a set in Flickr, and the type of object being tagged, like web pages, events, e-mail, photos, etc. The statistics on the number of distinct tags over time for a particular user seem to be similar between Flickr and del.icio.us, even though they are two completely different tagging systems. I asked whether Yahoo Research is looking into the analysis of tag clouds and how tags can be aggregated into a tag cloud, where certain tags refer to exactly the same thing (for instance, I tag this post with hypertext2006, hypertext06, ht06, ht2006, but they all mean the same thing). Yahoo is now providing data for academics to make use of.

Now, there is the presentation of short papers. The second presentation in this session is Social Navigation in Web Lectures being presented by Robert Mertens. The talk is about creating a web lecture interface. Social navigation is being used for e-learning.

There is no third presentation, so now there is the fourth presentation, Using String-matching to Analyze Hypertext Navigation, presented by Roy Ruddle. The string-matching method looks at repeated subsequences in sessions, where each letter in the subsequence represents a link. One of the things that resonates with me is providing a trail network (artificial paths) that is generated automatically. This type of navigation analysis can be used to reconstruct sessions. Several people argued that because we already have caching of web links (i.e., Google's personalized history), we already have trails of our navigation.
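The encoding he describes can be played with in a few lines: give each page a letter, then hunt for subsequences that recur across sessions. This n-gram version is my own simplification, not the paper's exact method:

```python
# Sketch of the string-matching idea: each visited page is a letter,
# and we look for subsequences repeated across sessions.
# (Simple n-gram version, not the paper's algorithm.)
from collections import Counter

def repeated_ngrams(sessions, n=3):
    counts = Counter()
    for s in sessions:
        seen = set()
        for i in range(len(s) - n + 1):
            seen.add(s[i:i + n])
        counts.update(seen)  # count each n-gram once per session
    # Keep n-grams that occur in more than one session: shared trails.
    return {g for g, c in counts.items() if c > 1}

# Three hypothetical sessions over pages A..Z.
sessions = ["ABCD", "XABCY", "ABZD"]
common = repeated_ngrams(sessions)  # the shared trail A -> B -> C
```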

The last presentation before lunch (yay!) is A Cognitive and Social Framework for Shared Understanding in Cooperative Hypermedia Authoring by Weigang Wang. He is using Piaget's social cognitive theory and applying it to a shared hypermedia workspace.


Wednesday, August 23, 2006

Presenting my paper at Hypertext tomorrow

I'm presenting my paper "A Social Hypertext Model for Finding Community in Blogs" at the Hypertext conference tomorrow. The conference schedule for tomorrow and Friday is here. I met three new people today from Hypertext who are now part of my growing social network. Which makes me think: I should probably track the people in my social network and do some social network analysis on my own network, based on the contacts I have on my Palm and other people I know that I didn't put on my Palm, from LinkedIn, from my MSN Messenger contacts, from my e-mail.

I'll have slides, paper and podcast available right after the conference.

Keynote at Hypertext conference

I'm at the Hypertext conference now, where I have a paper called "A Social Hypertext Model for Finding Community in Blogs" which I'm presenting tomorrow. Anyways, it's interesting in this room for the keynote: there are all these geeks and geekettes with their laptops, and there are wires all over the place for ethernet cables and power cables (yes, you know that you're at a conference when...). And there are all these mobile devices that people have: cell phone, camera, PDA (hey, wait a minute, that's me!).

Right now, there is a joint keynote by Ward Cunningham, who is going to talk about Design Principles of Wiki: How can so little do so much? Wikis were created in 1994 by Ward Cunningham, so he is considered the pioneer of wikis, and he is also a pioneer of design patterns. Here are the notes from his talk.

He is defining a wiki, taking definitions from several sources: MSN Encarta, which defines it in 11 words; Britannica, which takes 75 out of 496 words; and Wikipedia, which defines a wiki in 3271 words. Wiki was taken from wiki wiki, the Hawaiian word for quick, as in the Wiki Wiki shuttle bus in Hawaii. What is the difference between a wiki and a blog? According to him, a wiki is a work made by a community; the blogosphere is a community made by its works. There is a reversal of roles: the blogosphere is a collection of works. Wikizens can come and go without changing a wiki's identity. This is important because that is what my research is based on, finding community in blogs: it's the collection of blogs that forms a community. I'll explain more about that in my talk tomorrow. Ward keeps having to tell people the difference between a blog and a wiki. He is showing the first wiki, which was archived by the Web Archive. Ward created it to show a new style of computer programming and of writing.

Ward is saying that "Agile development corrects dysfunctional behavior resulting from decades of misunderstood risk". The wiki is being compared with agile software development and open software. How can so little do so much, in terms of a wiki? There was a shortest wiki contest and some are written in Ruby, Perl, Python, PHP and Java, the shortest one is 4 lines, which is 222 characters of Perl. He is going through a sample code of a wiki which was written by Casey West:

#!/usr/bin/perl
use CGI ':all';

path_info=~/\w+/;
$_=`grep -l $& *`.h1($&).escapeHTML$t=param(t)||`dd<$&`;

open F,">$&";
print F$t;

s/htt\S+|([A-Z]\w+){2,}/a{href,$&},$&/eg;
print header,pre"$_<form>",submit,textarea t,$t,9,70


It's interesting how such small code, can produce so much in terms of the result.

Wiki Design Principles

I just noticed he spelled principles wrong as prinicples.

1. Open principle - if a page is incomplete or poorly organized, any reader can edit it as they see fit. This is based on an element of trust on the Net.

2. Incremental principle - it must be both possible and useful to cite unwritten pages. This is based on hypercard which was created before hypertext.

3. Organic principle - the structure of the site is expected to grow and evolve with the community that uses it. Community is from the people that use the wiki.

4. Mundane principle - a small number of conventions provide all necessary formatting.

5. Universal principle

6. Overt principle
- the formatted and printed output will suggest the input required to reproduce it.

7. Unified Principle
- page names will be drawn from a flat space so that no additional context is required to interpret them.

8. Precise Principle
- pages will be titled with sufficient precision to avoid most name clashes, typically by forming noun phrases.

9. Tolerant Principle
- all input will produce output, even when the output is not likely to be desired.

10. Observable Principle
- activity within the site can be watched and reviewed by anyone else who visits the site.

11. Convergent Principle
- ambiguity and duplication can be removed by finding and citing similar or related content.

Ward is now trying to show an example of convergent principle. I really like his diagram of what a wiki is and how methodology and community and technology come together in a wiki.

A final question was asked of Ward about the future of wikis and whether these principles can be applied, and he said world peace. Everyone applauded, and he gave a reference to Doug Engelbart and his creation of hypertext.


Tuesday, August 22, 2006

In Odense for Hypertext conference

I'm in Odense, Denmark for the Hypertext conference, which starts tomorrow. It was funny: on the way to Odense, when I took a plane to Amsterdam, the entertainment system that powers the TVs on the plane crashed, and the TVs started rebooting into a Linux shell. That was just too funny!



Anyways, I took some pictures when I arrived in Copenhagen and Odense. Odense is about 1.5 hours train ride from Copenhagen airport. Check out my photos from Flickr.

Sunday, August 20, 2006

Back from PARC but off to Hypertext tomorrow!

I'm back from PARC in Palo Alto, but I'm leaving tomorrow to fly off to Denmark for the Hypertext conference in Odense.



I've never been to Denmark, so it should be fun. The conference is from August 23 to 25, but I'll stay in Copenhagen to do some sightseeing. I've finished packing, but I need to make some minor changes to my presentation. I'll post the slides and podcast after the conference. The paper that I'm presenting is called "A Social Hypertext Model for Finding Community in Blogs". If you're going to be at Hypertext, come to my presentation; I'm in session 2. I've noticed that Cameron Marlow from Yahoo Research, whom I met at Sunbelt, is there with a paper, along with Mor Naaman (who spoke at PARC at a BayCHI meeting about Yahoo's tagging research project... oh, I don't recall the name, it's for location-awareness... ah yes, I remember now, it's ZoneTag).

Saturday, August 19, 2006

Finished internship at PARC

Wow, I can't believe 3 months have passed by so fast here in Palo Alto. Today was my last day at PARC. I really enjoyed working at PARC; it is such a great environment with great people and great research. I am also glad I had the opportunity to do sightseeing and have fun in the Bay Area. Some highlights: the bike ride in San Francisco and across the Golden Gate Bridge to Sausalito, Fisherman's Wharf and the cable car, the Santa Cruz beach boardwalk, Monterey Bay, Half Moon Bay, the Intel Museum in Santa Clara, visits to Google, HP Labs, Microsoft Research, and Sun Labs, the San Jose museum of innovation, and the Computer History Museum in Mountain View (hey, fun also means visiting the techie stuff in Silicon Valley too!).

I'll be presenting my paper, entitled "A Social Hypertext Model for Finding Community in Blogs", at the Hypertext conference next week in Denmark. So I'm off traveling in Europe for the 2nd time. Watch this space for the slides and the podcast of my talk. Thanks, Silicon Valley, for a great time here! I hope I can come by again in the future.

Saturday, August 12, 2006

One more week left at PARC

Well, the summer internship at PARC is nearly winding down. I have one more week left. Still have to finish up a poster and presentation of the work that I'm doing at PARC for next week. This will be my last weekend here in the Bay area. The week after next week, I'll be presenting my PhD research work at the Hypertext conference in Odense, Denmark. And then when I come back from that, September will be extremely busy with course work, TA work, and preparing my PhD depth oral and PhD thesis topic, as well as organizing a workshop called Social Computing: Best Practices for the CASCON conference. If you haven't already, check out the CASCON blog which I'm helping to set up and maintain, and subscribe to it, to keep abreast with the latest from CASCON.

Friday, August 11, 2006

Happy 25th Birthday to the PC!

It's the 25th anniversary of the personal computer. Happy birthday! And here's to many more years of the PC and its various incarnations (like the Pocket PC, Car PC, Phone PC, Tablet PC, Ultra-mobile PC, laptop PC, etc.).