Thursday, August 24, 2006

Second part of Day 1 of Hypertext conference

The second part of Day 1 of the Hypertext conference is happening now. The third session is on Education and Evaluation.

Education and Evaluation

The first presentation is on Hyperstories and Social Interaction in 2D and 3D Edutainment Spaces for Children, presented by Franca Garzotto. They first did a study of kids' cooperative behaviour in the lab and in class to understand children's requirements. The kids can go into a hyperspace or a hyperlab: the hyperspace is where the hyperstories can be created and navigated, whereas the hyperlab is where the kids actually perform particular tasks or activities. One of the points about design is that we need to look at the system in its context: not just the structuring of content and navigation, but the structuring of activities. From their evaluation, the kids were very enthusiastic about this and didn't want to stop afterwards.

The second presentation is on The Evolution of Metadata from Standards to Semantics in e-Learning Applications by Hend Al-Khalifa, and is being presented by Hugh Davis. Metadata is still needed because we still can't find everything with Google (sorry, Google, there's still room for improvement in search). We have to manually and tediously enter all this metadata into electronic forms, which is certainly a pain (I'm sure everyone can agree with this). Erik Duval's work looks into automating metadata by inferring it from context. Hend and Hugh's work compares the tags in del.icio.us with the keywords produced by an automatic keyword generator. From their study, they found that the folksonomy somewhat matched the keywords extracted from the documents (a toy version of this comparison is sketched below). The next step was to extract semantic metadata by mapping the folksonomy to domain ontologies. From tags, they can derive metadata for the learning object model.
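
To make their comparison concrete, here is a minimal sketch, assuming a naive frequency-based keyword extractor standing in for whatever generator they actually used; the document text, tag set, and function names are all invented for illustration.

```python
# A toy illustration of the tag-vs-keyword comparison described above.
# The document text and tags are made up; the keyword extractor here is
# a naive frequency count, not the generator used in the paper.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "on", "with", "by", "be", "can"}

def extract_keywords(text, n=5):
    """Naive keyword extraction: the most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w for w, _ in counts.most_common(n)}

def tag_overlap(tags, keywords):
    """Fraction of folksonomy tags that match the extracted keywords."""
    tags = {t.lower() for t in tags}
    return len(tags & keywords) / len(tags) if tags else 0.0

# Hypothetical learning resource and its del.icio.us-style tags.
text = ("Metadata describes learning objects. Good metadata makes learning "
        "objects easy to find, and learning object metadata can be inferred "
        "from tags instead of entered by hand.")
tags = ["metadata", "elearning", "tags", "learning"]

keywords = extract_keywords(text)
print(keywords)
print(f"overlap: {tag_overlap(tags, keywords):.2f}")
```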

The third presentation is Implementation and Evaluation of a Quality Based Search Engine by Thomas Mandl. The motivation behind this work is the lack of quality control on the web, which raises the question of whether we can automatically assess the quality of web pages. Link analysis can be used as a quality signal, and this is what Google's PageRank does for ranking search results (a sketch of the algorithm follows after the quote below). Link analysis is based on bibliometrics and citations. There is a Matthew effect from this analysis, named after this saying of Jesus in Matthew 25:29:

"For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath." (Matthew XXV:29, KJV)
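
The talk didn't go into PageRank's mechanics, but a minimal power-iteration sketch makes the Matthew effect visible: pages that already attract links keep accumulating rank. This is just the textbook algorithm, not Mandl's code, and the toy graph is invented.

```python
# Minimal PageRank power iteration over a toy link graph (the standard
# algorithm the talk refers to, not the speaker's implementation).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy graph: "hub" is heavily linked to, so it accumulates the most rank.
graph = {"hub": ["a"], "a": ["hub"], "b": ["hub"], "c": ["hub", "a"]}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```
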
He then built his AQUAINT system, which improves on other algorithms and systems in the number of parameters in its model. The most important part is his quality model, which draws on various sources. Of course, the data fed into the model determines how accurate the quality model really is.

The fourth and last presentation of this session is Hyperlink Assessment Based on Web Usage Mining, presented by Przemyslaw Kazienko and Marcin Pilarczyk. They mine positive and negative association rules from usage logs: given that a user visits certain pages, what is the probability that they also visit (or avoid) other pages? These association rules then determine which links get added or removed (a rough sketch of the idea follows below). A very interesting talk, and I asked whether he had evaluated the modified content with the users who visit the pages, because I think that would be essential to evaluating this system and whether it will be adopted.
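
Here is a rough sketch of what mining such rules from session data might look like; the thresholds, session format, and function names are my assumptions, not the authors' implementation.

```python
# A sketch of mining page-to-page association rules from visit sessions,
# in the spirit of the talk (rule format and thresholds are guesses).
from itertools import permutations

def mine_rules(sessions, min_support=0.3, min_confidence=0.6):
    """sessions: list of sets of pages visited in one session.
    Returns positive rules (A -> B) and negative rules (A -> not B)."""
    n = len(sessions)
    pages = set().union(*sessions)
    positive, negative = [], []
    for a, b in permutations(pages, 2):
        with_a = [s for s in sessions if a in s]
        if len(with_a) / n < min_support:
            continue  # antecedent page too rarely visited
        conf_pos = sum(1 for s in with_a if b in s) / len(with_a)
        if conf_pos >= min_confidence:
            positive.append((a, b, conf_pos))      # candidate link to add
        elif 1 - conf_pos >= min_confidence:
            negative.append((a, b, 1 - conf_pos))  # candidate link to remove
    return positive, negative

sessions = [{"home", "news", "blog"}, {"home", "news"},
            {"home", "blog"}, {"news", "archive"}]
pos, neg = mine_rules(sessions)
print("positive:", pos)
print("negative:", neg)
```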
