Wednesday, November 16, 2005
Peter Norvig talk from Google today
Peter Norvig from Google gave the Distinguished Lecture in Computer Science at U of T today. His talk was titled AI in the Middle: Mediating between Author and Reader.
Here was the abstract of his talk:
The system of publishing the written word has made more knowledge available to more people than any other technology. No other system comes within a factor of a million. Now that a good portion of this written material is available online, it can be processed by computer. But the written word is notoriously imprecise and ambiguous, so currently the best way to make use of it is to leverage the intelligence and
language understanding ability of author and reader, and relegate the computer to the more modest role of connecting the two. Even this modest role still leaves a number of challenges in computer science, computational linguistics, and artificial intelligence, which will be discussed.
His bio is here with an excerpt below:
Peter Norvig has been at Google Inc since 2001 as the Director of Machine Learning, Search Quality, and Research. He is a Fellow of the American Association for Artificial Intelligence and co-author of Artificial Intelligence: A Modern Approach, the leading textbook in the field.
Previously he was the senior computer scientist at NASA and head of the 200-person Computational Sciences Division at Ames Research Center. Before that he was Chief Scientist at Junglee, Chief designer at Harlequin Inc, and Senior Scientist at Sun Microsystems Laboratories.
Dr. Norvig received a B.S. in Applied Mathematics from Brown University and a Ph.D. in Computer Science from the University of California at Berkeley. He has been a Professor at the University of Southern California and a Research Faculty Member at Berkeley. He has over fifty publications in various areas of Computer Science, concentrating on Artificial Intelligence, Natural Language Processing and Software Engineering, including the books Paradigms of AI Programming: Case Studies in Common
Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX.
Here are the notes that I made from the talk:
Peter talked about three different methods for deriving knowledge from information. The first method is knowledge engineering, which aims at general human-level intelligence and therefore requires logical axioms and hand-encoded knowledge. The problem is that this is too expensive and time-consuming to build and analyze, and we don't really need all that knowledge. The second method is machine learning (an area U of T Computer Science is renowned for, as it has the largest AI group). Machine learning involves finding patterns in data, and with machine learning algorithms you can do things like spelling correction. Peter gave an example where a Google colleague's name, corrected with a dictionary-based scheme, becomes "Tehran Salami". The audience laughed. If you use a corpus-based scheme instead, you get a better result. The more data you have, the better the algorithms work (shown by Google's graphs and prediction curves), so we need to worry more about the data than about the algorithms. Still, this method on its own is not enough for general AI.
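To make the dictionary-versus-corpus contrast concrete, here is a minimal sketch of corpus-based spelling correction: generate candidate words one edit away and rank them by how often they appear in a corpus, rather than just checking dictionary membership. The tiny corpus here is invented for illustration; a real system would use web-scale counts.

```python
# A minimal sketch of corpus-based spelling correction: prefer the
# candidate correction that is most frequent in a text corpus.
# The toy corpus below is made up for illustration.
from collections import Counter

corpus = "the cat sat on the mat the cat ate the rat".split()
counts = Counter(corpus)

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Pick the most frequent known word among the candidates."""
    candidates = [w for w in edits1(word) | {word} if w in counts]
    return max(candidates, key=counts.get) if candidates else word

print(correct("caat"))  # -> "cat", chosen because "cat" is frequent in the corpus
```

The key point from the talk is that the ranking signal comes from data (word frequencies), not from a hand-built dictionary alone, so more corpus data directly improves the corrections.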
Enter AI in the middle: a hybrid between knowledge engineering and machine learning that connects authors and readers. There is a book by Andy Clark called Being There: Putting Brain, Body, and World Together Again; Clark says the brain is not the sole actor but rather a mediator. If we apply this to search, the idea is this: we predict something, present it to the user, and the user provides feedback. We let the human make the decision rather than trying to approximate human intelligence. We would be happier getting material from an authority than from an aggregator.
How do we make author and reader more intelligent? We need to know about the What, Who, How, Where, When, and Wallet (which is very important for getting paid!). Google uses statistical machine translation. For example, when translating Arabic into English, some output is not fluent: there is about one disfluency per sentence. When translating Chinese into English, there are about two disfluencies per sentence. Google uses a probabilistic model based on word statistics, without syntax (no parser) or semantics (no ontologies or WordNet). More data is better: each doubling of the parallel training corpus improves the results.
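The word-statistics model Peter described can be illustrated with the classic noisy-channel formulation: choose the English sentence e that maximizes P(e) * P(f|e), where P(e) is a language model (fluency) and P(f|e) a translation model learned from a parallel corpus. All words and probabilities below are invented for illustration; real systems estimate them from massive corpora.

```python
# A toy illustration of the noisy-channel idea behind statistical
# machine translation: pick the English candidate e maximizing
# P(e) * P(f | e). All probabilities here are made up for illustration.

# Language model: how fluent is each candidate English phrase?
lm = {"the house": 0.6, "house the": 0.05}

# Translation model: probability of the foreign phrase given the English.
tm = {("la casa", "the house"): 0.5, ("la casa", "house the"): 0.5}

def decode(foreign, candidates):
    """Return the candidate with the highest P(e) * P(f|e)."""
    return max(candidates, key=lambda e: lm.get(e, 0.0) * tm.get((foreign, e), 0.0))

print(decode("la casa", ["the house", "house the"]))  # -> "the house"
```

Note that no parser or ontology appears anywhere: both tables are just statistics over words, which is exactly why the approach scales with more training data.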
Another approach to search is named-entity extraction. For example, Sun Microsystems belongs to the cluster of software companies. They use word clustering with a Bayes network to assign words to clusters, infer what a word refers to, and use that to return results for a query.
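As a simplified sketch of this idea, here is a naive Bayes classifier that infers an entity's cluster (say, "software company" versus "sports team") from the words around it. The training contexts and class names below are invented for illustration; the system Peter described is a much larger Bayes-network clustering over real data.

```python
# A simplified sketch of Bayes-style entity clustering: infer a
# cluster label from surrounding context words using Bayes' rule
# with add-one smoothing. Training data is invented for illustration.
from collections import Counter, defaultdict
import math

# Toy training data: (context words, cluster label).
training = [
    (["released", "software", "update"], "software_company"),
    (["shipped", "software", "platform"], "software_company"),
    (["scored", "goal", "match"], "sports_team"),
    (["won", "match", "season"], "sports_team"),
]

class_counts = Counter(label for _, label in training)
word_counts = defaultdict(Counter)
for words, label in training:
    word_counts[label].update(words)

def classify(context):
    """Pick the cluster with the highest (log) posterior probability."""
    vocab = {w for c in word_counts.values() for w in c}
    best, best_score = None, float("-inf")
    for label, n in class_counts.items():
        score = math.log(n / len(training))  # log prior
        total = sum(word_counts[label].values())
        for w in context:
            # Add-one smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify(["announced", "software", "release"]))  # -> "software_company"
```

Once words are assigned to clusters this way, a query mentioning an entity can be answered with results appropriate to its cluster, which is the use Peter described.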
There was a question period after the talk. I asked a question about Google's take on social search, that is, searching based on what others have searched for before, as in Yahoo's My Web 2.0 Beta, and tagging as in del.icio.us and Flickr. Peter replied that he doesn't believe tagging works well, but Google is working on personalization of search and, in the labs, on sharing searches with others. Their search algorithms used to be stateless, but they are now beginning to add state. He does believe tagging works in certain situations, for example with pictures, but not well for web pages.
Another question was about structured search, for example finding apartments at a certain price and in a certain location. Google can't answer that kind of query for you, and Peter said Google hasn't really looked into this much. On the topic of search on mobile devices, Peter mentioned the need for a different type of interface because of the constrained screen space, and said this will become important in the future. This is probably work for a summer intern as a wireless engineer at Google Labs.
The last question dealt with visualizing searches and showing clusters. Should we have clustering at all? Peter (and Google) say no: for the majority of queries you don't want clusters; for a minority audience they may be useful, but for most queries showing clusters in the results won't help. One problem with clustering is what happens when a cluster is wrong: you then have to correct the cluster, whereas it would be easier to just redo the text query and revisualize. Another problem with clustering is what to name each cluster.
All in all, it was a good talk; hey, it's from Google! It's work like Google's that keeps Computer Science a discipline students still want to get their degree in.