Query Formulation with Query Auto-Completion

In our recent CIKM paper An Eye-tracking Study of User Interactions with Query Auto Completion (QAC), we looked at how searchers examine and interact with query completions, and what this tells us about measuring QAC ranking quality. Accurately interpreting user interactions would allow us to optimize search for each individual user. However, measuring performance is a real bottleneck, because searcher behaviour itself is affected by the searcher’s previous experience, expectations, and the search engine itself.
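
To make "QAC ranking quality" a little more concrete (this is a generic illustration, not the metric studied in the paper), a common offline proxy is the reciprocal rank of the query the searcher eventually submits, averaged over prefixes. A minimal sketch with hypothetical interaction data:

```python
def reciprocal_rank(completions, submitted_query):
    """Reciprocal rank of the submitted query in the ranked completion list (0 if absent)."""
    for rank, completion in enumerate(completions, start=1):
        if completion == submitted_query:
            return 1.0 / rank
    return 0.0

# Hypothetical log entries: (prefix typed, ranked completions shown, query finally submitted)
log = [
    ("face", ["facebook", "facebook login", "face swap"], "facebook"),
    ("wea",  ["weather", "weather radar", "wealth"], "weather radar"),
]

mrr = sum(reciprocal_rank(shown, submitted) for _, shown, submitted in log) / len(log)
print(f"MRR over prefixes: {mrr:.3f}")  # (1.0 + 0.5) / 2 = 0.75
```

Eye-tracking matters precisely because such click- or submission-based proxies assume completions were actually examined, which the study puts to the test.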

Here’s an example video that shows how searchers examine QAC:

http://research.microsoft.com/apps/video/default.aspx?id=232081

For full details, take a look at the paper here.

RuSSIR 2014: Lectures on Online Experimentation for Information Retrieval

This summer, I was invited to teach at RuSSIR – the Russian Summer School in Information Retrieval in Nizhny Novgorod. I very much enjoyed the enthusiasm and insightful questions of the students. Thanks again to all who attended my lectures, and to the organizers!

My five lectures broadly covered online evaluation and learning to rank for information retrieval, including topics such as A/B testing, interleaved comparisons, estimation of online metrics from exploration data, and bandits for online learning to rank. If you want to review some of the material, you can now access my slides here: http://1drv.ms/1yYLNlp.
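
To give a flavour of the interleaved-comparison material, here is a minimal sketch of team-draft interleaving, one of the methods covered in the lectures. The rankings, document identifiers, and clicks below are hypothetical, and the code is only an illustration of the idea rather than the implementation used in the slides:

```python
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random):
    """Team-draft interleaving: in each round a coin flip decides which ranker
    drafts first, then each ranker adds its highest-ranked not-yet-shown document.
    Returns the interleaved list plus team assignments used for click credit."""
    all_docs = set(ranking_a) | set(ranking_b)
    interleaved, team, seen = [], {}, set()
    while len(seen) < len(all_docs):
        order = ["A", "B"] if rng.random() < 0.5 else ["B", "A"]
        for side in order:
            ranking = ranking_a if side == "A" else ranking_b
            doc = next((d for d in ranking if d not in seen), None)
            if doc is not None:
                interleaved.append(doc)
                team[doc] = side
                seen.add(doc)
    return interleaved, team

def credit(team, clicked_docs):
    """Count clicked documents per team; the ranker with more clicks wins this impression."""
    wins = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in team:
            wins[team[doc]] += 1
    return wins

# Hypothetical rankings from two rankers, and hypothetical clicks on the interleaved list.
ranking_a = ["d1", "d2", "d3", "d4"]
ranking_b = ["d2", "d4", "d1", "d5"]
shown, team = team_draft_interleave(ranking_a, ranking_b)
print("interleaved:", shown)
print("credit:", credit(team, clicked_docs=["d2", "d5"]))
```

In an online comparison this is repeated over many queries, and the ranker whose team accumulates more clicks is inferred to be the better one.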

To get you started, here are the slides from the first lecture: