Fast and Reliable Online Learning to Rank for Information Retrieval


Fast and Reliable Online Learning to Rank for Information Retrieval. Download a copy (PDF, 2.9 MB).

My PhD thesis is out! Download a copy from this page, or get in touch if you would like a printed copy. The public defense will be on May 28 at the Agnietenkapel in Amsterdam.

Summary:

The amount of digital data we produce every day far surpasses our ability to process it, and finding useful information in this constant flow of data has become one of the major challenges of the 21st century. Search engines are one way of accessing large data collections. Their algorithms have evolved far beyond simply matching search queries to sets of documents. Today’s most sophisticated search engines combine hundreds of relevance signals to provide the best possible results for each searcher.

Current approaches for tuning the parameters of search engines can be highly effective. However, they typically require considerable expertise and manual effort. They rely on supervised learning to rank, meaning that they learn from manually annotated examples of relevant documents for given queries. Obtaining large quantities of sufficiently accurate manual annotations is becoming increasingly difficult, especially for personalized search, access to sensitive data, or search in settings that change over time.

In this thesis, I develop new online learning to rank techniques, based on insights from reinforcement learning. In contrast to supervised approaches, these methods allow search engines to learn directly from users’ interactions. User interactions can typically be observed easily and cheaply, and reflect the preferences of real users. Interpreting user interactions and learning from them is challenging, because they can be biased and noisy. The contributions of this thesis include a novel interleaved comparison method, called probabilistic interleave, that allows unbiased comparisons of search engine result rankings, and methods for learning quickly and effectively from the resulting relative feedback.
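To make the idea concrete, here is a minimal Python sketch of a probabilistic-interleaving-style comparison. It is a simplification under my own assumptions, not the exact method from the thesis: both rankers are assumed to rank the same document set, the softmax parameter tau is fixed, and clicks are credited naively to the contributing ranker instead of marginalizing over all possible assignments.

```python
import random

def softmax_over_ranks(ranking, tau=3.0):
    """Turn a ranking into a distribution: P(d) proportional to 1 / rank^tau."""
    weights = {doc: 1.0 / (rank + 1) ** tau for rank, doc in enumerate(ranking)}
    total = sum(weights.values())
    return {doc: w / total for doc, w in weights.items()}

def probabilistic_interleave(ranking_a, ranking_b, length, tau=3.0):
    """Build an interleaved result list. At each position, pick one of the
    two rankers with equal probability and sample the next document from its
    (renormalized) distribution over the documents not yet shown."""
    dist_a = softmax_over_ranks(ranking_a, tau)
    dist_b = softmax_over_ranks(ranking_b, tau)
    remaining = set(ranking_a)  # sketch assumes both rankings cover the same documents
    interleaved, assignments = [], []
    while remaining and len(interleaved) < length:
        dist = dist_a if random.random() < 0.5 else dist_b
        docs = list(remaining)
        weights = [dist[d] for d in docs]
        doc = random.choices(docs, weights=weights, k=1)[0]
        interleaved.append(doc)
        assignments.append('a' if dist is dist_a else 'b')
        remaining.remove(doc)
    return interleaved, assignments

def credit_from_clicks(assignments, clicked_positions):
    """Naive credit assignment: each click counts for the ranker that
    contributed the clicked document. (The full method instead marginalizes
    over all possible assignments.)"""
    credit = {'a': 0, 'b': 0}
    for pos in clicked_positions:
        credit[assignments[pos]] += 1
    return credit
```

Run over many queries, the ranker that accumulates more click credit is inferred to produce the better rankings; in the full probabilistic interleave method, the randomized sampling and the marginalized credit assignment are what make these comparisons unbiased.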

The analytical and experimental results show how search engines can effectively learn from user interactions. In the future, these and similar techniques can open up new ways of gaining useful information from ever larger amounts of data.


Balancing Exploration and Exploitation in Online Learning to Rank for IR

Our article “Balancing Exploration and Exploitation in Online Learning to Rank for IR” (to appear in Information Retrieval) addresses the question of whether online learning to rank systems need to balance exploration and exploitation.

Intuitively, when learning online from natural user interactions with a search system, a search engine needs to exploit what it has learned so far, in order to meet users’ expectations of search performance as well as possible. At the same time, it needs to explore, to ensure that the collected feedback is as informative as possible for future learning. Often, the results that would be most useful for the current search are not the same as those most useful for learning.

In our article we formalize the above problem as an exploration-exploitation dilemma. We develop two approaches for balancing exploration and exploitation in an online learning to rank for IR setting, one based on a pairwise and one based on a listwise learning approach. We show that, as hypothesized, balancing exploration and exploitation improves online performance in both types of approaches. However, the optimal balance depends on the approach and on other factors, such as the amount of noise in user feedback.
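As a rough illustration of what such a balance can look like when constructing a result list, here is a short Python sketch of one simple mixing scheme: at each rank, the next document comes from an exploratory ranking with probability epsilon, and from the current best (exploitative) ranking otherwise. The function name, the parameter epsilon, and the fallback behavior are assumptions made for this sketch, not the exact mechanism evaluated in the article.

```python
import random

def epsilon_greedy_result_list(exploit_ranking, explore_ranking,
                               length, epsilon=0.2):
    """Construct a result list that mostly exploits the current best ranker,
    but injects documents from an exploratory ranker with probability
    epsilon at each rank."""
    result, seen = [], set()
    while len(result) < length:
        prefer_explore = random.random() < epsilon
        first = explore_ranking if prefer_explore else exploit_ranking
        second = exploit_ranking if prefer_explore else explore_ranking
        # Take the next unseen document from the preferred list,
        # falling back to the other list if the preferred one is exhausted.
        doc = next((d for d in first if d not in seen),
                   next((d for d in second if d not in seen), None))
        if doc is None:  # both lists exhausted
            break
        result.append(doc)
        seen.add(doc)
    return result
```

Setting epsilon to 0 yields pure exploitation, while 0.5 mixes the two lists evenly; the article’s finding is that an intermediate balance tends to give the best online performance, with the best setting depending on the learning approach and on the amount of noise in user feedback.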