In our paper Combining Implicit and Explicit Topic Representations for Result Diversification (SIGIR’12), we presented an approach that combines subtopics extracted from multiple heterogeneous sources and represented in different formats. We used this method to mine subtopics of a query and applied it to the search result diversification task. Here is the abstract:
Result diversification aims to deal with ambiguous or multi-faceted queries by providing documents that cover as many subtopics of a query as possible. Various approaches to subtopic modeling have been proposed. Subtopics have been extracted internally, e.g., from documents retrieved in response to the query, and externally, e.g., from Web resources such as query logs. Internally modeled subtopics are often implicitly represented, e.g., as latent topic models, while externally modeled subtopics are often explicitly represented, e.g., as reformulated queries.
In this paper, we propose a framework that: i) combines both implicitly and explicitly represented subtopics; and ii) allows flexible combination of multiple external resources in a transparent and unified manner. Specifically, we use a random walk based approach to estimate the similarities of the subtopics mined from a number of heterogeneous resources, i.e., click logs, anchor text, and web n-grams. We then combine these with the internal (implicit) subtopics by constructing regularized topic models, where we use the similarities among the external subtopics to regularize the latent topics extracted from the top-ranked documents. Empirical results show that regularization with explicit subtopics extracted from a good resource leads to improved diversification results. This indicates that better (implicit) topic models are formed due to the regularization with (explicit) external resources. In our experiments, click logs and anchor text are shown to be more effective resources than web n-grams. Combining resources does not always lead to better results, but achieves robust performance. This robustness is important for two reasons: it cannot be predicted which resources will be most effective for a given query, and it is not yet known how to reliably determine the optimal model parameters for building implicit topic models.
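To give a feel for the random-walk step, here is a small illustrative sketch (not the paper's actual implementation) of estimating subtopic similarities via a random walk with restart over a similarity graph. The subtopic names and edge weights below are made-up toy values; the paper mines the subtopics and weights from click logs, anchor text, and web n-grams.

```python
import numpy as np

# Toy subtopics for the ambiguous query "apple" (illustrative only).
subtopics = ["apple fruit", "apple pie", "apple iphone", "apple macbook"]

# Symmetric adjacency matrix: edge weights stand in for raw similarity
# signals between subtopics mined from external resources (made-up values).
W = np.array([
    [0.0, 0.8, 0.1, 0.0],
    [0.8, 0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0, 0.9],
    [0.0, 0.0, 0.9, 0.0],
])

def random_walk_similarities(W, restart=0.15, iters=100):
    """Random walk with restart seeded at each node; column j of the
    result is the stationary visiting distribution for seed subtopic j."""
    P = W / W.sum(axis=0, keepdims=True)  # column-stochastic transitions
    n = W.shape[0]
    R = np.eye(n)                         # one restart vector per subtopic
    S = np.eye(n)
    for _ in range(iters):
        S = (1 - restart) * P @ S + restart * R
    return S

S = random_walk_similarities(W)
# Subtopics in the same cluster (e.g., "apple iphone" / "apple macbook")
# receive higher walk-based similarity than cross-cluster pairs, so the
# smoothed scores can propagate similarity beyond direct edges.
```

The walk spreads similarity mass along graph paths, so two subtopics with no direct edge can still end up similar if they share well-connected neighbors; this smoothed similarity is what regularizes the latent topics.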