Johannes Fürnkranz, TU Darmstadt, Germany

- Topic
- Preference Learning in Games
- Abstract
- Game playing has been a popular and successful application domain for machine learning. Typically, an evaluation function that assigns numerical utilities to different game states is learned from game traces or from self-play. In this talk, we show how preference learning techniques can be used for such tasks. For example, in games like chess, expert feedback is amply available in the form of annotated games. This feedback usually comes as qualitative information, because human annotators find it hard to assign precise utility values to game states. We show the results of a case study in which we used game annotations to learn a utility function that respects a set of preference constraints obtained from the annotations. We will also briefly discuss current work in which we try to adapt Monte Carlo tree search techniques to a preference-based setting.
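The idea of learning a utility function from qualitative preference constraints can be sketched with a toy example (this is an illustrative perceptron-style learner over hand-crafted state features, not the method used in the talk's case study):

```python
# Toy sketch: learn a linear utility over game-state features from
# pairwise preference constraints "state a is preferred to state b".
# Feature vectors and the update rule are illustrative assumptions.

def learn_utility(preferences, n_features, epochs=100, lr=0.1):
    """preferences: list of (better_features, worse_features) pairs."""
    w = [0.0] * n_features
    for _ in range(epochs):
        for better, worse in preferences:
            # the utility difference should be positive; update when violated
            margin = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            if margin <= 0:
                for i in range(n_features):
                    w[i] += lr * (better[i] - worse[i])
    return w

# Two hypothetical features (say, material and mobility); the annotator
# consistently prefers states with more material.
prefs = [([2.0, 0.0], [1.0, 0.0]), ([3.0, 1.0], [2.0, 1.0])]
w = learn_utility(prefs, 2)
utility = lambda f: sum(wi * fi for wi, fi in zip(w, f))
```

The learned `utility` then ranks unseen states consistently with the qualitative constraints, without the annotator ever supplying numeric scores.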

Jérôme Lang, Université Paris Dauphine, France

- Topic
- From social choice to preference learning
- Abstract
- Social choice theory is the field that studies the aggregation of individual preferences towards a collective choice. Among the important collective choice problems studied are voting, fair division of resources, matching, coalition formation, and judgment aggregation. Computer science (and especially artificial intelligence and operations research) plays an increasing role in the field. More specifically, and more recently, machine learning (and especially preference learning) is now playing an important role as well. Examples of meeting points between social choice and preference learning include:
- How can we decide the outcome of an election from a small set of samples of the voters’ preferences?
- How can we design social choice mechanisms using machine learning principles? How can we interpret voting rules as maximum likelihood estimators?
- When can voting and aggregation rules be useful for preference learning?

The talk will present the basics of social choice, and then will focus on the connections between social choice and preference learning.
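As a concrete taste of the basics, one of the simplest voting rules is the Borda count (a minimal sketch with invented ballots, not material from the talk): each voter ranks the m alternatives, an alternative earns m-1 points for a first place, m-2 for a second, and so on, and the highest total wins.

```python
# Minimal illustration of the Borda voting rule on a toy profile.
from collections import defaultdict

def borda_winner(profile):
    """profile: list of rankings, each a list of alternatives, best first."""
    scores = defaultdict(int)
    for ranking in profile:
        m = len(ranking)
        for pos, alt in enumerate(ranking):
            scores[alt] += m - 1 - pos  # m-1 points for first place, etc.
    return max(scores, key=scores.get), dict(scores)

# Three hypothetical voters ranking alternatives a, b, c.
votes = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
winner, scores = borda_winner(votes)
# a: 2+2+1 = 5, b: 1+0+2 = 3, c: 0+1+0 = 1, so a wins
```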

Marina Meila, University of Washington, USA

- Topic
- Discovering consensus and structure in preferences
- Abstract
How do we do “statistics as usual” when data come in the form of permutations, partial rankings, or other objects with rich combinatorial structure? The problem of representing preferences is compounded by the fact that many problems of interest in the area of consensus finding are known to be NP-hard.

I will start with the simple “consensus ranking” problem and describe how combinatorics, search, and statistics can be used to build interpretable models and algorithms that are efficient in the presence of consensus.

At the center of the results is counting the inversions in a permutation. This leads to a natural mathematical way to represent a permutation, which is key both to fast computing and to better understanding the models we create. As a result, we obtain algorithms that can effectively model preferences over a very large number of alternatives, handle common forms of missing data in a principled way, or simultaneously discover the consensus and the structure of the preferences expressed.
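The inversion count mentioned above is also the Kendall tau distance of a permutation to the identity ranking, and it can be computed in O(n log n) with a merge-sort-style pass rather than the naive O(n^2) pairwise check (a standard textbook sketch, not code from the talk):

```python
# Count inversions of a permutation: pairs (i, j) with i < j but
# perm[i] > perm[j]. Uses a merge-sort recursion for O(n log n) time.

def count_inversions(perm):
    if len(perm) <= 1:
        return 0, list(perm)
    mid = len(perm) // 2
    left_inv, left = count_inversions(perm[:mid])
    right_inv, right = count_inversions(perm[mid:])
    merged, inv = [], left_inv + right_inv
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            # every element remaining in `left` is inverted with right[j]
            inv += len(left) - i
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return inv, merged

inv, _ = count_inversions([3, 1, 2, 4])  # inverted pairs: (3,1), (3,2)
```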

Patrice Perny, UPMC, France

- Topic
- Incremental Elicitation for Decision Making on Combinatorial Domains
- Abstract
Preference elicitation on combinatorial domains is a challenging issue for supporting individual or collective decision making. In combinatorial optimization problems, systematic elicitation methods aiming at fully specifying a preference model are not feasible due to the large number of solutions. Moreover, very often the optimal solution can be identified without knowing the model precisely. This explains the interest in incremental elicitation procedures, in which preference queries are selected iteratively, to be as informative as possible at every step, so as to progressively reduce the set of admissible parameters of the preference model until an optimal solution can be determined. This approach, which has been successfully applied to decision making on explicit sets in various contexts, is harder to implement on implicit sets.

In this presentation we develop incremental search methods for decision making on implicit sets. We introduce a new incremental elicitation approach interweaving preference elicitation and search to determine the optimal solution within an implicitly defined combinatorial set. This approach is implemented in different contexts, such as multiobjective optimization models (state-space search, spanning tree problems), to elicit the weights of an aggregation function until the optimal solution can be identified, or in collective decision making (voting rules for knapsack problems), to elicit individual utilities so as to identify the set of necessary winners. The efficiency of this approach will be discussed in light of the results we obtained on various numerical tests. This presentation is based on recent joint work with Nawal Benabbou.
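The core loop of incremental elicitation can be illustrated on a tiny explicit example (an assumed toy setup, not the authors' algorithm): utilities are weighted sums with unknown weights, and each answered pairwise query prunes the set of admissible weight vectors until a single solution is optimal for all of them.

```python
# Toy incremental elicitation sketch. Solutions, features, and the
# discretized weight set are all illustrative assumptions.

solutions = {"p": (0.9, 0.2), "q": (0.5, 0.6), "r": (0.1, 0.9)}
# discretized set of admissible weight vectors with w1 + w2 = 1
admissible = [(i / 10, 1 - i / 10) for i in range(11)]
true_w = (0.7, 0.3)  # simulated decision maker, unknown to the elicitor

def value(x, w):
    return sum(wi * xi for wi, xi in zip(w, x))

def best(w):
    return max(solutions, key=lambda s: value(solutions[s], w))

while len({best(w) for w in admissible}) > 1:
    # query a pair of solutions that are each optimal for some admissible w
    a, b = sorted({best(w) for w in admissible})[:2]
    answer = value(solutions[a], true_w) >= value(solutions[b], true_w)
    # keep only the weight vectors consistent with the answer
    admissible = [w for w in admissible
                  if (value(solutions[a], w) >= value(solutions[b], w)) == answer]

optimal = best(admissible[0])  # optimal for every remaining admissible w
```

Each query cuts the admissible set by a linear constraint, so an optimal solution is found long before the weights are pinned down exactly, which is the point of the incremental approach.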