Reference: Glass, A. Explaining Preference Learning. Technical Report. 2007.
Abstract: Although there is existing work on learning user preferences in various systems, the outputs of such systems tend to confuse users. Even when the system is correct, users view its outcomes as somehow "magical": they are unable to understand why a particular answer is correct, or whether the system is likely to be helpful in the future. We describe the augmentation of a preference learner to provide meaningful feedback to the user in the form of explanations. We also show how these explanations can be incorporated into the larger system to provide transparency into the learning process, enabling trust between user and system.
Full paper available as PDF.