Fair Algorithmic Ranking
hendrikboom writes:
Researchers at Cornell and the Technische Universität Berlin have studied the problem that more popular items get priority in search results, creating a positive feedback loop that unfairly deprecates other, equally valuable items.
Rankings are the primary interface through which many online platforms match users to items (e.g. news, products, music, video). In these two-sided markets, not only do the users draw utility from the rankings, but the rankings also determine the utility (e.g. exposure, revenue) for the item providers (e.g. publishers, sellers, artists, studios). It has already been noted that myopically optimizing utility to the users - as done by virtually all learning-to-rank algorithms - can be unfair to the item providers. We, therefore, present a learning-to-rank approach for explicitly enforcing merit-based fairness guarantees to groups of items (e.g. articles by the same publisher, tracks by the same artist). In particular, we propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data. The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility, dynamically adapting both as more data becomes available. In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
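For readers curious what such a controller might look like, here is a minimal, illustrative Python sketch of the general idea (not the authors' actual implementation): keep running totals of exposure and estimated merit per group, and boost items from groups whose accumulated exposure lags behind what their accumulated merit would warrant. The class name, the lambda_ trade-off parameter, the 1/rank position-bias model, and the use of fixed relevance scores (the paper instead estimates relevance from implicit feedback with unbiased estimators) are all simplifying assumptions for illustration.

    # Illustrative sketch only, not the paper's exact algorithm.
    from collections import defaultdict

    class FairnessController:
        def __init__(self, lambda_=0.1):
            self.lambda_ = lambda_              # trade-off between utility and fairness pressure
            self.exposure = defaultdict(float)  # accumulated exposure per group
            self.merit = defaultdict(float)     # accumulated estimated merit per group

        def rank(self, items, relevance):
            """items: list of (item_id, group); relevance: dict item_id -> estimated relevance."""
            def disparity(group):
                # How far this group's exposure-to-merit ratio lags behind the best group.
                ratios = {g: self.exposure[g] / max(self.merit[g], 1e-9) for _, g in items}
                own = self.exposure[group] / max(self.merit[group], 1e-9)
                return max(r - own for r in ratios.values())  # >= 0; larger means under-exposed

            # Score = estimated relevance + lambda * accumulated disparity of the item's group.
            scored = sorted(items,
                            key=lambda it: relevance[it[0]] + self.lambda_ * disparity(it[1]),
                            reverse=True)
            return [item_id for item_id, _ in scored]

        def update(self, ranking, groups, relevance, position_bias=None):
            """After showing a ranking, accumulate exposure and merit per group."""
            if position_bias is None:
                position_bias = [1.0 / (k + 1) for k in range(len(ranking))]  # assumed 1/rank model
            for k, item_id in enumerate(ranking):
                g = groups[item_id]
                self.exposure[g] += position_bias[k]
                self.merit[g] += relevance[item_id]

    # Usage: repeatedly rank and update; under-exposed groups gradually get boosted.
    ctrl = FairnessController(lambda_=0.5)
    items = [("a1", "pub_A"), ("a2", "pub_A"), ("b1", "pub_B")]
    groups = dict(items)
    rel = {"a1": 0.9, "a2": 0.8, "b1": 0.7}
    for _ in range(100):
        ranking = ctrl.rank(items, rel)
        ctrl.update(ranking, groups, rel)

In this toy setup, publisher B's article would eventually be ranked above the slightly more relevant articles from publisher A often enough that exposure stays roughly proportional to merit, which is the amortized notion of fairness the abstract describes.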
Journal Reference:
Marco Morik, Ashudeep Singh, Jessica Hong, and Thorsten Joachims. 2020. Controlling Fairness and Bias in Dynamic Learning-to-Rank. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '20), July 25-30, 2020, Virtual Event, China. ACM, New York, NY, USA. DOI: https://doi.org/10.1145/3397271.3401100
Maybe this, if deployed widely, can help reduce the tendencies for discourse to develop isolated silos.