What the research is:
Recommender systems have come to influence nearly every aspect of human activity on the internet, whether in the news we read, the products we purchase, or the entertainment we consume. The algorithms and models at the heart of these systems rely on learning our preferences over the course of our interactions with them: when we watch a video or like a post on Facebook, we provide hints to the system about our preferences.
This repeated interplay between people and algorithms creates a feedback loop that yields recommendations increasingly customized to our tastes. Ideally, this loop is virtuous: the recommender system infers exactly what our preferences are and provides recommendations that enhance the quality of our lives.
However, what happens when the system overindexes on and amplifies interactions that do not capture the user's true preferences? Or when the user's preferences drift toward recommended items that are harmful or detrimental to their long-term well-being? Under what conditions would recommender systems respond to these changes and amplify preferences, leading to a higher prevalence of harmful recommendations?
How it works:
In this paper, we provide a theoretical framework to answer these questions. We model the interactions between users and recommender systems and explore how these interactions may lead to harmful outcomes. Our main assumption is that users have a slight inclination to reinforce their opinions (a drift): they increase their preference for recommendations that match their tastes and decrease it otherwise. We characterize the temporal evolution of the user's preferences as a function of the user, the recommender system, and time, and ask whether this function admits a fixed point, i.e., a state in which the system's recommendations no longer change the user's preferences. We show that even under a mild drift, and absent any external intervention, no such fixed point exists. That is, even a slight preference by a user for recommendations in a given category can lead to increasingly higher concentrations of item recommendations from that category. We refer to this phenomenon as preference amplification.
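To see why no fixed point exists under drift, consider a deliberately minimal one-dimensional sketch (the drift rate and update rule here are illustrative assumptions, not the paper's formulation): the user's affinity for a category is nudged each round in proportion to its current value, so any nonzero initial preference compounds.

```python
# Minimal sketch of the drift dynamic: s is the user's affinity for a
# category, nudged each round in proportion to its current value.
eta = 0.05   # drift rate (assumed small, matching the "mild drift" setting)
s = 0.1      # slight initial preference for the category

for t in range(100):
    s += eta * s          # s_{t+1} = (1 + eta) * s_t

print(round(s, 2))        # ~13.15: in this sketch the only fixed point is
                          # s = 0, so even a mild drift amplifies any
                          # nonzero preference
```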
Recommender system model
We leverage the well-adopted collaborative filtering model of recommendation systems: each (user, item) pair receives a score reflecting the likelihood that the user will be interested in the item. These scores are computed using low-dimensional matrix factorization. We use a stochastic recommendation model, in which the set of items presented to a user is chosen probabilistically according to the items' scores (rather than by deterministically sorting by score). The level of stochasticity in the system is determined by a parameter 𝛽: the higher the 𝛽, the lower the stochasticity, and the more heavily the recommendation distribution concentrates on the top-scoring items. Finally, we consider the content available for recommendation to be either benign or problematic, and use 𝛼 to denote the prevalence of the latter, i.e., the percentage of problematic content out of all content.
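One common way to realize such a 𝛽-parameterized stochastic policy is softmax sampling over the factorization scores; the sketch below assumes that form (the paper's exact sampling rule may differ).

```python
import numpy as np

def recommend(scores, beta, k, rng):
    """Sample k items with probability proportional to exp(beta * score).

    beta -> 0 yields near-uniform recommendations; large beta approaches a
    deterministic top-k policy, concentrating mass on the top-scoring items.
    """
    logits = beta * scores
    probs = np.exp(logits - logits.max())  # subtract max for stability
    probs /= probs.sum()
    return rng.choice(len(scores), size=k, replace=False, p=probs)

rng = np.random.default_rng(0)
scores = np.array([2.0, 1.0, 0.5, 0.1])           # e.g., user-item dot products
print(recommend(scores, beta=5.0, k=2, rng=rng))  # almost always items 0 and 1
```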
Our model also captures the temporal interactions between the user and the recommender system: in each iteration, the user is presented with a set of items and signals their interests to the recommender system. These interests drift slightly based on the recommended items, with the magnitude of the drift parameterized by the score each item receives.
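In the matrix-factorization setting, a single drift step might look like the following minimal sketch (the update rule and learning rate eta are assumptions for illustration, not the paper's exact parameterization).

```python
import numpy as np

def drift_step(user_vec, item_vec, eta=0.01):
    """Nudge the user's preference vector after an interaction with an item.

    The step is scaled by the item's current score (the dot product), so
    well-matched items pull the user further toward them and poorly matched
    ones push the user away, as described above.
    """
    score = user_vec @ item_vec
    return user_vec + eta * score * item_vec
```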
The figure below illustrates our temporal drift model. The recommender system initially recommends a diverse set of items to the user, who in turn interacts with those items they prefer. The recommender system picks up this signal, and recommends a less diverse set of items (depicted as only green and blue items) that matches the perceived preferences of the user. The user then drifts further towards a very specific set of items (depicted as the items in blue) that the recommender system suggested. This causes the recommender system to only suggest items from that specific class (blue items).
Simulations
To study the parameter space in which the system reinforces recommendation scores, we run simulations on both synthetic and real data sets. We show that the system reinforces item scores based on the user's initial preferences: items similar to those the user initially liked have an increasing likelihood of being recommended over time, while those the user did not initially favor have a decreasing probability of recommendation.
In the figure above on the left, we can see the effect of preference amplification. Solid lines (the top group) indicate likable items, whose probability of receiving a positive reaction from the user is above 0.5; dashed lines (the bottom group) indicate items with a low probability of a positive reaction. As the figure shows, the probability of liking an item increases toward 1 if its score is positive and decreases toward 0 otherwise. For higher values of 𝛽 (i.e., lower stochasticity), the stochastic recommender system acts like a Top-N recommender and is therefore more likely to present users with items they already liked, leading to stronger reinforcement of their preferences. The right-side plot in the figure above shows another outcome of preference amplification: the probability of the user liking an item from the top 5% of items recommended to them increases significantly over time. This amplification effect is especially evident for high values of 𝛽, where the stochasticity of the system is low and the recommender system chooses items that are very likely to be preferred by the user.
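Putting the model components together, a minimal synthetic simulation in the spirit of the above might look like this (the dimensions, rates, and round counts are illustrative, not the settings used in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items, k, beta, eta, rounds = 8, 200, 10, 5.0, 0.01, 50

items = rng.normal(size=(n_items, d)) / np.sqrt(d)  # item embeddings
user = rng.normal(size=d) / np.sqrt(d)              # initial user preferences
init_scores = items @ user

for _ in range(rounds):
    scores = items @ user
    logits = beta * scores                           # stochastic recommender:
    probs = np.exp(logits - logits.max())            # softmax over scores
    probs /= probs.sum()
    shown = rng.choice(n_items, size=k, replace=False, p=probs)
    for i in shown:                                  # user drifts toward
        user = user + eta * scores[i] * items[i]     # well-matched items

final_scores = items @ user
top = np.argsort(init_scores)[-5:]                   # initially favored items
print(init_scores[top].round(2), final_scores[top].round(2))
# Scores of initially preferred items grow over rounds, while disfavored
# items fade: a toy illustration of preference amplification.
```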
Mitigations
Finally, we discuss two strategies for mitigating the effects of preference amplification of problematic entities: (a) at the global level and (b) at the personal level. In the former, the strategy is to remove these entities globally in order to reduce their overall prevalence; in the latter, the system targets specific users and applies interventions aimed at reducing the probability that these entities are recommended to them (a sketch of both strategies appears at the end of this section).
In the figure above, we characterize, via simulation, the effect of a global intervention on problematic content. We plot the probability of recommending a problematic item for different initial prevalences (denoted by 𝛼). The figure shows that despite the low prevalence of problematic content, if the user has some initial affinity for that type of content, the probability of it being recommended to them increases over time.
In the paper, we also describe an experiment we conducted on a real-world, large-scale video recommender system. In the experiment, we downranked videos considered to contain borderline nudity (the platform already filters out videos that violate its community standards) for users who were consistently exposed to such videos at high levels. The results show that in addition to reducing exposure to this content among the affected population, overall engagement went up by 2%. These results are highly encouraging: not only can we reduce exposure to problematic content, we can also have an overall positive effect on the user experience.
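As a rough sketch, the two mitigation strategies discussed above could be applied to the recommendation scores as follows (the removal rate, exposure threshold, and penalty are hypothetical values; the paper evaluates these strategies rather than prescribing this code).

```python
import numpy as np

def global_removal(scores, is_problematic, removal_rate, rng):
    """Global mitigation: remove a fraction of problematic items for all
    users by giving them a -inf score, so they can never be recommended."""
    candidates = np.flatnonzero(is_problematic)
    removed = rng.choice(candidates,
                         size=int(removal_rate * len(candidates)),
                         replace=False)
    out = scores.copy()
    out[removed] = -np.inf
    return out

def personal_downrank(scores, is_problematic, exposure,
                      threshold=0.5, penalty=2.0):
    """Personal mitigation: downrank problematic items only for users whose
    recent exposure to them exceeds a threshold (values are illustrative)."""
    if exposure <= threshold:
        return scores
    out = scores.copy()
    out[is_problematic] -= penalty
    return out
```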
Why it matters:
In this work, we study the interactions between users and recommender systems, and show that for certain user behaviors, their preferences can be amplified by the recommender system. Understanding the long-term impact of ML systems helps us, as practitioners, to build better safeguards and ensure that our models are optimized to serve the best interests of our users.
Read the full paper:
A framework for understanding preference amplification in recommender systems
Learn More:
Watch our presentation at KDD 2021.