Publicly accessible adaptive systems such as collaborative recommender systems present a security problem.
Attackers, who cannot be readily distinguished from ordinary users, may inject biased profiles in an attempt to force a system to “adapt” in a manner advantageous to them. Such attacks may lead to a degradation of user trust in the objectivity and accuracy of the system.
Recent research has begun to examine the vulnerabilities and robustness of different collaborative recommendation techniques in the face of “profile injection” attacks.
In this paper, we outline some of the major issues in building secure recommender systems, concentrating in particular on the modeling of attacks and their impact on various recommendation algorithms.
We introduce several new attack models and perform extensive simulation-based evaluation to show which attack models are most successful against common recommendation techniques.
We consider both the overall impact on the system's ability to make accurate predictions and the degree of knowledge about the system an attacker needs to mount a realistic attack.
Our study shows that both user-based and item-based algorithms are highly vulnerable to specific attack models, but that hybrid algorithms may provide a higher degree of robustness.
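To make the "profile injection" idea concrete, here is a minimal sketch of one common attack shape from this literature, an "average"-style push attack: the injected profile rates filler items near their observed means (so it blends in with genuine users) and gives the target item the maximum rating. The function name and parameters are illustrative assumptions, not the paper's exact attack models.

```python
import random

def average_attack_profile(item_means, target_item, filler_items, r_max=5.0):
    """Build one biased 'push' profile (illustrative sketch).

    item_means   -- dict mapping item id -> mean rating observed in the system
    target_item  -- the item the attacker wants promoted
    filler_items -- items rated near their means to disguise the profile
    """
    profile = {}
    for item in filler_items:
        # Rate fillers close to the item average, with a little noise,
        # clamped to the rating scale, so the profile looks ordinary.
        noisy = item_means[item] + random.gauss(0, 0.5)
        profile[item] = max(1.0, min(r_max, noisy))
    # The pushed item always gets the maximum rating.
    profile[target_item] = r_max
    return profile
```

Injecting many such profiles biases neighborhood-based predictions toward the target item, which is why the filler ratings matter: they determine how similar the fake profile looks to real users.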
Finally, we develop a novel classification-based framework for detecting attack profiles and show that it can be effective in neutralizing some attack types.
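A classification-based detector of the kind described above works by computing per-profile attributes and feeding them to a trained classifier. As a hedged sketch (this particular feature and its name are assumptions, not necessarily the paper's attributes), one simple attribute is a profile's average absolute deviation from item means, since attack profiles built around item averages tend to score differently from genuine users:

```python
def mean_deviation_feature(profile, item_means):
    """Average absolute deviation of a profile's ratings from item means.

    profile    -- dict mapping item id -> this user's rating
    item_means -- dict mapping item id -> mean rating in the system

    A real detection framework would combine several such attributes
    and pass them to a classifier; this is one illustrative feature.
    """
    if not profile:
        return 0.0
    total_dev = sum(abs(r - item_means[i]) for i, r in profile.items())
    return total_dev / len(profile)
```

Profiles flagged by such a classifier can then be excluded from neighborhood formation, which is one way an attack is "neutralized" without removing legitimate users.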
It's nice to know that MemeStreams has been using a robust approach for years now.