
Thwarting Passive Privacy Attacks in Collaborative Filtering

Rui Chen (HKBU, Hong Kong), Min Xie (UBC, Canada), Laks V.S. Lakshmanan (UBC, Canada)



Introduction

Recently, collaborative filtering has been increasingly deployed in a wide range of applications. As a standard practice, many collaborative filtering systems, such as eBay, Amazon, and Netflix, release related-item lists (RILs) as a means of engaging users.

The release of RILs brings substantial privacy risks with respect to a fairly simple attack model, known as a passive privacy attack. In a passive privacy attack, an adversary possesses some background knowledge in the form of a set of items rated by a target user, and seeks to infer from the public RIL releases published by the recommender system whether some other item, called a target item, has been rated/bought by that user.

Fig. 1. A sample user-item rating matrix and its public RILs

Example. Consider a recommender system with the user-item rating matrix in Fig. 1. Suppose at time T1 an attacker knows that Alice (user 5) has bought items i2, i3, i7 and i8, and intends to learn whether Alice has bought a sensitive item i6. The adversary then monitors the temporal changes of the public RILs of i2, i3, i7 and i8. Let the new ratings during (T1, T2] be the shaded ones. At T2, by comparing the RILs with those at T1, the attacker observes that i6 appears or moves up in the RILs of i2, i3, i7 and i8, and consequently infers that Alice has bought i6.

Attack Model and Privacy Model

Attack model. We propose the concept of an attack window to model a real-world adversary: an adversary performs passive privacy attacks by comparing any two RIL releases within his attack window.

Fig. 2. An illustration of attack windows

Privacy model. We propose a novel inference-proof privacy notion, known as δ-bound, tailored for passive privacy attacks in collaborative filtering. Let Tran(u) denote the transaction of user u, i.e., the set of items bought by u.

Definition (δ-bound). Let B be the background knowledge on user u in the form of a subset of items drawn from Tran(u), i.e., B ⊂ Tran(u). A recommender system satisfies δ-bound with respect to a given attack window W if, by comparing any two RIL releases R1 and R2 within W, Pr(i ∈ Tran(u) | B, R1, R2) ≤ δ, where i ∈ (I − B) is any item that either appears afresh or moves up in R2, and 0 ≤ δ ≤ 1 is the given privacy requirement.
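To make the inference concrete, the following is a minimal sketch of the passive attack, assuming RILs are stored as ranked Python lists (lower index = higher rank). The helper names and the RIL snapshots are hypothetical illustrations, not the paper's implementation or data.

```python
def appears_or_moves_up(ril_t1, ril_t2, target):
    """True if `target` appears afresh in ril_t2 or ranks strictly
    higher than it did in ril_t1 (lower index = higher rank)."""
    if target not in ril_t2:
        return False
    if target not in ril_t1:
        return True  # appears afresh
    return ril_t2.index(target) < ril_t1.index(target)  # moved up

def passive_attack(rils_t1, rils_t2, background, target):
    """The adversary concludes the user bought `target` when it appears
    afresh or moves up in the RIL of every background item in B."""
    return all(appears_or_moves_up(rils_t1[b], rils_t2[b], target)
               for b in background)

# Hypothetical snapshots mimicking the Fig. 1 example: i6 appears or moves
# up in every RIL of B = {i2, i3, i7, i8}, so Alice is inferred to have
# bought i6. δ-bound requires such an inference to succeed with
# probability at most δ.
rils_t1 = {"i2": ["i1", "i3"], "i3": ["i2", "i8"],
           "i7": ["i8", "i2"], "i8": ["i7", "i3"]}
rils_t2 = {"i2": ["i6", "i1"], "i3": ["i6", "i2"],
           "i7": ["i6", "i8"], "i8": ["i6", "i7"]}
print(passive_attack(rils_t1, rils_t2, {"i2", "i3", "i7", "i8"}, "i6"))  # True
```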
Anonymization Algorithm

We anonymize the RILs directly because, in real-life recommender systems, an adversary does not have access to the underlying rating matrix; this choice is critical to both data utility and scalability. Our solution employs two anonymization mechanisms: suppression, a popular mechanism in privacy-preserving data publishing, and permutation, a novel mechanism tailored to our problem, which moves an item that has risen in an RIL back to a position equal to or lower than its original position.

1. Identify potential privacy threats. For each item i, identify the set of items whose successive RILs at times T1 and T2 are distinguished by i. Label them with either suppress or permute, and record the permute position.
2. Determine anonymization locations. Identify the itemsets that can be used to infer some target item with probability > δ, then calculate a set of anonymization locations by modeling the process as a weighted minimum hitting set (WMHS) problem, so as to minimize the resultant utility loss (see the first sketch below).
3. Perform anonymization operations. First suppress all items labeled suppress, then permute the items labeled permute without generating new privacy threats (see the second sketch below). If no permutation satisfies the privacy requirement, suppression is used instead.
4. Extend to multiple releases. Secure a new RIL release with respect to all previous |W| − 1 RIL releases, where |W| is the size of the attack window.
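Weighted minimum hitting set is NP-hard, so a natural way to realize step 2 is the standard greedy approximation. The sketch below is our illustration under assumed inputs (positive weights, threat sets as plain Python sets), not the authors' implementation.

```python
def greedy_wmhs(threat_sets, utility_loss):
    """Greedy approximation for weighted minimum hitting set.

    threat_sets: iterable of itemsets, each capable of inferring some
        target item with probability > δ; every set must be "hit" by at
        least one chosen anonymization location.
    utility_loss: dict mapping each candidate location to the (positive)
        utility lost by suppressing or permuting it.
    Returns a set of anonymization locations covering all threat sets.
    """
    uncovered = [set(s) for s in threat_sets]
    chosen = set()
    while uncovered:
        candidates = set().union(*uncovered)
        # Pick the location with the least utility loss per threat set hit.
        best = min(candidates,
                   key=lambda c: utility_loss[c] / sum(c in s for s in uncovered))
        chosen.add(best)
        uncovered = [s for s in uncovered if best not in s]
    return chosen
```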
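The two mechanisms of step 3 are, at bottom, simple list edits. Here is a hedged sketch (helper names are ours; the actual algorithm additionally verifies that a permutation introduces no new privacy threats before applying it):

```python
def suppress(ril, item):
    """Suppression: remove `item` from the published RIL altogether."""
    return [x for x in ril if x != item]

def permute_down(ril_t1, ril_t2, item):
    """Permutation: place `item` in the new RIL at a position equal to or
    lower than its position in the previous release, so it no longer looks
    as if it has moved up. Items appearing afresh cannot be repaired this
    way and fall back to suppression."""
    old_pos = ril_t1.index(item)           # position in the earlier release
    ril = [x for x in ril_t2 if x != item]
    ril.insert(min(old_pos, len(ril)), item)
    return ril
```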
Experimental Evaluation

Fig. 3. Utility results on: MovieLens (a)–(d); Flixster (e)–(h)
Fig. 4. Attack success probability results on: MovieLens (a)–(d); Flixster (e)–(h)
Fig. 5. Efficiency results on: MovieLens (a)–(d); Flixster (e)–(h)

Conclusion

Our work is the first remedy for passive privacy attacks in collaborative filtering. In this paper, we proposed the δ-bound model for thwarting passive privacy attacks and developed anonymization algorithms that achieve δ-bound by means of a novel anonymization mechanism called permutation.