
Personalized Privacy Protection in Social Networks

Presentation Transcript


  1. Personalized Privacy Protection in Social Networks • Speaker: XuBo

  2. Privacy Protection

  3. Privacy Protection in Social Networks • more and more people join multiple social networks on the Web • as a service provider, it is essential to protect users’ privacy and at the same time provide “useful” data

  4. Previous Works • Clustering-based approaches • Graph editing methods • Drawback • Different users may have different privacy preferences • The same level of protection may not be fair for all users, and the data may become useless

  5. Personalized Privacy Protection in Social Networks • define different levels of protection for users and incorporate them into the same published social network • A user can have a clear estimate of what an attacker can learn about him • The knowledge an attacker uses to uncover a user’s private information is called the background knowledge

  6. background knowledge

  7. A simple example • protection objectives • the probability that an attacker identifies person P as node u in the published graph should be less than 50% • the probability that an attacker concludes that persons P1 and P2 have a connection should be less than 50%
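
As a rough illustration of how these two bounds play out, the sketch below computes the identification probability as one over the size of the candidate group and the link probability as the fraction of candidate node pairs that are actual edges. The function names and the candidate-set model are illustrative assumptions, not the paper's code.

```python
# Illustrative sketch of the two protection objectives; the candidate-set
# model and names are assumptions, not the paper's implementation.

def identification_prob(candidates):
    """Probability that the attacker pins person P to one exact node,
    given the set of nodes matching P's (generalized) description."""
    return 1.0 / len(candidates)

def link_prob(candidates_p1, candidates_p2, edges):
    """Probability that the attacker concludes P1 and P2 are connected:
    the fraction of candidate node pairs that are actually edges."""
    pairs = [(u, v) for u in candidates_p1 for v in candidates_p2 if u != v]
    hits = sum(1 for u, v in pairs if (u, v) in edges or (v, u) in edges)
    return hits / len(pairs)

# With 3 indistinguishable candidates the identification probability is 1/3 < 50%.
print(identification_prob({"u1", "u2", "u3"}))

# 1 of the 4 candidate pairs is an actual edge -> link probability 0.25 < 50%.
print(link_prob({"u1", "u2"}, {"v1", "v2"}, {("u1", "v1")}))
```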

  8. Personalized Protection

  9. Method • For Level 1 protection, we use node label generalization • For Level 2 protection, we add noise nodes/edges on top of the Level 1 protection • For Level 3 protection, we further use edge label generalization to achieve the protection objectives

  10. Abstract • PROBLEM DEFINITION • LEVEL 1 PROTECTION • LEVEL 2 PROTECTION • LEVEL 3 PROTECTION • EXPERIMENTS

  11. PROBLEM DEFINITION • Given a labeled graph G(V,E) and a constant k

  12. NP-hard problem • Here the cost stands for the sum of label generalization levels. For example, in Figure 1(d), if we make the labels in the group {a,d} the same, they become [(Africa, American), 2*], so the node label generalization cost of this group is 4.

  13. LEVEL 1 PROTECTION • Level 1’s background knowledge is the node label list • generalization groups nodes and makes the nodes within each group share one generalized node label list
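
A minimal sketch of this generalization step, under one assumption about how levels are counted: each label position that must be generalized adds one level per node to the cost, which reproduces the cost of 4 from the slide-12 example. The concrete label values and hierarchy are illustrative only.

```python
# Sketch of node label list generalization for one group. Cost assumption:
# each generalized label position contributes one level per node, which
# matches the cost of 4 for the {a, d} example on slide 12.

def generalize_group(label_lists, generalize_fn):
    """Return one generalized label list covering all nodes in the group,
    plus the total node label generalization cost."""
    generalized, cost = [], 0
    for position in zip(*label_lists):        # same label position across nodes
        if len(set(position)) == 1:           # already identical, costs nothing
            generalized.append(position[0])
        else:
            generalized.append(generalize_fn(position))
            cost += len(position)             # one generalization level per node
    return generalized, cost

def generalize_fn(values):
    # Hypothetical hierarchy: ages in the twenties -> "2*", countries -> a set.
    return "2*" if all(v.startswith("2") for v in values) else tuple(sorted(set(values)))

# Group {a, d} with illustrative label lists a = (Africa, 21), d = (American, 25):
labels, cost = generalize_group([("Africa", "21"), ("American", "25")], generalize_fn)
print(labels, cost)   # [('Africa', 'American'), '2*'] 4
```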

  14. Achieve Level 1 protection • if each group’s size is at least k, the first objective is achieved • To satisfy objectives (2) and (3), two further conditions must hold: the first condition requires that no two nodes in the same group connect to the same node; the second condition requires that there are no edges within a group

  15. The Safety Grouping Condition (SGC)
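
A compact check of the SGC as stated on the previous slide: the group has at least k members, no two members share a neighbour, and there is no edge inside the group. The adjacency-set representation and function name are illustrative.

```python
# Sketch of the Safety Grouping Condition (SGC) check from slide 14; the
# graph representation (adjacency sets) is an illustrative assumption.

def satisfies_sgc(group, adj, k):
    if len(group) < k:
        return False
    members = set(group)
    for u in group:
        if adj[u] & members:          # an edge inside the group
            return False
    for i, u in enumerate(group):
        for v in group[i + 1:]:
            if adj[u] & adj[v]:       # two members share a neighbour
                return False
    return True

adj = {"a": {"x"}, "d": {"y"}, "x": {"a"}, "y": {"d"}}
print(satisfies_sgc(["a", "d"], adj, k=2))   # True: disjoint neighbourhoods, no inner edge
```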

  16. Algorithm 1: Generate safe groups
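
The slide only names Algorithm 1, so the sketch below is one plausible greedy reading rather than the paper's pseudocode: seed a group with an ungrouped node and keep adding nodes that do not violate the SGC until the group reaches size k; the cost-based candidate ordering is omitted.

```python
# One plausible greedy reading of "generate safe groups"; all names are
# illustrative, and cost-aware candidate ordering is omitted for brevity.

def violates_sgc(cand, group, adj):
    # cand connects to a group member, or shares a neighbour with one.
    return any(cand in adj[u] or adj[cand] & adj[u] for u in group)

def generate_safe_groups(nodes, adj, k):
    ungrouped, groups = set(nodes), []
    while ungrouped:
        seed = ungrouped.pop()
        group = [seed]
        for cand in list(ungrouped):
            if len(group) == k:
                break
            if not violates_sgc(cand, group, adj):
                group.append(cand)
                ungrouped.remove(cand)
        groups.append(group)      # undersized leftover groups would need
    return groups                 # special handling (e.g. merging)

adj = {"a": {"x"}, "b": {"x"}, "c": {"y"}, "d": {"z"},
       "x": {"a", "b"}, "y": {"c"}, "z": {"d"}}
# a and b share neighbour x, so they never end up in the same group;
# e.g. [['a', 'c'], ['b', 'd']] (exact order depends on set iteration).
print(generate_safe_groups(["a", "b", "c", "d"], adj, k=2))
```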

  17. LEVEL 2 PROTECTION • Adding nodes/edges into GL1 • Label Assignment

  18. Construct a k-degree anonymous graph • The first step is to set a target degree for each node • The second step is to randomly create edges, under SGC, between the nodes that need to increase their degrees • We then add noise nodes so that all the nodes in GL1 reach their target degrees, connecting them to the noise nodes under SGC • Finally, we hide the noise nodes by setting their degrees to a preselected target degree; the target degree is the degree shared by the maximum number of size-k groups in GL1
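
A simplified sketch of this construction (not the paper's Algorithm 2): assign target degrees, link pairs of deficient nodes first, and let noise nodes absorb whatever deficit remains. The SGC checks and the frequent-degree heuristic for hiding noise nodes are elided.

```python
# Simplified reading of the construction on slide 18; SGC checks and the
# target-degree heuristic for the noise nodes themselves are elided.

import itertools

def degree_anonymize(adj, target_degree):
    """adj: node -> set of neighbours; target_degree: node -> desired degree."""
    deficit = {u: target_degree[u] - len(adj[u]) for u in adj}

    # Step 1: create edges between pairs of nodes that both still need edges.
    for u, v in itertools.combinations(list(adj), 2):
        if deficit[u] > 0 and deficit[v] > 0 and v not in adj[u]:
            adj[u].add(v); adj[v].add(u)
            deficit[u] -= 1; deficit[v] -= 1

    # Step 2: attach noise nodes to absorb the leftovers; in the paper the noise
    # nodes' own degrees are then raised to a common, frequently occurring value.
    noise_id = 0
    for u, d in deficit.items():
        for _ in range(d):
            noise = f"noise{noise_id}"; noise_id += 1
            adj.setdefault(noise, set()).add(u)
            adj[u].add(noise)
    return adj

adj = {"a": {"b"}, "b": {"a"}, "c": set()}
print(degree_anonymize(adj, {"a": 3, "b": 2, "c": 1}))
```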

  19. Algorithm 2: Degree Anonymizing Algorithm with X

  20. A running example of Algorithm 2

  21. Label Assignment • Nocon(A1, A2, l) • a node/edge label assignment with fewer changes to Nocon(A1, A2, l) is preferred • heuristic method • first assign labels to the noise edges • then decide the labels of the noise nodes
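
A sketch of the heuristic under the assumption that Nocon(A1, A2, l) counts the edges labelled l whose endpoints carry labels A1 and A2; each noise edge then takes the edge label that this count already favours for its endpoint pair, so the label statistics are disturbed as little as possible. All names are illustrative.

```python
# Sketch of a Nocon-preserving label assignment; the Nocon semantics and all
# names are assumptions made for illustration.

from collections import Counter

def nocon(edges, node_label):
    """edges: iterable of (u, v, edge_label); returns Counter of (A1, A2, l)."""
    stats = Counter()
    for u, v, l in edges:
        a1, a2 = sorted((node_label[u], node_label[v]))
        stats[(a1, a2, l)] += 1
    return stats

def assign_noise_edge_label(u, v, node_label, stats, edge_labels):
    """Pick the edge label most frequent for this pair of endpoint labels."""
    a1, a2 = sorted((node_label[u], node_label[v]))
    return max(edge_labels, key=lambda l: stats[(a1, a2, l)])

node_label = {"a": "Africa", "b": "Africa", "c": "America", "n1": "Africa"}
edges = [("a", "b", "frequent"), ("a", "c", "seldom")]
stats = nocon(edges, node_label)
print(assign_noise_edge_label("b", "n1", node_label, stats,
                              ["seldom", "normal", "frequent"]))
# -> "frequent": Africa-Africa pairs are already dominated by that label
```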

  22. Assign labels to the noise edges

  23. Decide the labels of noise nodes

  24. LEVEL 3 PROTECTION • degree label sequence • if a group contains at least one node that needs Level 3 protection, change the degree label sequences of all the nodes in it to be the same by generalizing edge labels
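
A sketch under the assumption that a node's degree label sequence is the sorted list of its incident edge labels and that differing positions are generalized to the union of the labels seen there; degrees are assumed to have been equalized already at Level 2.

```python
# Sketch of equalizing degree label sequences inside one group; the sequence
# definition and the union-style generalization are illustrative assumptions.

def degree_label_sequence(node, adj_labels):
    """adj_labels[node] -> list of edge labels on the node's incident edges."""
    return sorted(adj_labels[node])

def generalize_group_sequences(group, adj_labels):
    sequences = [degree_label_sequence(u, adj_labels) for u in group]
    generalized = []
    for position in zip(*sequences):
        labels = set(position)
        generalized.append(position[0] if len(labels) == 1 else labels)
    return generalized

adj_labels = {"a": ["seldom", "frequent"], "d": ["normal", "frequent"]}
print(generalize_group_sequences(["a", "d"], adj_labels))
# -> ['frequent', {'normal', 'seldom'}]  (set print order may vary)
```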

  25. Generate degree label anonymous graph

  26. EXPERIMENTS • Utilities • One-hop query (1 hop): (l1, le, l2) • Two-hop query (2 hop): (l1, le, l2, le2, l3) • Relative error = |answer on the original graph − answer on the published graph| / answer on the original graph • Degree distribution • EMD (Earth Mover’s Distance) • a good measure of the difference between two distributions
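
A sketch of the two utility measures, assuming their standard definitions: relative error compares a query's answer on the original and the published graph, and the degree-distribution difference is measured with the Earth Mover's Distance (computed here with SciPy's one-dimensional Wasserstein distance).

```python
# Sketch of the experiment's utility measures under their standard definitions.

from scipy.stats import wasserstein_distance

def relative_error(original_answer, published_answer):
    return abs(original_answer - published_answer) / original_answer

def degree_distribution_emd(original_degrees, published_degrees):
    # 1-D Earth Mover's Distance between the two degree samples.
    return wasserstein_distance(original_degrees, published_degrees)

print(relative_error(200, 180))                        # 0.1
print(degree_distribution_emd([1, 2, 2, 3], [1, 2, 3, 4]))
```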

  27. Datasets • Speed Dating Data • 552 nodes and 4194 edges • Each node has 19 labels • The edge label is “match”/“unmatch” • ArXivData • 19835 nodes and 40221 edges • Each node denotes an author • the label “seldom” is assigned to edges with weight 1, “normal” to edges with weight 2–5, and “frequent” to edges with weight larger than 5 • ArXivData with uniform labels
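
The ArXivData edge labelling described above amounts to a simple bucketing of edge weights; the helper name below is illustrative.

```python
# Tiny sketch of the ArXivData edge-label assignment described on slide 27.

def weight_to_label(weight):
    if weight == 1:
        return "seldom"
    if 2 <= weight <= 5:
        return "normal"
    return "frequent"

print([weight_to_label(w) for w in (1, 3, 12)])   # ['seldom', 'normal', 'frequent']
```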

  28. Results • Higher utility • The average relative error increases as k increases

  29. Results(2)

  30. Results(3)
