
Part IV : Liability Chapter 16: Trust and Accountability


Presentation Transcript


  1. Part IV: Liability Chapter 16: Trust and Accountability Sebastian Ries

  2. Agenda • Introduction: Trust And Accountability (1) • Trust (32) • Motivation (3) • Introduction of (Social) Trust for Computer Scientists (3) • Integration of Trust into Applications & State-of-the-Art: Classification (5) • Challenges (1) • Examples for Trust Models (20) • TidalTrust (9) • Subjective Logic (11) • Accountability (11) • Accountability in Computer Science (2) • Reputation Systems (3) • Concept (2) • Classification of Reputation Systems (2) • Example: eBay Feedback Forum (MISSING) • Micropayments Systems (6) • Concept (2) • Classification (1) • Examples (3) • Summary (1) • Conclusion (1)

  3. Trust And Accountability • Trust And Accountability: • Both are well-known concepts of everyday social life • Both can be transferred to ubiquitous computing • Definitions (Merriam-Webster Online Dictionary) • Trust: “assured reliance on the character, ability, strength, or truth of someone or something”, and the “dependence on something future or contingent” • Accountability: “the quality or state of being accountable; especially: an obligation or willingness to accept responsibility or to account for one’s actions”

  4. Trust

  5. Trust • Motivation for Trust: • „Socially based paradigms will play a big role in pervasive computing environments“ (Bhargava et al., 2004) • Trust: Well-founded basis for an engagement in the presence of uncertainty and risk • Trust? Social contacts („real world“) vs. interactions between devices („virtual world“)

  6. Trust • Motivation for Trust in Ubiquitous Computing: • Calm technology • User-centric approach • Characteristics of the UC environment • Unmanaged • Complex • Heterogeneous • Massively networked • => There is a need for: • interaction between loosely coupled devices • a basis for risky engagements

  7. Introduction of Trust • Trust, … • Is a well-known concept of everyday life • Allows simplification in complex situations • Is a basis for delegation and efficient rating of information => trust seems to be a promising approach • Trust is based on / influenced by … • Direct experiences (e.g. from previous interactions) • Indirect experiences • Recommendations • Reputation • Risk • Context

  8. Introduction of Trust • Properties of Trust • Subjective • Asymmetric • Context-dependent • Dynamic • Non-monotonic • Gradual • Not transitive • But: there is the concept of recommendations • Categories of Trust (McKnight & Chervany, 1996): • Interpersonal Trust - between people or groups • Impersonal Trust – arises from a social or organizational situation • Dispositional Trust – general attitude towards the world

  9. Introduction of Trust • Definition • Many different definitions (e.g., see also the introduction) • Example definition which is adopted by some researchers: • “… trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.” (Gambetta, 2000)

  10. Trust in Ubiquitous Computing • Goal • trust as a basis for risky engagements in dynamic, complex environments under uncertainty and risk • Design aspects when building trust-based systems • Trust modeling • Trust management • Decision making

  11. Trust Modeling • Main task: Representation of trust values and computational model • Different approaches regarding the representation of trust • Dimension: one-dimensional, multi-dimensional • Domain: binary, discrete, or continuous values • Semantics: rating, ranking, probability, belief, and fuzzy concept • Different approaches regarding the computation of trust • Considering / not considering recommendations • Other aspects • Aging of evidence • Re-evaluation of trust values • Examples: See TidalTrust and Subjective Logic

  12. Trust Management • Traditional definition by Matt Blaze: • Trust management (TMa) “… is a unified approach to specifying and interpreting security policies, credentials, and relationships that allows direct authorization of security-critical actions.” (Blaze et al., 1998) • Drawbacks of traditional trust management: • Trust establishment is not part of the model • Trust is passed on by credentials; issuing credentials is not part of the TMa • Trust is only treated implicitly • Trust is (often / sometimes) treated as monotonic • Missing evaluation of risk • Message: (Traditional) trust management is access control. Example (from Grandison & Sloman, 2000): PolicyMaker (Blaze et al., 1996) The following policy specifies that any doctor who is not a plastic surgeon should be trusted to give a check-up. Policy ASSERTS doctor_key WHERE filter that allows a check-up if the field is not plastic surgery

  13. Trust Management • More recent trust management focuses on: • Collection of evidence • Evaluation of risk • Including dynamic aspects and levels of trust • Message: Trust management has to manage trust. Example SULTAN (Grandison, 2003): PolicyName : trust ( Tr, Te, As, L ) ← Cs; The semantic interpretation of a statement in the form above is that Tr trusts/distrusts Te to perform As at trust/distrust level L if constraint(s) Cs is true.

  14. Decision Making • Very important, since only with automated decision making can trust-aided ubiquitous computing become a calm technology. • Can be done • With and without user interaction • Based on binary decision criteria (e.g. users with a specific certificate are accepted as trustworthy) • Based on thresholds, depending on uncertainty, risk, …

  15. Challenges • General challenge for UC: Dealing with context • Specific challenges: • Dealing with uncertainty and risk • Accurate long-term behavior • Smoothness • Weighting towards current behavior • Attack resistance • Intuitive representation in user interfaces

  16. Trust Model (1) - TidalTrust • General: • Developed by J. Golbeck (Golbeck, 2005) • Targets semantic web & friend-of-a-friend networks • Static evaluation of trust (no re-evaluation) • Evaluation in the FilmTrust project http://trust.mindswap.org/FilmTrust • Idea of the FilmTrust project • Everyone • Can join the network • Rate movies on a scale from 1 to 10 • Rate friends (people they know who participate in the network) in the sense of “[…] if the person were to have rented a movie to watch, how likely it is that you would want to see that film”

  17. TidalTrust (Trust Network) • Visualization of a trust network: • Alice (A) has three trusted friends (T1, T2, T3) who rated the movie U • If Alice does not know the movie U, the TidalTrust algorithm is used to calculate a rating based on the information in the network.

  18. TidalTrust (Simple Algorithm) • The formula for the calculation of recommended ratings is recursive: • If a node has rated the movie m directly, it returns its rating for m • Else, the node asks its neighbors for recommendations • For a node s in a set of nodes S, the rating r_sm inferred by s for the movie m is defined as shown below • Where intermediate nodes are described by i, t_si describes the trust of s in i, and r_im is the rating of the movie m assigned by i.
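The formula itself is an image in the original slide; what it expresses is the standard TidalTrust weighted average (notation as above, with adj(s) denoting the neighbors of s that return a rating):

$$
r_{sm} = \frac{\sum_{i \in \mathrm{adj}(s)} t_{si} \, r_{im}}{\sum_{i \in \mathrm{adj}(s)} t_{si}}
$$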

  19. TidalTrust (Simple Algorithm) • Example: • Let's assume: • A's trust in T1, T2, and T3 is t_AT1 = 9, t_AT2 = 8, t_AT3 = 1. • The ratings of T1, T2, and T3 for the movie U are r_T1U = 8, r_T2U = 9, r_T3U = 2. • The rating of A about U is calculated as shown below.
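Filling the assumed values into the weighted average (the computed result is not contained in the transcript, so it is worked out here):

$$
r_{AU} = \frac{9 \cdot 8 + 8 \cdot 9 + 1 \cdot 2}{9 + 8 + 1} = \frac{146}{18} \approx 8.1
$$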

  20. Problems with the simple algorithm • Conclusion • In this case the inferred rating is close to the rating of A's most trusted friends. • BUT: If the number of lowly trusted friends of A increases, they can heavily influence the calculated rating • Results by Golbeck indicate that • the most accurate results come from the highest trusted neighbors • accuracy decreases with path length • BUT: the simple algorithm does not account for that • Optimizations: • Define a minimum threshold for trust in recommenders (max) • Define a maximum threshold for path length (maxdepth) • Arguments against a static setup of max • Some nodes may have many neighbors rated 10 • while other nodes may have 6 as the highest rating for any neighbor • Arguments against a static setup of maxdepth • If maxdepth is too small, there might not be any recommender • If maxdepth is too big, accuracy is lost • => Dynamic setup for both parameters

  21. TidalTrust (Advanced Algorithm) • Very similar to the simple algorithm • BUT: Constraints on search depth and selection of recommenders • maxdepth = the minimal depth at which at least one recommender for the movie m is found • max = the largest trust threshold for t_si at which at least one recommender for the movie m can still be reached • Advanced algorithm: • If a node s has rated the movie m directly, it returns its rating r_sm for m • Else, the node asks its neighbors for recommendations • For a node s in a set of nodes S, the rating r_sm inferred by s for the movie m is defined as shown below • Where intermediate nodes are described by i, t_si describes the trust of s in i, and r_im is the rating of the movie m assigned by i. • start indicates the node initiating the request for movie m
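Again, the formula is an image in the original; the advanced algorithm restricts the weighted average to neighbors whose trust value reaches the dynamically determined max threshold (this reconstruction follows Golbeck, 2005):

$$
r_{sm} = \frac{\sum_{i \in \mathrm{adj}(s),\; t_{si} \ge \max} t_{si} \, r_{im}}{\sum_{i \in \mathrm{adj}(s),\; t_{si} \ge \max} t_{si}}
$$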

  22. TidalTrust (Advanced Algorithm) • Example 1: • Let's assume: • A's trust in T1, T2, and T3 is t_AT1 = 9, t_AT2 = 8, t_AT3 = 1. • The ratings of T1, T2, and T3 for the movie U are r_T1U = 8, r_T2U = 9, r_T3U = 2. • => maxdepth = 1, max = 9 • The rating of A about U is calculated as shown below.
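Since max = 9, only T1 (trust value 9) qualifies as a recommender, so the weighted average collapses to T1's rating (the slide's result image is missing; the value below is worked out under that reading of the threshold):

$$
r_{AU} = \frac{9 \cdot 8}{9} = 8
$$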

  23. TidalTrust (Advanced Algorithm) • Example 2: • => parameters: • max = 9 • maxdepth = 2 • The rating of A about U is calculated as:

  24. TidalTrust • Conclusion: • The inferred rating is close to the ratings provided by the most trusted friends • The additional constraints in the advanced algorithm should improve the recommendations • Drawbacks of the model: • It does not deal with uncertainty • It does not update trust values in recommenders • Decision making, i.e. whether a rating of “6“ is good enough or not, is up to the user.

  25. Trust Model (2) - Subjective Logic • Subjective Logic • Developed by Audun Jøsang (basic ideas 1997, Jøsang, 2001) • The concept of atomicity is left out here. • Basic ideas: • Uncertainty as main aspect of an opinion ω = (b,d,u) • Constraint: b + d + u = 1 (b belief, d disbelief, u uncertainty) • Mathematical foundation based on Bayesian probability theory and belief theory • Defines operators for discounting (recommendation) and consensus • Many proposals for how the trust model can be used in applications

  26. Example 2: Subjective Logic • Bayesian probability theory: • Allows one to calculate the posterior probability of binary events based on previously collected evidence. • Beta probability density function (pdf) of a probability variable p (see below) • Examples: beta(p|1,1), beta(p|8,2)
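The density itself appears only as an image in the original slide; the standard Beta pdf it refers to is:

$$
\mathrm{beta}(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)} \, p^{\alpha - 1} (1 - p)^{\beta - 1},
\qquad 0 \le p \le 1,\; \alpha > 0,\; \beta > 0
$$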

  27. Example 2: Subjective Logic • Define the parameters α and β in terms of evidence (see below), where r corresponds to the number of positive pieces of evidence collected, and s corresponds to the number of negative pieces of evidence collected. • The mean value of the distribution then follows as given below.
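The two defining equations are images in the original; in Jøsang's model they are the standard mapping from evidence counts to Beta parameters, together with the resulting expectation:

$$
\alpha = r + 1, \qquad \beta = s + 1
$$

$$
\mathrm{E}(p) = \frac{\alpha}{\alpha + \beta} = \frac{r + 1}{r + s + 2}
$$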

  28. Example 2: Subjective Logic • Belief theory: • An opinion is expressed by the triple (b,d,u) • b expresses the total belief of an observer that a particular state is true • d expresses the total disbelief of an observer that a particular state is true • In contrast to probability theory, in which P(A) + P(not A) = 1, it holds that b + d ≤ 1 • The triple b, d, and u is related by b + d + u = 1 • u expresses the uncertainty

  29. Example 2: Subjective Logic • The mapping between Bayesian probability theory and belief theory is done by the following equations (see below): • Opinion in the belief model: the triple (b, d, u) • Opinion in the Bayesian model: the evidence counts (r, s) • Mapping: • Belief model => Bayesian model • Bayesian model => Belief model
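The mapping equations are only contained in the slide images; the standard mapping (with the atomicity parameter left out, as stated above) is:

$$
b = \frac{r}{r + s + 2}, \qquad d = \frac{s}{r + s + 2}, \qquad u = \frac{2}{r + s + 2}
$$

and, in the opposite direction (for u ≠ 0):

$$
r = \frac{2b}{u}, \qquad s = \frac{2d}{u}
$$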

  30. Example 2: Subjective Logic • Operator for consensus: • Let the opinion of A about x be denoted as ω_x^A = (b_x^A, d_x^A, u_x^A) • Let the opinion of B about x be denoted as ω_x^B = (b_x^B, d_x^B, u_x^B) • The opinion of an agent who has made the observations of both A and B (assuming they are based on independent evidence) can be calculated as shown below.
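The consensus formula is an image in the original; the standard consensus operator from Jøsang (2001), for independent opinions with κ > 0, is:

$$
\kappa = u_x^A + u_x^B - u_x^A u_x^B
$$

$$
b_x^{A,B} = \frac{b_x^A u_x^B + b_x^B u_x^A}{\kappa}, \qquad
d_x^{A,B} = \frac{d_x^A u_x^B + d_x^B u_x^A}{\kappa}, \qquad
u_x^{A,B} = \frac{u_x^A u_x^B}{\kappa}
$$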

  31. Example 2: Subjective Logic • Operator for discounting: • Let the opinion of A about the trustworthiness of B be denoted as ω_B^A = (b_B^A, d_B^A, u_B^A) • Let the opinion of B about x be denoted as ω_x^B = (b_x^B, d_x^B, u_x^B) • The opinion of A about x based on the recommendation of B can be calculated as shown below.
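The discounting formula is likewise only an image; the standard discounting operator is:

$$
b_x^{A:B} = b_B^A \, b_x^B, \qquad
d_x^{A:B} = b_B^A \, d_x^B, \qquad
u_x^{A:B} = d_B^A + u_B^A + b_B^A \, u_x^B
$$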

  32. Example 2: Subjective Logic • Justification for the discounting operator: • The belief of A about x in the recommendation increases with the belief of A in B, and with the belief of B in x. • The disbelief of A about x in the recommendation increases with the belief of A in B, and with the disbelief of B in x. • The uncertainty in the recommendation increases with the uncertainty of A about B, and with the uncertainty of B about x (if A has assigned any belief to B)

  33. Example 2: Subjective Logic • Example (based on the graph shown with the TidalTrust example): • Let's assume the opinions of A about the trustworthiness of T1, T2, and T3 are: • i.e. two very trusted users, and one more or less unknown • The opinions of T1, T2, and T3 about U are: • i.e. two rather good ratings, and one very bad • The discounting of the recommendations of T1, T2, and T3 evaluates to: • i.e. the first two opinions maintain high values for belief; since uncertainty dominates the opinion of A about T3, the last discounted opinion has an even bigger uncertainty component

  34. Example 2: Subjective Logic • Example (cont.): • The consensus between the first two opinions evaluates to: • i.e. the consensus between the two opinions with dominating belief components. The belief in the resulting opinion increases, and the uncertainty decreases. • The consensus between this opinion and the last one: • i.e. consensus between an opinion with dominating belief and one with dominating uncertainty. The opinion with high uncertainty has only little influence on the resulting opinion. • Note: One can also use the mapping to transform the belief representation into the Bayesian representation, apply the consensus operator there, and transform the result back to the belief representation.
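To make the two operators concrete, here is a minimal Python sketch of discounting and consensus as defined above. The concrete opinion values of the slide example exist only as images, so the numbers below are hypothetical placeholders that merely mirror the structure of the example (two trusted recommenders with good ratings, one largely unknown recommender with a bad rating).

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty, with b + d + u = 1

def discount(a_about_b: Opinion, b_about_x: Opinion) -> Opinion:
    """Discounting: A's opinion about x derived via B's recommendation."""
    return Opinion(
        b=a_about_b.b * b_about_x.b,
        d=a_about_b.b * b_about_x.d,
        u=a_about_b.d + a_about_b.u + a_about_b.b * b_about_x.u,
    )

def consensus(o1: Opinion, o2: Opinion) -> Opinion:
    """Consensus of two opinions based on independent evidence."""
    k = o1.u + o2.u - o1.u * o2.u
    return Opinion(
        b=(o1.b * o2.u + o2.b * o1.u) / k,
        d=(o1.d * o2.u + o2.d * o1.u) / k,
        u=(o1.u * o2.u) / k,
    )

# Hypothetical opinions mirroring the structure of the slide example.
A_T1, A_T2, A_T3 = Opinion(0.8, 0.1, 0.1), Opinion(0.7, 0.1, 0.2), Opinion(0.1, 0.1, 0.8)
T1_U, T2_U, T3_U = Opinion(0.7, 0.1, 0.2), Opinion(0.8, 0.1, 0.1), Opinion(0.1, 0.8, 0.1)

rec1, rec2, rec3 = discount(A_T1, T1_U), discount(A_T2, T2_U), discount(A_T3, T3_U)
combined = consensus(consensus(rec1, rec2), rec3)  # rec3 is dominated by uncertainty and barely changes the result
print(combined)
```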

  35. Example 2: Subjective Logic • Conclusion: • As the example shows, the model seems to be intuitive. • The model allows one to express uncertainty about the trustworthiness of someone. • The operators for discounting and consensus are justified separately. • The model allows easy integration of new evidence (re-evaluation) • Drawbacks: • Complex calculation model • It may be difficult for users to set up opinions.

  36. Accountability

  37. Accountability • Goal • Accountability helps to protect the interests of the collective and the usage of its resources • Problem: Individual Rationality vs. Collective Welfare • Freeriding (e.g. in P2P file-sharing) is individually rational behavior • BUT: Freeriding compromises the idea of file-sharing • How to enforce accountability (Dingledine et al., 2000) • Selecting favored users => reputation systems • Restricting access (making users pay) => micropayment systems

  38. Reputation Systems • Basic idea: • Good reputation is desirable • Contribution to the collective welfare leads to a good reputation • Selection: Members of the community grant access to their resources only to members with a good reputation. • Reputation vs. Trust • Very similar idea, but: • Trust: subjective trust value of entity A for any entity B • Reputation: only one system-wide reputation score for each entity • Trust can be built on reputation • Well-known examples: • eBay Feedback Forum • Amazon review scheme

  39. Reputation Systems • Classification of computational reputation models (Schlosser et al., 2006) • Accumulative Systems • calculate the reputation of an agent as the sum of all provided ratings • An example of this is the total score in the eBay feedback forum. • Average Systems • calculate the reputation as the average of the ratings which an agent has received. • This corresponds, for instance, to the percentage of positive ratings in the eBay feedback forum. • Blurred Systems • calculate the reputation of an agent as the weighted sum of all ratings. • The weight depends on the age of a rating, i.e., older ratings receive a lower weight. • OnlyLast Systems • determine the reputation of an agent as the most recent rating. • Although these systems seem to be very simple, simulations have shown that they provide a reasonable level of attack resistance (cf. Tit-for-Tat). • EigenTrust Systems • calculate the reputation of an agent depending on the ratings, as well as on the reputation of the raters. An interesting property of these systems is that each agent calculates the reputation of the other agents locally, based on its own ratings and the weighted ratings of the surrounding agents. • If all agents adhere to the protocol, the locally stored reputation information of all agents will converge. Thus, all agents have the same reputation value for any agent. • Adaptive Systems • calculate the reputation of an agent depending on its current reputation. For example, a single positive rating has a higher impact on the reputation of an agent with a low reputation than on the reputation of an agent with a high reputation. • Beta Systems • calculate the reputation of an agent based on beta distributions, as introduced with the Subjective Logic above, using positive and negative experiences as input.
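A small Python sketch of the simpler aggregation variants (accumulative, average, blurred, OnlyLast) may help to contrast them; the rating history and the decay factor below are made-up illustration values, not taken from the chapter or from Schlosser et al. (2006).

```python
from typing import List

def accumulative(ratings: List[int]) -> float:
    """Sum of all ratings, like the total score in the eBay feedback forum."""
    return float(sum(ratings))

def average(ratings: List[int]) -> float:
    """Mean of all ratings, e.g. corresponding to the share of positive ratings."""
    return sum(ratings) / len(ratings)

def blurred(ratings: List[int], decay: float = 0.9) -> float:
    """Weighted sum where older ratings get lower weights; ratings are ordered oldest first."""
    return sum(r * decay ** age for age, r in enumerate(reversed(ratings)))

def only_last(ratings: List[int]) -> float:
    """Reputation is simply the most recent rating."""
    return float(ratings[-1])

history = [1, 1, -1, 1, 1]  # made-up rating history (+1 positive, -1 negative), oldest first
print(accumulative(history), average(history), blurred(history), only_last(history))
```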

  40. Reputation Systems • Further aspects: • Incentives for building up a good reputation • Incentives to provide ratings • Location of the storage for the reputation values • Reputation of newcomers • Attack-resistance

  41. Micropayments • Micropayments vs. Reputation • Reputation rewards contributions to the collective welfare • => allows selecting only users who contribute to the collective • Micropayments make the users pay for any received resource/service • => prevents users from arbitrarily high (mis-)usage of a resource/service, or compensates over-usage by payments • Classification of micropayments (Dingledine et al., 2000): • Fungible: the micropayment has some intrinsic or redeemable value • Monetary: • Very small payments with “real money”, e.g. one cent or less per payment • Need to be very efficient, i.e. low transaction costs • => Can be less secure than approaches such as eCash, which focus on the transfer of larger amounts of money • Non-Monetary: • Introduce an artificial currency, i.e. some kind of scarce resource • Can be used to pay others • Non-Fungible: the micropayment does not have an intrinsic value • Typically users only show some Proof-of-Work (PoW), e.g., a solved computational problem, i.e., the users show that they were willing to do something before getting access to a service or resource • Can be used to counteract email spam (see the chapter on “Security for Ubiquitous Computing”) • Two kinds of approaches • Proof-of-Work based on CPU cycles • Proof-of-Work based on memory
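To illustrate a CPU-cycle Proof-of-Work, here is a hashcash-style sketch in Python; the challenge string and the difficulty value are arbitrary illustration choices, and this is a generic sketch rather than the specific scheme referenced in the chapter.

```python
import hashlib
from itertools import count

def solve_pow(challenge: bytes, difficulty_bits: int = 20) -> int:
    """Search for a nonce so that SHA-256(challenge || nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    """Checking a solution costs a single hash, so verification is cheap for the receiver."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_pow(b"mail to: alice@example.org")   # the sender burns CPU cycles ...
assert verify_pow(b"mail to: alice@example.org", nonce)  # ... the receiver verifies cheaply
```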

  42. Micropayments • Classification and different approaches: (the classification of the different approaches is shown as a figure in the original slides)

  43. Micropayments – Example: Payword • Payword is based on PKI and collision-resistant one-way hash functions (Rivest & Shamir, 1997) • Set-Up • User U, Vendor V, Broker B • U calculates a set of n + 1 values w_0, …, w_n, called ‘paywords’. The paywords are calculated in reverse order. The last payword w_n is chosen randomly, the others are calculated recursively • It holds that w_i = h(w_{i+1}) for i = n−1, …, 0, where h is the hash function • U sends w_0 to V (w_0 is not a payment) • Payment: • The i-th micropayment is defined as the tuple (w_i, i) • Per requested micropayment: U sends the next tuple (w_i, i) to V • Charging • V sends w_0 and the tuple (w_l, l), the last payword he received, to B • B verifies that w_l is the l-th payword by hashing w_l l times and checking that the result equals w_0 • If every payword is worth 1 cent, B charges U’s account with l cents and passes them to V
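A short Python sketch of the payword chain and the broker's check, following the description above; the choice of SHA-256 as the hash function and the chain length are illustrative assumptions rather than requirements of the original scheme.

```python
import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_paywords(n: int) -> list:
    """Generate w_0 ... w_n in reverse order: w_n is random, w_i = h(w_{i+1})."""
    w = [b""] * (n + 1)
    w[n] = os.urandom(32)
    for i in range(n - 1, -1, -1):
        w[i] = h(w[i + 1])
    return w

def broker_verifies(w0: bytes, wl: bytes, l: int) -> bool:
    """B checks that hashing w_l exactly l times yields the commitment w_0."""
    x = wl
    for _ in range(l):
        x = h(x)
    return x == w0

chain = make_paywords(100)                     # U computes the chain and sends w_0 to V
assert broker_verifies(chain[0], chain[7], 7)  # the 7th micropayment (w_7, 7) verifies
```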

  44. Micropayments – Example: P2PWNC • (Figure: A grants access to B, “Access A->B”; B issues a receipt to A, “Receipt B->A”) • P2PWNC stands for Peer-to-Peer Wireless Network Confederation (Efstathiou & Polyzos, 2005) • Application: The participants in P2PWNC provide each other with access to their WLAN hotspots • P2PWNC uses a token-based micropayment system, supporting non-simultaneous n-way exchanges • Token issuing: • If user B is allowed to access the hotspot of A, B issues a signed receipt, which states that B owes a favor to A, i.e., B was granted access by A, and passes it to A.

  45. Micropayments – Example: P2PWNC (cont.) • (Figure: A presents “Receipt B->A”, B grants access to A, “Access B->A”; in the 3-way case A presents “Receipt B->C” and “Receipt C->A”, and B grants “Access B->A”) • Token-based payment (2-way exchange): • If A wants to access B’s hotspot at a later point in time, B will only grant access to A if A can show a receipt stating that B owes a favor to A (2-way exchange). • Token-based payment (3-way exchange): • To make the scheme more flexible: • A can also present a chain of receipts, from which B can learn that she owes something to A by transitivity. • For example • A shows two receipts to B • One receipt states that B owes something to C, and the other one states that C owes something to A. • Thus, B can learn from these two receipts that she owes something to A (3-way exchange).

  46. Micropayments - Summary • Summary: • Basic idea: making users pay • Huge variety of approaches • Monetary approaches: • The service provider knows exactly the value of the payment • Allows for short item identifiers or anonymity • Need for central brokers • Non-monetary approaches: • Less risk when losing payments, or when paying for a bad service • Lower barrier of acceptance • Used e.g. for P2P file-sharing, WLAN connection sharing • Proof-of-Work • Huge gap between the capacities of high-end systems vs. low-end systems • This gap gets even bigger when UbiComp devices are taken into account • Hard to develop “fair” PoWs

  47. Conclusion • Introduced the concepts of trust and accountability • Both concepts may help to • collaborate in UbiComp environments • allow UbiComp to become a calm technology • “Increasingly, the bottleneck in computing is not its disk capacity, processor speed, or communication bandwidth, but rather the limited resource of human attention” (Garlan et al., 2002) • Modeling trust is still in a pioneering phase • Need for attack-resistant models, and for concepts that allow trust to be transferred between similar contexts • Approaches for achieving accountability can already be seen • Reputation systems: e.g. eBay Feedback Forum, … • Micropayments: PoW against spam, … • Note: It is a major issue not only to model concepts of everyday life, but to present them in a way which is appropriate for everyday usage!

  48. Bibliography • Bhargava et al., 2004 • Bhargava, B., Lilien, L., Rosenthal, A., & Winslett, M. (2004). Pervasive trust. IEEE Intelligent Systems, 19(5), 74–88. • Blaze et al., 1996 • Blaze, M., Feigenbaum, J., & Lacy, J. (1996). Decentralized trust management. In Proc. of the 1996 IEEE Symposium on Security and Privacy. • Blaze et al., 1998 • Blaze, M., Feigenbaum, J., & Keromytis, A. D. (1998). KeyNote: Trust management for public-key infrastructures. In Security Protocols Workshop (pp. 59–63). • Dingledine et al., 2000 • Dingledine, R., Freedman, M. J., & Molnar, D. (2000). Accountability measures for peer-to-peer systems. In A. Oram (Ed.), Peer-to-peer: Harnessing the power of disruptive technologies (pp. 271–340). O’Reilly. • Efstathiou & Polyzos, 2005 • Efstathiou, E., & Polyzos, G. (2005). A self-managed scheme for free citywide Wi-Fi. In WOWMOM ’05: Proceedings of the First International IEEE WoWMoM Workshop on Autonomic Communications and Computing (ACC’05) (pp. 502–506). Washington, DC, USA: IEEE Computer Society. • Jøsang, 2001 • Jøsang, A. (2001). A logic for uncertain probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(3), 279–311. • Gambetta, 2000 • Gambetta, D. (2000). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations, electronic edition (pp. 213–237). • Garlan et al., 2002 • Garlan, D., Siewiorek, D., Smailagic, A., & Steenkiste, P. (2002). Project Aura: Toward distraction-free pervasive computing. IEEE Pervasive Computing, 1(2), 22–31. • Golbeck, 2005 • Golbeck, J. (2005). Computing and applying trust in web-based social networks. Unpublished doctoral dissertation, University of Maryland, College Park. • Grandison & Sloman, 2000 • Grandison, T., & Sloman, M. (2000). A survey of trust in internet applications. IEEE Communications Surveys and Tutorials, 3(4). • Grandison, 2003 • Grandison, T. (2003). Trust management for internet applications. Unpublished doctoral dissertation, Imperial College London. • McKnight & Chervany, 1996 • McKnight, D. H., & Chervany, N. L. (1996). The meanings of trust (Tech. Rep.). Management Information Systems Research Center, University of Minnesota. • Merriam-Webster Online Dictionary • Merriam-Webster, I. (2006). Merriam-Webster online dictionary. www.m-w.com. (seen 12/2006) • Rivest & Shamir, 1997 • Rivest, R. L., & Shamir, A. (1997). PayWord and MicroMint: Two simple micropayment schemes. In Proc. of the Int. Workshop on Security Protocols. • Schlosser et al., 2006 • Schlosser, A., Voss, M., & Brückner, L. (2006). On the simulation of global reputation systems. Journal of Artificial Societies and Social Simulation, 9(1).
