Reputation and Trust-Based Systems for Wireless Self-Organizing Networks Presenter Gicheol Wang Jaydip Sen
Chapter Outline • Wireless self-organizing networks • MANETs and WSNs • Trust and Reputation -- Definitions • Types of trust • Trust constructs • Inter-relationship among trust constructs • Characteristics of trust • Goals and properties of reputation systems • Classification of trust and reputation systems • Issues in reputation systems • Some examples of trust and reputation systems • Open problems • Conclusion
Wireless Self-Organizing Networks • A self-organizing network is a network that can automatically extend, change, configure and optimize its topology, coverage, capacity, cell size, and channel allocation, based on changes in location, traffic pattern, interference, and the situation or environment. • Wireless ad hoc networks are a special class of self-organizing networks, in which the capabilities or existence of links and the capabilities or availability of nodes and network services are considered random functions of time. Examples: MANETs and WSNs. • A Mobile Ad hoc NETwork (MANET) is a collection of mobile nodes with no fixed or predetermined topology and with dynamic membership changes. • A Wireless Sensor Network (WSN) is a highly distributed network consisting of a large number of tiny, low-cost, light-weight wireless nodes deployed to monitor an environment or system.
MANETs and WSNs • MANETs, due to the complete autonomy of their member nodes and the lack of any centralized infrastructure, are particularly vulnerable to different types of attacks and security threats. • In addition, due to resource constraints, there is an incentive for a node to act in a selfish manner and not cooperate with other nodes. • In WSNs, nodes can be physically tampered with and mis-configured by an external or insider attacker. • Cryptography and other intrusion prevention mechanisms cannot prevent such security threats. • Reputation and trust are two important tools that can be used to detect and defend against these attacks.
Node Misbehavior in MANETs and WSNs • In MANETs and WSNs, nodes may exhibit various types of misbehavior. Node misbehavior may be categorized into two broad types: • Malicious behavior – the intention is to attack and damage the network. • Selfish behavior – the intention is to save power, memory and CPU cycles. • Malicious misbehavior can be of two types: • Forwarding misbehavior – packet dropping, modification, fabrication, timing attack, silent route change etc. • Routing misbehavior – route salvaging, dropping of error messages, fabrication of error messages, unusually frequent route updates, sleep deprivation, blackhole, grayhole, wormhole etc. • Selfish misbehavior can be of two types: • Self-exclusion • Non-forwarding
Node Misbehavior in MANETs and WSNs (contd..) • Timing misbehavior – a malicious node delays packet forwarding so that the time-to-live (TTL) of the packets expires and the packets do not reach the destination. • Silent route change attack – a malicious node forwards a packet through a different route than the one it was intended to go through. • Route salvaging attack – a malicious node reroutes packets to avoid a supposedly broken link, although no error has actually taken place. • Sleep deprivation attack – a malicious node sends an excessive number of packets to another node so as to consume the computation and memory resources of the latter. • Blackhole attack – a malicious node claims to have the shortest path; but when asked to forward the packets, it drops them.
Node Misbehavior in MANETs and WSNs (contd..) • Grayhole attack – a variation of the blackhole attack, in which a malicious node selectively drops packets. • Wormhole attack – a malicious node tunnels packets from one part of the network to another part of the network, where they are replayed. • Self-exclusion attack – a selfish node does not participate when the route discovery protocol is executed, in order to save its own power. • Non-forwarding attack – a selfish node participates in the route discovery process but drops data packets in the routing phase.
Node Misbehavior in MANETs and WSNs (contd..) [Figure: taxonomy of node misbehavior] • Malicious (mount attacks): • Forwarding misbehavior – packet dropping, modification, fabrication, timing misbehavior, silent route changes • Routing misbehavior – blackhole attack, grayhole attack, wormhole attack • Selfish (save resources – battery, CPU cycles, memory): • Self-exclusion • Non-forwarding
Trust - Definition • Trust is a particular level of subjective probability with which an agent will perform a particular action, both before [we] can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects [our] action – Gambetta. • Trust is the firm belief in the competence (reliability, timeliness, honesty and integrity) of an entity to act as expected, such that this firm belief is not a fixed value associated with the entity but rather is subject to the entity’s behavior and applies only within a specific context at a given time. – Azzedin and Maheswaran.
Reputation - Definition • Reputation of an entity is an expectation of its behavior based on other entities’ observations or information about the entity’s past behavior within a specific context at a given time. – Azzedin and Maheswaran.
Types of Trust • Three types of Trust: • Basic: it is based on previous experience of a node. If two nodes A and B in a network are to communicate with each other, the basic trust is not the trust that A has on B; rather it is the general dispositional trust that node A has on other nodes. • General: it is the trust that node A has on node B, which is not dependent on a particular situation. • Situational: it is the trust that node A has on node B in a particular situation. In most of the cases, we are concerned with this type of trust between nodes in wireless self-organizing networks such as MANETs and WSNs.
Trust Constructs • Six types of trust constructs in self-organizing networks: • Trusting intention of a node • Trusting behavior of a node • Trusting beliefs in nodes • System trust in nodes • Dispositional trust of a node • Situational decision to trust a node
Trust Constructs (contd..) • Trusting intention of a node – willingness of a node to depend on the information provided by another node in spite of having knowledge about the risk involved. • Trusting behavior of a node – voluntary dependence of one node on another node in a specific situation with risk associated in it. • Trusting beliefs in nodes – confidence and belief of one node that the other node is trustworthy in a specific situation. • System trust in nodes – occurs when nodes believe that proper frameworks are in place to encourage successful interaction between them. • Dispositional trust of a node – general expectation of a node about the trustworthiness of other nodes under different situations. • Situational decision to trust a node – occurs when a node intends to depend on another node in a given situation. Example: node B wants to communicate with node A; therefore, it communicates with a trusted third-party trust management system, which is trusted by node A also.
Inter-Relationship among Trust Constructs [Figure: dispositional trust, system trust, trusting beliefs, and the situational decision to trust all feed the trusting intention of a node, which in turn drives its trusting behavior]
Characteristics of Trust • Sun et al. have identified some characteristics of trust metrics from the wireless self-organizing network perspective: • Trust is a relationship between two entities for a specific action. One entity (subject) believes that the other entity (agent) will perform an action. • Trust is a function of uncertainty. • The level of trust can be measured by a continuous real number, referred to as the trust value. • Different subjects may have different trust values for the same agent for the same action. • Trust is not necessarily symmetric. That A trusts B does not necessarily imply that B should trust A.
Goals and Properties of Reputation Systems • The goals of a reputation system are: • To provide information that allows nodes to distinguish between trustworthy and untrustworthy nodes in a network. • To encourage the nodes in the network to cooperate with each other and become trustworthy. • To discourage untrustworthy nodes from participating in network activities. • To cope with any kind of observable misbehavior. • To minimize the damage caused by any insider attack. • The properties of a reputation system are: • The system must have long-lived entities that inspire expectations for future interactions. • The system must be able to capture and distribute feedback about current interactions among its components, and such information must be available in the future. • The system must use feedback to guide trust decisions.
Classification of Trust & Reputation Systems • Trust and reputation systems can be classified based on the following criteria: • Initialization of the systems • Type of observations used • Method of information access • Method of information distribution in the network
Method of Initialization • Most of the trust and reputation systems are initialized in one of the following three ways: • All nodes are initially assumed to be trustworthy. The reputation of a node decreases with each bad encounter that the node experiences with its neighbors. • Every node is considered to be untrustworthy. No node trusts other nodes initially. The reputations of nodes increase with every good encounter. • Every node is considered to be neither trustworthy nor untrustworthy. All nodes start with a neutral reputation value. With every good or bad behavior, the reputation is increased or decreased respectively.
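The three initialization policies above can be sketched as a small reputation table. This is a minimal illustration; the class names (`InitPolicy`, `ReputationTable`) and the fixed update step are assumptions made for the example, not taken from any particular system:

```python
from enum import Enum

class InitPolicy(Enum):
    OPTIMISTIC = 1.0   # all nodes start fully trusted; bad encounters lower the score
    PESSIMISTIC = 0.0  # all nodes start untrusted; good encounters raise the score
    NEUTRAL = 0.5      # all nodes start neutral; the score moves in both directions

class ReputationTable:
    """Per-node reputation bookkeeping under a chosen initialization policy."""
    def __init__(self, policy: InitPolicy, step: float = 0.1):
        self.policy = policy
        self.step = step        # illustrative fixed increment/decrement
        self.scores = {}        # node id -> reputation in [0, 1]

    def score(self, node):
        # unknown nodes get the policy's initial value
        return self.scores.get(node, self.policy.value)

    def record(self, node, good: bool):
        # raise the score on a good encounter, lower it on a bad one, clamped to [0, 1]
        delta = self.step if good else -self.step
        self.scores[node] = min(1.0, max(0.0, self.score(node) + delta))
```

For example, under the neutral policy a node starts at 0.5 and one bad encounter lowers it to 0.4.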
Type of Information Used • Based on the type of information used, trust and reputation systems can be classified into two categories: • Systems using first-hand information • Systems using both first-hand and second-hand information • Systems using only first-hand information collected by the nodes are robust against rumor spreading. Examples: OCEAN and Pathrater. • Most systems use both first-hand and second-hand information. Examples: CORE, CONFIDANT etc. • In some systems, such as DRBTS, some sensor nodes (SNs) use only second-hand information.
Method of Information Access • Reputation systems are broadly categorized into two types depending on the manner in which nodes access reputation information in the network: • Symmetric systems – all nodes in the network have access to the same level of information (i.e., first-hand and second-hand information) • Asymmetric systems – all nodes do not have access to the same amount of information. Example: in DRBTS, sensor nodes do not have the same amount of information as possessed by the beacon nodes.
Method of Information Distribution • Reputation systems are categorized into two types based on the manner in which reputation information is distributed in the network: • Centralized – one central entity maintains the reputation information of all nodes in the network. Examples: eBay and Yahoo auctions. • Distributed – each node maintains the reputation of all other nodes within its communication range. Examples: MANETs and WSNs. • In centralized systems, the central entity may become a source of security vulnerability and a performance bottleneck. • In distributed systems, the memory overhead for storing reputation information is shared by all the nodes. It also eliminates the problem of a single point of failure. However, data replication is an issue in such systems.
Issues in Reputation Systems • Information gathering • Information dissemination • Redemption and weighting of time • Weighting of second-hand information
Information Gathering • Information gathering is the process in which a node collects information about other nodes. This is also called first-hand information gathering. • The CONFIDANT protocol categorizes two types of first-hand information: • Personal experience – information gathered by a node through one-to-one interactions with its neighbors. • Direct observation – information gathered by a node through its direct observation of the neighborhood. • Most systems use a watchdog mechanism to monitor the activities of neighboring nodes. However, the watchdog is ineffective if directional antennas or spread spectrum technology is used.
Information Dissemination • Information dissemination is done by the exchange of information among the nodes. Information received by a node from other nodes is also known as second-hand information. • Advantages in using second-hand information: • Reputations of nodes build up fast due to the ability of the nodes to learn from the mistakes of others. • Over a period of time, a consistent local view stabilizes in the system. • Disadvantage – second-hand information exchange opens the possibility of a false report attack, in which an honest node may be falsely accused or a dishonest node falsely praised. • False report attacks may be controlled by using limited information sharing – exchanging either only positive or only negative information.
Information Dissemination (contd..) • Disadvantages of sharing only positive information: • False praise attack – colluding malicious nodes may survive for a longer time • Nodes cannot share bad experiences • Sharing only negative information prevents false praise attacks. However, it has the following disadvantages: • Nodes cannot share good experiences • Malicious nodes can launch bad-mouthing attacks on honest nodes • CONFIDANT uses negative second-hand information. • Context-aware detection accepts negative second-hand information only if at least four nodes provide such a report. • OCEAN does not allow any information (positive or negative) exchange. It builds reputation purely on the individual observations of the nodes. • Advantage: robust against rumor spreading • Disadvantage: the time required to build reputation is high, and malicious nodes can stay in the system for a longer time, misusing its resources.
Information Dissemination (contd..) • DRBTS and RFSN allow sharing of both positive and negative information. • The negative effects of information sharing can be mitigated in these systems by appropriately incorporating first-hand and second-hand information into the reputation metric, using different weighting functions for different information. • An information dissemination scheme has three issues: • Dissemination frequency • Dissemination locality • Dissemination content • Two types of systems with respect to dissemination frequency: • Proactive dissemination – nodes communicate reputation information during each dissemination interval even if there is no change in the reputation values since the last dissemination interval. • Reactive dissemination – nodes publish only when the reputation values they store change by a pre-defined amount or when an event of interest occurs.
Information Dissemination (contd..) • Reactive dissemination reduces communication overhead in situations where the reputations of nodes do not change frequently. It may, however, cause congestion in networks where network activity is high. • Proactive dissemination is more suitable for busy and dense networks. • Communication overhead may be reduced by piggybacking the reputation information on other network traffic: • In CORE, reputation information is piggybacked on reply messages. • In DRBTS, it is piggybacked on location information messages.
Information Dissemination (contd..) • Information dissemination locality: • Local information – information is published within the one-hop neighborhood by unicast, broadcast or multicast. • Global information – information is propagated to nodes outside the radio range of the node publishing it. This is more suitable for networks with higher mobility. • DRBTS uses local dissemination through broadcast, enabling beacon nodes to update their reputation tables. • Information dissemination content: • Raw: the information published by a node is its first-hand information. • Processed: a node publishes a composite reputation after considering second-hand information from other nodes.
Redemption and Weighting of Time • Assignment of suitable weights to past and current reputation values for computing the composite reputation metric is an important issue. • CORE assigns more weight to past behavior: • Wrong observations or rare behavior changes cannot influence the reputation rating • CONFIDANT and RFSN discount past behavior by assigning it less weight: • A node cannot leverage its past good performance and start misbehaving without being punished. • In periods of low network activity, a benign node may get penalized. DRBTS resolves this problem by generating network traffic through beacon nodes in regions and periods of low network traffic. • OCEAN and Context-aware detection do not assign differential weights to past and current ratings. • CONFIDANT assigns more weight to first-hand information than to second-hand information.
Redemption and Weighting of Time (contd..) • CONFIDANT performs redemption of misbehaving and misclassified nodes through reputation fading. In reputation fading, past behavior is discounted even in the absence of testimonials and observations. A node that has been isolated from the network due to misbehavior always gets a chance to rejoin after some time. • In CORE, a node previously isolated from the network due to misbehavior cannot redeem itself until there are a sufficient number of new nodes in the network that have no past experience with it. • OCEAN relies on a timeout of reputation. Pathrater and the Context-aware detection system have no provision for redemption.
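The weighting and fading ideas above can be illustrated with a single exponentially weighted update. The weight values used here are assumptions chosen only to contrast CORE-like and CONFIDANT/RFSN-like behavior, not the actual parameters of either protocol:

```python
def updated_rating(past: float, current: float, w_past: float) -> float:
    """Exponentially weighted rating in [-1, 1].

    w_past near 1 mimics CORE's emphasis on history (one bad observation
    barely moves a good rating); w_past near 0 mimics CONFIDANT/RFSN-style
    fading, where past behavior is quickly discounted.
    """
    return w_past * past + (1.0 - w_past) * current
```

With a good history of 0.8 and one strongly negative observation (-1.0), a CORE-like weight of 0.9 yields 0.62, while a fading-style weight of 0.2 drives the rating down to -0.64.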
Weighting of Second-Hand Information • Schemes using second-hand information must administer the trust level of the sources of such information. • Deviation test – a method to validate the credibility of the sources of second-hand information. • Different schemes use different techniques for handling second-hand information: • Dempster-Shafer theory • Discounting belief principle • Beta distribution – most widely used; RFSN uses it. • CONFIDANT assigns weights to second-hand information based on the trustworthiness of the source node. The source must have a minimum level of trust for its information to be considered by other nodes in the network. • In RRS, the trust of a node is measured by the consistency between the first-hand and second-hand information. Higher weights are assigned to the first-hand information of the nodes.
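A minimal sketch of the beta-distribution rating and the deviation test is given below. The discount `weight` and the deviation `threshold` are illustrative assumptions, not the parameters used by RFSN or RRS:

```python
def beta_expectation(alpha: float, beta: float) -> float:
    """Expected reputation under a Beta(alpha, beta) model.

    alpha grows with observed good behavior, beta with bad behavior;
    the expectation alpha / (alpha + beta) serves as the trust value.
    """
    return alpha / (alpha + beta)

def deviation_test(own: float, reported: float, threshold: float = 0.25) -> bool:
    """Accept a second-hand report only if it deviates from our own
    first-hand estimate by less than the threshold (illustrative value)."""
    return abs(own - reported) < threshold

def merge(own_a: float, own_b: float, rep_a: float, rep_b: float,
          weight: float = 0.2):
    """Fold a discounted second-hand (alpha, beta) pair into our own counts,
    so second-hand evidence carries less influence than direct observation."""
    return own_a + weight * rep_a, own_b + weight * rep_b
```

For example, 7 good and 1 bad observation (alpha = 8, beta = 2 with the usual +1 priors) give an expected trust of 0.8, and a report claiming 0.4 for that node would fail the deviation test.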
Detection Mechanisms • Reputation systems require a tangible object of observation that can be identified as either good or bad. • In MANETs, nodes promiscuously overhear the communications to/from their neighbors – Monitor, Watchdog and NeighborWatch. • Passive acknowledgments – nodes register themselves to get notified when their next-hop neighbors on a given route have attempted to forward their packets. • Problem with Watchdog – it is difficult to unambiguously identify whether the inability of a node to forward packets is due to its maliciousness or due to collisions and/or limited battery power. • CORE does not rely on the promiscuous mode of operation of its watchdog. It judges the outcome of a request by rating the end-to-end communication path. • CONFIDANT uses passive ACKs to verify whether the neighbor node forwards packets without any modification.
Response Mechanisms • Except for Watchdog and Pathrater, almost all reputation systems have a punishment mechanism for misbehaving nodes. • Two steps in punishment mechanisms: • Misbehaving nodes are avoided while discovering routing paths. • Misbehaving nodes are not allowed to access network resources. • It is essential to ensure that malicious nodes cannot access network resources; otherwise, merely avoiding them in routing will effectively provide more motivation for their malicious behavior, as such nodes can freely use the network resources while saving their own energy and other resources.
Examples of Reputation & Trust Mechanisms • Watchdog and Pathrater • Context-aware inference mechanism • Trust-based relationship of nodes in MANETs • Trust aggregation schemes • Trust management in ad hoc networks • Trusted routing schemes • CORE – Collaborative REputation mechanism in mobile ad hoc networks • CONFIDANT – Cooperation Of Nodes – Fairness In Dynamic Ad hoc NeTworks • OCEAN – Observation-based Cooperation Enhancement in Ad hoc Networks • RRS – Robust Reputation System • RFSN – Reputation-based Framework for high integrity Sensor Networks • DRBTS – Distributed Reputation-based Beacon Trust System
Watchdog • Proposed by Marti et al. to mitigate routing misbehavior in MANETs. • Watchdog determines node misbehavior by copying packets to be forwarded into a buffer and monitoring the behavior of the neighboring nodes with respect to packet forwarding. • Watchdog checks whether the neighboring nodes forward packets without modification. • If the overheard packets match those in the buffer, they are discarded from the buffer. • Packets that stay in the buffer of a monitoring node beyond a threshold period of time are flagged as having been dropped or modified. • The nodes responsible for forwarding those packets are marked as suspicious. • If the number of such failures exceeds a pre-determined threshold value, the offending node is identified as malicious. Information about malicious nodes is passed to the Pathrater component.
Watchdog (contd..) [Figure: route S → A → B → C → D. Node B intends to transmit a packet to node C; node A can overhear this transmission.]
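The buffer-and-overhear mechanism can be sketched as follows. The timeout, failure threshold, and all names are illustrative assumptions for this sketch, not the parameters of Marti et al.'s implementation:

```python
from collections import deque

class Watchdog:
    """Sketch of Watchdog's buffer-and-overhear detection."""
    def __init__(self, timeout: float = 1.0, fail_threshold: int = 3):
        self.timeout = timeout              # how long to wait for the overheard copy
        self.fail_threshold = fail_threshold
        self.buffer = deque()               # (packet, next_hop, deadline)
        self.failures = {}                  # next_hop -> tally of suspected drops

    def sent(self, packet, next_hop, now):
        # buffer a copy of every packet handed to a neighbor for forwarding
        self.buffer.append((packet, next_hop, now + self.timeout))

    def overheard(self, packet):
        # an unmodified overheard copy clears the buffered entry
        self.buffer = deque(e for e in self.buffer if e[0] != packet)

    def sweep(self, now):
        """Flag packets still buffered past their deadline; return nodes
        whose failure tally has reached the threshold."""
        suspects, remaining = [], deque()
        for packet, hop, deadline in self.buffer:
            if now >= deadline:
                self.failures[hop] = self.failures.get(hop, 0) + 1
                if self.failures[hop] >= self.fail_threshold:
                    suspects.append(hop)
            else:
                remaining.append((packet, hop, deadline))
        self.buffer = remaining
        return suspects
```

In the figure's terms, node A buffers packets handed to B and clears each entry only when it overhears B's onward transmission; entries that time out count against B.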
Pathrater • The Pathrater component in each node maintains a rating of all known nodes in the network. • Nodes start with a neutral rating, and the rating of each neighbor is updated based on the feedback received from the Watchdog component. • Misbehavior of a node is identified on the basis of its packet mishandling and modification activities. Unreliability of a node is determined on the basis of its link errors. • Simulation results have shown that Watchdog and Pathrater significantly improve the throughput of the DSR protocol. • The scheme does not penalize misbehaving nodes; it only avoids them in routing, effectively relieving them of the burden of forwarding packets for other nodes. This encourages malicious nodes to continue their misbehavior.
Context-Aware Inference Mechanism • A mechanism proposed by Paul and Westhoff, in which accusations are related to the context of a unique route discovery process and a stipulated time period. • A combined detection mechanism is used that involves unkeyed hash verification of routing messages and comparison of cached packets with the overheard ones. • The trust of a node is computed based on several factors: • Accusations by other nodes • Number of such accusations • Level of knowledge of the network topology • A context-aware inference mechanism • For a node to be identified as malicious, accusations have to come from a certain minimum number of nodes. If only a single node accuses a particular node, the accuser itself is identified as malicious.
Trust-Based Relationship in MANETs • Pirzada and McDonald proposed an approach for building trust relationships among nodes in a MANET which has the following features: • Each node passively monitors the packets received and forwarded by other nodes. Receiving and forwarding of packets are called events. • Events are assigned weights depending on the application. • The trust values of all the events from a node are combined to compute an aggregate trust value for the node. The compound trust values are used as link weights for route computation. • For routing, the most trustworthy links are used to find the end-to-end path. • Sun et al. have proposed a scheme in which trust is modeled as a measure of uncertainty. Using the theory of entropy, trust values are computed for nodes from observations. An entropy-based computation is presented for the multi-path trust propagation problem in MANETs.
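The link-weight idea above can be sketched by aggregating weighted event trust and running Dijkstra over `-log(trust)`, so that the path with the highest product of link trusts is selected. The data layout and the log transform are assumptions made for this illustration, not Pirzada and McDonald's exact formulation:

```python
import heapq
import math

def aggregate_trust(events):
    """Weighted mean of per-event trust values in (0, 1].
    events: list of (trust_value, weight) pairs."""
    total = sum(w for _, w in events)
    return sum(t * w for t, w in events) / total

def most_trusted_path(links, src, dst):
    """Dijkstra over -log(trust), so minimizing the path cost maximizes
    the product of link trusts. links: {node: {neighbor: trust}}.
    Assumes dst is reachable from src."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, trust in links.get(u, {}).items():
            nd = d - math.log(trust)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]
```

For instance, with link trusts S→A = 0.9, A→D = 0.9 (product 0.81) versus S→B = 0.5, B→D = 0.99 (product 0.495), the route through A is chosen even though B's last hop is nearly perfect.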
Trust Aggregation Scheme • Liang and Shi have carried out extensive work on developing models and evaluating the robustness and security of various aggregation algorithms in open and untrusted environments. • They have also presented a comprehensive analytical and inference model of trust for aggregating the ratings received by a node from its neighbors in a WSN. • The simulation results have shown that it is computationally more efficient to treat the ratings received from different evaluators (i.e., nodes) with equal weights and compute their average to arrive at the final trust value.
Trust Management in MANETs • Yan et al. have proposed a trust-based security framework to ensure data protection, secure routing and other security features in MANETs. • Ren et al. have proposed a mechanism for establishing trust relationships among nodes in MANETs. A secret dealer is introduced only during the system bootstrapping phase to initiate trust propagation in the network. A fully self-organized trust establishment approach is then adopted to conform to dynamic membership changes. • Zhu et al. have proposed an approach to compute trust in wireless networks by treating the network as a delegation graph G and mapping a delegation path from a source node S to a target node T into an edge in the corresponding transitive closure of G. From the edges of the transitive closure of G, the trust values of the wireless links are computed. • Davis has presented a trust management scheme based on a hierarchical structural model which incorporates revocation of certificates. It is robust against false accusation by a malicious node.
Trusted Routing Scheme • Jarett and Ward have presented a trusted routing scheme that extends the AODV protocol. • The protocol, known as TCAODV (Trusted Computing Ad hoc On-demand Distance Vector), uses a public key certificate stored in each node. • Each node broadcasts its certificate along with its hello messages. The neighbors first verify the authenticity of the certificate by verifying its signature. If the signature verification is successful, the nodes store the certificate as the public key of the issuing node. • In all subsequent routing packet exchanges, the nodes verify the authenticity of the signature and then forward the packets. • Every routing packet is also encrypted using the symmetric key of the pair of nodes exchanging the packet. • The protocol has a very low overhead and is ideally suited for trusted routing in MANETs and WSNs.
CORE • Proposed by Michiardi and Molva, this protocol enforces cooperation among the nodes in a MANET. • Three types of reputation are used to compute the final reputation metric: • Subjective reputation (observations) • Indirect reputation (positive reports by others) • Functional reputation (task-specific behavior) • Two types of nodes are considered: • Requester: a network entity that requests the execution of a function. A requester may have one or more providers within its transmission range. • Provider: a network entity that correctly executes the requested function. • Higher weights are assigned to past observations than to current observations.
CORE (contd..) • The reputation values (lying between -1 and +1) are stored in a reputation table (RT) in each node. Each entry in the RT has four fields: • Unique ID of the node • Recent subjective reputation • Recent indirect reputation • Composite reputation • RTs are updated during the request and reply phases. • Reputation computed from first-hand information is referred to as subjective reputation. The subjective reputation is updated only during the request phase. • If a provider does not cooperate with a requester's request, a negative value is assigned to the rating factor of the observation, which automatically reduces the reputation of the provider. • CORE uses functional reputation to evaluate the trustworthiness of a node with respect to different functions. Functional reputation is computed by combining subjective and indirect reputation for the different functions.
CORE (contd..) • The combined reputation value for each node is computed by combining the three types of reputation with suitable weights. • Positive reputation values are decremented with time to ensure that nodes cooperate and contribute on a continuous basis. This prevents a node from building up a very good reputation, then misbehaving, and still surviving in the network. • When a node has to decide whether or not to execute a function for a requester, it checks the reputation value of the latter. If the reputation is positive, the function is executed. If the reputation is negative, the function is not executed. • False accusation attacks are prevented, since only positive information is shared for indirect reputation updates. However, this provides an opportunity to launch false praise attacks.
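The weighted combination and the decay of positive values can be sketched as follows. The weights, the clamping to [-1, 1], and the decay rate are assumptions made for this illustration, not CORE's published formula:

```python
def core_reputation(subjective: float, indirect: float,
                    functional: float, weights) -> float:
    """Combine CORE's three reputation types (each in [-1, 1]) with
    illustrative weights, clamping the result to [-1, 1]."""
    ws, wi, wf = weights
    combined = ws * subjective + wi * indirect + wf * functional
    return max(-1.0, min(1.0, combined))

def decide(requester_reputation: float) -> bool:
    """Execute the requested function only for a positively reputed requester."""
    return requester_reputation > 0

def decay(reputation: float, rate: float = 0.05) -> float:
    """Decrement positive reputation over time toward 0, so a node must
    keep cooperating to keep its standing (rate is an assumed value)."""
    return max(0.0, reputation - rate) if reputation > 0 else reputation
```

For example, with weights (0.5, 0.3, 0.2) and component reputations (0.5, 0.3, 0.2), the combined value is 0.38, so the request is served; one decay step then lowers it to 0.33.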
CORE (contd..) • An inherent problem in computing the combined reputation metric is that a malicious node may hide its misbehavior with respect to certain functions while behaving cooperatively with respect to other functions. A node may choose not to cooperate for functions that consume resources such as memory and power, while cooperating for functions that do not require many resources. • Nevertheless, reputation computation in CORE is an elegant process that minimizes false detections and increases the probability of detecting misbehaving nodes.
CONFIDANT • Proposed by Buchegger and Le Boudec, it is a mechanism to encourage cooperation among the nodes in a MANET. • It is a distributed, symmetric reputation model that uses both first-hand and second-hand information. • It assumes DSR as the routing protocol and relies on the promiscuous mode of operation of the nodes. • Misbehaving nodes are identified and punished by not allowing them to access network resources. • CONFIDANT is based on the principle that reciprocal altruism is beneficial for every ecological system when favors are returned simultaneously, because of instant gratification. In other words, there may be no benefit in behaving well if there is a delay between granting a favor and getting the repayment.
CONFIDANT (contd..) • Each node in the CONFIDANT protocol runs four components: • Monitor • Trust manager • Reputation system • Path manager • Monitor – passively observes the activities of the nodes within its one-hop neighborhood. If any misbehavior is detected, in terms of non-forwarding or modification of packets by any neighbor, the monitor module reports this to the reputation system and the trust manager for the evaluation of the new reputation value of the misbehaving node.
CONFIDANT (contd..) • Trust manager – handles all incoming and outgoing ALARM messages. • Incoming ALARM messages can originate from any node. • The trustworthiness of the source of an ALARM message is checked before triggering a reaction. • Outgoing ALARM messages are generated by the node itself after having experienced, observed, or received a report of malicious behavior. The recipients of these ALARM messages are called friend nodes, a list of which is maintained in the node. • The trust manager consists of three components: • Alarm table – contains information about received alarms • Trust table – maintains the trust record of each node, used to determine the trustworthiness of an incoming alarm message • Friend list – contains the list of all nodes to which the node sends an ALARM when it detects any malicious activity
CONFIDANT (contd..) [Figure: interactions among the components of CONFIDANT – the Monitor observes the neighborhood and updates event counts; once the count exceeds a threshold, the Reputation System updates the rating and, if the tolerance is exceeded, invokes the Path Manager to manage the path; a received ALARM goes to the Trust Manager, which evaluates the alarm and the trust in its source, passing trusted and significant evidence on to the Reputation System; the Trust Manager also sends outgoing ALARMs.]