
Vulnerabilities and Threats in Distributed Systems *


Presentation Transcript


  1. Vulnerabilities and Threats in Distributed Systems* Prof. Bharat Bhargava Dr. Leszek Lilien Department of Computer Sciences and the Center for Education and Research in Information Assurance and Security (CERIAS) Purdue University www.cs.purdue.edu/people/{bb, llilien} Presented by Prof. Sanjay Madria Department of Computer Science University of Missouri-Rolla * Supported in part by NSF grants IIS-0209059 and IIS-0242840

  2. Prof. Bhargava thanks the organizers of the 1st International Conference on Distributed Computing & Internet Technology (ICDCIT 2004). In particular, he thanks: Prof. R. K. Shyamsunder, Prof. Hrushikesha Mohanty, Prof. R. K. Ghosh, Prof. Vijay Kumar, Prof. Sanjay Madria • He thanks the attendees and regrets that he could not be present. • He came to Bhubaneswar in 2001 and enjoyed it tremendously; he was looking forward to coming again. • He is willing to communicate about this research; potential exists for research collaboration. Please send mail to bb@cs.purdue.edu • He will very much welcome your visit to Purdue University.

  3. From Vulnerabilities to Losses • Growing business losses due to vulnerabilities in distributed systems • Identity theft in 2003 – expected loss of $220 billion worldwide; 300%(!) annual growth rate [csoonline.com, 5/23/03] • Computer virus attacks in 2003 – estimated loss of $55 billion worldwide [news.zdnet.com, 1/16/04] • Vulnerabilities occur in: • Hardware / Networks / Operating Systems / DB systems / Applications • Loss chain • Dormant vulnerabilities enable threats against systems • Potential threats can materialize as (actual) attacks • Successful attacks result in security breaches • Security breaches cause losses

  4. Vulnerabilities and Threats • Vulnerabilities and threats start the loss chain • Best to deal with them first • Deal with vulnerabilities • Gather information on vulnerabilities and security incidents in metabases and notification systems, then disseminate it • Example vulnerability and incident metabases • CVE (Mitre), ICAT (NIST), OSVDB (osvdb.com) • Example vulnerability notification systems • CERT (SEI-CMU), Cassandra (CERIAS-Purdue) • Deal with threats • Threat assessment procedures • Specialized risk analysis using, e.g., vulnerability and incident info • Threat detection / threat avoidance / threat tolerance
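To make the idea of gathering and disseminating vulnerability information concrete, here is a minimal Python sketch that filters a small set of vulnerability records by product and severity and pushes alerts through a callback. The record fields and the notify callback are illustrative assumptions; they do not reflect the schema or API of any real metabase such as CVE, ICAT, or OSVDB.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VulnRecord:
    # Hypothetical metabase entry; real metabases (CVE, ICAT, OSVDB)
    # use their own, much richer schemas.
    vuln_id: str
    product: str
    severity: float      # assumed 0-10 score
    description: str

def notify_matching(records: List[VulnRecord],
                    product: str,
                    min_severity: float,
                    notify: Callable[[VulnRecord], None]) -> int:
    """Disseminate records relevant to one product above a severity threshold."""
    hits = [r for r in records
            if r.product == product and r.severity >= min_severity]
    for r in hits:
        notify(r)
    return len(hits)

if __name__ == "__main__":
    sample = [
        VulnRecord("VULN-0001", "example-db", 7.5, "Unauthenticated remote access"),
        VulnRecord("VULN-0002", "example-db", 3.1, "Verbose error messages"),
        VulnRecord("VULN-0003", "other-app", 9.0, "Buffer overflow"),
    ]
    count = notify_matching(sample, "example-db", 5.0,
                            lambda r: print(f"ALERT {r.vuln_id}: {r.description}"))
    print(f"{count} notification(s) sent")
```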

  5. Outline 1. Vulnerabilities 2. Threats 3. Mechanisms to Reduce Vulnerabilities and Threats 3.1. Applying Reliability and Fault Tolerance Principles to Security Research 3.2. Using Trust in Role-based Access Control 3.3. Privacy-preserving Data Dissemination 3.4. Fraud Countermeasure Mechanisms

  6. Vulnerabilities - Topics • Models for Vulnerabilities • Fraud Vulnerabilities • Vulnerability Research Issues 

  7. Models for Vulnerabilities (1) • A vulnerability in the security domain is like a fault in the reliability domain • A flaw or a weakness in system security procedures, design, implementation, or internal controls • Can be accidentally triggered or intentionally exploited, causing security breaches • Modeling vulnerabilities • Analyzing vulnerability features • Classifying vulnerabilities • Building vulnerability taxonomies • Providing formalized models • System design should not let an adversary learn of vulnerabilities unknown to the system owner

  8. Models for Vulnerabilities (2) • Diverse models of vulnerabilities in the literature • In various environments • Under varied assumptions • Examples follow • Analysis of four common computer vulnerabilities [17] • Identifies their characteristics, the policies violated by their exploitation, and the steps needed for their eradication in future software releases • Vulnerability lifecycle model applied to three case studies [4] • Shows how systems remain vulnerable long after security fixes are released • Vulnerability lifetime stages: • appears, discovered, disclosed, corrected, publicized, disappears
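To make the lifecycle stages above concrete, here is a minimal sketch that walks a vulnerability through them in a simple linear order. The linear ordering, class, and method names are assumptions for illustration; they are not the actual model from [4].

```python
from enum import IntEnum

class VulnStage(IntEnum):
    # Lifecycle stages named in [4]; the strictly linear ordering is an
    # assumption made for this sketch.
    APPEARS = 0
    DISCOVERED = 1
    DISCLOSED = 2
    CORRECTED = 3
    PUBLICIZED = 4
    DISAPPEARS = 5

class VulnLifecycle:
    """Tracks a single vulnerability through its lifecycle stages."""

    def __init__(self) -> None:
        self.stage = VulnStage.APPEARS

    def advance(self) -> VulnStage:
        if self.stage < VulnStage.DISAPPEARS:
            self.stage = VulnStage(self.stage + 1)
        return self.stage

    def still_exploitable(self) -> bool:
        # Deployed systems can remain exposed long after a fix exists;
        # only at DISAPPEARS is the vulnerability effectively gone.
        return self.stage < VulnStage.DISAPPEARS

# Usage: walk one vulnerability through all stages.
v = VulnLifecycle()
while v.still_exploitable():
    print(v.stage.name)
    v.advance()
print(v.stage.name)
```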

  9. Models for Vulnerabilities (3) • Model-based analysis to identify configuration vulnerabilities [23] • Formal specification of desired security properties • Abstract model of the system that captures its security-related behaviors • Verification techniques to check whether the abstract model satisfies the security properties • Kinds of vulnerabilities [3] • Operational • E.g. an unexpected broken linkage in a distributed database • Information-based • E.g. unauthorized access (secrecy/privacy), unauthorized modification (integrity), traffic analysis (inference problem), and Byzantine input

  10. Models for Vulnerabilities (4) • Not all vulnerabilities can be removed, and some should not be, because: • Vulnerabilities create only a potential for attacks • Some vulnerabilities cause no harm over the entire system’s life cycle • Some known vulnerabilities must be tolerated • Due to economic or technological limitations • Removal of some vulnerabilities may reduce usability • E.g., removing vulnerabilities by requiring a password for each resource request lowers usability • Some vulnerabilities are a side effect of a legitimate system feature • E.g., the setuid UNIX command creates vulnerabilities [14] • Need threat assessment to decide which vulnerabilities to remove first

  11. Fraud Vulnerabilities (1) • Fraud: a deception deliberately practiced in order to secure unfair or unlawful gain [2] • Examples: • Using somebody else’s calling card number • Unauthorized selling of customer lists to telemarketers (example of an overlap of fraud with privacy breaches) • Fraud can make systems more vulnerable to subsequent fraud • Need for protection mechanisms to avoid future damage

  12. Fraud Vulnerabilities (2) • Fraudsters: [13] • Impersonators – illegitimate users who steal resources from victims (for instance, by taking over their accounts) • Swindlers – legitimate users who intentionally benefit from the system or other users by deception (for instance, by obtaining legitimate telecommunications accounts and using them without paying bills) • Fraud involves abuse of trust [12, 29] • A fraudster strives to present himself as a trustworthy individual and friend • The more trust one places in others, the more vulnerable one becomes

  13. Vulnerability Research Issues (1) • Analyze the severity of a vulnerability and its potential impact on an application • Qualitative impact analysis • Expressed as a low/medium/high degree of performance/availability degradation • Quantitative impact • E.g., economic loss, measurable cascade effects, time to recover • Provide procedures and methods for efficient extraction of characteristics and properties of known vulnerabilities • Analogous to understanding how faults occur • Tools searching for known vulnerabilities in metabases cannot anticipate attacker behavior • Characteristics of high-risk vulnerabilities can be learned from the behavior of attackers, using honeypots, etc.
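As one possible way to combine the qualitative and quantitative impact factors listed above into a single comparable number, the sketch below uses an ad hoc weighted score. The scale, caps, and weights are assumptions chosen for illustration only, not a standard scoring method.

```python
# Map the qualitative low/medium/high ratings onto numbers.
QUALITATIVE_SCALE = {"low": 1, "medium": 2, "high": 3}

def impact_score(perf_degradation: str,
                 availability_degradation: str,
                 economic_loss_usd: float,
                 hours_to_recover: float) -> float:
    """Combine qualitative and quantitative impact factors into one score.

    Assumed caps ($1M loss, 72 h recovery) and equal weights are for
    illustration; higher score = more severe impact (range 2..12)."""
    qualitative = (QUALITATIVE_SCALE[perf_degradation]
                   + QUALITATIVE_SCALE[availability_degradation])
    quantitative = (min(economic_loss_usd / 1_000_000, 1.0) * 3
                    + min(hours_to_recover / 72, 1.0) * 3)
    return qualitative + quantitative

# Usage: a vulnerability with high performance impact, medium availability
# impact, $250k estimated loss, and 12 hours to recover.
print(impact_score("high", "medium", 250_000, 12))
```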

  14. Vulnerability Research Issues (2) • Construct comprehensive taxonomies of vulnerabilities for different application areas • Medical systems may have critical privacy vulnerabilities • Vulnerabilities in defense systems compromise homeland security • Propose good taxonomies to facilitate both prevention and elimination of vulnerabilities • Enhance metabases of vulnerabilities/incidents • This reveals characteristics for preventing not only identical but also similar vulnerabilities • It also contributes to identification of related vulnerabilities, including dangerous synergistic ones • A good model of a set of synergistic vulnerabilities can lead to uncovering gang attack threats or incidents

  15. Vulnerability Research Issues (3) • Provide models for vulnerabilities and their contexts • The challenge: how a vulnerability in one context propagates to another • If Dr. Smith is a high-risk driver, is he a trustworthy doctor? • Different kinds of vulnerabilities are emphasized in different contexts • Devise quantitative lifecycle vulnerability models for a given type of application or system • Exploit unique characteristics of the vulnerabilities and of the application/system • In each lifecycle phase: - determine the most dangerous and common types of vulnerabilities - use knowledge of such types of vulnerabilities to prevent them • Best defensive procedures adaptively selected from a predefined set

  16. Vulnerability Research Issues (4) • The lifecycle model helps solve several problems • Avoiding system vulnerabilities most efficiently • By discovering and eliminating them at the design and implementation stages • Evaluating/measuring vulnerabilities at each lifecycle stage • In system components / subsystems / the system as a whole • Assisting in the most efficient discovery of vulnerabilities before they are exploited by an attacker or triggered by a failure • Assisting in the most efficient elimination / masking of vulnerabilities (e.g., based on principles analogous to fault tolerance), OR: • Keeping an attacker unaware or uncertain of important system parameters (e.g., by using non-deterministic or deceptive system behavior, increased component diversity, or multiple lines of defense)

  17. Vulnerability Research Issues (5) • Provide methods of assessing impact of vulnerabilities on security in applications & systems • Create formal descriptions of the impact of vulnerabilities • Develop quantitative vulnerability impact evaluation methods • Use resulting ranking for threat/risk analysis • Identify the fundamental design principles and guidelines for dealing with system vulnerabilities at each lifecycle stage • Propose best practices for reducing vulnerabilities at all lifecycle stages (based on the above principles and guidelines) • Develop interactive or fully automatic tools and infrastructures encouraging or enforcing use of these best practices • Other issues: • Investigate vulnerabilities in security mechanisms themselves • Investigate vulnerabilities due to non-malicious but threat-enabling uses of information [21]

  18. Outline 1. Vulnerabilities 2. Threats 3. Mechanisms to Reduce Vulnerabilities and Threats 3.1. Applying Reliability and Fault Tolerance Principles to Security Research 3.2. Using Trust in Role-based Access Control 3.3. Privacy-preserving Data Dissemination 3.4. Fraud Countermeasure Mechanisms

  19. Threats - Topics • Models of Threats • Dealing with Threats • Threat Avoidance • Threat Tolerance • Fraud Threat Detection for Threat Tolerance • Fraud Threats • Threat Research Issues

  20. Models of Threats • Threats in the security domain are like errors in the reliability domain • Entities that can intentionally exploit or inadvertently trigger specific system vulnerabilities to cause security breaches [16, 27] • Attacks or accidents materialize threats (changing them from potential to actual) • Attack - an intentional exploitation of vulnerabilities • Accident - an inadvertent triggering of vulnerabilities • Threat classifications: [26] • Based on actions, we have: threats of illegal access, threats of destruction, threats of modification, and threats of emulation • Based on consequences, we have: threats of disclosure, threats of (illegal) execution, threats of misrepresentation, and threats of repudiation

  21. Dealing with Threats • Dealing with threats • Avoid (prevent) threats in systems • Detect threats • Eliminate threats • Tolerate threats • Deal with threats based on degree of risk acceptable to application • Avoid/eliminate threats to human life • Tolerate threats to noncritical or redundant components

  22. Dealing with Threats – Threat Avoidance (1) • Design of threat avoidance techniques - analogous to fault avoidance (in reliability) • Threat avoidance methods are frozen after system deployment • Effective only against less sophisticated attacks • Sophisticated attacks require adaptive schemes for threat tolerance [20] • Attackers have motivation, resources, and the whole system lifetime to discover its vulnerabilities • Can discover holes in threat avoidance methods

  23. Dealing with Threats – Threat Avoidance (2) • Understanding threat sources • Understand threats by humans, their motivation and potential attack modes [27] • Understand threats due to system faults and failures • Example design guidelines for preventing threats: • Model for secure protocols [15] • Formal models for analysis of authentication protocols [25, 10] • Models for statistical databases to prevent data disclosures [1]

  24. Dealing with Threats – Threat Tolerance • Useful features of the fault-tolerant approach • Not concerned with each individual failure • Don’t spend all resources on dealing with individual failures • Can ignore transient and non-catastrophic errors and failures • Need an analogous intrusion-tolerant approach • Deal with lesser and common security breaches • E.g.: intrusion tolerance for database systems [3] • Phase 1: attack detection • Optional (e.g., majority voting schemes don’t need detection) • Phases 2-5: damage confinement, damage assessment, reconfiguration, continuation of service • Can be implicit (e.g., voting schemes follow the same procedure whether attacked or not) • Phase 6: attack reporting • Leads to repair and fault treatment (to prevent a recurrence of similar attacks)
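A minimal sketch of the six-phase intrusion-tolerance flow above, assuming each phase is a plain function and that detection may be skipped (as in majority-voting schemes). The phase handlers are placeholders, not components of any real intrusion-tolerant database system.

```python
from typing import Callable, Optional

# Hypothetical phase handlers; in a real intrusion-tolerant database system
# each would be a substantial component.
def confine_damage(event): print("confining damage for", event)
def assess_damage(event): print("assessing damage for", event)
def reconfigure(event): print("reconfiguring around", event)
def continue_service(event): print("continuing service despite", event)
def report_attack(event): print("reporting", event, "for repair and fault treatment")

def tolerate(event: str,
             detect: Optional[Callable[[str], bool]] = None) -> None:
    """Run the phases of intrusion tolerance for one suspicious event.

    Phase 1 (detection) is optional: voting-based schemes follow the same
    procedure whether or not an attack occurred, so detect=None skips it."""
    if detect is not None and not detect(event):
        return  # nothing detected, nothing to tolerate
    for phase in (confine_damage, assess_damage, reconfigure,
                  continue_service, report_attack):
        phase(event)

# Usage: a trivial detector that flags any event mentioning "tamper".
tolerate("tampered replica at node 3", detect=lambda e: "tamper" in e)
```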

  25. Dealing with Threats – Fraud Threat Detection for Threat Tolerance • Fraud threat identification is needed • Fraud detection systems • Widely used in telecommunications, online transactions, insurance • Effective systems use both fraud rules and pattern analysis of user behavior • Challenge: a very high false alarm rate • Due to the skewed distribution of fraud occurrences
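The sketch below combines the two ingredients mentioned above: an explicit fraud rule and a simple statistical check on a user's own behavior history. The rule, the transaction fields, and the threshold k are assumptions for illustration; the choice of k is exactly where the false-alarm problem caused by the skewed fraud distribution shows up.

```python
from statistics import mean, stdev

def rule_violation(txn: dict) -> bool:
    # Hypothetical hard rule: the transaction uses a card reported stolen.
    return txn.get("card_reported_stolen", False)

def behavior_anomaly(amount: float, history: list, k: float = 3.0) -> bool:
    """Flag a transaction amount far outside the user's own history.

    Because genuine fraud is rare (skewed distribution), a small k floods
    analysts with false alarms while a large k misses real fraud; choosing
    k is the central tuning problem."""
    if len(history) < 5:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(amount - mu) > k * sigma

def is_suspicious(txn: dict, history: list) -> bool:
    return rule_violation(txn) or behavior_anomaly(txn["amount"], history)

# Usage: a $940 charge against a history of ~$10-15 charges is flagged.
print(is_suspicious({"amount": 940.0}, [12.0, 9.5, 15.0, 11.2, 13.8, 10.1]))
```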

  26. Fraud Threats • Analyze salient features of fraud threats • Some salient features of fraud threats [9] • Fraud is often a malicious opportunistic reaction • Fraud escalation is a natural phenomenon • Gang fraud can be especially damaging • Gang fraudsters can cooperate in misdirecting suspicion onto others • Individuals/gangs planning fraud thrive in fuzzy environments • They use fuzzy assignments of responsibilities to participating entities • Powerful fraudsters create environments that facilitate fraud • E.g.: CEOs involved in insider trading

  27. Threat Research Issues (1) • Analysis of known threats in context • Identify (in metabases) known threats relevant to the context • Find salient features of these threats and associations between them • Threats can also be associated via their links to related vulnerabilities • Infer threat features from features of vulnerabilities related to them • Build a threat taxonomy for the considered context • Propose qualitative and quantitative models of threats in context • Including lifecycle threat models • Define measures to determine threat levels • Devise techniques for avoiding/tolerating threats via unpredictability or non-determinism • Detecting known threats • Discovering unknown threats

  28. Threat Research Issues (2) • Develop quantitative threat models using analogies to reliability models • E.g., rate threats or attacks using time and effort random variables • Describe the distribution of their random behavior • Mean Effort To security Failure (METF) • Analogous to the Mean Time To Failure (MTTF) reliability measure • Mean Time To Patch and Mean Effort To Patch (new security measures) • Analogous to the Mean Time To Repair (MTTR) reliability measure and the METF security measure, respectively • Propose evaluation methods for threat impacts • A mere threat (a potential for attack) has its own impact • Consider threat properties: direct damage, indirect damage, recovery cost, prevention overhead • Consider interaction with other threats and defensive mechanisms
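As a small illustration of the analogy above, the sketch below estimates Mean Effort To security Failure (METF) from a sample of observed breach efforts, in the same way MTTF is estimated from observed times to failure. The sample data and the unit of "effort" (attacker-hours) are assumptions.

```python
from statistics import mean

def mean_effort_to_failure(breach_efforts: list) -> float:
    """Estimate METF as the sample mean of efforts expended until a breach,
    mirroring the estimation of MTTF from observed times to failure.

    'Effort' has no standard unit; here it is whatever the analyst measures
    (e.g., attacker-hours or number of attack attempts)."""
    if not breach_efforts:
        raise ValueError("need at least one observation")
    return mean(breach_efforts)

# Hypothetical observations: effort (in attacker-hours) expended before
# each of five recorded security breaches.
observed = [14.0, 3.5, 22.0, 8.0, 11.5]
print(f"Estimated METF: {mean_effort_to_failure(observed):.1f} attacker-hours")
```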

  29. Threat Research Issues (3) • Invent algorithms, methods, and design guidelines to reduce number and severity of threats • Consider injection of unpredictability or uncertainty to reduce threats • E.g., reduce data transfer threats by sending portions of critical data through different routes • Investigate threats to security mechanisms themselves • Study threat detection • It might be needed for threat tolerance • Includes investigation of fraud threat detection
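The route-splitting idea above can be sketched very simply: split a critical message into fragments and assign the fragments to different routes in round-robin order, so that no single compromised route sees the whole message. The fragmentation scheme below is only an illustration; a real design would add encryption or secret sharing on top.

```python
def split_across_routes(data: bytes, routes: list, chunk: int = 4) -> dict:
    """Assign successive fragments of `data` to routes in round-robin order.

    An eavesdropper on any single route sees only a scattered subset of
    the fragments. This sketch provides no confidentiality by itself;
    combine it with encryption or a secret-sharing scheme in practice."""
    assignment = {r: [] for r in routes}
    for i in range(0, len(data), chunk):
        route = routes[(i // chunk) % len(routes)]
        assignment[route].append(data[i:i + chunk])
    return assignment

# Usage: spread one critical payload over three hypothetical routes.
plan = split_across_routes(b"critical payload to protect",
                           ["route-A", "route-B", "route-C"])
for route, fragments in plan.items():
    print(route, fragments)
```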

  30. Products, Services and Research Programs for Industry (1) • There are numerous commercial products and services, and some free products and services. Examples follow. • Notation used below: Product (Organization) • Example vulnerability and incident metabases • CVE (Mitre), ICAT (NIST), OSVDB (osvdb.com), Apache Week Web Server (Red Hat), Cisco Secure Encyclopedia (Cisco), DOVES (Computer Security Laboratory, UC Davis), DragonSoft Vulnerability Database (DragonSoft Security Associates), Secunia Security Advisories (Secunia), SecurityFocus Vulnerability Database (Symantec), SIOS (Yokogawa Electric Corp.), Verletzbarkeits-Datenbank (scip AG), Vigil@nce AQL (Alliance Qualité Logiciel) • Example vulnerability notification systems • CERT (SEI-CMU), Cassandra (CERIAS-Purdue), ALTAIR (esCERT-UPC), DeepSight Alert Services (Symantec), Mandrake Linux Security Advisories (MandrakeSoft) • Example other tools (1) • Vulnerability Assessment Tools (for databases, applications, web applications, etc.) • AppDetective (Application Security), NeoScanner@ESM (Inzen), AuditPro for SQL Server (Network Intelligence India Pvt. Ltd.), eTrust Policy Compliance (Computer Associates), Foresight (Cubico Solutions CC), IBM Tivoli Risk Manager (IBM), Internet Scanner (Internet Security Systems), NetIQ Vulnerability Manager (NetIQ), N-Stealth (N-Stalker), QualysGuard (Qualys), Retina Network Security Scanner (eEye Digital Security), SAINT (SAINT Corp.), SARA (Advanced Research Corp.), STAT-Scanner (Harris Corp.), StillSecure VAM (StillSecure), Symantec Vulnerability Assessment (Symantec) • Automated Scanning Tools, Vulnerability Scanners • Automated Scanning (Beyond Security Ltd.), ipLegion/intraLegion (E*MAZE Networks), Managed Vulnerability Assessment (LURHQ Corp.), Nessus Security Scanner (The Nessus Project), NeVO (Tenable Network Security)

  31. Products, Services and Research Programs for Industry (2) • Example other tools (2) • Vulnerability and Penetration Testing • Attack Tool Kit (Computec.ch), CORE IMPACT (Core Security Technologies), LANPATROL (Network Security Syst.) • Intrusion Detection Systems • Cisco Secure IDS (Cisco), Cybervision Intrusion Detection System (Venus Information Technology), Dragon Sensor (Enterasys Networks), McAfee IntruShield (McAfee), NetScreen-IDP (NetScreen Technologies), Network Box Internet Threat Protection Device (Network Box Corp.) • Threat Management Systems • Symantec ManHunt (Symantec) • Example services • Vulnerability Scanning Services • Netcraft Network Examination Service (Netcraft Ltd.) • Vulnerability Assessment and Risk Analysis Services • ActiveSentry (Intranode), Risk Analysis Subscription Service (Strongbox Security), SecuritySpace Security Audits (E-Soft), Westpoint Enterprise Scan (Westpoint Ltd.) • Threat Notification • TruSecure IntelliSHIELD Alert Manager (TruSecure Corp.) • Patches • Software Security Updates (Microsoft) • More on metabases/tools/services: http://www.cve.mitre.org/compatible/product.html • Example Research Programs • Microsoft Trustworthy Computing (Security, Privacy, Reliability, Business Integrity) • IBM • Almaden: information security; Zurich: information security, privacy, and cryptography; Secure Systems Department; Internet Security group; Cryptography Research Group

  32. Outline 1. Vulnerabilities 2. Threats 3. Mechanisms to Reduce Vulnerabilities and Threats 3.1. Applying Reliability and Fault Tolerance Principles to Security Research 3.2. Using Trust in Role-based Access Control 3.3. Privacy-preserving Data Dissemination 3.4. Fraud Countermeasure Mechanisms

  33. Applying Reliability Principles to Security Research (1) • Apply the science and engineering from Reliability to Security [6] • Analogies in basic notions [6, 7] • Fault – vulnerability • Error (enabled by a fault) – threat (enabled by a vulnerability) • Failure/crash (materializes a fault, consequence of an error) – security breach (materializes a vulnerability, consequence of a threat) • Time - effort analogies [18]: time-to-failure distribution for accidental failures – expended effort-to-breach distribution for intentional security breaches • This is not a “direct” analogy: it considers important differences between Reliability and Security • Most important: intentional human factors in Security

  34. Applying Reliability Principles to Security Research (2) • Analogies from fault avoidance/tolerance [27] • Fault avoidance - threat avoidance • Fault tolerance - threat tolerance (gracefully adapts to threats that have materialized) • Maybe threat avoidance/tolerance should be named: vulnerability avoidance/tolerance (to be consistent with the vulnerability - fault analogy) • Analogy: to deal with failures, build fault-tolerant systems; to deal with security breaches, build threat-tolerant systems

  35. Applying Reliability Principles to Security Research (3) • Examples of solutions using fault tolerance analogies • Voting and quorums • To increase reliability - require a quorum of voting replicas; to increase security - make forming voting quorums more difficult • This is not a “direct” analogy but a kind of “reversal” of it • Checkpointing applied to intrusion detection • To increase reliability – use checkpoints to bring the system back to a reliable (e.g., transaction-consistent) state • To increase security - use checkpoints to bring the system back to a secure state • Adaptability / self-healing • Adapt to common and less severe security breaches as we adapt to everyday and relatively benign failures • Adapt to: timing / severity / duration / extent of a security breach
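A minimal sketch of the voting analogy above: every replica is queried and a value is accepted only when a strict majority agrees, masking a minority of faulty or compromised replicas. Replica behavior is simulated with plain functions; this illustrates the principle, not any particular replicated system.

```python
from collections import Counter
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def majority_vote(replicas: Sequence[Callable[[], T]]) -> T:
    """Query every replica and return the value reported by a strict majority.

    Tolerates up to f faulty or compromised replicas out of 2f+1, since
    their answers cannot form a majority. Raises if no majority exists."""
    answers = [replica() for replica in replicas]
    value, count = Counter(answers).most_common(1)[0]
    if count <= len(answers) // 2:
        raise RuntimeError("no majority; too many faulty or compromised replicas")
    return value

# Usage: two honest replicas outvote one compromised replica.
honest = lambda: 42
compromised = lambda: 13
print(majority_vote([honest, honest, compromised]))  # -> 42
```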

  36. Applying Reliability Principles to Security Research (4) • Beware: Reliability analogies are not always helpful • Differences between seemingly identical notions: e.g., “system boundaries” are less open for Reliability than for Security • No simple analogies exist for intentional security breaches arising from planted malicious faults • In such cases, analogy of time (Reliability) to effort (Security) is meaningless • E.g., sequential time vs. non-sequential effort • E.g., long time duration vs. “nearly instantaneous” effort • No simple analogies exist when attack efforts are concentrated in time • As before, analogy of time to effort is meaningless

  37. Outline 1. Vulnerabilities 2. Threats 3. Mechanisms to Reduce Vulnerabilities and Threats 3.1. Applying Reliability and Fault Tolerance Principles to Security Research 3.2. Using Trust in Role-based Access Control 3.3. Privacy-preserving Data Dissemination 3.4. Fraud Countermeasure Mechanisms

  38. Basic Idea - Using Trust in Role-based Access Control (RBAC) • Traditional identity-based approaches to access control are inadequate • Don’t fit open computing, incl. Internet-based computing [28] • Idea: Use trust to enhance user authentication and authorization • Enhance role-based access control (RBAC) • Use trust in addition to traditional credentials • Trust based on user behavior • Trust is related to vulnerabilities and threats • Trustworthy users: • Don’t exploit vulnerabilities • Don’t become threats

  39. Overview - Using Trust in RBAC (1) • Trust-enhanced role-mapping (TERM) server added to a system with RBAC • Collect and use evidence related to trustworthiness of user behavior • Formalize the notions of evidence type and evidence • Different forms of evidence must be accommodated • Evidence statement: includes evidence and an opinion • The opinion tells how much the evidence provider trusts the evidence he provides

  40. Overview - Using Trust in RBAC (2) • TERM architecture includes: • Algorithm to evaluate credibility of evidence • Based on its associated opinion and on evidence about the trustworthiness of the opinion issuer • Declarative language to define role assignment policies • Algorithm to assign roles to users • Based on role assignment policies and evidence statements • Algorithm to continuously update trustworthiness ratings for users • Its output is used to grant or deny access requests • The trustworthiness rating of a recommender is affected by the trustworthiness ratings of all users he recommended
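The sketch below conveys the flavor of the algorithms listed above with a toy trust-update and role-assignment rule: a user's rating is blended with new evidence, discounted by how much the issuer of that evidence is itself trusted, and a role is granted only above a policy threshold. All names, weights, and thresholds are assumptions for illustration; this is not the actual TERM server algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceStatement:
    # Hypothetical simplification of an evidence statement: a numeric
    # rating plus the issuer's own opinion (confidence) in that evidence.
    issuer: str
    rating: float   # 0.0 (untrustworthy behavior) .. 1.0 (trustworthy)
    opinion: float  # 0.0 .. 1.0, issuer's confidence in this evidence

@dataclass
class TrustStore:
    user_trust: dict = field(default_factory=dict)

    def credibility(self, stmt: EvidenceStatement) -> float:
        # Evidence is discounted by the issuer's own trustworthiness.
        return stmt.opinion * self.user_trust.get(stmt.issuer, 0.5)

    def update(self, user: str, stmt: EvidenceStatement, alpha: float = 0.3) -> float:
        old = self.user_trust.get(user, 0.5)
        weight = alpha * self.credibility(stmt)
        new = (1 - weight) * old + weight * stmt.rating
        self.user_trust[user] = new
        return new

def assign_role(store: TrustStore, user: str, role_threshold: float = 0.7) -> str:
    # Toy role-assignment policy: a single trust threshold per role.
    return "trusted-role" if store.user_trust.get(user, 0.0) >= role_threshold else "default-role"

# Usage: well-trusted alice vouches for newcomer bob.
store = TrustStore(user_trust={"alice": 0.9})
store.update("bob", EvidenceStatement(issuer="alice", rating=1.0, opinion=0.8))
print(store.user_trust["bob"], assign_role(store, "bob"))
```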

  41. Overview - Using Trust in RBAC (3) • A prototype TERM server • Software available at: http://www.cs.purdue.edu/homes/bb/NSFtrust.html More details on “Using Trust in RBAC” available in the extended version of this presentation at: www.cs.purdue.edu/people/bb#colloquia

  42. Outline 1. Vulnerabilities 2. Threats 3. Mechanisms to Reduce Vulnerabilities and Threats 3.1. Applying Reliability and Fault Tolerance Principles to Security Research 3.2. Using Trust in Role-based Access Control 3.3. Privacy-preserving Data Dissemination 3.4. Fraud Countermeasure Mechanisms

  43. Basic Terms - Privacy-preserving Data Dissemination • “Guardian:” an entity entrusted by private data owners with collection, storage, or transfer of their data • An owner can be a guardian for its own private data • An owner can be an institution or a computing system • Guardians may be allowed or required by law to share private data • With the owner’s explicit consent • Without that consent, when required by law (research, court order, etc.) [Figure: data dissemination graph from the Original Guardian, holding the “Owner’s” private “Data,” through second-level and third-level guardians (Guardians 1–6)]

  44. Problem of Privacy Preservation • Guardian passes private data to another guardian in a data dissemination chain • Chain within a graph (possibly cyclic) • Owner privacy preferences not transmitted due to neglect or failure • Risk grows with chain length and milieu fallibility and hostility • If preferences lost, receiving guardian unable to honor them

  45. Challenges • Ensuring that owner’s metadata are never decoupled from his data • Metadata include owner’s privacy preferences • Efficient protection in a hostile milieu • Threats - examples • Uncontrolled data dissemination • Intentional or accidental data corruption, substitution, or disclosure • Detection of a loss of data or metadata • Efficient recovery of data and metadata • Recovery by retransmission from the original guardian is most trustworthy

  46. Overview - Privacy-preserving Data Dissemination • Use bundles to make data and metadata inseparable: bundle = self-descriptive private data + its metadata • E.g., encrypt or obfuscate the bundle to prevent separation • Each bundle includes a mechanism for apoptosis: apoptosis = clean self-destruction • A bundle chooses apoptosis when threatened with a successful hostile attack • Develop distance-based evaporation of bundles • E.g., the more “distant” a bundle is from its owner, the more it evaporates (becoming more distorted) More details on “Privacy-preserving Data Dissemination” available in the extended version of this presentation at: www.cs.purdue.edu/people/bb#colloquia
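A toy sketch of the bundle idea above: private data travels together with its metadata, self-destructs (apoptosis) when a successful hostile attack is suspected, and becomes progressively distorted (evaporates) with distance from its owner. The distance measure and the character-masking distortion rule are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class Bundle:
    """Self-descriptive private data kept inseparable from its metadata."""
    data: str
    metadata: dict  # e.g., the owner's privacy preferences
    distance: int = 0          # hops from the original guardian
    destroyed: bool = field(default=False, init=False)

    def apoptosis(self) -> None:
        # Clean self-destruction when a successful hostile attack is likely.
        self.data = ""
        self.metadata = {}
        self.destroyed = True

    def evaporate(self, hops: int = 1) -> None:
        # Distance-based evaporation: the farther from the owner, the more
        # the data is distorted (here: trailing characters are masked).
        self.distance += hops
        keep = max(len(self.data) - 4 * self.distance, 0)
        self.data = self.data[:keep] + "*" * (len(self.data) - keep)

    def on_transfer(self, hostile: bool) -> None:
        if hostile:
            self.apoptosis()
        else:
            self.evaporate()

# Usage: one benign transfer (partial evaporation), then a hostile one.
b = Bundle(data="patient: J. Doe, diagnosis: ...",
           metadata={"may_share_with": ["primary physician"]})
b.on_transfer(hostile=False)
print(b.data)
b.on_transfer(hostile=True)
print(b.destroyed, repr(b.data))
```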

  47. Outline 1. Vulnerabilities 2. Threats 3. Mechanisms to Reduce Vulnerabilities and Threats 3.1. Applying Reliability and Fault Tolerance Principles to Security Research 3.2. Using Trust in Role-based Access Control 3.3. Privacy-preserving Data Dissemination 3.4. Fraud Countermeasure Mechanisms

  48. Overview - Fraud Countermeasure Mechanisms (1) • System monitors user behavior • System decides whether user’s behavior qualifies as fraudulent • Three types of fraudulent behavior identified: • “Uncovered deceiving intention” • User misbehaves all the time • “Trapping intention” • User behaves well at first, then commits fraud • “Illusive intention” • User exhibits cyclic behavior: longer periods of proper behavior separated by shorter periods of misbehavior

  49. Overview - Fraud Countermeasure Mechanisms (2) • System architecture for swindler detection • Profile-based anomaly detector • Monitors suspicious actions, searching for identified fraudulent behavior patterns • State transition analysis • Provides a state description when an activity results in entering a dangerous state • Deceiving intention predictor • Discovers deceiving intention based on satisfaction ratings • Decision making • Decides whether to raise a fraud alarm when a deceiving pattern is discovered
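As a small illustration of the deceiving intention predictor above, the sketch below examines a stream of per-transaction satisfaction ratings and labels the pattern with one of the three intention types from the previous slide. The thresholds and the simple pattern tests are assumptions for illustration, not the predictor used in the experiments.

```python
def classify_intention(satisfaction: list, bad: float = 0.4) -> str:
    """Classify a sequence of per-transaction satisfaction ratings (0..1).

    'uncovered deceiving' - consistently bad behavior,
    'trapping'            - good at first, bad afterwards,
    'illusive'            - alternating good and bad stretches,
    'benign'              - otherwise.
    Thresholds are illustrative, not tuned."""
    flags = [s < bad for s in satisfaction]          # True = misbehavior
    if all(flags):
        return "uncovered deceiving"
    half = len(flags) // 2
    if not any(flags[:half]) and all(flags[half:]):
        return "trapping"
    switches = sum(1 for a, b in zip(flags, flags[1:]) if a != b)
    if switches >= 3 and any(flags):
        return "illusive"
    return "benign"

# Usage: good-then-bad vs. alternating behavior.
print(classify_intention([0.9, 0.8, 0.9, 0.2, 0.1, 0.2]))   # trapping
print(classify_intention([0.9, 0.2, 0.8, 0.1, 0.9, 0.3]))   # illusive
```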

  50. Overview - Fraud Countermeasure Mechanisms (3) • The experiments performed validated the architecture • All three types of fraudulent behavior were quickly detected More details on “Fraud Countermeasure Mechanisms” available in the extended version of this presentation at: www.cs.purdue.edu/people/bb#colloquia
