
Security


Presentation Transcript


  1. Security Chapter 8

  2. Types of Threats
  • Interception: gain unauthorized access to some resource.
    → a third party eavesdrops on a conversation over the Internet.
    → someone illegally copies files from the directory of another user.
  • Interruption: make a service unavailable (or appear so), inaccessible, or destroyed.
    → denial-of-service attacks (e.g. overwhelming a Web server with requests).
  • Modification: change a resource (e.g. data or a service) so that it violates its specification.
    → intercept and change data in transit.
    → change a program so that it secretly logs user activities.
  • Fabrication: generate data/activity that normally does not exist.
    → add an entry to a password file.
    → replay attacks.

  3. Security Mechanisms
  • Encryption:
    • Encryption is the main tool for achieving security goals (i.e. the policy).
    • It makes data (e.g. a message) useless to third parties.
    • It supports confidentiality (hiding information from unauthorized users) and integrity (preventing changes by unauthorized users).
  • Authentication:
    • The process of verifying a claimed identity (of a user or client).
  • Authorization:
    • Authenticated clients/users are further restricted by access rights: authorized users/clients are those who possess the access rights needed for a particular activity.
  • Auditing:
    • Auditing tools help trace client activities; for example, client actions may be logged for later analysis.
    • Note that auditing is a reactive mechanism, whereas encryption, authentication, and authorization are preventive ones.

  4. Example of a Security Policy: Globus
  • Globus: a security architecture based on different administrative domains with different local security measures.
  • Main focus: cross-domain security issues.
  • Policy:
    • The environment consists of multiple administrative domains.
    • Local operations (within a single domain) are subject to the local domain security policy only.
    • Global operations (involving different domains) require the initiator to be known in each involved domain.
    • Operations between entities in different domains require mutual authentication.
    • Global authentication replaces local authentication.
    • Access control (i.e. authorization) is subject to local security only.
    • Users can delegate rights to processes.
    • A group of processes in the same domain can share credentials.

  5. Design Issues
  • Main goal:
    • Security services should allow different security policies to be realized.
  • Design issues for security services:
    • Focus of protection: on which entities is protection based? → different approaches.
    • Layering of the security mechanisms: in which layer are security mechanisms to be placed? → in general, the middleware layer.
    • Distribution of the security mechanisms: how should security services be distributed in the network?
    • Simplicity of the security mechanisms.

  6. Focus of Protection • Three approaches for protection against security threats • Protection against invalid operations (in databases) • Protection against unauthorized invocations (object-based systems) • Protection against unauthorized users (role-based access control)

  7. Layering of Security Mechanisms (1)
  • The logical organization of a distributed system into several layers.
  • The layer in which to place a security mechanism depends on the trust a client has in the services of that layer.
  • Trust: the subjective view of a client on how secure a particular service is.

  8. Layering of Security Mechanisms (2)
  [Figure: two sites, with hosts A and B behind routers, connected through a wide-area backbone service.]
  • In the figure, security measures are placed at the link level.
  • However, this only makes sense if sender A trusts that the intersite traffic is adequately protected at that level.
  • If not, A would use transport-level measures (e.g. secure TCP, SSL), ignoring the measures taken at the link level.
  • However, this means that A has to trust SSL.
  • If not, A may choose to use secure RPC (a middleware service).
  → Thus, where to place security measures is a matter of trust.

  9. Distribution of Security Mechanisms
  • The principle of RISSC (Reduced Interfaces for Secure System Components) as applied to secure distributed systems.
  • Trusted Computing Base (TCB): the set of all security mechanisms.
    → The TCB of a (middleware-based) distributed system includes the security mechanisms of the network operating system (NOS).
  • Trust in the NOS is essential: if it is absent, part of the NOS functionality has to be re-implemented in the distributed system (convergence towards a distributed operating system!).
  • Example of a distribution of security services:
    → place security-critical servers on dedicated trusted machines.
    → use additional hardware support to protect these servers (see figure).

  10. Simplicity
  • Security mechanisms should be simple:
    → User view: simplicity promotes the trust of users in the security mechanisms used.
    → Designer view:
      1. it helps reduce security holes.
      2. it helps make the implementation of these mechanisms efficient.
  • The requirement of simplicity is not easy to achieve because:
    → Simple mechanisms often fall short of user expectations.
      e.g. link-level encryption is simple and trustworthy, but it cannot meet requirements like “send my message to only one user in a (remote) LAN”; this needs authentication mechanisms.
    → Applications may be inherently complex, e.g. e-commerce.

  11. Cryptography (1)
  • Intruders and eavesdroppers in communication.
  • Suppose user A wants to send a message P to user B: A → B: P (plaintext)
  • An intruder can (see figure):
    • intercept the message P.
    • intercept and modify the message P.
    • insert another message P’ into the channel between A and B.
  • Solution: encrypt P with a key before sending it and decrypt it after receipt.

  12. Cryptography (2)
  • Instead of A → B: P, do the following steps:
    • A encrypts P using encryption function E and key K, and gets ciphertext C: C = EK(P)
    • A → B: C
    • B decrypts C using decryption function D and key K, and gets P: P = DK(C)
  → The intruder attacks mentioned above are no longer possible.
  • Above we assumed DK(C) = P, i.e. DK(EK(P)) = P
    → symmetric cryptosystems or secret-key systems.
  • Two methods are known:
    • Symmetric system: only one secret key, used by both sender and receiver.
    • Public-key system: a pair of keys K+ and K-, where K+ is public and K- is private (i.e. secret).
  • Notation: KA,B is the secret key shared by A and B; KA+ and KA- are the public and private key of A, respectively.

  13. Symmetric Cryptosystems
  • Requirements:
    • The functions E and D should be efficiently implementable (e.g. in hardware).
    • The key K should be chosen randomly.
    • Statistical properties of the plaintext (e.g. letter frequencies) should not appear in the ciphertext.
    • The functions E and D themselves are not secret; the method should rely only on the secrecy of the key K.
    • Even if arbitrarily many pairs P and EK(P) are known, the key K cannot be determined “easily”: → resistant against known-plaintext attacks. The same applies if P can be freely chosen: → resistant against chosen-plaintext attacks.
    • Modifying one bit of the ciphertext modifies each bit of the plaintext with probability ½, and vice versa (avalanche effect). → Modifying the ciphertext in order to obtain a desired plaintext combination is statistically infeasible.

  14. Symmetric Cryptosystems
  • Requirements (continued):
    • A secret key can only be determined through a systematic examination of all possible values (brute-force attack). Let K’ be a key candidate:
      1) P and EK(P) are known: EK’(P) is compared to EK(P), or DK’(EK(P)) is compared to P.
      2) only EK(P) is known: DK’(EK(P)) is checked for being a meaningful message.
    → The complexity of finding the key depends on the key length.
  • General goal: the complexity should grow exponentially with the key length (see the sketch below).
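
As a toy illustration of such a brute-force search (a sketch, not part of the slides: the single-byte XOR “cipher” and the names below are invented, and the key space is deliberately tiny so that the loop terminates instantly), the following Python snippet recovers a key from one known plaintext/ciphertext pair; with a realistic key length the same search grows exponentially and becomes infeasible.

    # Toy brute-force (known-plaintext) key search. The "cipher" is a
    # single-byte XOR and the key space has only 2^8 values, so the search
    # finishes instantly; a 56-bit or 128-bit key space grows exponentially
    # with the key length and makes the same loop infeasible.

    def toy_encrypt(plaintext: bytes, key: int) -> bytes:
        return bytes(b ^ key for b in plaintext)

    def brute_force(known_plain: bytes, known_cipher: bytes):
        for candidate in range(256):                    # every possible 8-bit key K'
            if toy_encrypt(known_plain, candidate) == known_cipher:
                yield candidate                         # K' consistent with the pair

    if __name__ == "__main__":
        secret_key = 0x5A
        p = b"ATTACK AT DAWN"
        c = toy_encrypt(p, secret_key)
        print("recovered key(s):", [hex(k) for k in brute_force(p, c)])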

  15. DES (1)
  • DES: Data Encryption Standard
    • Algorithm for secret-key systems.
    • Designed at IBM and accepted as a standard in 1977.
    • Key length: 56 bits (+ 8 parity bits).
    • Based on permutations and substitutions.
  • Main principle:
    • M: plaintext, K: key
    • F: a sequence of bits generated from K (i.e. F = f(K))
    • ⊕: exclusive OR (XOR): a ⊕ b = (a AND NOT b) OR (NOT a AND b)
    • Rule: (a ⊕ b) ⊕ b = a
  • A simple example of encryption and decryption:
    • M = “T” → Mascii = “1010100”
    • F = “1001011”
    • EK(M) = M ⊕ F = “0011111”
    • DK(EK(M)) = EK(M) ⊕ F = “1010100” (= Mascii)
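
The XOR principle can be checked directly. Below is a minimal Python sketch reproducing the “T” example from this slide (only the XOR step; real DES adds permutations, S-boxes, and a key schedule):

    # XOR-based encryption/decryption as on the slide: C = M xor F, M = C xor F.
    M = 0b1010100                   # ASCII "T" (7 bits)
    F = 0b1001011                   # key-derived bit sequence
    C = M ^ F                       # encryption
    print(format(C, "07b"))         # -> 0011111
    print(format(C ^ F, "07b"))     # decryption -> 1010100 (= Mascii)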

  16. DES (2)
  • DES Encryption:
    • The 64-bit message block M goes through an initial fixed permutation (IP).
    • The block then goes through 16 transformation steps.
    • In each step i the block is handled as two 32-bit halves Li and Ri.
    • In each step i a 48-bit round key Ki is generated from the initial 56-bit key K.
    • After step 16, the permutation IP-1 (the inverse of the initial permutation) is applied to the 64-bit block (R16, L16); note that L and R are exchanged before this final permutation!

  17. DES (3)
  • DES transformation in each step i (see figure):
    1. Li = Ri-1
    2. Ri = Li-1 ⊕ f(Ri-1, Ki)
  • DES Decryption:
    • The same algorithm is used!
    • Only the round keys Ki are applied in reverse order.
  • Why the same algorithm works:
    1. Ri-1 = Li
    2. Li-1 = Ri ⊕ f(Li, Ki)
  • Sample decryption steps (primes denote the values computed during decryption):
    • Input: the ciphertext C = EK(M) = IP-1(R16, L16)
    • Step 0: (L'0, R'0) = IP(C) = (R16, L16)
    • Step 1: (L'1, R'1) = (R'0, L'0 ⊕ f(R'0, K16)) = (L16, R16 ⊕ f(L16, K16)) = (R15, L15)
    • …
    • Step 16: (L'16, R'16) = (R0, L0)
    • Output: IP-1(L0, R0) = M (final exchange and inverse permutation)
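
To see why decryption can reuse the same algorithm with the round keys reversed, here is a minimal Feistel sketch in Python. This is not real DES: the round function f, the key schedule, and the 32-bit halves below are stand-ins chosen only to exhibit the structure Li = Ri-1, Ri = Li-1 ⊕ f(Ri-1, Ki) and the final swap.

    import hashlib

    def f(right: int, round_key: int) -> int:
        # Stand-in round function: any deterministic mix of Ri-1 and Ki will do
        # for showing the Feistel structure (real DES uses expansion, S-boxes,
        # and permutations here).
        digest = hashlib.sha256(f"{right}:{round_key}".encode()).digest()
        return int.from_bytes(digest[:4], "big")

    def feistel(block, round_keys):
        left, right = block
        for k in round_keys:
            left, right = right, left ^ f(right, k)   # Li = Ri-1, Ri = Li-1 xor f(Ri-1, Ki)
        return right, left                            # final swap, as in (R16, L16)

    if __name__ == "__main__":
        keys = [101, 202, 303, 404]                   # toy key schedule
        plain = (0x12345678, 0x9ABCDEF0)
        cipher = feistel(plain, keys)
        # Decryption: the *same* routine, with the round keys in reverse order.
        print(feistel(cipher, list(reversed(keys))) == plain)   # True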

  18. DES (4)
  [Figure: details of per-round key generation in DES (from the master key K to the key Ki for round i).]

  19. DES (5)
  [Figure: the function f. Ri-1 (32 bits) is expanded to 48 bits, XORed with the 48-bit round key Ki, split into eight 6-bit groups fed into S-boxes 1 to 8 (each yielding 4 bits), and the resulting 32 bits are permuted to give f(Ri-1, Ki).]

  20. DES (6)
  • Evaluation of DES:
    • The design criteria of the S-boxes were not published (until the early 90s)!
      → The S-boxes could contain security holes.
      → The S-boxes are implemented e.g. as a ROM.
    • 16 encryption rounds is deemed too low.
    • The key length is deemed too small.
    • However, DES (with variants) has proved to be quite resistant to different attacks.
    • The algorithm is very efficient and can be implemented even on smart cards.
  • Operating modes of DES:
    • Block-oriented modes: Electronic Code Book (ECB), Cipher Block Chaining (CBC).
    • Stream-oriented modes: Output Feedback (OFB), Cipher Feedback (CFB).

  21. ECB
  [Figure: ECB encryption and decryption: each block Mi is fed independently through DES to give Ci, and each Ci is decrypted independently to recover Mi.]
  • The blocks Mi of a message are encrypted independently (the ciphertext blocks are Ci).
  • The same plaintext block yields the same ciphertext block (analysis of the message is possible).
  • Transmission errors in one block do not propagate to other blocks.
  • Insertion, exchange, etc. of blocks cannot be recognized.
  • Random access to the ciphertext is possible.
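
A small Python sketch of the ECB weakness noted above. The keyed block “cipher” below is an insecure placeholder for DES (an XOR with a key-derived pad), used only to show that identical plaintext blocks produce identical ciphertext blocks when each block is handled independently:

    import hashlib

    BLOCK = 8  # bytes

    def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
        # Placeholder for DES: XOR the block with a key-derived pad (insecure).
        pad = hashlib.sha256(key).digest()[:BLOCK]
        return bytes(a ^ b for a, b in zip(block, pad))

    toy_block_decrypt = toy_block_encrypt   # XOR is its own inverse

    def ecb_encrypt(message: bytes, key: bytes):
        blocks = [message[i:i + BLOCK] for i in range(0, len(message), BLOCK)]
        return [toy_block_encrypt(b, key) for b in blocks]   # blocks treated independently

    if __name__ == "__main__":
        key = b"demo-key"
        c = ecb_encrypt(b"SAMEBLK!SAMEBLK!OTHERBLK", key)
        print(c[0] == c[1])                      # True: repeated plaintext leaks through ECB
        print(toy_block_decrypt(c[2], key))      # b'OTHERBLK': blocks decrypt independently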

  22. CBC
  [Figure: CBC encryption and decryption: each plaintext block Mi is XORed with the previous ciphertext block Ci-1 (initial value I) before going through DES.]
  • Ciphertext block Ci depends on ciphertext block Ci-1.
  • Since Mi is XORed with Ci-1, the statistical properties of Mi are lost.
  • The same plaintext does not yield the same ciphertext.
  • A transmission error in Ci yields a wrong decryption of Mi (avalanche effect), but in Mi+1 only those bit positions that are corrupted in Ci are wrong; further blocks Mi+2, Mi+3, and so on are not affected.
  • Insertion, exchange, etc. of blocks can be detected.
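
The same toy block cipher can be chained CBC-style. The sketch below (again a placeholder cipher, not DES, with invented key and IV) shows that chaining hides repeated plaintext blocks and that decryption undoes the chaining:

    import hashlib

    BLOCK = 8

    def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
        # Placeholder for DES (insecure), as in the ECB sketch above.
        pad = hashlib.sha256(key).digest()[:BLOCK]
        return bytes(a ^ b for a, b in zip(block, pad))

    toy_block_decrypt = toy_block_encrypt

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(blocks, key, iv):
        prev, out = iv, []
        for m in blocks:
            c = toy_block_encrypt(xor(m, prev), key)   # Ci depends on Ci-1
            out.append(c)
            prev = c
        return out

    def cbc_decrypt(blocks, key, iv):
        prev, out = iv, []
        for c in blocks:
            out.append(xor(toy_block_decrypt(c, key), prev))
            prev = c
        return out

    if __name__ == "__main__":
        key, iv = b"demo-key", b"\x00" * BLOCK
        msg = [b"SAMEBLK!", b"SAMEBLK!", b"OTHERBLK"]
        c = cbc_encrypt(msg, key, iv)
        print(c[0] != c[1])                      # True: chaining hides repeated plaintext
        print(cbc_decrypt(c, key, iv) == msg)    # True: round trip succeeds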

  23. Another Method (not in DES)
  [Figure: encryption and decryption in which both the plaintext and the ciphertext blocks are fed back into the next block (initial values I and J).]
  • Error propagation: a wrong ciphertext block Ci means that no further block will be decrypted correctly.

  24. OFB
  [Figure: OFB mode. A: character to be en-/decrypted; B: the en-/decrypted character; R: an internal shift register (e.g. 64 bits); I: the initial value of R. The leftmost 8 bits of the DES output are XORed with A and, after shifting, fed back into R after each character is processed.]
  • Encryption (= decryption).
  • DES is used only as a random number (keystream) generator, not to encrypt the data directly.
  • Less efficient than the block-oriented modes.
  • More robust against error propagation (e.g. good for voice/image transmission, where a few wrong symbols are tolerated but error propagation is not).
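
A byte-oriented OFB-style sketch in Python. The DES encryption of the shift register is replaced by a hash-based stand-in (prf), an assumption made only for illustration; the point is that the keystream depends only on the key and the initial register value, so the same routine both encrypts and decrypts:

    import hashlib

    def prf(register: bytes, key: bytes) -> bytes:
        # Stand-in for "DES applied to the shift register" (returns a 64-bit block).
        return hashlib.sha256(key + register).digest()[:8]

    def ofb_keystream(key: bytes, iv: bytes, nbytes: int):
        register = iv                            # R starts as the initial value I
        for _ in range(nbytes):
            out = prf(register, key)
            yield out[0]                         # leftmost 8 bits en-/decrypt one character
            register = register[1:] + out[:1]    # shift R and feed those 8 bits back in

    def ofb_crypt(data: bytes, key: bytes, iv: bytes) -> bytes:
        return bytes(b ^ k for b, k in zip(data, ofb_keystream(key, iv, len(data))))

    if __name__ == "__main__":
        key, iv = b"demo-key", b"initval!"
        c = ofb_crypt(b"stream of characters", key, iv)
        print(ofb_crypt(c, key, iv))             # b'stream of characters': encryption = decryption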

  25. CFB
  [Figure: CFB mode. A: character to be en-/decrypted; B: the en-/decrypted character; R: an internal shift register (e.g. 64 bits); I: the initial value of R. The ciphertext byte is fed back into R after each character is processed.]
  • Encryption (= decryption).
  • The ciphertext is used for feedback.
  • A wrong ciphertext byte affects the corresponding plaintext byte in the same positions, and the following plaintext bytes are also affected (avalanche effect) as long as that ciphertext byte remains in the register.
  • More error propagation, i.e. the receiver can better recognize integrity attacks.

  26. Public Key Systems
  • Idea:
    • Each partner A has two keys:
      1. a secret key known only to A: KA- (or SKA)
      2. a public key known to all partners: KA+ (or PKA)
    • The encryption function E and decryption function D are public.
    • For confidentiality, encryption uses the receiver’s public key (PKA) and decryption uses the receiver’s secret key (SKA); the reverse direction is used for digital signatures.
  • Requirements:
    • DSKA(EPKA(M)) = M
    • E and D are efficiently implementable.
    • SKA cannot be derived from PKA.
    • For digital signatures: DPKA(ESKA(M)) = M
    • Secure even if M, EPKA(M), DSKA(M), and PKA are all known.
  • Advantages (compared with secret-key systems):
    • Anyone can send a message to a receiver A (no shared secret key is needed).
    • Key management is simpler.
    • Simple authentication via digital signatures is achievable.

  27. Public Key Systems
  • RSA Algorithm:
    • PK = (e, n), C = EPK(M) = M^e mod n
    • SK = (d, n), DSK(C) = C^d mod n
    • The message is divided into blocks; each block is interpreted as a number (< n). Each block is encrypted independently, but different modes (as for DES) can be used.
  • How PK and SK are determined by partner A:
    1. A chooses two very large prime numbers p and q (at least 100 digits each). A keeps p and q secret.
    2. A computes n = p*q. (The security of the method rests on the fact that large numbers are hard to factorize.)
    3. A determines a number d > max{p, q} that is relatively prime to f = (p-1)*(q-1).
    4. A determines e so that e*d = 1 mod f.
    • From Euler’s theorem (a generalization of Fermat’s little theorem) it follows that M^(e*d) = M mod n
      → DPKA(ESKA(M)) = M = DSKA(EPKA(M))
  • A rudimentary example:
    1. p = 47 and q = 59
    2. n = p*q = 2773

  28. Public Key Systems
  • A rudimentary example (continued):
    3. f = (p-1)*(q-1) = 2668 → d = 157 is ok, since gcd(157, 2668) = 1 → SK = (157, 2773)
    4. e = 17 is ok, since 17*157 = 2669 = 1 mod 2668 → PK = (17, 2773)
    • Suppose the letters A, …, Z are assigned the numbers 01, 02, …, 26 (and 00 for blank).
    • Since n = 2773 > 2626, blocks of two letters can be formed (before encryption).
    • For example, the message m = “ALEA I” is mapped to “011205010009”.
    • The first block b = 0112 is interpreted as the number 112.
    • Encryption of b: 112^17 mod 2773 = 1084, and similarly for the other blocks.
    • Encrypted message: “108423262072”
    • Decryption yields the original message, since 1084^157 mod 2773 = 112, and so on.
  • Problems with RSA:
    • Very long keys are required (around 200 decimal digits for n were recommended to reach DES-level security).
    • RSA is 100 to 1000 times slower than DES.
    → This is why many systems use RSA mainly to distribute keys securely, and not for the normal message traffic.
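
The rudimentary example can be reproduced directly. The following Python sketch uses the slide’s toy parameters (tiny primes and no padding, so it illustrates the arithmetic only and is not a secure use of RSA):

    # Toy RSA with the slide's parameters (educational only: tiny primes, no padding).
    p, q = 47, 59
    n = p * q                       # 2773
    phi = (p - 1) * (q - 1)         # 2668
    d = 157                         # chosen relatively prime to phi
    e = pow(d, -1, phi)             # modular inverse: e*d = 1 (mod phi) -> 17

    def encrypt(m: int) -> int:
        return pow(m, e, n)         # C = M^e mod n

    def decrypt(c: int) -> int:
        return pow(c, d, n)         # M = C^d mod n

    if __name__ == "__main__":
        blocks = [112, 501, 9]                  # "ALEA I" -> 0112 0501 0009
        cipher = [encrypt(m) for m in blocks]
        print(cipher)                           # [1084, 2326, 2072]
        print([decrypt(c) for c in cipher])     # [112, 501, 9]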

  29. Secure Communication
  • Issues:
    • Authentication: does the message really stem from the claimed sender?
    • Confidentiality: is the message protected from being intercepted?
    • Integrity: is the message protected from being changed?
  • Authentication based on secret keys: challenge-and-response protocol (see the sketch below):
    • A → B: A        -- A sends its identity to B
    • B → A: RB       -- B sends back a random number (a challenge)
    • A → B: EK(RB)   -- A returns the encrypted challenge; after decryption B knows that A is on the other side
    • A → B: RA       -- A also sends its own challenge to B
    • B → A: EK(RA)   -- now A also knows that B is on the other side
  • The following optimization makes the protocol attackable:
    • A → B: A, RA          -- A sends its identity and its challenge in the same message to B
    • B → A: RB, EK(RA)     -- B sends back its own challenge and the encrypted challenge of A
    • A → B: EK(RB)         -- A sends the encrypted challenge of B back to B
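
Below is a small simulation of the five-message exchange. As a stand-in for EK(R) it uses an HMAC over the challenge (any keyed function that only holders of K can compute serves the purpose here); the party names and helper functions are invented for illustration:

    import hmac, hashlib, secrets

    def respond(key: bytes, challenge: bytes) -> bytes:
        # Stand-in for EK(challenge): a keyed function only K-holders can compute.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    class Party:
        def __init__(self, name: str, shared_key: bytes):
            self.name, self.key = name, shared_key
        def make_challenge(self) -> bytes:
            return secrets.token_bytes(16)
        def check(self, challenge: bytes, answer: bytes) -> bool:
            return hmac.compare_digest(respond(self.key, challenge), answer)

    if __name__ == "__main__":
        K = secrets.token_bytes(32)        # secret key shared by A and B
        alice, bob = Party("A", K), Party("B", K)

        # A -> B: "A"                      (identity)
        rb = bob.make_challenge()          # B -> A: RB
        ans_a = respond(alice.key, rb)     # A -> B: EK(RB)
        print("B authenticated A:", bob.check(rb, ans_a))

        ra = alice.make_challenge()        # A -> B: RA
        ans_b = respond(bob.key, ra)       # B -> A: EK(RA)
        print("A authenticated B:", alice.check(ra, ans_b))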

  30. Secure Communication
  • Authentication based on secret keys (continued):
    • The optimized protocol needs only 3 messages.
    • However, a reflection attack is now possible! Let X be an intruder who does not know the key K:
      Session 1: X → B: A, RX         -- X pretends to be A and sends B its own challenge
      Session 1: B → X: RB, EK(RX)    -- B encrypts X’s challenge and sends its own challenge
      Session 2: X → B: A, RB         -- X opens a second session, again pretending to be A, and uses the challenge it got from B in session 1 as its own
      Session 2: B → X: RB2, EK(RB)   -- B sends a new challenge and encrypts the old one!
      Session 1: X → B: EK(RB)        -- X returns to session 1 and can now “prove” that it can encrypt B’s challenge, so B believes A is on the other side
  • Remedy: use different challenge spaces on the two sides, e.g. A uses only even and B only odd challenges. (Even this solution is attackable; see the literature.)
  • Remark: the original protocol is safe against reflection attacks simply because B first authenticates A, and not vice versa: a third party X has to prove its identity first!

  31. Secure Communication
  • Authentication based on a Key Distribution Center (KDC):
    • KDC: a trusted server that manages (session) keys.
    • Each partner X has a key KX that protects key transport between X and the KDC.
    • Advantage: more scalable, since fewer keys are needed (N instead of N(N-1)/2).
  • Principle of key distribution:
    • A → KDC: A, B        -- A asks the KDC for a new key to communicate with B
    • KDC → A: EKA(K)      -- the KDC sends the encrypted key K to A using A’s key
    • KDC → B: EKB(K)      -- the same for B, using B’s key
  • Problem: A may begin communicating before B has received the key K.
    → Use of tickets:
    • A → KDC: A, B             -- A asks the KDC for a new key to communicate with B
    • KDC → A: EKA(K), EKB(K)   -- the KDC sends the key K to A, encrypted under A’s and B’s keys
    • A → B: A, EKB(K)          -- A contacts B and provides the ticket EKB(K)
  • Authentication: → Needham-Schroeder protocol

  32. Secure Communication
  • Authentication based on a KDC (continued): Needham-Schroeder authentication protocol:
    • A → KDC: RA1, A, B                     -- A asks the KDC for a key to talk to B and includes a nonce
    • KDC → A: EKA(RA1, B, K, EKB(A, K))     -- the KDC replies with B’s ticket and a new key K
    • A → B: A, EK(RA2), EKB(A, K)           -- A sends an encrypted challenge and the ticket to B
    • B → A: EK(RA2, RB)                     -- B replies with A’s encrypted challenge and a new challenge of its own
    • A → B: EK(RB)                          -- A sends back B’s encrypted challenge
  • Nonce: a random number that is generated only once. Here it ensures A that the KDC’s response refers to A’s current request.
  • Further enhancements are possible.
  • Authentication based on public keys:
    • A → B: EPKB(A, RA)        -- A encrypts its identity and a challenge with B’s public key
    • B → A: EPKA(RA, RB, K)    -- B generates a session key K and sends it, A’s challenge, and its own challenge encrypted with A’s public key (A is then sure that B is the partner)
    • A → B: EK(RB)             -- A sends back B’s encrypted challenge (B is then sure that A is the partner)

  33. Secure Communication
  • Confidentiality:
    • Secret-key systems: A → B: EK(M)      -- encrypt before sending
    • Public-key systems: A → B: EPKB(M)    -- encrypt before sending
  • Integrity: protection against replay
    • Main idea: use sequence numbers (or timestamps).
    1) Secret-key systems:
      • A → B: EK(SA, M1)             -- A sends an initial random sequence number and a message to B; B is not yet sure whether this is a replayed message!
      • B → A: EK(SB, SA+1, M2)       -- B sends a “harmless” message to A and performs no actions on behalf of A yet
      • A → B: EK(SA+1, SB+1, M3)     -- now B is sure that M1 was not a replay, since A’s sequence number has changed and the message includes B’s newly generated sequence number
    2) Public-key systems: use the same protocol; only the encryption uses public keys.

  34. Secure Communication
  • Integrity: digital signatures
  • Situation: A sends a message M to B. We want to:
    1. prove that A sent M to B,
    2. prove that M was destined for B,
    3. prove that M was not manipulated (also not by B).
  • Solution using public keys (DT: date and time):
    • A → B: EPKB(A, B, DT, M, ESKA(A, B, DT, M))
    → B can decrypt the message and obtain: A, B, DT, M, ESKA(A, B, DT, M).
    → B decrypts the last part (the signature) and compares it to the first part (the content).
    → B keeps the signature for use in case of disputes (see above).
    1. B can prove that M stems from A, since the signature is encrypted with A’s private key.
    2. B can prove that B is the destination, since B is part of the signed message.
    3. B can prove that the message has not been manipulated, since nobody except A knows A’s private key.
  • Problems: A may claim that its secret key was stolen, or the public key that B uses may not really be A’s key!
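
Below is a toy sign/verify sketch of the signature part of this scheme (the outer encryption with B’s public key is omitted). The tiny RSA parameters and the message layout are assumptions made for illustration; real signatures use large keys and a padding scheme such as RSA-PSS:

    import hashlib

    # Toy RSA key pair for A (tiny primes, illustration only).
    p, q = 1009, 1013
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                              # A's public exponent
    d = pow(e, -1, phi)                 # A's private exponent

    def digest(content: bytes) -> int:
        # Hash the signed content and reduce it into the toy modulus.
        return int.from_bytes(hashlib.sha256(content).digest(), "big") % n

    def sign(content: bytes) -> int:
        return pow(digest(content), d, n)                 # "encrypt" with A's private key

    def verify(content: bytes, signature: int) -> bool:
        return pow(signature, e, n) == digest(content)    # "decrypt" with A's public key

    if __name__ == "__main__":
        msg = b"A|B|2024-01-01T12:00|transfer 100"        # (A, B, DT, M) concatenated
        sig = sign(msg)
        print(verify(msg, sig))                                     # True
        print(verify(b"A|B|2024-01-01T12:00|transfer 900", sig))    # False: tampering detected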

  35. Authorization
  • Access control matrix:
    → rows: subjects
    → columns: objects
  • Implementation (see the sketch below):
    a) using an ACL (Access Control List) stored per object, or
    b) using capabilities held per subject.
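
A compact sketch of the two implementations: the same (invented) access-control matrix stored column-wise as per-object ACLs and row-wise as per-subject capability lists:

    # One access-control matrix, two implementations: per-object ACLs (column-wise)
    # and per-subject capability lists (row-wise). Subjects, objects, and rights
    # are invented for illustration.
    matrix = {
        ("alice", "file1"): {"read", "write"},
        ("bob",   "file1"): {"read"},
        ("bob",   "printer"): {"append"},
    }

    # a) ACL: each object stores who may do what.
    acl = {}
    for (subject, obj), rights in matrix.items():
        acl.setdefault(obj, {})[subject] = rights

    def acl_check(subject, obj, right):
        return right in acl.get(obj, {}).get(subject, set())

    # b) Capabilities: each subject carries a list of (object, rights) it may use.
    capabilities = {}
    for (subject, obj), rights in matrix.items():
        capabilities.setdefault(subject, {})[obj] = rights

    def cap_check(subject, obj, right):
        return right in capabilities.get(subject, {}).get(obj, set())

    if __name__ == "__main__":
        print(acl_check("alice", "file1", "write"))   # True
        print(cap_check("bob", "file1", "write"))     # False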

  36. Authorization • Protection Domains: • Hierarchical organization of protection domains as groups of users. • Role-based authorization.

  37. Authorization
  • Firewalls: should a packet be allowed to enter the LAN? Should a packet be allowed to leave the LAN? A firewall inspects not only headers but also content (e.g. size, spam, etc.).
  • A common implementation of a firewall.

  38. Authorization
  • Issues with mobile code:
    • protecting an agent (from malicious machines)
    • protecting a machine (from malicious agents)
  • Protecting an agent:
    → Full protection is impossible.
    → Only detection of modifications of the agent’s state is possible.
  • Examples (sig: signature using the owner’s secret key; sig-1: the same using the public key):
    1) Detection of changes in read-only information (e.g. a credit card number):
      • owner → network: agent, CC#, sig(CC#)
    2) Detection of modification or deletion of information
       (A: agency, OA: offer of agency A, sigA: signature of agency A, C: checksum):
      • owner → A1: agent, offers = [], C0 = sig-1(Nonce)
      • A1 → A2: agent, offers = [OA1], C1 = sig-1(C0, A1, sigA1(OA1))
      • A2 → A3: agent, offers = [OA1, OA2], C2 = sig-1(C1, A2, sigA2(OA2))
      • …
      • AN → owner: the owner checks the chain C1, …, CN against the collected offers and signatures to detect modified or deleted offers.

  39. Authorization
  • Protecting a machine:
  [Figure: the organization of a Java sandbox.]
  • Java sandbox model:
    1. Only trusted class loaders are used (no program-defined ones).
    2. The byte code verifier checks e.g. for illegal instructions before processing (only for remote classes).
    3. The security manager checks the accessibility of resources at runtime.

  40. Authorization
  • Protecting a machine:
    • A sandbox: mechanisms within each node.
    • A playground: dedicated nodes for mobile code.

  41. Authorization
  • Protecting a machine: implementation alternatives (in Java):
    • Resource references: handles to resource-management objects are handed out at load time.
    • Stack interposition: each call to a method of a local resource is preceded by enable_privilege and followed by disable_privilege. → The interpreter calls enable_privilege and pushes disable_privilege onto the stack.
    • Name space management: programs that intend to use resources include the class names of those resources. These class names are resolved at runtime by class loaders, which map them to the appropriate classes (e.g. a downloaded program will be mapped to a class with security checks).

  42. Key Distribution (1) • Secret-key distribution • Public-key distribution

  43. Key Distribution (2)
  • Distributing keys over insecure channels:
  • Diffie-Hellman (see the sketch below):
    • n, g: public large numbers
    • x, y: large secret random numbers known only to A and B, respectively
    • A → B: n, g, g^x mod n    -- B computes the key: g^(xy) mod n
    • B → A: g^y mod n          -- A can also compute the same key
  • Mental poker (M is, for example, a key):
  [Figure: M is passed back and forth while A and B each add and later remove their own encryption, which works when the encryptions commute.]
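
A Diffie-Hellman sketch in Python with the classic toy parameters n = 23, g = 5 (an assumption for illustration; real deployments use large standardized groups). Both sides end up with the same key g^(xy) mod n although only g^x and g^y ever cross the channel:

    import secrets

    # Diffie-Hellman key exchange with toy public parameters.
    n, g = 23, 5                        # public values, may be sent in the clear

    x = secrets.randbelow(n - 2) + 1    # A's secret exponent
    y = secrets.randbelow(n - 2) + 1    # B's secret exponent

    msg_a_to_b = pow(g, x, n)           # A -> B: g^x mod n  (together with n and g)
    msg_b_to_a = pow(g, y, n)           # B -> A: g^y mod n

    key_at_a = pow(msg_b_to_a, x, n)    # A computes (g^y)^x mod n
    key_at_b = pow(msg_a_to_b, y, n)    # B computes (g^x)^y mod n
    print(key_at_a == key_at_b)         # True: both sides share g^(xy) mod n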

  44. Key Distribution (3)
  • Authenticated distribution of public keys:
    • CA: trusted certification authority
    • A → B: A, PKA, sigCA(A, PKA)   -- A sends a so-called certificate to B. B verifies the CA’s signature and is then sure that PKA is A’s public key.
  • Lifetime of certificates:
    • Certificates may have to be invalidated because, for example, the corresponding private key was stolen.
    • Put revoked certificates in a Certificate Revocation List (CRL) managed by the CA.
    • Restrict the lifetime of certificates (e.g. leases).
    • Always validate certificates with the CA (high availability of the CA required).

  45. Authorization Management • Generation of a restricted capability from an owner capability in Amoeba

  46. Delegation
  • Delegation:
    • A has access rights R for an object O.
    • B does not have such rights.
    • A wants to give B these rights (or a subset thereof).
  • Proxy:
    • A proxy is here a data structure used for delegation.
    • E.g. a proxy created by partner A: (Rights, KPP, sigA(Rights, KPP), KSP)
    • KSP: secret key of the proxy, KPP: public key of the proxy
  • Use of proxies for delegation:
    • A → B: sigA(Rights, KPP), EK(KSP)   -- A sends B a certificate that includes the needed rights, plus the encrypted secret proxy key
    • B → Server: sigA(Rights, KPP)       -- B contacts the object server and provides it with A’s certificate (the server is not yet sure whether B is the rightful owner)
    • Server → B: EKPP(Nonce)             -- based on a nonce, the server generates a “question” that B can only answer if it is not an intruder
    • B → Server: Nonce                   -- B can decrypt the last message and send back the right “answer”

  47. Example: Kerberos
  • AS: Authentication Server, TGS: Ticket Granting Service.
  • Step 2: A’s identity is sent in plaintext.
  • Step 3: the last part is the ticket for the TGS; KA,AS is stored at the AS and is generated from A’s password (step 4).
  • Step 6: t is a timestamp used to prevent replays.
  • Step 7: the session key for talking to Bob is generated.
  [Figures: authentication in Kerberos; setting up a secure channel in Kerberos.]

  48. SESAME Components • Overview of components in SESAME.
