

  1. Week 7 - Monday CS363

  2. Last time • What did we talk about last time? • Malicious code case studies • Exam 1 post mortem

  3. Questions?

  4. Project 2

  5. Security Presentation Omar Mustardo

  6. Code Red • Code Red appeared in 2001 • It infected a quarter of a million systems in 9 hours • It is estimated to have infected 1/8 of the systems that were vulnerable • It exploited a buffer overflow vulnerability in a DLL in the Microsoft Internet Information Server (IIS) software • It only worked on systems running an MS web server, but many machines ran one by default
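
The flaw Code Red exploited was of the classic unchecked-copy variety: attacker-supplied request data copied into a fixed-size stack buffer. A minimal sketch of that class of bug (this is illustrative C, not the actual IIS code; the function and buffer names are invented):

```c
#include <string.h>

/* Illustrative only -- NOT the actual IIS code. The flaw class:
 * attacker-controlled input copied into a fixed-size stack buffer
 * with no length check. */
void handle_request(const char *query_string) {
    char buffer[240];              /* fixed-size stack buffer */
    strcpy(buffer, query_string);  /* no bounds check: input longer than
                                      the buffer overwrites the stack,
                                      including the return address */
    /* ... process buffer ... */
}
```

A long enough request overwrites the saved return address, redirecting execution into attacker-supplied bytes; that is how Code Red gained control of the server process.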

  7. Versions • The original version of Code Red defaced the website being served • Then, it tried to spread to other machines on days 1-19 of a month • Then, it launched a distributed denial-of-service attack on whitehouse.gov on days 20-27 • Later versions attacked random IP addresses • It also installed a trapdoor so that infected systems could be controlled from the outside

  8. Targeted Malicious Code

  9. Trapdoors • A trapdoor is a way to access functionality that is not documented • Trapdoors are often inserted during development for testing purposes • Sometimes a trapdoor arises from error cases that are not correctly checked or handled
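
A classic development trapdoor is a hardcoded bypass in an authentication check, added for testing and never removed. A hypothetical sketch (the magic string and function names are invented for illustration):

```c
#include <stdbool.h>
#include <string.h>

/* Stub standing in for a real credential check. */
static bool check_password_database(const char *user, const char *password) {
    (void)user; (void)password;
    return false;  /* deny by default in this sketch */
}

/* Hypothetical authentication routine with a leftover debug trapdoor. */
bool authenticate(const char *user, const char *password) {
    /* Trapdoor left in for testing and never removed: anyone who
     * knows the magic string bypasses the real check entirely. */
    if (strcmp(password, "debug_override_42") == 0)
        return true;
    return check_password_database(user, password);
}
```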

  10. Causes of trapdoors • Intentionally created trapdoors can exist in production code when developers: • Forget to remove them • Intentionally leave them in for testing • Intentionally leave them in for maintenance • Intentionally leave them in as a covert means of access to the production system

  11. Salami attacks • I had never heard this term before reading this book • This is the Office Space attack • Steal tiny amounts of money when a cent is rounded in financial transactions • Or, steal a few cents from millions of people • Steal more if the account hasn’t been used much • The rewards can be huge, and these kinds of attacks are hard to catch
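
A minimal sketch of the rounding flavor of the attack, with all names and figures invented: interest is computed in fractional cents, the account is credited the rounded-down whole cents, and the shaved remainder accumulates in the attacker's account.

```c
#include <stdio.h>

/* Illustrative salami attack: skim the sub-cent remainder when
 * interest is rounded down. All names and figures are invented. */
long long attacker_microcents = 0;  /* accumulates skimmed fractions */

/* Compute monthly interest in micro-cents, credit the account the
 * rounded-down whole cents, and divert the remainder. */
long long credit_interest(long long balance_cents, long long rate_ppm) {
    long long interest_microcents = balance_cents * rate_ppm;
    long long whole_cents = interest_microcents / 1000000;
    attacker_microcents += interest_microcents % 1000000;  /* the shave */
    return balance_cents + whole_cents;
}

int main(void) {
    long long balance = 123456;               /* $1,234.56 */
    balance = credit_interest(balance, 4200); /* 0.42% monthly rate */
    printf("balance: %lld cents, skimmed: %lld micro-cents\n",
           balance, attacker_microcents);
    return 0;
}
```

Per account the theft is a fraction of a cent, but across millions of transactions it adds up, and the books still appear to balance to a casual audit.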

  12. The Sony XCP rootkit • A rootkit is malicious code that gives an attacker access to a system as root (a privileged user) and hides itself from detection • Sony put a program called XCP (extended copy protection) on music CDs, which allowed users to listen to the CD on Windows but not rip its contents • It installed itself without the user’s knowledge • It had to have control over Windows and be hard to remove • It would hide the presence of any file or process whose name started with $sys$, but malicious users could take advantage of that

  13. Privilege escalation • Most programs are supposed to execute with some kind of baseline privileges • Not the high-level privileges needed to change system data • Windows Vista and 7 ask you if you want to have privileges escalated • Sometimes you can be tricked • Symantec needed high-level privileges to run LiveUpdate • Unfortunately, it ran some local programs with high privileges • If a malicious user had replaced those local programs with his own, ouch
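
A sketch of the underlying mistake (names and paths invented, not Symantec's actual code): a privileged process launches a helper by relative name, so whatever binary an attacker placed at that path runs with the elevated privileges.

```c
#include <stdlib.h>

/* Hypothetical privileged updater -- not actual LiveUpdate code. */
void run_helper(void) {
    /* BAD: relative name resolved via the search path. If an attacker
     * can place their own "helper.exe" where it is found first, it
     * runs with this process's elevated privileges. */
    system("helper.exe");

    /* Safer: invoke a fixed absolute path in a directory that only
     * administrators can write to (and validate the binary). */
    system("C:\\Program Files\\Updater\\helper.exe");
}
```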

  14. Keystroke logging • It’s possible to install software that logs all the keystrokes a user enters • If designed correctly, these values come from the keyboard drivers, so all data (including passwords) is visible • There are also hardware keystroke loggers • Most are around $40 • Is your keyboard free from a logger?

  15. Controls against Program Threats

  16. Good software development • We only have time for a few slides about good software development • A shame, since good development stops both unintentional and malicious flaws • Development lifecycle: • Specify the system • Design the system • Implement the system • Test the system • Review the system • Document the system • Manage the system • Maintain the system

  17. Modularity • A goal of software engineering should be to develop software from robust, independent components • Modularization • Components should meet the following criteria: • Single-purpose: Perform one function • Small: Short enough to be understandable by a single human • Simple: Simple enough to be understandable by a single human • Independent: Isolated from other modules

  18. Encapsulation • Components should hide their implementation details • They should expose only the smallest number of public methods needed to interact with other components • This information hiding model is thought of as a black box • For both components and programs, one reason for encapsulation is mutual suspicion • We always assume that other code is malicious or badly written
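
In C, this kind of information hiding is commonly done with an opaque type: the header exposes only a forward-declared struct and a few functions, so callers cannot touch the internals. A minimal sketch (the account type and its fields are invented for illustration):

```c
/* account.h -- public interface: the struct's fields are hidden. */
typedef struct account account;          /* opaque: definition not visible */
account *account_open(long initial_cents);
long     account_balance(const account *a);
void     account_close(account *a);

/* account.c -- private implementation, invisible to callers. */
#include <stdlib.h>
struct account {
    long balance_cents;                  /* unreachable except through the API */
};
account *account_open(long initial_cents) {
    account *a = malloc(sizeof *a);
    if (a) a->balance_cents = initial_cents;
    return a;
}
long account_balance(const account *a) { return a->balance_cents; }
void account_close(account *a) { free(a); }
```

Because callers see only the black-box interface, a mutually suspicious module can change or defend its internals without trusting any of its clients.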

  19. Testing • Unit testing tests each component separately in a controlled environment • Integration testing verifies that the individual components work when you put them together • Function and performance testing sees if a system performs according to specification • Acceptance testing gives the customer a chance to test the product you have created • The final installation testing checks the product in its actual use environment

  20. Testing methodologies • Regression testing is done when you fix a bug or add a feature • We have to make sure that everything that used to work still works after the change • Black-box testing uses input values to test for expected output values, ignoring the internals of the system • White-box or clear-box testing uses knowledge of the system to design tests that are likely to find bugs • Testing can only prove that there are bugs; it is impossible to prove that there aren't any
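
A black-box unit test checks expected outputs for chosen inputs without looking at internals. A minimal sketch using C's assert, with an invented function under test:

```c
#include <assert.h>

/* Invented function under test. */
static int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}

/* Black-box tests: chosen inputs vs. expected outputs, with no
 * knowledge of clamp's internals required. Kept around as a
 * regression suite, they re-verify old behavior after every change. */
int main(void) {
    assert(clamp(5, 0, 10) == 5);    /* in range: unchanged */
    assert(clamp(-3, 0, 10) == 0);   /* below range: clamped to lo */
    assert(clamp(42, 0, 10) == 10);  /* above range: clamped to hi */
    return 0;
}
```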

  21. Standards • If you program for a living, you will probably be held to standards • Standards cannot guarantee bug-free code, but they can help

  22. OS Security

  23. OS security • The OS has to enforce much of the computer security we want • Multiple processes are running at the same time • We want protection for: • Memory • Hard disks • I/O devices like printers • Sharable programs • Networks • Any other data that can be shared

  24. Separation • OS security is fundamentally based on separation • Physical separation: Different processes use different physical objects • Temporal separation: Processes with different security requirements are executed at different times • Logical separation: Programs cannot access data or resources outside of permitted areas • Cryptographic separation: Processes conceal their data so that it is unintelligible

  25. Memory Protection

  26. Memory protection • Protecting memory is one of the most fundamental protections an OS can give • All data and operations for a program are in memory • Most I/O accesses are done by writing to various memory locations • Techniques for memory protection • Fence • Base/bounds registers • Tagged architectures • Segmentation • Paging

  27. Fence • A fence can be a predefined or variable memory location • Everything below the fence is for the OS • If a program ever tries to access memory below the fence, the access fails or the program is shut down • As with many memory schemes, code needs to be relocatable, so that the program is written as if it starts at memory location 0 but can actually be offset to an appropriate location • [Diagram: OS memory below the fence, user program memory above]
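
A simplified software model of the check a fence register implies (on real hardware this comparison happens on every memory access; the fence value here is invented):

```c
#include <stdbool.h>
#include <stdint.h>

static uintptr_t fence = 0x4000;  /* everything below is OS-only */

/* Model of the hardware fence check. */
bool user_access_allowed(uintptr_t addr) {
    return addr >= fence;  /* below the fence: fault, or the
                              offending program is shut down */
}
```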

  28. Base/bounds registers • In modern systems, many user programs run at the same time • We can extend the idea of a fence to two registers for each program • The base register gives the lowest legal address for a particular user program • The bounds register gives the highest legal address for a particular user program • [Diagram: OS memory at the bottom, followed by each program's memory delimited by its own base and bounds registers]
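
A simplified model of the per-program check and relocation (done in hardware in practice; the struct and function names are invented):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified base/bounds model: a program's legal addresses lie in
 * [base, bounds]. User code addresses memory as if it started at 0;
 * the hardware adds the base and checks the result. */
typedef struct {
    uintptr_t base;    /* lowest legal physical address */
    uintptr_t bounds;  /* highest legal physical address */
} region;

bool translate(region r, uintptr_t virtual_addr, uintptr_t *physical_addr) {
    uintptr_t addr = r.base + virtual_addr;
    if (addr < r.base || addr > r.bounds)
        return false;          /* out of range (or wrapped): fault */
    *physical_addr = addr;
    return true;
}
```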

  29. Tagged architectures • The idea of base and bounds registers can be extended so that there are separate ranges for the program code and for its data • It is possible to allow data for some users to be globally readable or writable • But this makes data protection all or nothing • Tagged architectures allow every byte (or perhaps defined groups of bytes) to be marked read-only, read/write, or execute-only • Only a few architectures have used this model because of the extra overhead involved
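
A sketch of the idea, modeling every memory word as carrying its own access tag (the encoding is invented; real tagged hardware stores the tag bits alongside the word):

```c
#include <stdbool.h>
#include <stdint.h>

/* Invented encoding: every word carries a tag describing allowed access. */
typedef enum { TAG_READ_ONLY, TAG_READ_WRITE, TAG_EXECUTE_ONLY } tag_t;

typedef struct {
    tag_t    tag;    /* checked by hardware on every access */
    uint32_t value;
} tagged_word;

bool write_allowed(const tagged_word *w) {
    return w->tag == TAG_READ_WRITE;  /* writes to RO or XO words fault */
}
```

The cost is visible in the struct: every word pays for its tag bits and every access pays for the check, which is the overhead that kept this model rare.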

  30. Segmentation • Segmentation has been implemented on many processors, including most x86 compatibles • A program sets up several segments, such as code, data, and constant data • Writing to code is usually illegal • Other rules can be made for other segments • A memory lookup uses both a segment identifier and an offset within that segment • For performance reasons, the OS can put these segments wherever it wants and do the lookups • Segments can be put on secondary storage if they are not currently in use • The programmer sees a solid block of memory • [Diagram: programmer's view of named segments vs. the OS's view of those segments scattered in memory; other users have their own segments]
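
A simplified model of the segment lookup, combining the bounds check with a per-segment protection rule (structure and names invented):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified segment table entry: where the segment sits in physical
 * memory, how long it is, and whether writes are allowed. */
typedef struct {
    uintptr_t base;      /* physical address of the segment */
    uintptr_t limit;     /* segment length in bytes */
    bool      writable;  /* e.g. false for the code segment */
} segment;

/* Translate (segment id, offset) to a physical address, enforcing
 * both the bounds check and the per-segment protection rule. */
bool seg_translate(const segment *table, int seg_id, uintptr_t offset,
                   bool is_write, uintptr_t *physical) {
    const segment *s = &table[seg_id];
    if (offset >= s->limit) return false;       /* past end of segment */
    if (is_write && !s->writable) return false; /* e.g. writing to code */
    *physical = s->base + offset;
    return true;
}
```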

  31. Paging • Paging is a very common way of managing memory • A program is divided up into equal-sized pieces called pages • An address is a page number and an offset within that page • Paging doesn’t have the fragmentation problems that segmentation does • It also doesn’t specify different protection levels • Paging and segmentation can be combined to give protection levels • [Diagram: programmer's view of contiguous pages vs. the OS's view of the same pages scattered in memory; other users have their own pages]
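
A simplified model of the page-table lookup, assuming 4 KiB pages (page size and table layout invented for the sketch; real page tables also carry validity and protection bits):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u  /* assuming 4 KiB pages for this sketch */

/* Simplified page table: page_table[n] holds the physical frame
 * number for virtual page n. */
bool page_translate(const uintptr_t *page_table, size_t num_pages,
                    uintptr_t virtual_addr, uintptr_t *physical) {
    uintptr_t page   = virtual_addr / PAGE_SIZE;  /* page number */
    uintptr_t offset = virtual_addr % PAGE_SIZE;  /* offset within page */
    if (page >= num_pages) return false;          /* unmapped: page fault */
    *physical = page_table[page] * PAGE_SIZE + offset;
    return true;
}
```

Because every page is the same size, any free frame can hold any page, which is why paging avoids the external fragmentation that variable-sized segments suffer from.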

  32. Upcoming

  33. Next time… • More OS security • Access control • Authentication • Cody Kump presents

  34. Reminders • Read Sections 4.1 through 4.4 • Start working on Project 2
