
CMPN 385


Presentation Transcript


  1. CMPN 385 Lecture 1.2 Caching

  2. What is Caching • Caching is a technology based on the memory subsystem of your computer. The main purpose of a cache is to accelerate your computer while keeping the price of the computer low. Caching allows you to do your computer tasks more rapidly.

  3. Reasons for caching • To reduce latency - Because the request is satisfied from the cache (which is closer to the client) instead of the origin server, it takes less time for the client to get the object and display it. This makes Web sites seem more responsive. • To reduce traffic - Because each object is fetched from the server only once, caching reduces the amount of bandwidth a client uses. This saves money if the client pays by traffic, and keeps bandwidth requirements lower and more manageable.

  4. Caching Facts • Cache technology is the use of a faster but smaller memory type to accelerate a slower but larger memory type. • When using a cache, the cache is first checked to see if an item is in there. If it is there, it's called a cache hit. If not, it is called a cache miss and the computer must wait for a round trip from the larger, slower memory area. • A cache has some maximum size that is much smaller than the larger storage area. • It is possible to have multiple layers of cache.
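The hit/miss logic described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular hardware or library implementation: the backing `SLOW_MEMORY` dictionary, the key names, and the small `max_size` are all hypothetical stand-ins for the larger, slower storage area and the cache's limited capacity.

```python
# Hypothetical stand-in for the larger, slower memory area.
SLOW_MEMORY = {"a": 1, "b": 2, "c": 3}

class SimpleCache:
    """Minimal sketch of cache hit/miss behavior with FIFO eviction."""

    def __init__(self, max_size=2):
        self.max_size = max_size   # a cache is much smaller than the backing store
        self.store = {}

    def get(self, key):
        if key in self.store:              # cache hit: served from fast memory
            return self.store[key], "hit"
        value = SLOW_MEMORY[key]           # cache miss: round trip to slow memory
        if len(self.store) >= self.max_size:
            # Evict the oldest inserted entry (dicts preserve insertion order).
            self.store.pop(next(iter(self.store)))
        self.store[key] = value
        return value, "miss"

cache = SimpleCache()
print(cache.get("a"))   # first access: miss, fetched from slow memory
print(cache.get("a"))   # second access: hit, served from the cache
```

Real caches use smarter eviction policies (such as least-recently-used), but the check-first, fetch-on-miss pattern is the same.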

  5. Cache design • A microprocessor accesses main memory in approximately 60 nanoseconds, which is much slower than the microprocessor itself: microprocessors can have cycle times as short as 2 nanoseconds, so to a microprocessor 60 nanoseconds seems like an eternity. • The solution is to build a special memory bank on the motherboard, small but very fast (around 30 nanoseconds).

  6. • That bank is two times faster than main memory access; it is called a level 2 cache, or L2 cache. • An even smaller but faster memory system can be built directly into the microprocessor's chip: the L1 cache, which on a 233-megahertz (MHz) Pentium is 3.5 times faster than the L2 cache, which in turn is two times faster than main memory access.

  7. Some microprocessors have two levels of cache built right into the chip. • Motherboard cache -- the cache that exists between the microprocessor and main system memory -- becomes level 3, or L3 cache.

  8. Caching subsystems • L1 cache - Memory accesses at full microprocessor speed (10 nanoseconds, 4 kilobytes to 16 kilobytes in size) • L2 cache - Memory access of type SRAM (around 20 to 30 nanoseconds, 128 kilobytes to 512 kilobytes in size) • Main memory - Memory access of type RAM (around 60 nanoseconds, 32 megabytes to 128 megabytes in size) • Hard disk - Mechanical, slow (around 12 milliseconds, 1 gigabyte to 10 gigabytes in size) • Internet - Incredibly slow (between 1 second and 3 days, unlimited size)
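The hierarchy above can be made concrete by comparing each level's access time to L1. The numbers below are taken from the slide (using the midpoint where a range is given); they are era-specific illustrations, not current hardware figures.

```python
# Access latencies from the slide above, in nanoseconds (illustrative only).
latency_ns = {
    "L1 cache": 10,
    "L2 cache": 25,             # midpoint of the 20-30 ns range
    "Main memory": 60,
    "Hard disk": 12_000_000,    # 12 milliseconds
    "Internet": 1_000_000_000,  # at least 1 second
}

# Express each level as a multiple of L1 access time.
for level, ns in latency_ns.items():
    print(f"{level}: {ns / latency_ns['L1 cache']:,.0f}x L1 access time")
```

The jump from main memory (6x L1) to disk (over a million times L1) is why each level of the hierarchy caches the one below it.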

  9. Network caching • The technique of keeping frequently accessed information in a location close to the requester. A Web cache stores Web pages and content on a storage device that is physically or logically closer to the user, and therefore faster to reach than the origin server. By reducing the amount of traffic on WAN links and on overburdened Web servers, caching provides significant benefits to ISPs, enterprise networks, and end users.

  10. Benefits • Two key benefits: • Cost savings due to WAN bandwidth reduction - ISPs can place cache engines at strategic WAN access points to serve Web requests from a local disk rather than from distant or overrun Web servers, improving response times and lowering the bandwidth demand on their backbones. • Improved productivity for end users - The response of a local Web cache is often three times faster than the download time for the same content over the WAN. End users see dramatic improvements in response times, and the implementation is completely transparent to them. • Other benefits include: • Secure access control and monitoring - The cache engine provides network administrators with a simple, secure method to enforce a site-wide access policy through URL filtering. • Operational logging - Network administrators can learn which URLs receive hits, how many requests per second the cache is serving, what percentage of URLs are served from the cache, and other related operational statistics.

  11. How web caching works • A user accesses a Web page. • The network analyzes the request, and based on certain parameters, transparently redirects it to a local network cache. • If the cache does not have the Web page, it will make its own Web request to the original Web server. • The original Web server delivers the content to the cache, which delivers the content to the client while saving the content in its local storage. That content is now cached. • Later, another user requests the same Web page, and the network analyzes this request, and based on certain parameters, transparently redirects it to the local network cache.
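The request flow above can be sketched as a check-cache-first, fetch-on-miss loop. This is a toy model of the flow, not a real proxy: `fetch_from_origin` and the example URL are hypothetical placeholders for an actual HTTP request.

```python
# Local network cache, keyed by URL.
web_cache = {}

def fetch_from_origin(url):
    """Hypothetical stand-in for a real HTTP request to the origin server."""
    return f"<html>content of {url}</html>"

def handle_request(url):
    """Serve a redirected request: from the cache if possible, else the origin."""
    if url in web_cache:                 # cache already holds the page
        return web_cache[url], "cache"
    content = fetch_from_origin(url)     # miss: the cache makes its own request
    web_cache[url] = content             # save it; the content is now cached
    return content, "origin"

# First user: fetched from the origin server and stored locally.
print(handle_request("http://example.com/")[1])
# A later user requesting the same page is served from the local cache.
print(handle_request("http://example.com/")[1])
```

The step this sketch omits is the network-level redirection: in a transparent deployment, the network itself steers the request to the cache, so the client needs no configuration.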

  12. Benefits of localizing traffic patterns • Implementing caching technology localizes traffic patterns and addresses network traffic overload problems in the following ways: • Content is delivered to users at accelerated rates. • WAN bandwidth usage is optimized. • Administrators can more easily monitor traffic.

  13. Web Caching Solutions • Browser Caches • If you examine the preferences dialog of any modern browser (like Internet Explorer or Netscape), you'll probably notice a 'cache' setting. This lets you set aside a section of your computer's hard disk to store objects that you've seen, just for you. The browser cache works according to fairly simple rules. It will check to make sure that the objects are fresh, usually once a session (that is, once in the current invocation of the browser). • This cache is useful when a client hits the 'back' button to return to a page they've already seen. Also, if you use the same navigation images throughout your site, they'll be served from the browser cache almost instantaneously.

  14. Proxy Caches • Web proxy caches work on the same principle, but on a much larger scale. Proxies serve hundreds or thousands of users in the same way; large corporations and ISPs often set them up on their firewalls. • Because proxy caches usually have a large number of users behind them, they are very good at reducing latency and traffic: popular objects are requested only once, yet served to a large number of clients. • Most proxy caches are deployed by large companies or ISPs that want to reduce the amount of Internet bandwidth they use. Because the cache is shared by a large number of users, there are a large number of shared hits (objects requested by several clients). Hit rates of 50% or greater are not uncommon. Proxy caches are a type of shared cache.

  15. Standalone caches • These caching-focused software applications and appliances are designed to improve performance by enhancing the caching software and eliminating other slow aspects of proxy server implementations. While this is a step in the right direction, these standalone caches are not network integrated, resulting in higher costs of ownership and making them less desirable for wide-scale deployment.

  16. Caching Example • [Figure: origin servers reached over the public Internet via a 1.5 Mbps access link; institutional network with a 10 Mbps LAN and an institutional cache] • Assumptions: average object size = 100,000 bits; avg. request rate from institution's browsers to origin servers = 15/sec; delay from institutional router to any origin server and back to router = 2 sec • Consequences: utilization on LAN = 15%; utilization on access link = 100%; total delay = Internet delay + access delay + LAN delay = 2 sec + minutes + milliseconds

  17. Possible solution: increase bandwidth of access link to, say, 10 Mbps • [Figure: same topology with the access link upgraded to 10 Mbps] • Consequences: utilization on LAN = 15%; utilization on access link = 15%; total delay = Internet delay + access delay + LAN delay = 2 sec + msecs + msecs • Often a costly upgrade

  18. Possible solution: install cache • [Figure: original topology with the 1.5 Mbps access link and the institutional cache in use] • Suppose the hit rate is 0.4 • Consequences: 40% of requests satisfied locally; 60% of requests satisfied by the origin server; utilization of access link reduced to 60%; 10 msec overhead at proxy • Total avg delay = Internet delay + access delay + LAN delay = 0.6*(2.01) secs + 0.4*milliseconds < 1.4 secs
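The arithmetic behind the three scenarios above can be reproduced directly from the slide's assumptions (100,000-bit objects, 15 requests/sec, 2-second Internet delay, 10 msec proxy overhead):

```python
# Assumptions from the caching example slides.
object_bits = 100_000
request_rate = 15          # requests per second from the institution's browsers
internet_delay = 2.0       # seconds, institutional router <-> origin server
proxy_overhead = 0.01      # 10 msec overhead at the proxy on a miss

traffic_bps = object_bits * request_rate          # 1.5 Mbps of request traffic

# Scenario 1: 1.5 Mbps access link is fully saturated.
util_slow_link = traffic_bps / 1.5e6              # 1.0 -> delays grow to minutes

# Scenario 2: upgrade the access link to 10 Mbps (costly).
util_fast_link = traffic_bps / 10e6               # 0.15

# Scenario 3: install a cache with hit rate 0.4; only misses cross the link.
hit_rate = 0.4
util_with_cache = (1 - hit_rate) * traffic_bps / 1.5e6   # 0.6

# Misses cost ~2.01 s each; hits add only milliseconds, ignored here.
avg_delay = (1 - hit_rate) * (internet_delay + proxy_overhead)
print(util_slow_link, util_fast_link, util_with_cache, avg_delay)
```

The cache achieves a lower average delay (about 1.2 sec, under the 1.4 sec bound on the slide) while keeping the cheap 1.5 Mbps link, which is the point of the comparison.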
