Content Placement as a Key to Leveraging Geo-Distributed Infrastructures Abhigyan Sharma University of Massachusetts Amherst
Geo-distributed, heterogeneous computing infrastructure
[Diagram: Alice's traffic can be served from datacenters, set-top boxes, cellular base stations, or routers; distance (speed of light) bounds latency]
Network mobility of endpoint principals
Endpoint principals: interfaces, devices, things, content, services, ...
• Alice's phone: Wi-Fi IP1 / Cellular IP2 → Wi-Fi IP3 / Cellular IP4 as it moves
• www.dropbox.com/file1: served by Server A (IP5) or Server B (IP6)
Research vision for future Internet services
• Services to support network mobility
• Services to leverage geo-distributed, heterogeneous infrastructure
• Meta-service for automatic deployment & reconfiguration of services (Service A, Service B, ...)
This talk is about …
• Auspice global name service (SIGCOMM 2014)
• Network CDN (SIGMETRICS 2013, INFOCOM 2011)
Degrees of freedom enabled by geo-distribution: network routing, content placement, request redirection
Network routing / traffic engineering
Inputs: traffic matrix (traffic from node i to node j), link capacities
Output: network routing
Network cost function
• Convex functions of link utilization, e.g., maximum link utilization (MLU)
• Example: 2 Mbps of demand from A to B over two parallel links of capacity 3 Mbps and 1 Mbps
• Shortest path: all 2 Mbps on the 3 Mbps link → MLU = 2/3 = 0.67
• Flow split: 1.5 Mbps on the 3 Mbps link, 0.5 Mbps on the 1 Mbps link → MLU = 0.5/1 = 0.5
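The MLU numbers on this slide can be reproduced with a few lines (a sketch; the two parallel A→B links with capacities 3 Mbps and 1 Mbps are assumed from the figure):

```python
def mlu(flows_mbps, capacities_mbps):
    """Maximum link utilization: the most-loaded link's
    traffic-to-capacity ratio, the convex cost TE minimizes."""
    return max(f / c for f, c in zip(flows_mbps, capacities_mbps))

# Two parallel A->B links (3 Mbps and 1 Mbps), 2 Mbps total demand.
shortest_path = mlu([2.0, 0.0], [3.0, 1.0])  # all traffic on one link
flow_split    = mlu([1.5, 0.5], [3.0, 1.0])  # split in proportion to capacity
print(shortest_path, flow_split)             # roughly 0.67 vs 0.5
```

Splitting flow in proportion to capacity equalizes utilization across both links, which is why the split achieves the lower MLU.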
Leveraging geo-distribution: content placement
• Placement vs. redirection; placement vs. routing
• Extreme case: place all content at all locations → every request is served locally, and there is no routing problem
Network CDN (SIGMETRICS 2013)
ISPs evolving to Network CDNs (NCDNs)
Content providers → content delivery network (CDN) → Internet service providers (ISPs)
• CDNs commoditized: licensed & managed CDNs; 30+ NCDNs
• 3x traffic growth ('13–'18), 79% video traffic ('18)*
*Cisco Visual Networking Index 2013–18
NCDN model
Origin servers reached via exit nodes; each NCDN POP has content servers attached to a backbone router
NCDN management problem
Inputs: link capacities, storage capacities, content size vector, content matrix (demand for content i at node j)
Outputs: placement, redirection, routing
NCDN placement–routing interaction
Example: four nodes A–D, links labeled with capacity, traffic labeled with flow value; demands of 1 Mbps and 0.5 Mbps
• One placement forces traffic across the 1.5 Mbps link: maximum link utilization (MLU) = 0.75/1.5 = 0.5
• A different placement of the same content shifts traffic onto the 8 Mbps link: MLU = 1/8 = 0.125
Changing placement alone cuts MLU by 4x
NCDN joint optimization
• Joint optimization is NP-complete & inapproximable
• Mixed integer program for joint optimization
• Inputs: content matrix, origin servers
• Outputs: placement, redirection, routing
• Optimize: cost function; constraints: link capacity, storage capacity, …
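The joint optimization can be sketched as a mixed integer program that minimizes MLU $\alpha$ (the notation here, $x_{ik}$ for placement, $r_{ijk}$ for redirection, $s_i$ for content sizes, $S_k$ for storage, $\lambda_{ij}$ for demand, $c_e$ for link capacity, is illustrative rather than the paper's exact formulation, and routing is folded into fixed paths for brevity):

```latex
\begin{aligned}
\min_{x,\,r}\quad & \alpha \\
\text{s.t.}\quad
& \textstyle\sum_{i} s_i\, x_{ik} \;\le\; S_k
  && \text{(storage capacity at each node } k\text{)} \\
& \textstyle\sum_{k} r_{ijk} \;=\; \lambda_{ij}, \qquad
  r_{ijk} \;\le\; \lambda_{ij}\, x_{ik}
  && \text{(serve demand only from placed replicas)} \\
& \textstyle\sum_{(i,j,k)\,:\; e \in \mathrm{path}(k \to j)} r_{ijk}
  \;\le\; \alpha\, c_e
  && \text{(every link's utilization} \le \alpha\text{)} \\
& x_{ik} \in \{0,1\}
\end{aligned}
```

The integrality of the placement variables $x_{ik}$ is what makes the problem NP-complete; with routing as an additional variable the three decisions interact, as the previous slide's example shows.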
Planned NCDN management
CM(t): content matrix in interval t; PRR(t): placement, redirection, routing in interval t
• Planned: CM(t-1) → NCDN joint opt. → PRR(t)
• Oracle: CM(t) → NCDN joint opt. → PRR(t)
Unplanned NCDN management
• Placement: LRU caching
• Redirection: closest server by hop count
• Routing: static shortest-path
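The placement component of the unplanned scheme is plain LRU caching at each content server; a minimal sketch:

```python
from collections import OrderedDict

class LRUCache:
    """Unplanned placement: each content server keeps the most
    recently requested objects, evicting the least recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key):
        if key not in self.cache:
            return None                      # miss: fetch from origin
        self.cache.move_to_end(key)          # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # evict LRU object
```

No demand forecast, no optimization: the cache adapts to the observed request stream, which is exactly why the evaluation's comparison against planned joint optimization is interesting.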
Network cost
Akamai traces: 7.2M users, 28.2M requests, 1455 TB of data
[Chart: network cost across placement schemes; annotated gaps of 3x and 18%]
Content placement matters tremendously in NCDNs
Traffic engineering vs. static shortest-path routing
LRU caching + static shortest-path routing vs. LRU caching + traffic engineering
Traditional traffic engineering gives a small cost reduction (10% or less) in NCDNs
Message to NCDNs: keep it simple
• Realistic joint optimization performs worse than simple unplanned management
• Little room for improvement over simple unplanned management
• Content placement matters more than routing in network CDNs
Auspice global name service (SIGCOMM 2014)
Poor support for mobility in the Internet
• Ungraceful disruptions
• Unidirectional communication initiation
• Redundant app-specific mobility support (VoIP/messaging, notification services, cloud storage)
How a global name service handles mobility
Global name service (GNS): name/identity → network location
Name-based communication: an app opens a connection by name, e.g., new Socket(<name>), and the GNS returns the current IP address of Alice's phone
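Name-based connection setup can be sketched as follows; `gns_lookup` is a hypothetical stand-in for a real GNS client (Auspice's actual client API differs), faked here with a static table:

```python
import socket

def gns_lookup(name, gns_servers):
    """Hypothetical GNS query: return the current IP address
    registered for `name`.  A real client would query one of
    `gns_servers`; here a static table stands in for the service."""
    table = {"alice-phone": "192.0.2.7"}     # placeholder record
    return table[name]

def name_connect(name, port):
    """Resolve a name to its *current* network location, then
    open an ordinary TCP socket to it.  When the endpoint moves,
    only the GNS record changes, not the application code."""
    ip = gns_lookup(name, gns_servers=["gns.example.net"])
    return socket.create_connection((ip, port))
```

The point of the indirection: mobility is handled once, in the name service, instead of redundantly inside every application.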
Scalable global name service (GNS)
The GNS interface names principals of many kinds:
• interface: f0:56:81:c1:c0:eb
• device: node1.cs.umass.edu
• service: dropbox.com
• content: netflix.com/<object>
• group of names: devices in [lat,long,radius]
Goal: a massively scalable, geo-distributed GNS to enable secure, name-based communication with flexible endpoint principals with arbitrary (fixed) names despite high mobility
Scale: 10B devices, 100 addresses/day ≈ 1M updates/sec
Key contributions • Clean-slate naming system design • Arbitrary names • Decentralized root of trust • Scalable name-to-address mapping service • Small lookup latency under high mobility • Deployable in DNS
Security
• DNS: single root of trust + DNSSEC; domain name → IP addresses
• GNS: decouple certification & resolution
• Name certification services bind an arbitrary name to a self-certifying GUID
• GNS providers (e.g., Auspice) resolve the GUID to IP addresses
• Certificate search services locate name certificates
Flexibility
• DNS: hierarchical resolution (root → TLD → authoritative NS) ties names to the domain hierarchy; domain name → IP addresses
• GNS: arbitrary names, e.g., "JohnSmith2178@Amherst", "Living room chandelier", "Taxis near Times Square"
• Name certification services bind an arbitrary name to a self-certifying GUID, which a GNS provider such as Auspice resolves to IP addresses
Key contributions • Clean-slate naming system design • Arbitrary names • Decentralized root of trust • Scalable name-to-address mapping service • Small lookup latency under high mobility • Deployable in DNS
Active replication cost
• Update cost for name i = #active_replicas_i × update_rate_i
[Chart: total capacity of all servers = lookup cost + update cost, shown with 1 vs. 5 active replicas per name-record]
Cost–benefit tradeoff in active replication
• Update cost for name i = #active_replicas_i × update_rate_i
[Chart: update cost vs. name lookup latency, under a resource limit, for replicate-at-all-locations, consistent hashing with (static) k-replication, and Auspice]
Demand-aware active replication
• #active_replicas of name i ∝ lookup_rate_i / update_rate_i
• Replica placement is geolocality-aware and load-aware
[Diagram: replicas of names i and j spread over servers S1–S11 near their lookup demand]
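The replica-count rule above can be sketched in a few lines (the proportionality constant `beta` and the min/max caps are my assumptions, not Auspice's exact policy):

```python
def num_replicas(lookup_rate, update_rate, beta=1.0, min_r=1, max_r=10):
    """Demand-aware replication: popular, rarely-updated names get
    more replicas; frequently-updated names get fewer.
    #replicas is proportional to lookup_rate / update_rate."""
    if update_rate == 0:
        return max_r                          # never updated: replicate widely
    r = beta * lookup_rate / update_rate
    return max(min_r, min(max_r, round(r)))   # clamp to [min_r, max_r]

# A popular service name vs. a highly mobile device name:
print(num_replicas(lookup_rate=100, update_rate=20))   # many replicas
print(num_replicas(lookup_rate=10, update_rate=100))   # few replicas
```

Geolocality- and load-awareness then decide *where* those replicas go, near the name's lookup demand, which the count rule alone does not capture.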
Placement engine
For each name i, the placement engine chooses active replicas via demand-aware replication, driven by the name's demand geo-distribution (flexible consistency); consistent hashing over servers S1–S10 provides a fixed replica set with strong consistency
Placement schemes comparison
Testbed: 16-server cluster emulating 80 name servers and 80 local name servers
Workload: 90% mobile names (geolocality 0.75), 10% service names
Auspice outperforms static placement and a DHT-based scheme through better redirection and better placement
Managed DNS comparison
UltraDNS (16 replicas) vs. Auspice with 5/10/15 replicas out of 80 locations
Auspice reduces cost/latency over today's managed DNS: one-third the update cost at similar latency, or 60% less latency at similar cost
Auspice GNS summary
• Enables secure, name-based communication
  • arbitrary name/location representation
  • flexible endpoint principals
• Key differences from DNS for today's Internet
  • decouple certification and resolution
  • active replication
  • demand-aware placement
Take-aways
• Content placement is key to leveraging geo-distribution
• Bad news: redirection and routing can't redress poor placement
• Good news: simple placement works well
Future directions
• Simplify management of geo-distributed services (Service A, Service B, ...)
• Meta-service for automatic deployment & reconfiguration
Future directions
• Secure software platform for in-network computing (set-top boxes, cellular base stations, routers)
Future directions [current work]
• Energy efficiency of computing infrastructure: global energy management, network information plane, content datacenter
User-perceived latency cost
Latency cost = E2E propagation delay + link-utilization-dependent delay
[Chart: latency cost across placement schemes; annotated gap of 28%]
Content placement matters tremendously in NCDNs
Traffic engineering schemes comparison for random placement (INFOCOM 2011)
All traffic engineering schemes achieve near-optimal capacity; static shortest-path routing is at most 30% sub-optimal