Network Latency, Jitter and Loss - PowerPoint PPT Presentation

Presentation Transcript

  1. Network Latency, Jitter and Loss

  2. Outline • Loss, Latency and Jitter • Latency Compensation Techniques • Playability versus Network Conditions

  3. Latency, Jitter and Loss • The 3 characteristics most identified with IP networks • Loss – packet does not arrive • Usually reported as a fraction: 1 − (#received / #sent) • Note: losses are often assumed independent, but can be bursty (several lost in a row) • Latency – time it takes a packet of data to get from source to destination • Round-trip time (RTT) is often assumed to be 2 × latency, but the network path can be asymmetric • Jitter – variation in latency from one packet to the next (see picture on next slide)
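The three definitions above can be sketched in a few lines of Python (a minimal sketch; the function name is illustrative, and using the mean absolute difference between consecutive latencies as the jitter measure is one common convention, not the only one):

```python
# Sketch: computing loss, average latency, and jitter from a list of
# per-packet one-way latencies in ms (None marks a lost packet).

def link_stats(latencies_ms):
    """Return (loss_fraction, avg_latency_ms, avg_jitter_ms)."""
    received = [l for l in latencies_ms if l is not None]
    loss = 1 - len(received) / len(latencies_ms)
    avg_latency = sum(received) / len(received)
    # Jitter: mean absolute change in latency between consecutive packets
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return loss, avg_latency, jitter
```

For example, `link_stats([50, 60, None, 50])` reports 25% loss and a jitter of 10 ms.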

  4. Latency, Jitter and Loss Latency and jitter affect streams of packets travelling across the network

  5. What’s the problem? • Latency negatively impacts the level of real-time interaction within the game • Limits how quickly changes in game state can be rolled out across all participants • Reduces players’ ability to react to changes within the game environment • Jitter can make it difficult for the player and game engine to compensate for the long-term average latency

  6. Sources of Loss • Note: here we are considering only IP packet loss • Above IP, TCP will retransmit lost packets • Below IP, the data link layer often retransmits or does repair (Forward Error Correction) • IP packet loss is predominantly from congestion, which causes queue overflow • Bit errors – more common on wireless • Loss during route changes (link/host unavailable) • Often bursty! (Diagram: two 10 Mbps inbound links feeding a 5 Mbps outbound link through a router’s packet queue)

  7. Sources of Latency – Mini-Outline • Propagation • Serialization • Queuing (congestion)

  8. Sources of Latency - Propagation Delay • Time for bits to travel from one host to another • Limited by the propagation speed of the medium • Typically electricity/light through cable or fiber • Could be radio waves through air • Could even be sound waves through water! • Roughly: latency (ms) = length of link (km) / 300 • Ex: Worcester, MA to Berkeley, CA is 2649 miles (4263 km), so latency = 4263 / 300 ≈ 14 msec • Notes: • Light through fiber is about 30% slower than light through vacuum • Paths are often not in a straight line
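The rule of thumb on this slide is easy to express directly (a minimal sketch; the function name is illustrative):

```python
def propagation_delay_ms(link_km):
    # latency (ms) ≈ length of link (km) / 300
    # (light in fiber travels at roughly 200,000 km/s, i.e. ~2/3 of c)
    return link_km / 300

# Ex: Worcester, MA to Berkeley, CA is about 4263 km
print(propagation_delay_ms(4263))  # ≈ 14.2 ms, one way
```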

  9. Sources of Latency - Serialization Delay • Ex: consider everyone trying to leave a room by one door • Exit only at a fixed rate • Similar to transmitting bits with a network card • Time to transmit a packet on a link 1 bit at a time → serialization • Serialization delay accumulates at each hop • Includes headers (26 bytes for Ethernet) • latency (ms) = 8 × link-layer frame (bytes) / link speed (kbps) • Ex: 1000 bytes of app data, typical DSL uplink rate (192 kbps) • Frame is 1000 + 40 (UDP/IP) + 26 (Ethernet) bytes, so latency = 8 × 1066 / 192 ≈ 44 msec
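The serialization formula and the DSL example work out as follows (a minimal sketch; the function and parameter names are illustrative, and the header sizes default to the UDP/IP and Ethernet overheads given on the slide):

```python
def serialization_delay_ms(app_bytes, link_kbps, udp_ip_hdr=40, eth_hdr=26):
    # latency (ms) = 8 * link-layer frame (bytes) / link speed (kbps)
    frame = app_bytes + udp_ip_hdr + eth_hdr
    return 8 * frame / link_kbps

# Ex: 1000 bytes of app data on a 192 kbps DSL uplink
print(serialization_delay_ms(1000, 192))  # ≈ 44.4 ms per hop
```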

  10. Sources of Latency - Queuing Delay • Traffic rate is bursty and unpredictable (unlike, say, phone traffic) • Need to handle bursts → queue • Queuing delay: latency (ms) = 8 × queue length (packets) × avg packet size (bytes) / link speed (kbps) • Ex: 10 packets, each 1000 bytes, 1 Mbps link: latency = 8 × 10 × 1000 / 1000 = 80 msec • Note: can occur at the end-host, too, when sending faster than the link (e.g., WLAN) • Measure with ping, traceroute (Linux) or tracert (Windows)
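The queuing-delay formula, checked against the slide's example (a minimal sketch; the function name is illustrative):

```python
def queuing_delay_ms(queue_pkts, avg_pkt_bytes, link_kbps):
    # latency (ms) = 8 * queue length (packets) * avg packet size (bytes)
    #                  / link speed (kbps)
    return 8 * queue_pkts * avg_pkt_bytes / link_kbps

# Ex: 10 queued packets of 1000 bytes on a 1 Mbps (1000 kbps) link
print(queuing_delay_ms(10, 1000, 1000))  # 80.0 ms
```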

  11. Sources of Jitter • Due to a change in end-to-end delay from one packet to the next • Route changes • Queue length changes • Say, goes from 10 packets (80 msec delay) to 0 • Packet length changes (serialization differs) • Big packet (1000 bytes) → 44 msec • Small packet (10 bytes) → 4.4 msec • Could be from other packets in the queue, too

  12. Tools • ping • http://www-iepm.slac.stanford.edu/pinger/ • traceroute • http://www.traceroute.org • bandwidth estimation • http://www.speedtest.net/index.php • http://speedtest.verizon.net/SpeedTester/help_speedtest.jsp (Figure: traceroute from Australia to the USA showing long-haul propagation delays – note the ~145 ms RTT Sydney to LA, when the ~12,000 km distance gives a propagation estimate of only 80 ms round trip)

  13. Latency Compensation Techniques

  14. Latency Compensation – Mini-Outline • Need • Prediction • Time delay and Time warp • Data compression • Visual tricks • Cheating

  15. Need for Latency Compensation • Bandwidth is growing, but it cannot solve all problems • Still bursty, transient congestion (queues) • Bandwidth upgrades are uneven across clients • Modems? Maybe. DSL, yes, but even those vary in downlink/uplink. • WWAN growing (low, variable bandwidth, high latency) • Propagation delays (~25 msec minimum to cross the country) “There is an old network saying: ‘Bandwidth problems can be cured with money. Latency problems are harder because the speed of light is fixed – you can’t bribe God.’ ” —David Clark, MIT

  16. Basic Client-Server Game Architecture • “Dumb” client • Server keeps all state • Validates all moves • Client only updates when the server says “ok” • Algorithm: • Sample user input • Pack up data and send to server • Receive updates from server and unpack • Determine visible objects and game state • Render scene • Repeat (Timeline diagram: the client sends a “User Input” message, the server processes and validates the input and replies “Ok”, and only then does the client render the input)

  17. Latency Example (1 of 2) • Player is pressing left • Player is pressing up, but continues left because of latency • Running back goes out of bounds

  18. Latency Example (2 of 2) • Player is pressing “pass”, but the throw is not processed yet because of latency • Pass starts rendering here because of latency • Interception

  19. Compensating for Latency - Prediction • Broadly, two kinds: • Player prediction • Opponent prediction (often called “dead reckoning”)

  20. Player Prediction • Algorithm: • Sample user input • Pack up data and send to server • Determine visible objects and game state • Render scene • Receive updates from server and unpack • Fix up any discrepancies • Repeat • Tremendous benefit: render as if local, with no latency • But note the additional “fix up” step – needed since the server has the master copy (Timeline diagram: the client renders the predicted input immediately while the “User Input” message travels to the server; the server processes and validates it, replies “Ok with Update”, and the client fixes up any difference)
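The player-prediction loop on this slide, including the "fix up" step, can be sketched in Python (a minimal sketch with illustrative class, field, and message names; real engines track full game state, not a single coordinate):

```python
# The client applies input immediately, remembers inputs the server has
# not yet acknowledged, and reconciles when an authoritative update arrives.

class PredictingClient:
    def __init__(self):
        self.position = 0
        self.pending = []      # (seq, move) pairs sent but not yet acked
        self.seq = 0

    def local_input(self, move):
        self.seq += 1
        self.position += move  # render immediately: no perceived latency
        self.pending.append((self.seq, move))
        return {"seq": self.seq, "move": move}   # message to server

    def server_update(self, acked_seq, server_position):
        # Fix up: rewind to the server's authoritative state, then
        # replay any inputs the server has not processed yet.
        self.position = server_position
        self.pending = [(s, m) for s, m in self.pending if s > acked_seq]
        for _, move in self.pending:
            self.position += move
```

If the server agrees with the prediction, the fix-up is invisible; if it disagrees, the client snaps to the corrected state and replays its unacknowledged moves on top.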

  21. Example of State Inconsistency • Predicted state differs from actual state The large picture showing the first player’s view differs from the second player’s view, shown by the smaller, inset picture. A black box (drawn manually, not by the game) highlights the main difference

  22. Prediction Tradeoffs • Tension between responsiveness (latency compensation) and consistency • Client uses prediction → more responsive, less consistent • Client waits for server ok → less responsive, more consistent

  23. Opponent Prediction • Opponent sends position, velocity (maybe acceleration) • Player predicts where the opponent is • (User can see a “warp” or “rubber band” effect when the prediction is corrected) (Diagram: the unit owner’s actual path diverges over times t0–t3 from the opponent’s predicted path; the owner sends an initial position followed by periodic updates)

  24. Opponent Prediction Algorithms • Unit Owner: • Sample user input • Update {location | velocity | acceleration} on the basis of new input • Compute predicted location on the basis of previous {location | velocity | acceleration} • If (current location – predicted location) > threshold then • Pack up {location | velocity | acceleration} data • Send to each other player • Repeat • Opponent: • Receive new packet • Extract state update information {location | velocity | acceleration} • If the unit has been seen before, update its information, else add it to the list • For each unit in the list, update the predicted location • Render frame • Repeat
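Both halves of the dead-reckoning scheme above can be sketched together (a minimal 1-D sketch; the function names and the threshold value are illustrative assumptions):

```python
# Opponent side: extrapolate the last received state forward in time.
# Owner side: run the same prediction opponents run and only send an
# update when the real position drifts too far from it.

THRESHOLD = 5.0   # illustrative drift tolerance, in world units

def predict(pos, vel, dt):
    # Opponent side: dead-reckon from last known position and velocity
    return pos + vel * dt

def owner_should_send(actual_pos, last_sent_pos, last_sent_vel, dt):
    # Owner side: compare truth with what opponents currently predict
    predicted = predict(last_sent_pos, last_sent_vel, dt)
    return abs(actual_pos - predicted) > THRESHOLD
```

The owner stays silent while the prediction holds, which is how dead reckoning also reduces bitrate.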

  25. Opponent Prediction Notes • Some predictions are easy • Ex: falling object • Others harder • Ex: pixie that can teleport • Can be game specific • Ex: can predict “return to base” with a pre-defined notion of what “return to base” is • Cost: each host runs the prediction algorithm for each opponent • Also, besides compensating for latency, it can greatly reduce bitrate • Predict self; don’t send updates unless needed • Especially effective when objects are relatively static

  26. Time Manipulation • Client states can differ • Depends upon their RTT to server • Impacts fairness • Ex: Two players defeat monster • Server generates treasure. Sends messages to clients. • Clients get messages. Players can react. • Client closer (RTT lower) gets to react sooner, gets treasure • Unfair! • Solution? Manipulate time • Time Delay • Time Warp

  27. Time Delay • Server delays processing of events • Wait until all messages from clients arrive • (Note: the game plays at the highest RTT) • Server sends messages to the more distant client first, delays messages to the closer one • Needs an accurate estimate of each RTT (Timeline diagram: client 1’s command arrives, then client 2’s; after a time delay the server processes both client commands together)
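One hedged way to express the server's delay rule (a minimal sketch with illustrative names; it assumes the server already knows each client's one-way delay, e.g. as ½ RTT): hold each command until a simultaneous command from the slowest client could also have arrived.

```python
def execution_time(arrival_time, client_one_way_ms, max_one_way_ms):
    # A command that took client_one_way_ms to arrive was issued at
    # arrival_time - client_one_way_ms. Delay it until the slowest
    # client's copy of a simultaneously issued command could be here too.
    issued = arrival_time - client_one_way_ms
    return issued + max_one_way_ms
```

With one-way delays of 20 ms and 50 ms, commands both issued at t=100 arrive at t=120 and t=150 but are scheduled for the same execution time, t=150, restoring fairness.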

  28. Time Warp • In older FPS games (e.g., Quake 3), players used to have to “lead” the opponent to hit • Otherwise, the opponent had already moved • Even with an “instant” weapon! • Knowing the latency, roll back (warp) to when the action took place • Usually assume latency = ½ RTT • Time Warp Algorithm: • Receive packet from client • Extract information (user input) • event time = current time – latency to client • Roll back all events, in reverse order, to the event time • Execute the user command • Re-apply all events in order, updating any clients affected • Repeat
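The core of time warp – testing a shot against where the target was ~½ RTT ago – can be sketched as follows (a minimal 1-D sketch; the class and function names and the position-history representation are illustrative assumptions):

```python
import bisect

class PositionHistory:
    """Server-side record of a target's past positions."""
    def __init__(self):
        self.times = []
        self.positions = []

    def record(self, t, pos):
        self.times.append(t)
        self.positions.append(pos)

    def position_at(self, t):
        # Most recent recorded position at or before time t
        i = bisect.bisect_right(self.times, t) - 1
        return self.positions[max(i, 0)]

def resolve_shot(history, fire_arrival_time, shooter_rtt, aim_pos, hit_radius):
    # Roll back to when the shooter actually fired (assume latency = ½ RTT)
    then = fire_arrival_time - shooter_rtt / 2
    target_pos = history.position_at(then)
    return abs(target_pos - aim_pos) <= hit_radius
```

A shooter with 100 ms RTT who aimed where the target was 50 ms ago still scores a hit, even though the target has since moved, which is exactly the "bullets bend around corners" inconsistency discussed on the next slides.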

  29. Time Warp Example • Client 100 ms behind • Still hits (note the blood) • Also, note the bounding boxes

  30. Time Warp Notes • Inconsistency: • Targeted player moves around a corner • Warp back → hit • Bullets seem to “bend” around the corner! • Fortunately, the player often does not notice • Doesn’t see the opponent • May be only wounded

  31. Data Compression • Idea → less data means less latency to get it there • So, reduce the number or size of messages → reduce (serialization) latency • Lossless (like zip) • Opponent prediction • Don’t send unless an update is needed • Delta compression (like opponent prediction, but more general) • Don’t send all data, just the changes • Interest management • Only send data to units that need to see it
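Delta compression as described above is simple to sketch (a minimal sketch; the function names and dict-based state representation are illustrative assumptions):

```python
# Instead of sending the full state every tick, send only the fields
# that changed since the last acknowledged update.

def delta(prev_state, new_state):
    return {k: v for k, v in new_state.items() if prev_state.get(k) != v}

def apply_delta(state, d):
    merged = dict(state)
    merged.update(d)
    return merged
```

For a unit that only moved along x, the delta carries one field instead of the whole state, and the receiver reconstructs the full state by applying it to its last copy.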

  32. Interest Management • Aura of interest, illustrated by the game ‘Hide and Seek’ • The aura is made up of a Focus and a Nimbus • If the Focus of a unit intersects the Nimbus of another unit, they can interact • Here, the Hider can see the Seeker, but the Seeker cannot see the Hider (Diagram: the Hider’s Focus overlaps the Seeker’s Nimbus, but the Seeker’s Focus does not reach the Hider’s Nimbus)
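The focus/nimbus test can be sketched with circles (a minimal 2-D sketch; the function names and the use of circular auras are illustrative assumptions):

```python
import math

def circles_intersect(c1, r1, c2, r2):
    return math.dist(c1, c2) <= r1 + r2

def can_see(observer_pos, observer_focus_r, target_pos, target_nimbus_r):
    # Send the observer updates about the target only if the observer's
    # focus circle intersects the target's nimbus circle.
    return circles_intersect(observer_pos, observer_focus_r,
                             target_pos, target_nimbus_r)
```

Because each unit's focus and nimbus radii can differ, visibility is naturally asymmetric, as in the Hide and Seek example: the Hider's wide focus reaches the Seeker's nimbus, while the Seeker's narrow focus does not reach the Hider's.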

  33. Data Compression (continued) • Peer-to-Peer (P2P) • Limits server congestion • Also, client1 → server → client2 has higher latency than client1 → client2 • But cheating is especially problematic in P2P systems • Update aggregation • Instead of “Move A → send to C, Move B → send to C”, send “Move A + Move B → send to C” • Avoids per-packet overhead (if less than the maximum transmission unit (MTU)) • Works well with time delay
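Update aggregation can be sketched as packing pending messages into as few MTU-sized packets as possible (a minimal sketch; the MTU and per-packet header constants and the function name are illustrative assumptions):

```python
MTU = 1500       # illustrative Ethernet MTU, in bytes
HEADER = 28      # illustrative UDP/IP overhead paid once per packet

def aggregate(messages):
    """Greedily pack byte-string messages into packets that fit the MTU."""
    packets, current = [], []
    size = HEADER
    for msg in messages:
        if size + len(msg) > MTU and current:
            packets.append(current)       # close the full packet
            current, size = [], HEADER    # start a new one
        current.append(msg)
        size += len(msg)
    if current:
        packets.append(current)
    return packets
```

Three 700-byte moves become two packets instead of three, paying the header overhead twice instead of three times – and the buffering meshes naturally with a time-delay server that is already holding commands.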

  34. Visual Tricks • Latency present, but hide from user • Give feeling of local response • Ex: player tells boat to move, while waiting for confirmation raise sails, pull anchor • Ex: player tells tank to move, while waiting, batten hatches, start engine • Ex: player pulls trigger, make sound and puff of smoke while waiting for confirmation of hit

  35. Latency Compensation and Cheating • Opponent prediction → no server is needed! • Yes, if the player can be trusted • Else, “I just shot you in the head” → how to verify? • Time warp → a client can pretend to have high latency • Watch what happens, then react, claiming the action happened earlier • Worse if the client controls the timestamps • Interest management can help with information exposure

  36. Playability versus Network Conditions and Cheats

  37. Gaming Satisfaction • A game hosting company, an Internet Service Provider (ISP) and a game manufacturer are aiming for the same thing – satisfied consumers. • Satisfaction is achieved by understanding, and avoiding, the circumstances that would undermine an enjoyable game-play experience. • Everyone knows that ‘latency is bad for gaming’. The task for ISPs and game hosting companies is to determine just how much latency becomes noticeably ‘bad’, for some definition of bad, and to figure this out for loss and jitter as well.

  38. Discovering Player Tolerance • There are two distinct approaches for discovering player tolerance to network disruptions. • Build a controlled lab environment in which to test small groups of players under selected conditions • Monitor player behaviour on public servers over many thousands of games.

  39. Discovering Player Tolerance • Controlled usability trials are preferable whenever possible. One can monitor (and later account for) tiredness, hunger and social relationships between players. • Arbitrary and repeatable network-level latency, loss and jitter between the players and the game server are introduced artificially. By varying the network conditions and keeping other environmental conditions steady, we can draw fairly solid conclusions about player tolerances from modestly small groups of players. • Unfortunately, it is often hard to find a set of people willing to sit and play in a controlled lab environment.

  40. Discovering Player Tolerance • The alternative is to correlate user behaviour on an existing game server with changes in network conditions over time. • This approach is less than ideal, because we cannot control (or even know) the environmental factors affecting every player who joins our server, nor the precise network conditions affecting each player. • At best we can use the ‘law of large numbers’ – make measurements over thousands of games, correlate player success with known network conditions and hope the remaining unknown factors cancel out.

  41. Networking and Playability • Figure 7.1 illustrates the impact of a player’s median latency (ping) on their average ‘frag rate’. • Since games run for many minutes, a fractional improvement on your frag rate can make quite a difference in your ranking relative to other players.

  42. Networking and Playability • Latency affects performance – both subjective and objective • But it depends upon the task!

  43. Networking Cheating in General • Broadly speaking, cheating could be described thus: • ‘Any behaviour that a player uses to gain an advantage over his peer players or achieve a target in an online game is cheating if, according to the game rules or at the discretion of the game operator (i.e. the game service provider, who is not necessarily the developer of the game), the advantage or the target is one that he is not supposed to have achieved.’ [YAN2005]. • Cheaters want: • Vandalism – create havoc (relatively few) • Dominance – gain advantage (more)

  44. Client Side Cheats • A huge degree of trust must be placed in the client side of an online game. The game-play experience is entirely mediated by the combination of the game client software and the operating system and hardware environment on which it runs. • Unfortunately, this is almost precisely the wrong place to put much trust, because the client software runs on physical hardware entirely under the control of the player.

  45. Client Side Cheats • Cheats in games using peer-to-peer communication models are essentially variations of client-side cheats, made possible because the local rendering of game-state occurs on the player’s own equipment. • Most client-side cheats involve manipulating the software context within which the game client operates to augment a player’s apparent reflexes and presence, or augment a player’s situational awareness by revealing supposedly hidden information.

  46. Network-layer Cheats • One of the most annoying cheats is disruption of another player’s network connection by ‘flooding’ it with excess IP packet traffic. • The main goal of a DDoS attack is to overload some part of the victim’s network access path, leading the victim to experience a large spike in latency and packet loss rates. Depending on how it is applied, the victim may not even consider the degraded service to be unusual. In most cases the victim has no way of blocking the inbound flood of traffic before it reaches, and saturates, the weakest part of the victim’s Internet connection.

  47. Network-layer Cheats Figure 7.8 shows how a flooding/DDoS attack could be launched against an unsuspecting victim. In principle, the cheater can launch the attack from anywhere, keeping their own game client’s network connection free of excess traffic.

  48. Packet and Traffic Tampering • Reflex augmentation – enhance the cheater’s reactions • Example: an aiming proxy monitors opponents’ movement packets and, when the cheater fires, improves the aim • Packet interception – prevent some packets from reaching the cheater • Example: suppress damage packets, so the cheater is invulnerable • Packet replay – repeat an event for added advantage • Example: multiple bullets or rockets if otherwise limited

  49. Preventing Packet Tampering • Cheaters figure out packet formats by changing bytes and observing the effects • Prevent with checksums, e.g., MD5 (fast, public) • Cheaters can still: • Reverse engineer the checksums • Attack with packet replay • So: • Encrypt packets • Add sequence numbers (or encoded sequence numbers) to prevent replay
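The encrypt-and-sequence idea can be sketched with a keyed MAC plus a replay check (a minimal sketch; the wire format, function names, and shared key are illustrative assumptions, and HMAC-SHA256 is used here in place of the slide's plain MD5 checksum since a keyed MAC also resists reverse engineering):

```python
import hashlib
import hmac

KEY = b"shared-secret"   # illustrative; real games would negotiate keys

def seal(seq, payload):
    # Packet = 4-byte sequence number + payload + 32-byte MAC over both
    body = seq.to_bytes(4, "big") + payload
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return body + tag

def open_packet(packet, seen):
    body, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None                      # tampered: MAC does not verify
    seq = int.from_bytes(body[:4], "big")
    if seq in seen:
        return None                      # replayed: sequence already used
    seen.add(seq)
    return body[4:]
```

Flipping any byte invalidates the MAC, and resending a captured packet trips the sequence-number check, defeating both tampering and replay.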

  50. Information Exposure • Allows a cheater to gain access to replicated, hidden game data (e.g., the status of other players) • Passive, since it does not alter traffic • Cannot be defeated by the network alone • Instead: • Sensitive data should be encoded • Kept in hard-to-detect memory locations • A centralized server may detect cheating (example: attacking an enemy the player could not have seen) • Harder in a replicated system, but hosts can still share detection information