

  1. A Look At The Unidentified Half of Netflow (With an Additional Tutorial On How to Use The Internet2 Netflow Data Archives)
  ESCC/Internet2 Joint Techs Workshop, University of Hawaii, January 20-24, 2008
  Joe St Sauver, Ph.D. (joe@uoregon.edu or joe@internet2.edu)
  Internet2 Security Programs Manager, Internet2 and the University of Oregon
  http://www.uoregon.edu/~joe/missing-half/
  Notes: All opinions expressed in this talk are strictly those of the author. These slides are provided in detailed format for ease of indexing, for the convenience of those who can't attend today's session in person, and to ensure accessibility both for the hearing impaired and for those for whom English is a secondary language.

  2. You Should Know Your Network Traffic
  • When thinking about network security, an exhortation you'll commonly hear is to "know your network traffic." After all…
  -- if you don't know what your normal "baseline" traffic looks like, you're going to be hard pressed to identify suspicious traffic patterns, right?
  -- you'll need to understand your network traffic patterns if you're ever required to deploy a perimeter firewall, and
  -- you'll need to measure your network traffic if you want to do network capacity planning
  • Just as you need a feel for your local and regional traffic, the I2 community should strive to understand the traffic on the national backbone. New programs such as the Commercial Peering Service and the FCC Rural Health Care initiative may make this all the more important.

  3. What Is Netflow?
  • Netflow is a Cisco protocol (proprietary, but openly documented), though the term is commonly used to refer to any/all flow-based analyses, including network flow data collected from non-Cisco routers, flow data gleaned from passive optical taps, etc.
  • Netflow data is normally exported from one or more Netflow-enabled routers to a Netflow collector box (typically a fairly beefy dedicated PC server with lots of CPU and copious disk space)
  • As data from the routers is received, it is periodically written to disk on the collector box (I2 writes flow data every five minutes).
  • Applications can then be run against those saved Netflow data files to process the flow data into various summary reports.
  • Many of you may run Netflow locally, but even if you don't, I2 collects flow data for all traffic passing across the Internet2 Network, grinding that data into a weekly summary which is available at http://netflow.internet2.edu/

  4. And In Fact, That I2 Weekly Netflow Report Is Really What Inspired This Talk…
  • If you look at a copy of the Internet2 Netflow Weekly Report, you'll see it covers a wide range of topics, including:
  -- what's the throughput of bulk data transfers (transfers >= 10MB)?
  -- what applications are being used on the network?
  -- is the MTU just 1500, or are jumbo frames being used?
  -- is all traffic best effort, or are DSCP code points being used to tag traffic for expedited service or for scavenger treatment?
  • When categorizing flows, the report does its best to assign flows to applications, but sometimes there are flows which don't fit any known application. Those flows then go into an "unidentified" category, a category which over time has grown to ~50% of all octets as the applications seen on the network have evolved.

  5. ~50% Unidentified Traffic Is NOT a "One-Off" Phenomenon

  Report Date   % Unidentified   Unidentified Octets
  20071224      58.34%           268.8T
  20071217      52.17%           343.8T
  20071210      47.21%           358.8T
  20071203      43.31%           295.2T
  20071126      45.79%           363.9T
  20071112      48.34%           340.3T
  20071105      47.51%           379.0T
  20071029      46.62%           362.1T
  20071022      45.94%           352.4T
  20071015      46.99%           368.4T
  20071008      51.23%           324.6T
  20071001      53.37%           338.5T
  20070924      57.60%           443.5T
  20070917      55.24%           415.2T

  6. At The Risk Of Sounding Somewhat Obsessive/Compulsive, Seeing Roughly Half of All Octets "Unidentified" Bothered Me...
  • If I'd seen a few percent unidentified, or maybe even ten or twenty percent unidentified, I'd be willing to shrug and forget about that traffic, but seeing roughly half of all traffic end up in a residual "unidentified" category bothered me -- what was it?
  -- An important bread-and-butter application with non-standard port usage habits?
  -- Stealthy P2P or other bandwidth intensive apps intentionally trying to hide?
  -- Attack traffic? (you can always spot security types, can't you?)
  -- Something else?
  • I decided I wanted to try to find out, grinding the data myself in my favorite statistical package, SAS. But would Internet2 Netflow data be routinely available for analysis? Well, it turns out, yes…

  7. Gaining Access to Internet2's Netflow Data

  8. http://abilene.internet2.edu/observatory/proposal-process.html
  • "The following information would be useful to the Abilene Observatory Program, and is necessary in the case of obtaining Netflow data. Please submit to abilene@internet2.edu:
  -- Give a brief description of the research project, including a title
  -- List the project leads and participants
  -- Include URLs if appropriate and available
  -- Indicate any potential issues with data resulting from the project, including any potential privacy issues.
  -- Should the project be listed as a participant on the Abilene Observatory web page?
  -- Submit an id and password to be used with rsync
  -- Submit a range or a set of individual ip addresses that will be used to access the data (range can be e.g., /28, /30, /32, etc.)
  -- Indicate any recommendations for additional data sets.
  "If Abilene data is used in research papers or articles, please send future citations to be included with the above information. Researchers are encouraged to cite the use of this data in papers and articles. […]"

  9. "You've Been Approved!" • Once approved, you'll have a personal username and password* which you can use to get rsync access to Internet2 flow data in flow-tools format (see http://www.splintered.net/sw/flow-tools/ ). Those records will have basically everything you'd normally see in regular Netflow records:-- src and dest IP addresses (albeit with the last 11 bits zero'd)-- src and dest autonomous system numbers-- src and dest port numbers-- protocol type (tcp, udp, etc.)-- number of packets and number of octets-- flow start and stop times-- tcp flags and TOS bits, input/output interface numbers and next hop IPs, etc. • An 11 bit mask ==> the finest granularity IP address information available will be aggregated at the /21 level (e.g., netblocks with up to 2048 dotted quads).At that level of anonymization it may be effectively impossible to "pair up" sequential client/server query/response network flows for some busy systems.--------* Because that password will be stored unencrypted on the system you use to rsync data, pick a password used only for that rsync account, chmod the pwd file appropriately, and carefully limit the IP addresses allowed to have rsync access

  10. "So Is Flow Data Useful At All If The Lowest 11 Bits of the IPs Are Zero'd?" • Absolutely! Keep in mind that it is very uncommon to be able to get any netflow data (or any sort of passively collected data) for a national-scale network. Most backbones treat netflow (and other passively collected data) as confidential/business proprietary, and they do not make that data publicly available in any form for any purpose whatsoever, even if the data's been anonymized. • Internet2, on the other hand, has always viewed support for those studying the network to be an integral part of its role, and that support has been made tangible via things such as sharing data. • From an analyst's point of view, it would (obviously) be tréscommode if flow data were to be completely unanonymized, but that need has to be carefully balanced against the larger need to respect the privacy of Internet2 users. An 11 bit mask is the result.

  11. Sampled Netflow
  • There's another complication: because of the line rates involved, the netflow data you get from Internet2 is only sampled at a rate of 1:100. That is, you don't get flows for every packet, but flows which result from sampling every one in a hundred packets. If you need to obtain absolute estimates for total traffic, you'll need to scale the totals you receive from sampled netflow accordingly (e.g., scale total octets or total packets by multiplying by 100; see the sketch after this slide).
  • You may wonder WHY sampled netflow is necessary – why can't the router just export records for all the traffic it sees? The answer is that doing netflow imposes overhead, and if the router is exporting every flow associated with any packet, it may slow down and have trouble keeping up with its primary job of routing packets
  • [Aside: Should Internet2 be deploying non-router-based passive flow-monitoring hardware appliances, at least on some links?]
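  To make that scaling concrete, here is a minimal SAS sketch (not from the original slides; it assumes the hour's flows have already been read into a dataset named FLOWS with DOCTETS and DPKTS variables) that turns the sampled sums into rough estimates of the actual totals:

proc means data=flows noprint;
   var doctets dpkts;
   output out=totals sum=samp_octets samp_packets;
run;

data totals;
   set totals;
   est_octets  = samp_octets  * 100;   /* undo the 1:100 packet sampling */
   est_packets = samp_packets * 100;
run;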

  12. No IPv6, Either
  • In addition to only seeing sampled data rather than full flow data, don't be disappointed when you learn that you won't currently get to see native IPv6 flow records, even though that traffic is present on the backbone.
  • Why is there no native IPv6 flow data? Well, Netflow version 5 (the traditional Netflow format used at most sites, including Internet2) doesn't support IPv6 traffic -- you need to be running the more recent Netflow version 9 if you want to collect data on IPv6 network flows.
  • Q. "So what's the IPv6 (protocol 41) traffic I see in the Internet2 weekly summaries, eh?"
  A. "That's legacy IPv6-over-IPv4 tunneled traffic, not native IPv6 traffic."
  [Aside #2: Should Internet2's Netflow collections be migrated to Netflow Version 9 so as to support native IPv6 Netflow?]

  13. "So Are You Going to Look at A Week/Month/Year's Worth of Data or ?" • We're just going to look at an hour's worth of data collected on Wednesday, 2008-01-16 at 2100 UTC (4PM EST, 3PM CST, 2PM MST, 1PM PST, etc.). I believe that that hour's worth of data is similar to larger data windows, exhibiting the same sort of characteristic "uncategorized" traffic as larger samples. • True, there may be some traffic which is scheduled to run in the middle of the night in the US, traffic which we might miss by only picking a "prime time" observation point, but that's okay: this isn't meant to be a rigorous and long term analysis, but rather an experiment, an introduction and exploration, perhaps inspiring YOU to do a better/more complete job than I've done.

  14. Even An Hour Of Sampled Netflow Data Is A LOT of Data
  • Even sampling 1:100, it is easy to underestimate the volumes associated with Netflow data. Consider just our single hour's worth of data from 2008-01-16 2100 UTC:
  ATLA:   3.36 million records
  CHIC:   11.9 million records
  HOUS:   1.97 million records
  KANS:   5.08 million records
  LOSA:   2.51 million records
  NEWY:   8.08 million records
  SALT:   3.97 million records
  STTLng: 3.62 million records
  WASH:   7.18 million records
  Total:  47.7 million records (all values rounded)

  15. Avoiding Overcounting
  • Because flow data is collected at each node on Abilene, a single flow, say from Oregon to Washington DC, might show up in the netflow data for five nodes as it travels across the country. Having that data included at each site is great -- if you're just looking at the total traffic for one of those routing nodes. But if you're trying to get a picture of the total traffic entering the I2 Network nationally, you don't want to "overcount" a transcontinental flow simply because it is flowing across multiple backbone nodes.
  • Fortunately, I2 routinely corrects for this phenomenon in the Weekly Report, and I2 provides a router node-by-router node mapping showing how interfaces are used, which allows you to identify backbone flows to exclude. For example, to get mapping data for 2008-01-16, an authorized user would rsync:
  flows/logs/2008/2008-01/2008-01-16/nfilter and/or
  flows/logs/2008/2008-01/2008-01-16/ifAlias.*
  then delete flows from backbone interfaces (they'll already have been counted elsewhere); a sketch of that step follows below.
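  For illustration, here is a minimal SAS sketch of that de-duplication step (the interface numbers 5 and 7 are purely hypothetical placeholders -- the real backbone ifIndex values for each node come from the ifAlias.* mapping; the same filtering can also be done before export with flow-tools' flow-nfilter and the supplied nfilter file):

data kans_edge;
   set kans_all;                        /* one node's flows, already imported */
   if input_if in (5, 7) then delete;   /* drop flows arriving on backbone interfaces */
run;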

  16. A Flow From LOSA to WASH Should Only Be Counted Once, Not Five Times

  17. With Redundant Backbone Flows Deleted…
  • After removing redundant backbone flows, the size of our 2008-01-16 2100 UTC hour dataset drops substantially to:
  ATLA:   1.46 million records
  CHIC:   8.88 million records
  HOUS:   0.34 million records
  KANS:   1.73 million records
  LOSA:   1.51 million records
  NEWY:   6.82 million records
  SALT:   0.70 million records
  STTLng: 1.67 million records
  WASH:   4.05 million records
  Total:  27.16 million records (all values rounded)
  • That's still a LOT of data, but much less than 47.7 million records

  18. Protocol/Ports and Network Flows
  • A flow can be conceptualized as "a unidirectional stream of packets between a source and destination—both defined by a network-layer IP address and transport-layer port number"* (plus the flow's protocol, TOS, and input interface)
  • Note that each network flow has directionality, with packets flowing from a source IP address to a destination IP address. Most applications involve network flows in both directions; however, those flows should be conceptualized as two related but separate flows, one in each direction, rather than a single bidirectional pipe.
  • The protocol and ports associated with a flow can give us hints about the application which may be generating that traffic.
  • What protocols do we see for our hour's worth of Internet2 Netflow data?
  ----
  * http://www.cisco.com/en/US/docs/ios/12_0s/feature/guide/12s_sanf.html

  19. Octets Per Protocol Breakdown
  Some quick notes:
  -- No, you're not expected to read tiny fonts on screen, but if you can, I'm impressed :-) You might find it easier to look at these slides on your laptop while I talk. A couple of quick highlights…
  -- TCP is still largely the dominant protocol overall at 92.43%, with UDP chugging along at about 5% (we'll focus largely on that TCP traffic for the rest of this talk)
  -- You'll notice that there are differences from node to node. For example, I found it interesting that GRE is surprisingly high at over 9% at LOSA, and ESP (a secure tunneling protocol) is at roughly 1.7% of octets at ATLA

  20. Enough About Protocols, What About Port Usage?
  • While you'd never believe it from looking at actual Netflow data, port numbers are an IANA-assigned number resource.
  • In particular, see http://www.iana.org/assignments/port-numbers
  -- "Well Known Ports are those from 0 through 1023. […] Well Known ports SHOULD NOT be used without IANA registration."
  -- "The Registered Ports are those from 1024 through 49151 […] Registered ports SHOULD NOT be used without IANA registration."
  -- "The Dynamic and/or Private Ports are those from 49152 through 65535"
  • Thus, application programmers should not just casually pick and begin to offer services using port numbers <= 49151 – doing so invites eventual chaos, and can reduce our ability to understand network loads. [The port 465 ("URD" vs. "SMTPS") mess is a nice example of why randomly using unassigned ports is a bad idea.]
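  Those three IANA ranges also make a handy first-pass bucketing when you're grinding the flow data yourself. A minimal SAS sketch (assuming the same FLOWS dataset and variable names used in the later examples; a weighted PROC FREQ like the one shown after the classification slide then gives the octet share per range):

data flows;
   set flows;
   length portclass $ 16;
   if dstport <= 1023 then portclass = 'well-known';
   else if dstport <= 49151 then portclass = 'registered';
   else portclass = 'dynamic/private';
run;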

  21. While The Preceding Chart Looks at Destination Ports, What About Source Ports? • In client-server applications, a relatively small query sent to a server will typically generate a potentially much larger "reply" or "response" flow. • That response flow will commonly "reverse" the source and destination ports, so that (for example) http response traffic "coming back from" a web server to a web client might legitimately and routinely have source port 80, with what may look like a "random" destination port. • For example, on the following chart of traffic by source ports, you'll see that http traffic accounts for over 36% of all TCP traffic in and of itself

  22. What Are Some of Those Non-Standard Ports Seen? • Some applications running on dedicated machines may intentionally use non-standard ports, or even a wide "block" or "range" of ports. Choice of those ports may end up happening at, um, "local discretion." • We know that at least some of these applications using unusual ports are crucial measurement tools or core applications driving a material fraction of the Internet2 Network's traffic. • For example, one of the top destination ports seen on the table a few slides back is port 5101/tcp. What's that?

  23. 5101/TCP: Talarian_TCP, Y!M, or ?

  src_as                  dst_as       srcport  dstport  prot    raw doctets
  AS668 DREN              AS11537 I2   33207    5101     TCP[6]  11,736,000
  AS7847 NASA-HPCC-ESS    AS11537 I2   34272    5101     TCP[6]   7,677,000
  AS7847 NASA-HPCC-ESS    AS11537 I2   46487    5101     TCP[6]   6,921,000
  AS7847 NASA-HPCC-ESS    AS11537 I2   52600    5101     TCP[6]   6,894,000
  AS7847 NASA-HPCC-ESS    AS11537 I2   56799    5101     TCP[6]   6,336,000

  • IANA says that 5101/tcp is assigned to "Talarian_TCP"
  • If you Google for port 5101/tcp, you'll see web pages such as http://www.cert.org/advisories/CA-2002-16.html which states "Yahoo! Messenger typically listens for peer-to-peer requests on port 5101/TCP […]" – but these flows seemed large for Y!M to me
  • Since the destination ASN was Internet2, I inquired (thanks again, as always, Matt!) and learned that these are actually known nuttcp-related flows (nuttcp is a measurement tool similar to iperf, see http://www.wcisd.hpc.mil/nuttcp/Nuttcp-HOWTO.html )
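  A listing like the one at the top of that slide takes only a couple of SAS steps to produce (a sketch, under the same assumption of a FLOWS dataset with the variable names used elsewhere in these examples): sort by octets descending and print the top few flows.

proc sort data=flows out=big_flows;
   by descending doctets;
run;

proc print data=big_flows (obs=20);
   var src_as dst_as srcport dstport prot doctets;   /* largest individual flows */
run;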

  24. What About LHC Traffic?
  • Looking at an earlier snapshot of some Internet2 Netflow traffic, I observed traffic coming from AS3152 (FNAL) to AS7896 (U Nebraska), a well-known LHC site, with destination ports 20001/TCP, 20002/TCP, 20003/TCP, 56133/TCP, etc.
  • Given the size and source/destination of those flows, I contacted UNL and was able to confirm that these were indeed likely LHC-related flows involving the application "PhEDEx" (see https://lhcatfnal.fnal.gov/shift-operations/sitracker/data-transfer and "PhEDEx High-Throughput Data Transfer Management System" http://www.gridpp.ac.uk/papers/chep06_tuura.pdf for more information about PhEDEx)
  • What about the Access Grid, or Globus' GSIFTP, say?

  25. Ports and Intentional Attempts at Obfuscation
  Other application programmers view the network environment as an adversarial/hostile place (sometimes for well founded reasons!), and may use non-standard ports in an effort to resist traffic analysis, app identification, and traffic shaping or blocking. For instance:
  -- Bandwidth intensive P2P applications may employ per-session dynamic port assignment (for example, uTorrent allows you to "randomize port each time uTorrent starts") or encryption (see www.azureuswiki.com/index.php/Message_Stream_Encryption) in an effort to avoid port-based traffic analysis or deep packet inspection, helping those programs to resist traffic identification
  -- Other applications may resort to tunneling "everything over port 80" in an effort to circumvent restrictive perimeter firewall policies which may have closed everything except for a few ports (e.g., see forum.skype.com/lofiversion/index.php/t15582.html )

  26. The Result of Intentional Obfuscation or Random Selection of Port Numbers • If users or applications randomly choose ports for application use, at the limiting case, traffic would be randomly distributed over more-or-less the entire set of all possible ports, with (potentially) 100/65K=0.00152% of all traffic on each of the 65K ports. • On the other hand, if users employed the alternative strategy mentioned previously, e.g., repurposing port 80 to carry virtually everything, in the limiting case you'd only see traffic on a small number of ports. • Either way, attempts at port-based traffic analysis might be rendered difficult at best, if not pointless altogether. • The following slide shows an example of a range of ports where I believe port numbers are not particularly illuminating, and traffic is mundanely distributed.

  27. Application Hinting Associated With Traffic Source and Destination Addresses
  • In addition to ports and protocols, the source address and the destination address of each flow may also provide hints as to the type of application associated with a given flow.
  • One obvious example would be dst addresses of multicast flows
  • In other cases, simply hearing a particular organization's name (such as "Youtube") can be enough to tell you a lot about the application traffic you're probably seeing (although these sorts of associations must be viewed as suggestive rather than conclusive).
  • One caution: mapping an 11-bit-masked (i.e., /21-aggregated) anonymized source address or destination address to a specific organization is not always possible. For example, a single /21 aggregate may encompass multiple independently assigned smaller blocks, and identifying which of the multiple sites in a /21 "owns" a particular flow may simply not be possible.

  28. SAS Will Let You Easily Write Port Based Rules to Categorize Traffic

  [* * *]
  type2='not classified';
  if prot=17 then type2='udp';
  else if prot=50 then type2='esp';
  else if prot=1 then type2='icmp';
  else if prot=47 then type2='gre';
  else if prot=6 then do;
     if (srcport=80) or (dstport=80) or
        (srcport=8000) or (dstport=8000) or
        (srcport=8080) or (dstport=8080) then type2='http';
     else if (srcport=443) or (dstport=443) then type2='https';
     else if (srcport=22) or (dstport=22) then type2='ssh';
     else if (srcport=25) or (dstport=25) then type2='smtp';
     else if (srcport=388) or (dstport=388) then type2='unidata';
     else if (srcport=20) or (dstport=20) then type2='ftp';
  [etc]
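  Once rules like those have run inside a DATA step, a weighted PROC FREQ gives the octets-per-category breakdown directly. A minimal sketch (assuming the rules above produced a dataset named FLOWS with TYPE2 and DOCTETS variables):

proc freq data=flows order=freq;
   tables type2;
   weight doctets;    /* tabulate octets per category, not flow counts */
run;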

  29. Of What's Left, Where's It Coming From/Going To?

  srcaddr        doctets    percent  site
  193.48.96.0    1.3632E9   8.82     Renater
  192.108.40.0   5.6564E8   3.66     U Stuttgart
  202.169.168.0  4.5723E8   2.96     Academia Sinica
  198.9.0.0      4.4243E8   2.86     NASA
  140.90.32.0    3.9196E8   2.54     NOAA
  131.154.128.0  3.0826E8   2.00     INFN CNAF
  130.14.24.0    3.0395E8   1.97     Natl Lib of Med
  198.118.192.0  2.664E8    1.72     NASA
  130.246.176.0  1.9162E8   1.24     Rutherford Appleton
  165.112.0.0    1.7309E8   1.12     NIH
  193.109.168.0  1.5452E8   1.00     ICGNET, Ukraine
  [etc]

  dstaddr        doctets    percent  site
  129.93.232.0   2.058E9    13.32    UNL
  198.32.40.0    5.5729E8   3.61     EP.Net
  144.92.176.0   5.5315E8   3.58     Wisconsin Madison
  192.239.80.0   4.492E8    2.91     Level3
  [etc]

  30. Conclusion
  • At this point, I hope you have a sense of the sort of analyses you may be able to do using Internet2 Netflow data, even though I wouldn't begin to claim that I've even come close to identifying the "missing half" of I2 Netflow data.
  • Maybe some of you here today, or network researchers back at your campuses, will be inspired to give this data a closer look, and begin to explore and work with the Internet2 Netflow data archives.
  • For those of you who may be interested, I've also attached a brief tutorial with some notes on the mechanics of working with Internet2 Netflow data, although we won't go over those slides today due to our limited time.
  • Thanks for the chance to talk today!

  31. A Brief Tutorial on The Use of Internet2's Netflow Archive

  32. Assumptions
  • You've already applied for, and been approved for, access to Internet2 Netflow data, as described earlier in these slides.
  • You've retrieved and built flow-tools on a Unix or Linux host, again, as previously mentioned
  • You want to do analyses that are easiest/best done using a traditional statistical package such as SAS

  33. Browsing Directories With rsync
  • Data is stored on netflow.internet2.edu and is organized by the nine Internet2 router nodes: ATLA, CHIC, HOUS, KANS, LOSA, NEWY, SALT, STTLng, and WASH (note that's STTLng, not STTL)
  • To view all available datasets for the KANS node for 2008-01-16:

% rsync --password-file ./rsync.passwd -v -n \
usrname@netflow.internet2.edu::flows/data\
/KANS/2008/2008-01/2008-01-16/

  [note: spaces matter!]
  • File collection times may vary by a second or two, so don't be surprised if file naming reflects that jitter.

  34. Actually Retrieving Flow Data With rsync
  • Once you've identified the files you'd like to retrieve -- say, all datasets for 2008-01-16 for a particular hour, such as 2100 UTC (4PM EST, 3PM CST, 2PM MST, 1PM PST, etc.) -- you can retrieve those files using a command such as:

% rsync --recursive --password-file ./rsync.passwd \
-v usrname@netflow.internet2.edu::flows/data/\
KANS/2008/2008-01/2008-01-16/ft-v05.2008-01-16.21* \
KANS/ft-v05.2008-01-16

  [note: spaces matter!]

  35. Exporting Flow-Tools Format Files To Comma Separated Values
  • While flow-tools is a great package, the statistical package I like to use is SAS (for information on SAS, see http://www.sas.com/ ), and that meant getting the data into a format that SAS could read.
  • To export a flow-tools data file (be sure you've installed the flow-tools package from http://www.splintered.net/sw/flow-tools/ first):

% flow-export -f2 < ft-v05.2008-01-16.210001+0000 \
> ft-v05.2008-01-16.210001.csv

  [note: spaces matter!]
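  The natural next step (not covered on the original slide) is reading that CSV into SAS. Here is a minimal sketch; the column order and variable names below are an assumption based on a typical flow-export -f2 layout, so check the header line of your .csv file and adjust the INPUT statement (and FIRSTOBS=) to match what you actually received:

data flows;
   /* dsd + dlm=',' handles comma-separated fields; firstobs=2 assumes a one-line header */
   infile 'ft-v05.2008-01-16.210001.csv' dsd dlm=',' firstobs=2 missover;
   input unix_secs unix_nsecs sysuptime exaddr :$15. dpkts doctets
         flow_first flow_last engine_type engine_id
         srcaddr :$15. dstaddr :$15. nexthop :$15.
         input_if output_if srcport dstport prot tos tcp_flags
         src_mask dst_mask src_as dst_as;
run;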
