Fun with L3 VPNs


  1. Fun with L3 VPNs • aka, cutting VRFs until they bleed all over each other and give me a migraine • Dave Diller, Mid-Atlantic Crossroads • 20 January, 2008

  2. MAX VRFs, a very very very brief history (because you’ve heard it all before) • In the beginning, MAX had no VRFs, and that was OK. • Then Dan added a few, and started proselytizing. • Then, we added some more. And yea, verily, there was more proselytizing. • Now... we have NLR to add to the network. Guess what happens next?

  3. Starting Point • inet.0 consists of: • Internet2 routes • MAX R&E customers • NGIX R&Es (i.e. NISN, DREN, NREN, ESNET, USGS)

  4. Desired Goals • Keep inet.0 sacrosanct • MAX customers need access to one another • NLR gets its own VRF with NGIX R&Es • new BLEND VRF of I2 + NLR with NGIX R&Es
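
A minimal sketch of how the two new instances might be defined, assuming instance-type vrf; the RD and target values here are hypothetical, not the deck's actual config:

routing-instances {
    NLR {
        instance-type vrf;
        /* hypothetical RD/target values */
        route-distinguisher 10886:100;
        vrf-target target:10886:100;
    }
    BLEND {
        instance-type vrf;
        route-distinguisher 10886:200;
        vrf-target target:10886:200;
    }
}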

  5. Routing ‘products’ enabled • Classic I2 R&E paths for I2-members • New NLR-only option for I2-non-members • Pre-mixed blend for those who primarily want R&E redundancy but don’t care about path • Roll-your-own for those who want granular control over the path on a per-destination basis

  6. Crossleaking routes • Two ways to leak routes from one VRF to another: • auto export - aka magic • rib groups

LEAK-inet0 {
    import-rib [ inet.0 BLEND.inet.0 NLR.inet.0 ];
    import-policy LEAK-I2;
}
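
A rib group only does something once it is applied to a protocol; a sketch of how LEAK-inet0 might hang off the I2 BGP session (the group name is hypothetical):

protocols {
    bgp {
        /* hypothetical group name for the I2 session */
        group I2 {
            family inet {
                unicast {
                    /* copy routes learned here into the group's secondary ribs */
                    rib-group LEAK-inet0;
                }
            }
        }
    }
}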

  7. Limitations • One import policy to handle everything • NLR should get: NGIX R&Es and MAX customers • BLEND should get: I2 routes, *plus* the above • Tried to match and set based on routing instance with “to instance BLAH” as policy, but it did not work • Since a route can only be matched and accepted once when importing into multiple places, I had to leak the same set to all of them • Means I2 routes end up in the NLR VRF, and vice versa

  8. Solution • How do we make the best of this nastiness? • Local-pref games • Community games • Basically, pref down upon crossleak, and don’t announce the interloper to customers
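
The slides don't show the leak policy itself; a minimal sketch of the community half, with a hypothetical marker community (the local-pref "games" are elided here):

policy-options {
    /* hypothetical marker so later policies can spot crossleaked routes */
    community LEAKED members 10886:666;
    policy-statement LEAK-I2 {
        term TAG-LEAKED {
            from protocol bgp;
            then {
                community add LEAKED;
                accept;
            }
        }
        term REJECT-REST {
            then reject;
        }
    }
}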

  9. Example: BLEND.inet.0 • Initial inet.0 localprefs: • 50 paths of last resort • 65 I2 backup • 70 I2 • 80 NGIX R&Es • 100 customers • BLEND duplicated that, except that: • customers leaked from inet.0 and NLR.inet.0 as 95 • I2 and NLR both leaked in as 60

  10. Net results / Rationalization • With leaking in from NLR and inet.0, BLEND is now: • 50 paths of last resort • 60 leaked I2 and NLR routes • 80 NGIX R&Es • 95 leaked I2 and NLR customers • 100 native BLEND customers • Why customers leaked as 95? • Prefer VRF-native route at 100 to leaked one at 95 • Why I2/NLR as 60? • In the other VRFs, the interloper is now less preferred

  11. Resultant lprefs in inet.0 • With leaking in from NLR.inet.0, inet.0 is now: • 50 paths of last resort • 60 leaked-in NLR routes • 65 I2 backup • 70 I2 • 80 NGIX R&Es • 95 leaked-in customer routes • 100 customers • With communities, inet.0 members never see NLR routes, yet they “do the right thing” in BLEND.inet.0
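
The "never see" half would plausibly be an export policy toward customers that drops anything carrying the marker; a sketch, assuming the hypothetical LEAKED community from above:

policy-statement EXPORT-TO-CUSTOMER {
    /* hide crossleaked interlopers from inet.0 customers */
    term HIDE-LEAKED {
        from community LEAKED;
        then reject;
    }
    term SEND-REST {
        from protocol bgp;
        then accept;
    }
}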

  12. Benefits • NLR.inet.0 looks exactly the same as inet.0, just with the players reversed • Each VRF has the same routes, just slanted differently • NLR-only routes available to those interested • I2-only routes available to existing members • Redundant and equal mix available as well

  13. Sample leaked route • This is a route in inet.0 that originates as a static route in BLEND. Bits have been changed to protect the guilty:

277.275.262.0/24 (1 entry, 1 announced)
        *Static Preference: 5
                Next-hop reference count: 10
                Next hop: 206.196.177.327 via xe-7/3/0.213, selected
                State: <Secondary Active Int Ext>
                Age: 4w1d 3:23:37
                Task: RT
                Announcement bits (6): 0-KRT 1-RT 4-LDP 6-BGP RT Background 7-Resolve tree 4 8-Resolve tree 5
                AS path: I ()
                Communities: 10886:1 10886:10101
                Primary Routing Table BLEND.inet.0

• View of same route from BLEND:

                Secondary Tables: inet.0 NLR.inet.0

  14. An interesting position... • I’ve now got three complete “R&E routing tables”, each slanted differently: • I2 primary, with NLR present but preffed down • NLR primary, with I2 present but preffed down • NLR and I2 equal, to let BGP do its thing • So, what can we see?

  15. Interesting I2 route stats • As of this morning: • inet.0 has 10222 routes preferring the I2 peer (all routes not heard from customers or NGIX R&Es) • NLR.inet.0 has 2912 routes preferring the I2 peer (unique to I2 since preffed below everything else) • BLEND.inet.0 has 5626 routes preferring I2

  16. Interesting NLR route stats • As of this morning: • NLR.inet.0 has 7761 routes preferring the NLR peer (all routes not heard from customers or NGIX R&Es) • inet.0 has 451 routes preferring the NLR peer (unique to NLR since preffed below everything else) • BLEND.inet.0 has 5057 routes preferring NLR
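
The slides don't say how these counts were pulled; one plausible way is to count active paths learned from a given peer, e.g. (router name and peer address are placeholders):

dave@router> show route table BLEND.inet.0 active-path source-gateway 198.51.100.1 | count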

  17. Observations on stats • BLEND is pretty darn well blended at this point • Initially (six months ago), much of ‘best path’ selection came all the way down to ‘oldest route’ in BLEND, so it was slanted a lot more to I2 as compared to the ‘new kid’ BGP session. • NLR has MANY fewer unique routes, but 2/3 of their total number are preferred when evenly mixed. • Are dual-connected networks doing TE to prefer NLR, or did normal route churn over the last few months even things out?

  18. IPv6 and Multicast • IPv6 posed no problem at all. Duplicated the v4 rib groups and configs for v6; worked like a charm for unicast. • I don’t have multicast working in the VRFs. • SAs come in and work fine, and people in the same VRF can see each other fine. • Crossleaked sources don’t; the tree doesn’t seem to build correctly across the VRF boundary. • Anyone have experience with L3 VPNs and multicast?
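
"Duplicated v4 rib groups" presumably means a v6 twin along these lines (the group and policy names are hypothetical), applied under family inet6 unicast on the v6 sessions:

routing-options {
    rib-groups {
        /* hypothetical v6 twin of LEAK-inet0 */
        LEAK-inet6-0 {
            import-rib [ inet6.0 BLEND.inet6.0 NLR.inet6.0 ];
            import-policy LEAK-I2-v6;
        }
    }
}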

  19. Multicast workaround • Since NLR routes are present in inet.0 (albeit preffed down), multicast-enabled members can receive the routes and have theirs visible on NLR with the right communities applied. • Nowhere near as balanced as BLEND, but it gets their routes onto NLR. • Sub-optimal but functional for now.
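
The community mechanics aren't spelled out; the opt-in half might look roughly like this toward the NLR peer (all names and values hypothetical):

policy-options {
    /* hypothetical opt-in tag set on multicast-enabled member routes */
    community ANNOUNCE-TO-NLR members 10886:20000;
    policy-statement EXPORT-TO-NLR {
        term OPT-IN-MEMBERS {
            from community ANNOUNCE-TO-NLR;
            then accept;
        }
        term REJECT-REST {
            then reject;
        }
    }
}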

  20. Continued progress • Too many compromises; not as ‘clean’ a solution as I wanted. • inet.0 has NLR routes in it, so it is not sacrosanct • too many reindeer games to get lpref working right • After this was implemented, I worked with Juniper to find a better answer. It took quite a while to get anywhere, but eventually we did.

  21. Secret sauce • It turns out there is the potential to match on a route multiple times inside the same policy when importing it into multiple ribs, and it’s bloody obvious in retrospect: • match on “to rib” as part of the policy and have different actions based on the destination routing table! • I tried “to instance” in the early experiments but it did not work, and “to rib” never registered as a possibility. • I feel stupid, but even with escalations it took one month (to the day) from opening the ticket for Juniper to propose this, so it was not obvious to them either. (trying to salve my pride here ;-)

  22. Saucy example • As I said, bloody flipping obvious; this works in the lab:

dave@RE1-lab-t640> show configuration policy-options policy-statement TEST-LEAK
term 10 {
    from community TEST;
    to rib TEST.inet.0;
    then {
        local-preference 79;
        accept;
    }
}
term 15 {
    from community TEST;
    to rib TEST2.inet.0;
    then {
        local-preference 78;
        community set TEST-2;
        accept;
    }
}
term 20 {
    then reject;
}
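
Presumably a policy like this then hangs off the rib group doing the leaking as its import-policy, something like (the group name is hypothetical):

routing-options {
    rib-groups {
        /* hypothetical group importing into both test ribs */
        TEST-LEAK-GROUP {
            import-rib [ inet.0 TEST.inet.0 TEST2.inet.0 ];
            import-policy TEST-LEAK;
        }
    }
}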

  23. In theory... • Which should allow me to do something like this, but I’ve not tested it yet, caveat emptor:

term SEND-I2-to-BLEND {
    from {
        protocol bgp;
        community ABILENE;
    }
    to rib BLEND.inet.0;
    then accept;
}
term REJECT-I2-to-NLR {
    from {
        protocol bgp;
        community ABILENE;
    }
    to rib NLR.inet.0;
    then reject;
}

  24. Next steps, aka v2.0 • Test “to rib” and be sure it does what it should, everywhere it should, giving me the granularity I initially wanted. • Figure out multicast and VRFs (implementing this policy will drop the preffed-down NLR routes from inet.0, so the multicast workaround will not function once things get cleaned up).

  25. questions?
