2013/11/28

WB1 8.12 Auto-RP - Multiple Candidate RPs

• Configure SW2 and SW4 as RPs for group ranges 224.0.0.0-231.255.255.255 and 232.0.0.0-239.255.255.255 respectively.
• Should one RP fail, the other should provide backup for its groups.
• The group 224.110.110.110 should always be switched in dense mode.

------------------------------------------------------------------------------------------------

As discussed in the previous task, Auto-RP mapping agents (MAs) consolidate
the information learned from multiple candidate RPs and advertise the result
in RP-discovery messages. You may want multiple RPs for load balancing,
redundancy, or both.

1) If your goal is load balancing, configure the group-mapping access-lists
so that each RP services its own range of groups.
2) If your aim is redundancy, make both RPs service the same group ranges.
The MA will select the one with the highest IP address.
3) To achieve both load balancing and redundancy, map RP1 to a specific
group range, say 224.0.0.0-231.255.255.255, and add a permit entry for
224.0.0.0 15.255.255.255 at the end of the respective ACL. For RP2, permit
the range 232.0.0.0-239.255.255.255 along with the same
224.0.0.0 15.255.255.255 entry at the end. This ensures that RP1 and RP2
each serve their specific group range, while RP1 covers the remaining groups
when RP2 fails, and vice versa. This works because multicast routers select
the RP on a longest-match basis, and because for overlapping ranges the MA
only advertises the candidate with the highest IP address.

Notice that while the mapping list is compiled by the MA, the final RP
selection is still performed by each multicast router. The longest-match
criterion yields a unique RP selection while also providing redundancy; a
quick way to observe the selection is shown below.
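
For example, you can ask any multicast router which RP it maps a given group
to. The sample groups below are arbitrary picks from each range, and the
expected results assume the addressing used in this lab (SW2 = 150.1.8.8,
SW4 = 150.1.10.10):

! sample groups are arbitrary; expected RPs assume this lab's addressing
show ip pim rp 225.1.1.1
! -> longest match 224.0.0.0/5, so SW2 (150.1.8.8) is selected
show ip pim rp 233.1.1.1
! -> longest match 232.0.0.0/5, so SW4 (150.1.10.10) is selected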

------------------------------------------------------------------------------------------------

R5:
! remove the candidate RP configuration left over from the previous task
no ip pim send-rp-announce Loopback 0 scope 10
SW2:
ip access-list standard SW2_GROUPS
! primary range: 224.0.0.0 - 231.255.255.255
permit 224.0.0.0 7.255.255.255
! redundancy catch-all covering the full 224.0.0.0/4 range
permit 224.0.0.0 15.255.255.255
! deny entries are announced as negative mappings, forcing dense mode
deny 224.110.110.110
!
interface Loopback0
ip pim sparse-dense-mode
!
ip pim send-rp-announce Loopback 0 scope 10 group-list SW2_GROUPS

SW4:
ip access-list standard SW4_GROUPS
! primary range: 232.0.0.0 - 239.255.255.255
permit 232.0.0.0 7.255.255.255
! same catch-all and negative entry as on SW2
permit 224.0.0.0 15.255.255.255
deny 224.110.110.110
!
interface Loopback0
ip pim sparse-dense-mode
!
ip pim send-rp-announce Loopback 0 scope 10 group-list SW4_GROUPS
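
Optionally, before looking at the mapping agent, you can confirm on each
candidate RP that Auto-RP announce messages are actually being sent. The
command below displays Auto-RP statistics (output not shown here):

SW2#show ip pim autorp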

------------------------------------------------------------------------------------------------

Look at the RP mappings on R5. Notice how the MA elects the best candidate
RP for each range. The group 224.110.110.110 is negatively cached and is
thus always processed in dense mode.

Rack1R5#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback0)

Group(s) 224.0.0.0/5
  RP 150.1.8.8 (?), v2v1
    Info source: 150.1.8.8 (?), elected via Auto-RP
         Uptime: 00:04:39, expires: 00:02:21
Group(s) 224.0.0.0/4
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.10.10 (?), elected via Auto-RP
         Uptime: 00:05:25, expires: 00:02:04
  RP 150.1.8.8 (?), v2v1
    Info source: 150.1.8.8 (?), via Auto-RP
         Uptime: 00:04:39, expires: 00:02:22
Group(s) (-)224.110.110.110/32
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.10.10 (?), elected via Auto-RP
         Uptime: 00:01:54, expires: 00:02:03
  RP 150.1.8.8 (?), v2v1
    Info source: 150.1.8.8 (?), via Auto-RP
         Uptime: 00:04:39, expires: 00:02:18
Group(s) 232.0.0.0/5
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.10.10 (?), elected via Auto-RP
         Uptime: 00:01:54, expires: 00:02:04
Rack1R5#


Check the Auto-RP cache on R6. Notice that it holds only a single RP for
each range. SW4 is elected as the RP for all ranges except 224.0.0.0/5.

Rack1R6#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 224.0.0.0/5
  RP 150.1.8.8 (?), v2v1
    Info source: 150.1.5.5 (?), elected via Auto-RP
         Uptime: 00:05:50, expires: 00:02:51
Group(s) 224.0.0.0/4
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.5.5 (?), elected via Auto-RP
         Uptime: 00:06:36, expires: 00:02:52
Group(s) (-)224.110.110.110/32
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.5.5 (?), elected via Auto-RP
         Uptime: 00:03:04, expires: 00:02:53
Group(s) 232.0.0.0/5
  RP 150.1.10.10 (?), v2v1
    Info source: 150.1.5.5 (?), elected via Auto-RP
         Uptime: 00:03:04, expires: 00:02:50
Rack1R6#
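
To exercise the redundancy requirement, you could additionally fail one
candidate RP and watch the other take over. This is a suggested extra test,
not part of the workbook solution, and it assumes SW4's Loopback0 is not
needed for anything else in your topology:

SW4:
interface Loopback0
shutdown

Once SW4's announcements stop and the cached mappings expire (roughly three
minutes, judging by the expires timers above), R6 should elect SW2
(150.1.8.8) via the 224.0.0.0/4 catch-all for all groups. Remember to bring
Loopback0 back up afterwards.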


Now check whether the group 224.110.110.110 is forwarded using PIM dense
mode. Recall that R3 joined this group earlier, and ping it from SW2. First,
however, make sure you temporarily shut down R1's Frame Relay interface.
Otherwise, R1 will send a PIM Prune message to R5, and R5 will remove the
whole interface from the OIL for the group. You cannot override this
behavior with PIM DM unless you use sub-interfaces (see the illustrative
sketch below).


R1:
interface Serial 0/0
shutdown
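
As an aside, the sub-interface alternative mentioned above would look
roughly as follows on R5. This is illustrative only and not part of this
task's solution; the sub-interface number, DLCI, and addressing are
hypothetical. With point-to-point sub-interfaces each PVC gets its own entry
in the OIL, so a prune received from R1 would not remove the traffic flowing
toward R3:

R5:
! hypothetical sketch - one point-to-point sub-interface per PVC
interface Serial0/0.501 point-to-point
ip address 155.1.105.5 255.255.255.0
frame-relay interface-dlci 501
ip pim sparse-dense-mode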

Notice that the group shows no RP in the mroute output (RP 0.0.0.0), exactly
as expected for a dense-mode group.

Rack1SW2#ping 224.110.110.110 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 224.110.110.110, timeout is 2 seconds:

Reply to request 0 from 155.1.0.3, 34 ms
Reply to request 1 from 155.1.0.3, 25 ms
Reply to request 2 from 155.1.0.3, 25 ms
Reply to request 3 from 155.1.0.3, 25 ms
Reply to request 4 from 155.1.0.3, 42 ms
Reply to request 5 from 155.1.0.3, 33 ms
Reply to request 6 from 155.1.0.3, 26 ms
Reply to request 7 from 155.1.0.3, 25 ms
Reply to request 8 from 155.1.0.3, 25 ms
Reply to request 9 from 155.1.0.3, 25 ms
Rack1SW2#


Rack1R5#show ip mroute 224.110.110.110
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.110.110.110), 00:04:51/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, Forward/Sparse-Dense, 00:04:51/00:00:00
    FastEthernet0/0, Forward/Sparse-Dense, 00:04:51/00:00:00
    Serial0/1, Forward/Sparse-Dense, 00:04:51/00:00:00

(155.1.58.8, 224.110.110.110), 00:01:02/00:02:28, flags: T
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.58.8
  Outgoing interface list:
    Serial0/0, Forward/Sparse-Dense, 00:01:02/00:00:00
    Serial0/1, Prune/Sparse-Dense, 00:00:58/00:02:01
         
Rack1R5#
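
Finally, once the verification is complete, restore R1's Frame Relay
interface so the rest of the topology is unaffected:

R1:
interface Serial 0/0
no shutdown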
