2013/11/30

WB1 8.31 Anycast RP

8.31 Anycast RP

• Instead of relying on the BSR protocol to distribute the load between the
RPs in AS 200, implement a solution that hides both RPs behind the same
RP IP address 150.1.100.100.
• Every RP should inform the other one of the active sources registered with it.

---------------------------------------------------

Anycast RP is a special RP redundancy scenario that allows the use of
redundant RPs sharing the same IP address. Here “anycast” means that a
group of RPs shares the same IP address, which all multicast routers in the
domain use to build their shared trees. However, the PIM Joins are sent to the
closest RP, based on the unicast routing table. Thus, different routers might
join shared trees rooted at different RPs. Likewise, different DRs will pick
different physical RPs behind the anycast address when registering their
local sources.

In order to maintain consistent source information, MSDP sessions should be
configured between the RPs. This will ensure that all routers joining different
RPs will still have full information about all potential sources in the domain.
Thus, the following are the guidelines to configure Anycast RP:

1) Use the same IP address on all routers as the candidate RP IP address.
Propagate this information via BSR or Auto-RP (an Auto-RP variant is sketched
after this list).

2) Using a different IP address on every router, source the MSDP sessions
from these unique addresses and link all candidate RPs in a mesh. Note that
you might need to manually set the MSDP originator ID to a unique address on
every RP; otherwise the SA messages would carry the shared anycast address
as the RP and be dropped by the peers’ RPF checks.
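For reference, a minimal sketch of the Auto-RP variant mentioned in step 1,
assuming Loopback100 holds the shared anycast address and Loopback0 the
unique per-router address (the scope value of 16 is arbitrary here):

ip pim send-rp-announce Loopback100 scope 16
!
! On whichever router acts as the mapping agent:
ip pim send-rp-discovery Loopback0 scope 16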

Anycast RP itself is a purely intra-domain solution, and does not deal with
inter-domain multicast. It is thus a good example of using an inter-domain
technology (MSDP) inside a single multicast domain to achieve RP redundancy
beyond what the PIM BSR scheme alone provides.

In our scenario, we mix intra-domain MSDP with inter-domain connections.
That is, a domain with Anycast RP peers in another domain with regular RPs.
This results in a looped MSDP topology, which will successfully work due to
MSDP RPF checks.

------------------------------------------------------------------------------------

R5:
interface Loopback100
ip address 150.1.100.100 255.255.255.255
ip pim sparse-mode
!
router eigrp 100
network 150.1.100.100 0.0.0.0
!
ip msdp originator-id Loopback 0
ip msdp peer 150.1.8.8 connect-source Loopback 0
no ip pim rp-candidate Loopback0
ip pim rp-candidate Loopback100


SW2:
interface Loopback100
ip address 150.1.100.100 255.255.255.255
ip pim sparse-mode
!
router eigrp 100
network 150.1.100.100 0.0.0.0
!
ip msdp originator-id Loopback 0
ip msdp peer 150.1.5.5 connect-source Loopback 0
no ip pim rp-candidate Loopback0
ip pim rp-candidate Loopback100


------------------------------------------------------------------------------------

First, join receivers on R4 and SW3 to the multicast group 239.1.1.1. Check that
R4 actually uses the Anycast RP IP address as its RP.

R4:
interface Loopback0
ip pim sparse-mode
ip igmp join-group 239.1.1.1


SW3:
interface Loopback0
ip pim sparse-mode
ip igmp join-group 239.1.1.1


Rack1R4#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:53:08/00:02:37, RP 150.1.100.100, flags: SJCL
  Incoming interface: Serial0/1, RPF nbr 155.1.45.5
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:53:08/00:02:37

Rack1R4#

----------------------------------

Next, source multicast traffic from SW4 and confirm that it actually reaches the receivers.

------------------------------------

Rack1SW4#ping 239.1.1.1 repeat 2
Type escape sequence to abort.
Sending 2, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:

Reply to request 0 from 155.1.79.9, 42 ms
Reply to request 0 from 155.1.45.4, 151 ms
Reply to request 0 from 155.1.45.4, 142 ms
Reply to request 0 from 155.1.79.9, 134 ms
Reply to request 0 from 155.1.79.9, 126 ms
Reply to request 0 from 155.1.79.9, 109 ms
Reply to request 0 from 155.1.45.4, 100 ms
Reply to request 0 from 155.1.79.9, 59 ms
Reply to request 0 from 155.1.79.9, 50 ms


------------------------------------------

Look at the SA caches of R5 and SW1. Both of them should have been updated
by SW2, as SW2 is the closest RP to the source (SW4) and the source registers
with it.

---------------------------------------------

Rack1R5#show ip msdp sa-cache
MSDP Source-Active Cache - 3 entries
(150.1.10.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:01:04/00:05:21, Peer 150.1.8.8
(155.1.10.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:01:04/00:05:21, Peer 150.1.8.8
(155.1.108.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:01:04/00:05:21, Peer 150.1.8.8
Rack1R5#


Rack1SW1#show ip msdp sa-cache
MSDP Source-Active Cache - 3 entries
(150.1.10.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:01:32/00:05:31, Peer 150.1.8.8
Learned from peer 150.1.8.8, RPF peer 150.1.5.5,
SAs received: 4, Encapsulated data received: 0
(155.1.10.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:01:32/00:05:31, Peer 150.1.8.8
Learned from peer 150.1.8.8, RPF peer 150.1.5.5,
SAs received: 6, Encapsulated data received: 2
(155.1.108.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:01:32/00:05:31, Peer 150.1.8.8
Learned from peer 150.1.8.8, RPF peer 150.1.5.5,
SAs received: 6, Encapsulated data received: 2
Rack1SW1#


---------------------------------------------

Now source traffic from AS 100 and make sure both receivers are able to hear it.
Check the SA caches of SW2 and R5 after that, to confirm that SW1 has actually
updated them.

--------------------

Rack1R6#ping 239.1.1.1 repeat 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:

Reply to request 0 from 155.1.79.9, 32 ms
Reply to request 0 from 155.1.45.4, 52 ms
Reply to request 0 from 155.1.146.4, 32 ms
Reply to request 1 from 155.1.79.9, 28 ms
Reply to request 1 from 155.1.45.4, 60 ms
Reply to request 1 from 155.1.146.4, 48 ms


-----------------

Rack1R5#show ip msdp sa-cache
MSDP Source-Active Cache - 4 entries
(150.1.10.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:10:14/00:05:39, Peer 150.1.8.8
(155.1.10.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:10:14/00:05:39, Peer 150.1.8.8
(155.1.67.6, 239.1.1.1), RP 150.1.7.7, MBGP/AS 100, 00:00:15/00:05:55, Peer 150.1.7.7
(155.1.108.10, 239.1.1.1), RP 150.1.8.8, MBGP/AS 200, 00:10:14/00:05:39, Peer 150.1.8.8
Rack1R5#


Rack1SW2#show ip msdp sa-cache
MSDP Source-Active Cache - 1 entries
(155.1.67.6, 239.1.1.1), RP 150.1.7.7, MBGP/AS 100, 00:01:13/00:05:34, Peer 150.1.5.5
Rack1SW2#

WB1 8.30 MSDP

8.30 MSDP

• Change PIM dense mode to PIM sparse mode on all links where it is configured.
• Configure R5 as the RP for AS 200 and SW1 as the RP for AS 100. Use
the BSR method to distribute RP information, and configure BSR border
on the link between R3 and SW1.

• Create an MSDP peering session between SW1 and R5 sourcing it off the
Loopback 0 interfaces.

------------------------------------------------------------------------------------------------------

When implementing inter-domain multicast using PIM SM, each domain
usually has its own RP. In order to allow sources and receivers from different
domains to locate each other, RPs need to exchange the information about
their local active sources. After this information is exchanged between the
RPs, all routers that joined the respective shared trees may build shortest-path
trees toward the actual sources.

MSDP or Multicast Source Discovery Protocol is used to exchange multicast
source information between RPs. It is configured as a TCP connection
between the RPs, and used to exchange the so-called Source Active (SA)
messages. Note that all MSDP peerings are configured manually, using the
command ip msdp peer at both endpoints. When a source in one PIM SM
domain starts sending the multicast traffic, the respective DR will start the
registration process with the local RP. When the local RP receives the PIM
Register message, it replicates it to all of its MSDP neighbors as an SA
message. The SA message contains the IP address of the multicast source
as well as the destination group and the IP address of the RP sending the SA
message. The latter is known as the MSDP originator ID and can be changed
using the command ip msdp originator-id.

When any RP receives a new SA message, it checks if there are local
receivers that have joined the shared tree for the encapsulated group. If there
are any, the message is forwarded down the tree, allowing the receivers to
learn about the sources in another domain. After that, the receivers might join
the SPT toward the source in the other domain. This is only possible if the
source IP address is learned via BGP or some other inter-domain route
exchange procedure. As long as there is an active source in a domain, the
respective RP will send periodic SA messages (with no encapsulated data) to
refresh the active state for this group/source in all other domains.

MSDP allows us to connect RPs in an arbitrary meshed topology, including
loops. In order to prevent SA messages from cycling through such loops,
every MSDP peer forwards SA messages only after they pass an RPF check.
The RPF check is performed based on the RP IP address (originator ID) inside
the message and the IP address of the MSDP peer that relayed the message.
If the MSDP peer is on the shortest path towards the originating RP, the
message is accepted; otherwise it is dropped. This RPF check requires full
routing information from the other domains in order to discover routes to the
other RPs. If you have a stub multicast domain lacking full BGP information,
you may use the command ip msdp default-peer to identify the upstream
RP that forwards SA messages. RPF checks are not applied to default peers,
and all SA messages from them are accepted.
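As a sketch of that stub-domain case, assuming a hypothetical upstream RP
reachable at 192.0.2.1:

ip msdp peer 192.0.2.1 connect-source Loopback0
ip msdp default-peer 192.0.2.1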

Our scenario is a bit tricky, as it has two RPs in AS 200. However, the BSR
protocol ensures that all routers in AS 200 will select the same RP for a given
group. Thus, you only need to peer SW1 with both SW2 and R5 via MSDP, but
there is no need to peer SW2 with R5 via MSDP.


----------------------------------------------------------------------------------------

R5:
ip msdp peer 150.1.7.7 connect-source Loopback 0 remote-as 100

SW2:
ip msdp peer 150.1.7.7 connect-source Loopback 0 remote-as 100

SW1:
ip msdp peer 150.1.5.5 connect-source Loopback 0 remote-as 200
ip msdp peer 150.1.8.8 connect-source Loopback 0 remote-as 200

----------------------------------------------------------------------------------------

In AS 200, the BSR is SW4. We need to load balance between the RPs, so we modify the rp-hash.

We want to ensure that we have two multicast groups that map to different RPs
inside AS 200. In order to make this happen, we will have to alter the “rp-hash”
mask length used on SW4:

Rack1SW4(config)# no ip pim bsr-candidate loopback0 0
Rack1SW4(config)# ip pim bsr-candidate loopback0 31

Now the groups 239.1.1.1 and 239.1.1.2 map to RPs R5 and SW2 respectively:

----------------------------------------------------------------------------------------

Rack1R5#show ip pim rp-hash 239.1.1.1       
  RP 150.1.5.5 (?), v2
    Info source: 150.1.10.10 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 22:06:33, expires: 00:02:25
  PIMv2 Hash Value (mask 255.255.255.254)
    RP 150.1.5.5, via bootstrap, priority 0, hash value 1362971077
    RP 150.1.8.8, via bootstrap, priority 0, hash value 718054422


Rack1R5#show ip pim rp-hash 239.1.1.2
  RP 150.1.8.8 (?), v2
    Info source: 150.1.10.10 (?), via bootstrap, priority 0, holdtime 150
         Uptime: 22:06:24, expires: 00:02:26
  PIMv2 Hash Value (mask 255.255.255.254)
    RP 150.1.5.5, via bootstrap, priority 0, hash value 443334807
    RP 150.1.8.8, via bootstrap, priority 0, hash value 1364246456
Rack1R5#

---------------------------------------------------------------

Now configure receivers in both autonomous systems to join these groups.

R4:
interface Loopback0
ip pim sparse-mode
ip igmp join-group 239.1.1.1
ip igmp join-group 239.1.1.2

SW3:
interface Loopback0
ip pim sparse-mode
ip igmp join-group 239.1.1.1
ip igmp join-group 239.1.1.2

---------------------------------------------------------------

Initially, every router joins the shared tree in its own domain.

Rack1SW3#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:00:42/00:02:35, RP 150.1.7.7, flags: SJCL
  Incoming interface: Vlan79, RPF nbr 155.1.79.7
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:42/00:02:35

(*, 239.1.1.2), 00:00:41/00:02:36, RP 150.1.7.7, flags: SJCL
  Incoming interface: Vlan79, RPF nbr 155.1.79.7
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:41/00:02:36

(*, 224.0.1.40), 22:10:49/00:02:20, RP 150.1.7.7, flags: SJCL
  Incoming interface: Vlan79, RPF nbr 155.1.79.7
  Outgoing interface list:
    Vlan9, Forward/Sparse, 22:10:49/00:02:20

Rack1SW3#

Rack1R4#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:01:55/00:02:46, RP 150.1.5.5, flags: SJCL
  Incoming interface: Serial0/1, RPF nbr 155.1.45.5
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:01:55/00:02:46

(*, 239.1.1.2), 00:01:54/00:02:54, RP 150.1.8.8, flags: SJCL
  Incoming interface: Serial0/1, RPF nbr 155.1.45.5
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:01:54/00:02:54

(*, 224.0.1.40), 22:14:22/00:02:35, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 22:14:22/00:02:35

Rack1R4#

--------------------------------------------

Enable MSDP debugging on SW1 and start pinging group 239.1.1.1 from SW4:

Rack1SW1#debug ip msdp detail
MSDP Detail debugging is on

Rack1SW4#ping 239.1.1.1 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:
Reply to request 0 from 155.1.79.9, 68 ms
Reply to request 0 from 155.1.45.4, 135 ms
Reply to request 0 from 155.1.45.4, 126 ms
Reply to request 0 from 155.1.79.9, 118 ms
Reply to request 0 from 155.1.79.9, 101 ms
Reply to request 0 from 155.1.45.4, 76 ms
Reply to request 1 from 155.1.45.4, 42 ms
Reply to request 1 from 155.1.79.9, 109 ms
Reply to request 1 from 155.1.79.9, 101 ms
Reply to request 1 from 155.1.45.4, 76 ms
Reply to request 1 from 155.1.45.4, 59 ms
Reply to request 1 from 155.1.79.9, 51 ms

Notice that SW1 received Source Active messages for the sources on SW4.
Since SW4 sources the ping out all of its PIM-enabled interfaces, there is a
separate SA message for every registered source address. The actual sources
are registered with the RP located in AS 200.

----------------------------------------------------

Rack1SW1#
Nov 30 16:42:18.959 TPE: MSDP(0): Received 120-byte TCP segment from 150.1.5.5
Nov 30 16:42:18.959 TPE: MSDP(0): Append 120 bytes to 0-byte msg 22 from 150.1.5.5, qs 1
Nov 30 16:42:18.959 TPE: MSDP(0): WAVL Insert SA Source 155.1.10.10 Group 239.1.1.1 RP 150.1.5.5 Successful
Nov 30 16:42:18.959 TPE: MSDP(0): Forward decapsulated SA data for (155.1.10.10, 239.1.1.1) on Vlan79
Nov 30 16:42:18.967 TPE: MSDP(0): Received 120-byte TCP segment from 150.1.5.5
Nov 30 16:42:18.967 TPE: MSDP(0): Append 120 bytes to 0-byte msg 23 from 150.1.5.5, qs 1
Nov 30 16:42:18.967 TPE: MSDP(0): WAVL Insert SA Source 155.1.108.10 Group 239.1.1.1 RP 150.1.5.5 Successful
Rack1SW1#
Nov 30 16:42:18.967 TPE: MSDP(0): Forward decapsulated SA data for (155.1.108.10, 239.1.1.1) on Vlan79
Nov 30 16:42:18.976 TPE: MSDP(0): Received 120-byte TCP segment from 150.1.5.5
Nov 30 16:42:18.976 TPE: MSDP(0): Append 120 bytes to 0-byte msg 24 from 150.1.5.5, qs 1
Nov 30 16:42:18.976 TPE: MSDP(0): WAVL Insert SA Source 150.1.10.10 Group 239.1.1.1 RP 150.1.5.5 Successful
Nov 30 16:42:18.976 TPE: MSDP(0): Forward decapsulated SA data for (150.1.10.10, 239.1.1.1) on Vlan79
Rack1SW1#


------------------------------------

Now check that SW3 has joined the SPTs towards the sources in the other AS.
Notice that the RPF information for these sources is taken from MBGP updates,
not the unicast routing table.

----------------------------------------------

Rack1SW3#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:08:31/00:02:26, RP 150.1.7.7, flags: SJCL
  Incoming interface: Vlan79, RPF nbr 155.1.79.7
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:08:31/00:02:50

(150.1.10.10, 239.1.1.1), 00:00:38/00:02:26, flags: LJT
  Incoming interface: Vlan79, RPF nbr 155.1.79.7, Mbgp
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:38/00:02:50

(155.1.108.10, 239.1.1.1), 00:00:38/00:02:26, flags: LJT
  Incoming interface: Vlan79, RPF nbr 155.1.79.7, Mbgp
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:38/00:02:50

(155.1.10.10, 239.1.1.1), 00:00:38/00:02:26, flags: LJT
  Incoming interface: Vlan79, RPF nbr 155.1.79.7, Mbgp
  Outgoing interface list:
    Loopback0, Forward/Sparse, 00:00:39/00:02:50

Rack1SW3#

----------------------------

Now make sure the SPTs are built across the Frame-Relay link, as this is the
preferred path for multicast traffic. Use the show ip mroute command to
accomplish this.

-----------------------------------

Rack1SW1#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:10:18/00:03:08, RP 150.1.7.7, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan79, Forward/Sparse, 00:10:18/00:03:08

(155.1.10.10, 239.1.1.1), 00:01:29/00:01:31, flags: M
  Incoming interface: FastEthernet1/0/3, RPF nbr 155.1.37.3, Mbgp
  Outgoing interface list:
    Vlan79, Forward/Sparse, 00:01:29/00:03:08

(155.1.108.10, 239.1.1.1), 00:01:29/00:01:31, flags: M
  Incoming interface: FastEthernet1/0/3, RPF nbr 155.1.37.3, Mbgp
  Outgoing interface list:
    Vlan79, Forward/Sparse, 00:01:29/00:03:08

(150.1.10.10, 239.1.1.1), 00:01:29/00:01:31, flags: M
  Incoming interface: FastEthernet1/0/3, RPF nbr 155.1.37.3, Mbgp
  Outgoing interface list:
    Vlan79, Forward/Sparse, 00:01:29/00:03:07

Rack1SW1#

-----------------

Rack1R3#show ip mroute 239.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:07:54/stopped, RP 150.1.7.7, flags: SP
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.37.7
  Outgoing interface list: Null

(155.1.10.10, 239.1.1.1), 00:03:21/00:00:08, flags:
  Incoming interface: Serial1/0.1, RPF nbr 155.1.0.5, Mbgp
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:03:21/00:03:05

(155.1.108.10, 239.1.1.1), 00:03:23/00:00:06, flags:
  Incoming interface: Serial1/0.1, RPF nbr 155.1.0.5, Mbgp
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:03:23/00:03:03

(150.1.10.10, 239.1.1.1), 00:03:23/00:00:06, flags:
  Incoming interface: Serial1/0.1, RPF nbr 155.1.0.5, Mbgp
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:03:23/00:03:05
         
Rack1R3#


----------------------

You may also use the mtrace command to trace the multicast delivery tree from
the leaf to the root. The first parameter is the source address and the second
parameter is the destination group. This command queries the neighbors for the
upstream multicast path and tells you the method used for the RPF check at
every router. Notice that inside AS 100 the RPF checks are performed using MBGP.

-----------------------------

Rack1R6#mtrace 150.1.10.10 239.1.1.1
Type escape sequence to abort.
Mtrace from 150.1.10.10 to 155.1.67.6 via group 239.1.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  155.1.67.6
-1  155.1.67.6 PIM/MBGP  [150.1.10.0/24]
-2  155.1.67.7 PIM/MBGP Reached RP/Core [150.1.10.0/24]
-3  155.1.37.3 PIM/MBGP  [150.1.10.0/24]

-4  155.1.0.5 [AS 200] PIM Reached RP/Core [150.1.10.0/24]
-5  155.1.58.8 [AS 200] PIM  [150.1.10.0/24]
-6  155.1.108.10 [AS 200] PIM  [150.1.10.0/24]
Rack1R6#


Rack1SW3#mtrace 150.1.10.10 239.1.1.1
Type escape sequence to abort.
Mtrace from 150.1.10.10 to 155.1.79.9 via group 239.1.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  155.1.79.9
-1  155.1.79.9 PIM/MBGP  [150.1.10.0/24]
-2  155.1.79.7 PIM/MBGP Reached RP/Core [150.1.10.0/24]
-3  155.1.37.3 PIM/MBGP  [150.1.10.0/24]

-4  155.1.0.5 [AS 200] PIM Reached RP/Core [150.1.10.0/24]
-5  155.1.58.8 [AS 200] PIM  [150.1.10.0/24]
-6  155.1.108.10 [AS 200] PIM  [150.1.10.0/24]
Rack1SW3#


-----------------------------

You may now repeat the tests for the group 239.1.1.2 and see that it works as well.

-----------------------------

Rack1SW4#ping 239.1.1.2 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.1.1.2, timeout is 2 seconds:

Reply to request 0 from 155.1.79.9, 33 ms
Reply to request 0 from 155.1.45.4, 84 ms
Reply to request 0 from 155.1.45.4, 75 ms
Reply to request 0 from 155.1.79.9, 67 ms
Reply to request 0 from 155.1.79.9, 58 ms
Reply to request 0 from 155.1.45.4, 50 ms


-----------------------------

Rack1SW3#mtrace 150.1.10.10 239.1.1.2
Type escape sequence to abort.
Mtrace from 150.1.10.10 to 155.1.79.9 via group 239.1.1.2
From source (?) to destination (?)
Querying full reverse path...
 0  155.1.79.9
-1  155.1.79.9 PIM/MBGP  [150.1.10.0/24]
-2  155.1.79.7 PIM/MBGP Reached RP/Core [150.1.10.0/24]
-3  155.1.37.3 PIM/MBGP  [150.1.10.0/24]

-4  155.1.0.5 [AS 200] PIM  [150.1.10.0/24]
-5  155.1.58.8 [AS 200] PIM Reached RP/Core [150.1.10.0/24]
-6  155.1.108.10 [AS 200] PIM  [150.1.10.0/24]
Rack1SW3#


-----------------------------

WB1 8.29 Multicast BGP Extension

8.29 Multicast BGP Extension
• Enable multicast exchange between AS 100 and AS 200 on both peering links.
• Multicast traffic should prefer to be routed across the Frame-Relay link,
using the Fast Ethernet link as backup.

-------------------------------------------------------------------------------------

A multicast BGP extension is commonly needed when you plan to exchange
multicast traffic between two different administrative domains, i.e. different
autonomous systems. To achieve this goal, you need to fulfill the following tasks:

1) Enable PIM between the two domains, to allow signaling of shared and
shortest-path trees between them.

1.1) PIM SM is most often used for multicast traffic exchange between different
domains. Each domain usually has its own set of RPs, and thus you should
prevent BSR/Auto-RP information from leaking between the domains.

1.2) You must exchange information about active sources between the RPs in
every domain. Since the RPs are separated, one domain cannot easily learn
about the sources in another domain. As we’ll see later, a special protocol called
MSDP is used for this purpose.

2) In order to facilitate multicast traffic forwarding, you need to exchange
information on routes towards the multicast sources in each domain, so that
routers can perform their RPF checks correctly. PIM uses the unicast routing
table to perform these RPF checks, and thus it may use routes learned via
either an IGP or BGP. However, BGP is the protocol most commonly used to
exchange this routing information.

In some cases, you may want to apply different policies to the unicast routes
exchanged via BGP and to the information about multicast sources. This is
possible thanks to the Multi-Protocol BGP extensions. Using a special address
family, you may exchange prefixes under the “multicast” address family, and
apply a different policy to this information. These prefixes are interpreted in the
same way as the ip mroute command information – they are used for RPF
checks on the router that receives them. That is, if a prefix is learned via the
multicast BGP extension, the RPF neighbor is assumed to be towards the
next-hop IP address found in the update. If needed, BGP performs recursive
routing lookups for the next hop via the IGP routing table to find the immediate
RPF neighbor. Unlike the ip mroute command, which is purely local, this
information is propagated via BGP to every neighbor configured for the
multicast address family.
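For comparison, the purely local equivalent is a static mroute. A hypothetical
entry that forces the RPF check for sources in 10.1.1.0/24 towards the neighbor
192.0.2.1 would be:

ip mroute 10.1.1.0 255.255.255.0 192.0.2.1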

Using separate policies for multicast inter-domain RPF information allows the use
of different inter-domain links for unicast and multicast traffic. Or you may
selectively filter out certain multicast sources from another domain, while leaving
unicast routes intact.

These tasks require us to enable BGP multicast extensions on all BGP routers.
Notice the use of peer-group under the multicast address family. This is needed
to propagate multicast RPF information through both domains, as route-reflection
is configured separately per address family. Notice the use of AS-PATH
prepending to designate the primary path. Multicast prefixes are subject to the
same best-path selection procedure, and thus you may use the same methods of
path manipulation you used with unicast prefixes. Finally, PIM is activated on the
links connecting the two autonomous systems. The PIM BSR border command is
used to stop BSR information from leaking into AS 100.

------------------------------------------------------------------------------------------------

SW1:
router bgp 100
address-family ipv4 multicast
neighbor IBGP route-reflector-client
neighbor 150.1.3.3 peer-group IBGP
neighbor 150.1.6.6 peer-group IBGP
neighbor 150.1.9.9 peer-group IBGP

R3:
router bgp 100
address-family ipv4 multicast
neighbor 155.1.0.5 activate
redistribute ospf 1
neighbor 150.1.7.7 activate
neighbor 150.1.7.7 next-hop-self
!
interface Serial 1/0.1
ip pim sparse-mode

R6:
route-map PREPEND
set as-path prepend 100 100 100
!
router bgp 100
address-family ipv4 multicast
neighbor 155.1.146.4 activate
redistribute ospf 1
neighbor 155.1.146.4 route-map PREPEND out
neighbor 150.1.7.7 activate
neighbor 150.1.7.7 next-hop-self
!
interface FastEthernet 0/0.146
ip pim sparse-mode

SW3:
router bgp 100
address-family ipv4 multicast
neighbor 150.1.7.7 activate

-------------------------------------------------

R5:
router bgp 200
address-family ipv4 multicast
neighbor 155.1.0.3 activate
redistribute eigrp 100
neighbor IBGP route-reflector-client
neighbor 150.1.4.4 peer-group IBGP
neighbor 150.1.8.8 peer-group IBGP
neighbor 150.1.10.10 peer-group IBGP
neighbor IBGP next-hop-self
!
interface Serial 0/0/0
ip pim sparse-mode
ip pim bsr-border

R4:
route-map PREPEND
set as-path prepend 200 200 200
!
router bgp 200
address-family ipv4 multicast
neighbor 155.1.146.6 activate
redistribute eigrp 100
neighbor 155.1.146.6 route-map PREPEND out
neighbor 150.1.5.5 activate
!
interface FastEthernet 0/1
ip pim sparse-mode
ip pim bsr-border

SW2:
router bgp 200
address-family ipv4 multicast
neighbor 150.1.5.5 activate

SW4:
router bgp 200
address-family ipv4 multicast
neighbor 150.1.5.5 activate

-------------------------------------------------
Use the regular show BGP commands to check that the multicast address family
is activated between the routers. Repeat it on every BGP router to make sure
you didn’t miss anything.

Rack1SW1# show ip bgp ipv4 multicast summary
BGP router identifier 150.1.7.7, local AS number 100
BGP table version is 26, main routing table version 26
20 network entries using 2340 bytes of memory
28 path entries using 1344 bytes of memory
18/10 BGP path/bestpath attribute entries using 2520 bytes of memory
2 BGP AS-PATH entries using 48 bytes of memory
0 BGP route-map cache entries using 0 bytes of memory
0 BGP filter-list cache entries using 0 bytes of memory
BGP using 6252 total bytes of memory
BGP activity 42/2 prefixes, 227/168 paths, scan interval 60 secs

Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
150.1.3.3       4   100    1339    1382       26    0    0 00:18:18       19
150.1.6.6       4   100    1391    1386       26    0    0 00:16:17        9
150.1.9.9       4   100    1291    1391       26    0    0 00:15:22        0
Rack1SW1#


Before testing can proceed, we need to ensure that EIGRP is running on the
S0/1/0 interface of R5:

R5:
router eigrp 100
network 155.1.45.5 0.0.0.0

Next, check the BGP tables on the border routers to make sure that the best paths
toward the multicast prefixes are across the Frame-Relay cloud:

Rack1R4#show ip bgp ipv4 multicast regexp 100$
BGP table version is 31, local router ID is 150.1.4.4
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*>i150.1.3.0/24     150.1.5.5                0    100      0 100 ?
*                   155.1.146.6              3             0 100 100 100 100 ?
*>i150.1.6.0/24     150.1.5.5                0    100      0 100 ?
*                   155.1.146.6              0             0 100 100 100 100 ?
*>i150.1.6.6/32     150.1.5.5                3    100      0 100 ?
*                   155.1.146.6                            0 100 100 100 100 ?
*>i150.1.7.0/24     150.1.5.5                2    100      0 100 ?
*                   155.1.146.6              2             0 100 100 100 100 ?
*>i150.1.9.9/32     150.1.5.5                3    100      0 100 ?
*                   155.1.146.6              3             0 100 100 100 100 ?
*>i155.1.7.0/24     150.1.5.5                2    100      0 100 ?
*                   155.1.146.6              2             0 100 100 100 100 ?
*>i155.1.9.0/24     150.1.5.5                3    100      0 100 ?
*                   155.1.146.6              3             0 100 100 100 100 ?
*>i155.1.37.0/24    150.1.5.5                0    100      0 100 ?
*                   155.1.146.6              2             0 100 100 100 100 ?
*>i155.1.67.0/24    150.1.5.5                2    100      0 100 ?
   Network          Next Hop            Metric LocPrf Weight Path
*                   155.1.146.6              0             0 100 100 100 100 ?
*>i155.1.79.0/24    150.1.5.5                2    100      0 100 ?
*                   155.1.146.6              2             0 100 100 100 100 ?
Rack1R4#


Check the best paths in the other AS also.

Rack1R6#show ip bgp ipv4 multicast regexp 200$
BGP table version is 28, local router ID is 150.1.6.6
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
*>i150.1.4.0/24     150.1.3.3          2297856    100      0 200 ?
*                   155.1.146.4              0             0 200 200 200 200 ?
*  150.1.5.0/24     155.1.146.4        2297856             0 200 200 200 200 ?
*>i                 150.1.3.3                0    100      0 200 ?
*  150.1.8.0/24     155.1.146.4        2300416             0 200 200 200 200 ?
*>i                 150.1.3.3           156160    100      0 200 ?
*  150.1.10.0/24    155.1.146.4        2302976             0 200 200 200 200 ?
*>i                 150.1.3.3           158720    100      0 200 ?
*  155.1.0.0/24     155.1.146.4              0             0 200 200 200 200 ?
*>i                 150.1.3.3                0    100      0 200 ?
*  155.1.8.0/24     155.1.146.4        2172672             0 200 200 200 200 ?
*>i                 150.1.3.3            28416    100      0 200 ?
*  155.1.10.0/24    155.1.146.4        2175232             0 200 200 200 200 ?
*>i                 150.1.3.3            30976    100      0 200 ?
*>i155.1.45.0/24    150.1.3.3                0    100      0 200 ?

*                   155.1.146.4              0             0 200 200 200 200 ?
*  155.1.58.0/24    155.1.146.4        2172416             0 200 200 200 200 ?
   Network          Next Hop            Metric LocPrf Weight Path
*>i                 150.1.3.3                0    100      0 200 ?
*  155.1.108.0/24   155.1.146.4        2174976             0 200 200 200 200 ?
*>i                 150.1.3.3            30720    100      0 200 ?
Rack1R6#

2013/11/29

WB1 8.28 DVMRP Interoperability

8.28 DVMRP Interoperability

• Configure R4 for this task. Enable multicast routing and configure PIM
dense mode on the VLAN 146 and Loopback 0 interfaces.
• Create a DVMRP tunnel sourced off the Loopback 0 interface with a (nonexistent)
destination 204.12.X.100 (where X is your rack number).
• Advertise the VLAN 146 subnets to the DVMRP backbone with an offset
value of 3. Configure the use of DVMRP routes for RPF check by PIM on
VLAN 146 interface.

-----------------------------------------------------------------------------------

DVMRP or “Distance-Vector Multicast Routing Protocol” is defined in RFC 1075.
This protocol was implemented in the UNIX mrouted daemon and was the first to
gain more or less widespread adoption. DVMRP is based on the RIP routing
protocol, and uses IGMPv1 messages to carry its routing information. For many
years, DVMRP was used as the core routing protocol of the MBONE – an
experimental set of multicast-capable networks used to facilitate multicast
testing. However, DVMRP was never a scalable protocol, and nowadays most
enterprises use PIM as the standards-based multicast routing protocol.

Similar to RIP, DVMRP propagates distance-vector information, but routing
updates carry subnets describing multicast sources and metrics to reach them.
The metric used by DVMRP is the same hop count used by RIP. When a router
receives a DVMRP update, it extracts the subnets contained in the update along
with their metrics and stores them in a separate multicast routing table. This table
is used to perform RPF checks only, not to route packets based on their
destination addresses.

DVMRP implements TRPB – Truncated Reverse Path Broadcasting, which is
another name for constrained RPF flooding. With respect to multicast flooding,
DVMRP is very similar to PIM Dense Mode – traffic is flooded subject to RPF
checks, and routers with no subscribers then send “prune” messages upstream
toward the source.

Cisco IOS routers do not implement complete DVMRP stacks and rely on PIM for
multicast routing and signaling. However, Cisco routers might be configured to
border with a DVMRP domain and receive DVMRP routing updates. IOS routers
are capable of storing DVMRP information and using it to perform RPF checks
on packets received from the DVMRP cloud. At the same time, the routers will
generate DVMRP updates to cover sources in the PIM cloud and let DVMRP
systems receive multicast feeds. IOS routers may generate DVMRP prune and
graft messages in response to the respective PIM messages.

Use the command ip dvmrp interoperability to configure the IOS router
for interoperation with DVMRP systems. This will allow the router to accept/send
DVMRP updates and populate the multicast route cache. In order to use this
information on a particular interface, enter the ip dvmrp unicast-routing
interface-level command. This will make the router prefer cached DVMRP
information over unicast routes for RPF checks. Like RIP, DVMRP performs
automatic summarization when crossing major subnet boundaries. You may
disable this behavior using the interface-level command
no ip dvmrp auto-summary.
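Put together, a minimal interface-level sketch of both commands might look
like this:

interface FastEthernet0/1
 ip dvmrp unicast-routing
 no ip dvmrp auto-summary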

By default, the local router will only advertise directly connected subnets in
DVMRP updates. If you want to advertise more information, use the
interface-level command
ip dvmrp metric <hops> [list <access-list>] [protocol <process-id>]
to redistribute static subnets or IGP networks to all DVMRP neighbors on the
interface. If you omit the protocol specification, the command will only advertise
connected routes. If you want to filter out certain updates, use a metric value of
zero to remove the matching subnets from the DVMRP updates.
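For example, a hypothetical pair of statements that advertises EIGRP-learned
subnets but suppresses 10.0.0.0/8 from the reports (access-list 41 is made up
for this sketch) could look like:

access-list 41 permit 10.0.0.0 0.255.255.255
!
interface FastEthernet0/1
 ip dvmrp metric 0 list 41
 ip dvmrp metric 3 eigrp 100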

Another thing you might want to configure is a DVMRP tunnel. DVMRP tunnels
are supported on IOS routers to connect to remote DVMRP clouds over
non-multicast networks. Notice that you cannot configure a DVMRP tunnel
between two IOS routers, and these configurations are always unidirectional.
DVMRP tunnels are commonly used to link your multicast network with the
MBONE. Use the following syntax to configure a DVMRP tunnel:

interface tunnel 0
 ip unnumbered Loopback0
 ip pim dense-mode
 tunnel source Loopback0
 tunnel destination <IP in MBONE>
 tunnel mode dvmrp

Notice that PIM is enabled on the interface, to allow multicast feeds to flow to the
MBONE, even though no real PIM adjacencies are ever established over the
tunnel.


-----------------------------------------------------------------------------------

R4:
ip dvmrp interoperability
!
access-list 40 permit 155.1.0.0 0.0.255.255
!
interface FastEthernet0/1
 ip dvmrp unicast-routing
 ip dvmrp metric 3 list 40 eigrp 100
!
interface tunnel 0
 ip unnumbered Loopback0
 ip pim dense-mode
 tunnel source Loopback0
 tunnel destination 204.12.1.100
 tunnel mode dvmrp

-----------------------------------------------------------------------------------

There is no way to fully verify DVMRP interoperability unless you have a real
DVMRP router. However, you may verify that the router actually generates
DVMRP updates using debug commands.


Rack1R4#debug ip dvmrp detail
DVMRP(0): Building Report for FastEthernet0/1
DVMRP(0): Report 155.1.146.0/24, metric 32
DVMRP(0): Report 155.1.10.0/24, metric 1
DVMRP(0): Report 155.1.8.0/24, metric 1
DVMRP(0): Report 155.1.5.0/24, metric 1
DVMRP(0): Report 155.1.58.0/24, metric 1
DVMRP(0): Report 155.1.45.0/24, metric 1
DVMRP(0): Report 155.1.67.0/24, metric 32
DVMRP(0): Report 155.1.108.0/24, metric 1
DVMRP(0): Report 150.1.6.0/24, metric 32
DVMRP(0): Report 150.1.5.0/24, metric 1
DVMRP(0): Report 150.1.10.0/24, metric 1
DVMRP(0): Report 150.1.8.0/24, metric 1
DVMRP(0): Delay Report on FastEthernet0/1
DVMRP(0): 12 unicast, 0 MBGP, 0 DVMRP routes advertised
DVMRP(0): Send Report on FastEthernet0/1 to 224.0.0.4

WB1 8.27 Source Specific Multicast

8.27 Source Specific Multicast

• Using the default multicast group range, enable PIM SSM functionality on your network.
• R6 should join the multicast feed (150.1.10.10,232.6.6.6).
• Ensure that the “ip pim bsr-border” commands have been removed from R1 and R4 VLAN 146 interfaces.

---------------------------------------------------------------------

Classic multicast delivery technologies use IGMPv2 and PIM DM/SM and are
known as “Any Source Multicast” or ASM. That is, receivers agree to accept
traffic from any source. This is why Rendezvous Points are actually needed in
PIM SM – to allow receivers to discover new sources. The core of the PIM SSM
protocol is the use of IGMPv3 signaling by the clients. This client-side protocol
allows receivers to explicitly specify the sources that they want to listen to. That
is, a host may explicitly ask to join group G at source S. PIM SSM works in
conjunction with IGMPv3 and builds only shortest-path trees (SPTs) towards the
sources. There are no shared trees in PIM SSM and no RPs are used. Thus,
there is no need to use auxiliary protocols like BSR or Auto-RP to distribute RP
information.

Notice that source discovery is outside the scope of PIM SSM and IGMPv3, and
must be accomplished via some other means, like global directory services.

Configuring PIM SSM is relatively straightforward, since it uses regular PIM
messages. You just need to specify the range of groups that are using SSM
signaling with the command ip pim ssm {default | range <Standard-ACL>}.
The default keyword means that the range 232.0.0.0/8 will be used for SSM.
For the groups in the SSM range, no shared trees are allowed and (*,G) joins
are dropped.
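If a non-default range were required instead, a sketch using a standard ACL
(the group range here is hypothetical) would be:

access-list 10 permit 239.232.0.0 0.0.255.255
!
ip pim ssm range 10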

The second step in configuring PIM SSM is enabling IGMPv3 on the interfaces
connected to the receivers capable of using this protocol. Without IGMPv3 there
can be no use of PIM SSM, as no other IGMP version allows receivers to
explicitly select the sources used to build the shortest-path trees.

---------------------------------------------------------------------

R1, R3, R4, R5, SW2, SW4:
ip pim ssm default

R1:
interface FastEthernet 0/0
ip igmp version 3

R6:
interface FastEthernet 0/0.146
ip igmp version 3
ip igmp join-group 232.6.6.6 source 150.1.10.10

---------------------------------------------------------------------

PIM SSM is generally easier to configure than ASM, as it does not require the
complicated RP infrastructure. All you need to do is verify the SPT toward the
explicit source.

Rack1R1#show ip igmp groups 232.6.6.6 detail
Flags: L - Local, U - User, SG - Static Group, VG - Virtual Group,
       SS - Static Source, VS - Virtual Source,
       Ac - Group accounted towards access control limit

Interface:      FastEthernet0/0
Group:          232.6.6.6
Flags:          SSM
Uptime:         00:00:44
Group mode:     INCLUDE
Last reporter:  155.1.146.6
Group source list: (C - Cisco Src Report, U - URD, R - Remote, S - Static,
                    V - Virtual, M - SSM Mapping, L - Local,
                    Ac - Channel accounted towards access control limit)
  Source Address   Uptime    v3 Exp   CSR Exp   Fwd  Flags
  150.1.10.10      00:00:44  00:02:54  stopped   Yes  R
Rack1R1#
Rack1R1#
Rack1R1#show ip mroute 232.6.6.6 150.1.10.10
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(150.1.10.10, 232.6.6.6), 00:01:24/00:02:33, flags: sTI
  Incoming interface: Serial0/0.1, RPF nbr 155.1.0.5
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:01:24/00:02:33

Rack1R1#

Rack1R5#show ip mroute 232.6.6.6 150.1.10.10
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(150.1.10.10, 232.6.6.6), 00:02:05/00:03:22, flags: sT
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.58.8
  Outgoing interface list:
    Serial0/0, 155.1.0.1, Forward/Sparse, 00:02:05/00:03:22

Rack1R5#

Rack1SW2#show ip mroute 232.6.6.6 150.1.10.10
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(150.1.10.10, 232.6.6.6), 00:02:45/00:02:41, flags: sT
  Incoming interface: Port-channel1, RPF nbr 155.1.108.10
  Outgoing interface list:
    Vlan58, Forward/Sparse, 00:02:45/00:02:41

Rack1SW2#

Rack1SW4#show ip mroute 232.6.6.6 150.1.10.10
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(150.1.10.10, 232.6.6.6), 00:03:25/00:03:03, flags: sT
  Incoming interface: Loopback0, RPF nbr 0.0.0.0
  Outgoing interface list:
    Port-channel1, Forward/Sparse, 00:03:25/00:03:03

Rack1SW4#       

Rack1SW4#ping 232.6.6.6 repeat 3

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 232.6.6.6, timeout is 2 seconds:

Nov 29 17:31:46.007 TPE: IP(0): s=155.1.10.10 (Vlan10) d=232.6.6.6 id=181, ttl=254, prot=1, len=114(100), mroute olist null
Nov 29 17:31:46.007 TPE: IP(0): s=150.1.10.10 (Loopback0) d=232.6.6.6 (Port-channel1) id=181, ttl=254, prot=1, len=100(100), mforward
Reply to request 0 from 155.1.146.6, 59 ms

Reply to request 0 from 155.1.146.6, 59 ms
Reply to request 0 from 155.1.146.6, 59 ms
Rack1SW4#

WB1 8.26 Bidirectional PIM

8.26 Bidirectional PIM

• The group range 238.0.0.0/8 is used for network video-conferencing with many participants.
• Configure the network so that this group uses a single shared tree rooted on R5 for multicast traffic delivery.

------------------------------------------------------------------------------------------------------

Bidirectional PIM or PIM BiDir is a special extension to the PIM SM concept that
uses only the shared tree for multicast distribution. This mode of operation is
useful in situations where most receivers are also senders at the same time. For
example, this might be the case when you run videoconferencing. In this
situation, in addition to joining the shared tree rooted at the RP, every receiver
needs to join the shortest-path multicast distribution tree rooted at every other
participant. If the number of participants is significant, the amount of multicast
route state in the core of the network will grow at a quadratic rate.

One special feature of PIM SM shared and shortest-path trees is that they are
unidirectional – traffic passes down from the root to the leaves of the tree. PIM
BiDir instead uses a single distribution tree rooted at the RP for all sources and
receivers at the same time. If there are multiple RPs, there could be many BiDir
trees. Unlike the classic trees, traffic may flow both up and down this tree. When
a source sends multicast packets, they first flow up to the root of the tree
(toward the RP) and then down to all receivers.

To build the bi-directional tree, PIM elects special designated forwarders (DFs)
on every link in the network. A DF is elected based on rules similar to those
used in the PIM Assert procedure – i.e., the router on the link that has the
shortest metric to reach the RP is selected as the DF. Notice that a single router
might be the DF on one link and a non-DF on another. After the elections, DF
routers are the only routers that are allowed to forward traffic toward the RP – via
the bi-directional tree (this is considered the “upstream” portion of the BiDir tree).
Every router in the multicast domain creates a (*,G) state for each BiDir group,
with the OIL built based on PIM Join messages received from its neighbors. This
is the “downstream” portion of the BiDir tree. Any packet received on a valid RPF
interface is forwarded based on the OIL. At the same time, the DF will forward a
copy of these packets toward the RP - upstream through the shared tree -
provided that the packet is not received on the interface pointing to the RP.
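If your IOS release supports it, the per-interface DF election results can be
inspected with the following command (output omitted here):

show ip pim interface df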

Notice that PIM BiDir does not utilize the source registration procedure, via PIM
Register/Register-Stop messages. Every source connected to a PIM BiDir
capable router may start sending at any time, and the packets will flow upwards
to the RP. After reaching the RP, packets are either dropped, if there are no
receivers for this group (i.e. the OIL for this (*,G) state is empty) or forwarded
down the BiDir tree. There is no way for the RP to signal the source to stop
sending traffic even if there are no receivers. This means commands like “ip pim
accept-register” covered in lab 8.8 will not work with PIM BiDir, due to the fact
that they rely on these “register-stop” messages to work.

Configuring PIM BiDir is relatively simple. You just need to enable BiDir PIM on
all multicast routers by using the command ip pim bidir-enable and
designate particular RP/Group combinations as bi-directional. You can do this in
a number of ways.

1) Using a static RP configuration with the command ip pim rp-address <IP> <ACL> bidir (sketched after this list).

2) Using BSR or Auto-RP for RP information dissemination you may flag
particular group/RP combinations as bi-directional using the following syntax:

Auto-RP:
ip pim send-rp-announce <interface> scope <TTL> group-list <ACL> bidir

BSR:
ip pim rp-candidate <interface> group-list <ACL> bidir

This is all you need to enable bi-directional PIM. However, remember to enable
bi-directional mode on all routers in your network, or you might end up with
routing loops.
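As a sketch of option 1 above, the same group range could be tied to R5’s
Loopback0 address statically on every router (assuming a static RP were
acceptable for the task):

ip access-list standard GROUP238
 permit 238.0.0.0 0.255.255.255
!
ip pim rp-address 150.1.5.5 GROUP238 bidir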

------------------------------------------------------------------------------------------------------

R1, R3, R4, SW2, SW4:
ip pim bidir-enable

R5:
ip pim bidir-enable
!
ip access-list standard GROUP238
permit 238.0.0.0 0.255.255.255
!
ip pim rp-candidate Loopback 0 group-list GROUP238 bidir

------------------------------------------------------------------------------------------------------

To verify, join R1 and SW4 to the bi-directional group 238.1.1.1.

R1:
interface FastEthernet 0/0
ip igmp join-group 238.1.1.1

SW4:
interface Vlan 10
ip igmp join-group 238.1.1.1

Then ping this group from R5.

------------------------------------------------------------------------------------------------------

Rack1R5#ping 238.1.1.1 repeat 100
Type escape sequence to abort.
Sending 100, 100-byte ICMP Echos to 238.1.1.1, timeout is 2 seconds:
Reply to request 0 from 155.1.108.10, 4 ms
Reply to request 0 from 155.1.0.1, 188 ms
Reply to request 0 from 155.1.0.1, 104 ms
Reply to request 0 from 155.1.108.10, 8 ms
Reply to request 1 from 155.1.108.10, 4 ms
Reply to request 1 from 155.1.0.1, 189 ms
Reply to request 1 from 155.1.0.1, 104 ms
Reply to request 1 from 155.1.108.10, 8 ms

Check the mroute states on all routers. Notice that some interfaces are marked
as Bidir-Upstream – these interfaces are used to send packets upstream toward
the root of the tree. The root of the tree (the RP) has no upstream interfaces.


Rack1R1#show ip mroute 238.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 238.1.1.1), 00:08:33/00:02:43, RP 150.1.5.5, flags: BPL
  Bidir-Upstream: Serial0/0.1, RPF nbr 155.1.0.5
  Outgoing interface list:
    Serial0/0.1, Bidir-Upstream/Sparse, 00:08:33/00:00:00

Rack1R1#

Rack1R5#show ip mroute 238.1.1.1
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 238.1.1.1), 00:08:42/00:03:27, RP 150.1.5.5, flags: B
  Bidir-Upstream: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Serial0/0, 155.1.0.1, Forward/Sparse, 00:07:57/00:03:27
    FastEthernet0/0, Forward/Sparse, 00:08:42/00:02:39

Rack1R5#

WB1 8.25 Multicast Rate Limiting

8.25 Multicast Rate Limiting

• Configure R3 to limit the aggregate rate of the multicast traffic leaving the VLAN 37 interface to 1Mbps.
• Any source sending to the group IP 239.1.1.7 should be limited to 128Kbps.
• Multicast feeds from R6 to the IP address 239.1.1.100 should be limited to 256Kbps.

---------------------------------------------------------------------------------------

You could use regular QoS policing and rate-limiting commands to control
multicast traffic flow. However, there is a special command for multicast traffic
rate control that has its own unique features:

ip multicast rate-limit {in | out} [group-list <acl>] [source-list <acl>] [<limit>]

where limit is specified in kilobits per second. The in and out keywords control
ingress and egress traffic respectively. If you omit the group-list and source-list,
the limit applies to the aggregate multicast traffic rate, across all groups.
However, if you omit the speed limit parameter, the command discards any
multicast traffic matching the group-list and source-list, if those are specified.
When applied without any parameters, this command simply drops all multicast
traffic.
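For illustration, here is a minimal sketch (the interface name is arbitrary) that
discards all multicast traffic leaving the interface, since no speed limit is
specified:

interface Serial0/0
ip multicast rate-limit out

Adding a kbps value to the same statement would instead cap the aggregate
egress multicast rate at that value.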

This command behaves differently when configured with a group-list (and
possibly a source-list). When you designate the groups to be rate-limited, the
limit applies to each source/group pair individually. That is, if you configure the
command to limit the rate of traffic destined to the group 239.0.0.1 to 128Kbps
and there are 3 independent sources, then the limit applies to every source,
resulting in 3x128Kbps of aggregate rate.

You may combine the group-specific limits with the aggregate limit on the same
interface. This way, you will apply per-flow rate-limits and at the same time
control the aggregate rate. For example:

ip multicast rate-limit out group-list 100 128
ip multicast rate-limit out 512

Also, you may apply multiple group-specific multicast rate-limiting commands. In
this case, a particular (S,G) pair is limited based on the first match in the
configured access-lists. Notice that this makes the order of the statements
particularly important! If you put the aggregate limit at the beginning of the list, it
will match everything, and the other entries will never be matched.
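For example, with the ordering below (interface and access-list numbers are
hypothetical), the aggregate statement is evaluated first, matches every (S,G)
pair, and the 128Kbps group limit is never consulted:

interface FastEthernet0/0
ip multicast rate-limit out 512
ip multicast rate-limit out group-list 100 128

Reversing the two statements, as in the earlier example, restores the intended
per-group behavior.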

---------------------------------------------------------------------------------------

R3:
ip access-list standard GROUP7
permit 239.1.1.7
!
ip access-list standard GROUP100
permit 239.1.1.100
!
interface FastEthernet0/0
ip multicast rate-limit out group-list GROUP7 128
ip multicast rate-limit out group-list GROUP100 256
ip multicast rate-limit out 1000

SW1:
interface FastEthernet0/3
ip igmp join-group 239.1.1.100

---------------------------------------------------------------------------------------

To verify the limits, source some multicast packets to the groups 239.1.1.7 and
239.1.1.100 (below, a ping from R1 feeds the former and a DNS broadcast from
R6 feeds the latter). Then check the mroute states on R3. Notice that every
group limit appears next to the respective (S,G) entry:

Rack1R1#ping 239.1.1.7 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.1.1.7, timeout is 2 seconds:

Reply to request 0 from 155.1.37.7, 32 ms
Reply to request 0 from 155.1.37.7, 48 ms


Rack1R6#dddd
Translating "dddd"...domain server (255.255.255.255)
 (255.255.255.255)
Translating "dddd"...domain server (255.255.255.255)

% Unknown command or computer name, or unable to find computer address
Rack1R6#


Rack1R3#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.100), 02:00:28/stopped, RP 150.1.8.8, flags: SJCL
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.37.7
  Outgoing interface list:
    Serial1/0.1, Forward/Dense, 02:00:28/00:00:00

(155.1.146.6, 239.1.1.100), 00:00:17/00:02:58, flags: LJT
  Incoming interface: Serial1/0.1, RPF nbr 155.1.0.5
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:00:17/00:00:00, limit 256 kbps

(*, 239.1.1.7), 00:05:05/stopped, RP 150.1.8.8, flags: SJCF
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.37.7
  Outgoing interface list:
    Serial1/0.1, Forward/Dense, 00:05:05/00:00:00

(155.1.146.1, 239.1.1.7), 00:02:50/00:00:09, flags: JT
  Incoming interface: Null, RPF nbr 155.1.13.1
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:02:50/00:00:00, limit 128 kbps
    Serial1/0.1, Forward/Dense, 00:02:51/00:00:00, A

(155.1.0.1, 239.1.1.7), 00:05:05/00:01:51, flags: FT
  Incoming interface: Serial1/0.1, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:05:05/00:00:00, limit 128 kbps

(150.1.1.1, 239.1.1.7), 00:05:05/00:01:51, flags: JT
  Incoming interface: Serial1/0.1, RPF nbr 155.1.0.5
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:05:05/00:00:00, limit 128 kbps

(*, 239.1.1.9), 01:59:35/00:02:25, RP 150.1.10.10, flags: SJC
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.37.7
  Outgoing interface list:
    Serial1/0.1, Forward/Dense, 01:59:35/00:00:00

(*, 224.110.110.110), 02:00:34/00:02:59, RP 150.1.10.10, flags: SJCL
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.37.7
  Outgoing interface list:
    Loopback0, Forward/Sparse-Dense, 02:00:34/00:02:59
    Serial1/0.1, Forward/Dense, 02:00:34/00:00:00

(*, 224.0.1.39), 02:00:25/00:02:45, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 02:00:25/00:00:00, Int limit 1000 kbps
    Serial1/0.1, Forward/Dense, 02:00:25/00:00:00

(*, 224.0.1.40), 02:00:36/00:02:44, RP 0.0.0.0, flags: DCL
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 02:00:36/00:00:00, Int limit 1000 kbps
    Serial1/0.1, Forward/Dense, 02:00:36/00:00:00

Rack1R3#

WB1 8.24 Multicast Helper Map

8.24 Multicast Helper Map

• When R6 sends broadcast UDP packets on port 5000, those packets
should be transported across the network and broadcast on the VLAN 37
segment.
• Use the group 239.1.1.100 to accomplish this task and use DNS
broadcasts for testing.
• You may use static mroutes if needed to accomplish this task.

------------------------------------------------------------------------

The purpose of this feature is to allow forwarding of broadcast traffic across a
multicast capable network. Generally, you need a single Layer 2 domain between
two nodes to let them hear each other’s broadcast packets. However, broadcast
UDP packets can be relayed between two subnets using a special IOS feature
known as the helper-address. There are two variations of this feature:

1) Unicast helper (ip helper-address), which converts the broadcast
destination address to a fixed unicast IP address. Most often this feature is used
with DHCP to forward requests to the server.

2) Multicast helper (ip multicast helper-map), which converts the
broadcast destination to a fixed multicast address.
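For contrast, a minimal sketch of the two variants side by side (the addresses
and access-list number are hypothetical):

interface FastEthernet0/0
ip helper-address 10.0.0.5
!
interface FastEthernet0/1
ip multicast helper-map broadcast 239.1.1.1 100

The first relays matching broadcasts as unicast to the fixed server address; the
second converts them to the fixed multicast group, subject to access-list 100.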

The multicast helper-map feature allows scalable forwarding of broadcast traffic
between disjoint segments. This is often needed to support legacy applications
like stock tickers that use broadcast to deliver information to multiple receivers
simultaneously. To configure the multicast helper, follow the steps outlined below:

Step 1:
Set up a multicast network between the segments that should exchange
broadcast packets. You should select a group to deliver the broadcast packets
and decide which PIM mode to use. If you choose PIM SM, make sure the group
you chose maps to an RP. Make sure multicasting works by joining an interface
on the egress router (the one closest to the broadcast receiver) to the selected
group and pinging this group from the ingress router (the one closest to the
broadcast source).


Step 2:
Enable broadcast forwarding on the ingress router, the one directly connected
to the source. If there are multiple sources, you have to configure all the
respective routers. Use the command ip forward-protocol udp <port-number> to
enable forwarding of broadcast UDP packets sent to the specified port number.

Step 3:
Configure a multicast helper-map on the ingress routers to redirect broadcast
packets to the selected multicast address. The syntax for this interface-level
command is ip multicast helper-map broadcast <mcast-address> <ACL>. The
access-list controls which broadcast packets are eligible to be converted into
multicast. For example, if you want to forward UDP packets destined to port
5000, use an access-list similar to the following: access-list 100 permit
udp any any eq 5000. Note that the same UDP port must be enabled for
broadcast forwarding at Step 2. All broadcast traffic received on the configured
interface that matches the access-list is converted and sent to the specified
multicast address. If the group is in sparse mode, the ingress router will register
the source with the RP, per the usual procedure.

Step 4:
Enable broadcast forwarding on the egress router, i.e. the router directly
connected to the destination subnet. Use the same command as in Step 2,
ip forward-protocol udp <port-number>, to accomplish this. Next,
enable the multicast helper-map on the egress router on all interfaces that may
receive the multicast traffic. Note that you should not configure the multicast
helper on the interface connected to the destination. Use the command
ip multicast helper-map <mcast-group> <directed-broadcast-IP> <ACL>,
where mcast-group is the same group you used in Step 3 and
directed-broadcast-IP is the broadcast subnet IP address of the segment that
receives the broadcast traffic.

Step 5:
Enable directed broadcasts on the interface connected to the receiving segment
using the command ip directed-broadcast. This is needed to successfully
send broadcasts out onto this segment. By default, the broadcasts are sent to
the address 255.255.255.255 irrespective of the directed-broadcast-IP
configured at Step 4. If you want to use a different address, apply the command
ip broadcast-address <IP> on the same segment.
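Putting Steps 2 through 5 together, a generic template might look like the
following (interface names, group, port, and broadcast address are placeholders;
the task-specific solution appears further below):

Ingress router (connected to the broadcast source):
ip forward-protocol udp 5000
access-list 100 permit udp any any eq 5000
!
interface FastEthernet0/0
ip multicast helper-map broadcast 239.1.1.100 100

Egress router (connected to the receiving segment):
ip forward-protocol udp 5000
access-list 100 permit udp any any eq 5000
!
interface Serial0/0
ip multicast helper-map 239.1.1.100 155.1.37.255 100
!
interface FastEthernet0/0
ip directed-broadcast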

To test your configuration, you will need a broadcast packet source. You may
use the IP SLA feature to generate UDP packets to a segment broadcast
address, but this might not work on some platforms/IOS versions. If that’s the
case, you may use either of the following two methods:

1) Enable DNS name resolution but do not configure a DNS server. After this, the
router will broadcast any DNS query entered at the command line to the address
255.255.255.255, out all interfaces. You will need to adjust your access-lists to
forward this broadcast traffic (DNS uses UDP port 53).

2) Configure an extended traceroute to the broadcast destination, starting at
the port number that is covered by your ACL, as shown in the sketch below.
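For instance, a sketch of such a traceroute (prompts vary slightly by IOS version;
the values here are hypothetical):

Rack1R6#traceroute
Protocol [ip]:
Target IP address: 155.1.37.255
<accept the defaults until the port prompt>
Port Number [33434]: 5000

Keep in mind that UDP traceroute typically increments the destination port with
each probe, so start at the first port your ACL covers.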

Pitfall

There are some caveats associated with our particular scenario.

1) You need to select the proper ingress router. The group 239.1.1.100 is going
to be transported in sparse mode. Thus, the source should be registered with the
RP first. The only router allowed to register sources is the DR, which is R1. Thus,
we must configure R1 to generate the multicast packets.

2) The next issue is that R3 is a stub multicast router. Thus, it does not
exchange any PIM messages with R5, and it won’t join group 239.1.1.100 once
you configure the multicast helper on the ingress interface. To resolve this issue,
SW1 should be configured with a static IGMP join for group 239.1.1.100 to let R5
know about the receiver.

3) If you loaded our default configuration, then you will see that R3 has three
equal-cost routes to reach the VLAN 146 segment. This will cause RPF failures,
unless you configure ip multicast multipath on R3, allowing R3 to
perform RPF across the equal-cost paths.

4) Finally, R5 will prefer the path to the VLAN 146 segment across the Serial
connection to R4. Thus, in order to allow R5, which acts on behalf of R3, to join
the SPT towards the VLAN 146 segment, you need a static mroute on R5 to
make R1 the RPF neighbor for the VLAN 146 subnet.
Also notice that this scenario requires PIM NBMA mode to be configured on R5’s
Frame-Relay interface to work properly.
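The static mroute appears in the solution below; for the NBMA mode
requirement, a sketch of the corresponding configuration (assuming Serial0/0 is
R5’s Frame-Relay interface, which matches the outputs in this section):

R5:
interface Serial0/0
ip pim nbma-mode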

------------------------------------------------------------------------

R1:
ip forward-protocol udp 5000
!
ip access-list extended TRAFFIC
permit udp any any eq 5000
permit udp any any eq 53
!
interface FastEthernet 0/0
ip multicast helper-map broadcast 239.1.1.100 TRAFFIC

R3:
ip forward-protocol udp 5000
ip multicast multipath
!
ip access-list extended TRAFFIC
permit udp any any eq 5000
permit udp any any eq 53
!
interface FastEthernet 0/0
ip directed-broadcast
ip broadcast-address 155.1.37.255
!
interface Serial 1/0.1
ip multicast helper-map 239.1.1.100 155.1.37.255 TRAFFIC

R5:
ip mroute 155.1.146.0 255.255.255.0 155.1.0.1

SW1:
interface FastEthernet0/3
ip igmp join-group 239.1.1.100

------------------------------------------------------------------------

For verification we are going to use DNS broadcasts sent from R6. Configure R6
to resolve DNS names, but do not provide any DNS server:

R6:
ip domain lookup

Enable debugging and start a traffic flow on R6:

Rack1R1#debug ip mpacket
Rack1R1#conf t
Rack1R1(config)#access-list 100 permit udp any any eq 53
Rack1R1#debug ip packet detail 100

Rack1R3#debug ip mpacket
Rack1R3#conf t
Rack1R3(config)#access-list 100 permit udp any any eq 53
Rack1R3#debug ip packet detail 100

Rack1R6#dddd
Translating "dddd"...domain server (255.255.255.255)

R1 accepts the broadcasts and converts them to multicast packets. Initially, the
SPT is not yet built, the OIL for the (S,G) entry is empty (note "mroute olist null"
in the first debug line below), and some packets are lost. Once the SPT is
established, everything works smoothly.

*Mar  2 18:19:21.937: IP: s=155.1.146.6 (FastEthernet0/0), d=255.255.255.255, len 51, rcvd 2
*Mar  2 18:19:21.941: IP(0): s=155.1.146.6 (FastEthernet0/0) d=239.1.1.100 id=2, ttl=254, prot=17, len=65(51), mroute olist null
*Mar  2 18:19:40.597: IP: s=155.1.146.6 (FastEthernet0/0), d=255.255.255.255, len 50, rcvd 2
*Mar  2 18:19:40.601: IP(0): s=155.1.146.6 (FastEthernet0/0) d=239.1.1.100 (Serial0/0.1) id=0, ttl=254, prot=17, len=50(50), mforward
Rack1R1#

*Mar  2 17:52:51.621: IP(0): s=155.1.146.6 (Serial1/0.1) d=239.1.1.100 (FastEthernet0/0) id=1, ttl=252, prot=17, len=50(50), mforward
Rack1R3#
*Mar  2 17:52:54.909: IP: tableid=0, s=155.1.146.6 (Serial1/0.1), d=155.1.37.255 (FastEthernet0/0), routed via RIB

Check the mroute states on R1, R3 and R5 to ensure the traffic follows the SPT.

Rack1R1#show ip mroute 239.1.1.100
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.100), 00:05:27/stopped, RP 150.1.8.8, flags: SPF
  Incoming interface: Serial0/0.1, RPF nbr 155.1.0.5
  Outgoing interface list: Null

(155.1.146.6, 239.1.1.100), 00:00:11/00:03:28, flags: FT
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.146.6
  Outgoing interface list:
    Serial0/0.1, Forward/Sparse, 00:00:11/00:03:18

Rack1R1#

Rack1R5#show ip mroute 239.1.1.100
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.100), 00:22:24/stopped, RP 150.1.8.8, flags: SJC
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.58.8
  Outgoing interface list:
    Serial0/0, 155.1.0.3, Forward/Sparse, 00:22:24/00:02:39

(155.1.146.6, 239.1.1.100), 00:00:58/00:02:29, flags: T
  Incoming interface: Serial0/0, RPF nbr 155.1.0.1, Mroute
  Outgoing interface list:
    Serial0/0, 155.1.0.3, Forward/Sparse, 00:00:59/00:02:37

Rack1R5#

Rack1R3#show ip mroute 239.1.1.100
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.100), 00:07:15/stopped, RP 150.1.8.8, flags: SJCL
  Incoming interface: FastEthernet0/0, RPF nbr 155.1.37.7
  Outgoing interface list:
    Serial1/0.1, Forward/Dense, 00:07:15/00:00:00

(155.1.146.6, 239.1.1.100), 00:01:54/00:01:30, flags: LJT
  Incoming interface: Serial1/0.1, RPF nbr 155.1.0.5
  Outgoing interface list:
    FastEthernet0/0, Forward/Dense, 00:01:54/00:00:00

Rack1R3#

----------------------------------------------------------------

By default, the ip helper-address command will forward these 8 UDP ports:

UDP Port   Common Name
69         TFTP
67         BOOTP Server
68         BOOTP Client
37         Time Protocol
49         TACACS
53         DNS
137        NetBIOS Name Service
138        NetBIOS Datagram Service

----------------------------------------------------------------

Furthermore, issuing no ip forward-protocol udp 53 makes the line no ip forward-protocol udp domain appear in the running configuration, which confirms that DNS forwarding is enabled by default.

Rack1R1(config)#no ip forward-protocol udp 53 
Rack1R1(config)#
Rack1R1(config)#do sh run
...

no ip forward-protocol udp domain