Stacking switches Part - V (Cisco Nexus vPC - Virtual Port Channel)
Now it is time to address the configuration of the most popular MLAG (multi-chassis link aggregation) implementation out there - Cisco Nexus switches running NX-OS. I will not go into detail about the Nexus series of switches as everybody knows about them; instead, let's jump straight into how to configure vPC on Nexus switches. Just one thing: Cisco calls their implementation of MLAG vPC (virtual port channel).
Let's look at our network topology -
- Two Cisco Nexus switches will peer with each other and form the vPC/MLAG.
- Two Debian 10 Linux machines will form a multi-chassis link aggregation with the Nexus switches. They will simulate the client connections.
Cisco Nexus vPC Topology
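In case the topology image does not render, here is a rough ASCII sketch of the setup described in this post:

```
SRV1 (bond0, 192.168.101.5/24)        SRV2 (bond0, 192.168.101.6/24)
   |              |                      |              |
eth1/1          eth1/1                eth1/2          eth1/2
SW-Master      SW-Slave              SW-Master      SW-Slave
   \_ po5 / vPC 5 _/                    \_ po6 / vPC 6 _/

SW-Master eth1/6-7 <== po256 (vPC peer-link) ==> SW-Slave eth1/6-7
SW-Master mgmt0 <--- keepalive link (vrf management) ---> SW-Slave mgmt0
```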
vPC peer-link - Link used to synchronize the state between vPC peer devices. In the topology, po256 interface containing eth1/6-7 interfaces is the peer-link.
vPC peer-keepalive link - The keepalive link between vPC peer devices; this link is used to monitor the liveness of the peer device. In the topology, interface mgmt0 is the keepalive link.
vPC domain - A domain containing the two peer devices. In a vPC topology only one domain can be active at a time, and the domain number needs to be the same on both peers. In our topology, the domain number will be 11.
vPC peer device - The two switches that are in a vPC domain and participate in creating the vPC interfaces.
vPC interface - An LACP-based aggregation interface that is created between the two vPC peers and an end device. In our topology, po5 and po6 are the vPC interfaces.
vPC member port - The physical ports of the peer switches that form the logical vPC interfaces. In the topology, eth1/1-2 are the member ports, which in turn belong to the vPC interfaces po5 and po6.
vPC role (primary/secondary) - Even though vPC is an active-active forwarding solution, the role determines which switch is allowed to forward packets during a failure. For example, if the peer-link between the two switches goes down, only the primary switch is allowed to forward packets. In the topology, SW-Master has the primary and SW-Slave has the secondary role.
Cisco Fabric Services (CFS) protocol - The underlying protocol running on top of the vPC peer-link, providing reliable synchronization and consistency-check mechanisms between the two peer devices.
Now we are ready for the configuration part.
vPC peer-keepalive link configuration
In our example we are using the out-of-band management interface (mgmt0) as the keepalive link. We could use any other layer-3 or SVI interface for that purpose, but we are using the oob management interface because only with it as the keepalive link can we activate another useful Nexus feature, switch-profile with config-sync. More on that in a later blog post.
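If an out-of-band management port is not available, a common alternative is a dedicated front-panel layer-3 port placed in its own VRF, so the keepalive traffic stays out of the default routing table. A minimal sketch of that variant - the interface name, VRF name, and addresses below are assumptions for illustration, not part of our topology:

```
vrf context KEEPALIVE
!
interface Ethernet1/10
  no switchport
  vrf member KEEPALIVE
  ip address 10.2.2.1/30
  no shutdown
!
vpc domain 11
  peer-keepalive destination 10.2.2.2 source 10.2.2.1 vrf KEEPALIVE
```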
In SW-Master -
!!! We need to activate the vPC and LACP features; by default they are turned off
feature vpc
feature lacp
interface mgmt0
vrf member management
ip address 10.1.1.1/24
vpc domain 11
!!! A lower priority value means a higher likelihood of becoming the vPC primary switch
role priority 10
!!! How to reach the peer-switch over keepalive link
peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf management
!!! Activates auto-recovery: in case of primary switch failure, the secondary
!!! can take over and keep forwarding packets
auto-recovery
!!! peer-switch presents both vPC peers as a single device to downstream
!!! STP-aware devices and gives us shorter STP convergence times.
peer-switch
In SW-Slave -
feature vpc
feature lacp
interface mgmt0
vrf member management
ip address 10.1.1.2/24
vpc domain 11
role priority 20
peer-keepalive destination 10.1.1.1 source 10.1.1.2 vrf management
auto-recovery
peer-switch
At this point, our keepalive link should be up and running.
Master# show vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 11
Peer status : peer link not configured
vPC keep-alive status : peer is alive
Configuration consistency status : failed
Per-vlan consistency status : failed
Configuration inconsistency reason: vPC peer-link does not exist
Type-2 consistency status : failed
Type-2 inconsistency reason : vPC peer-link does not exist
vPC role : none established
Number of vPCs configured : 0
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Disabled (due to peer configuration)
Auto-recovery status : Enabled, timer is off.(timeout = 240s)
Delay-restore status : Timer is off.(timeout = 30s)
Delay-restore SVI status : Timer is off.(timeout = 10s)
Operational Layer3 Peer-router : Disabled
Virtual-peerlink mode : Disabled
As expected, the keepalive link is up, but the peer status is down because we have not configured a vPC peer-link yet. Let's configure that -
vPC peer-link configuration
To configure a peer-link between the two Nexus switches, we will create a port-channel interface (po256) over the physical eth1/6-7 interfaces and define po256 as our peer-link.
In SW-Master -
interface Ethernet1/6
channel-group 256 mode active
interface Ethernet1/7
channel-group 256 mode active
interface port-channel256
switchport mode trunk
!!! We are explicitly designating po256 as the vPC peer-link
vpc peer-link
In SW-Slave -
interface Ethernet1/6
channel-group 256 mode active
interface Ethernet1/7
channel-group 256 mode active
interface port-channel256
switchport mode trunk
vpc peer-link
Now check again the status of vPC -
Master# sh vpc
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
vPC domain id : 11
Peer status : peer adjacency formed ok
vPC keep-alive status : peer is alive
Configuration consistency status : success
Per-vlan consistency status : success
Type-2 consistency status : success
vPC role : primary
Number of vPCs configured : 0
Peer Gateway : Disabled
Dual-active excluded VLANs : -
Graceful Consistency Check : Enabled
Auto-recovery status : Enabled, timer is off.(timeout = 240s)
Delay-restore status : Timer is on.(timeout = 30s, 24s left)
Delay-restore SVI status : Timer is off.(timeout = 10s)
Operational Layer3 Peer-router : Disabled
Virtual-peerlink mode : Disabled
vPC Peer-link status
---------------------------------------------------------------------
id Port Status Active vlans
-- ---- ------ -------------------------------------------------
1 Po256 up 1
Now we can see that the peer status is "peer adjacency formed ok". SW-Master has taken the primary role because of its lower role priority.
But the number of vPCs configured is still zero (0). Let's configure two LACP-based vPC interfaces between the switches and the servers running Debian 10.
vPC interface configuration
We will configure two vPCs -
- Between SRV1 and Nexus switches over po5 interface.
- Between SRV2 and Nexus switches over po6 interface.
- The servers are placed in access VLAN 101 with IP addresses 192.168.101.5/24 and 192.168.101.6/24 respectively.
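For completeness, here is how the server side of the aggregation could look on Debian 10 using the ifupdown/ifenslave bonding options. This is a minimal sketch for SRV1; the NIC names ens4 and ens5 are assumptions, so substitute the actual interface names:

```
# /etc/network/interfaces (fragment) - requires the ifenslave package
auto bond0
iface bond0 inet static
    address 192.168.101.5/24
    bond-slaves ens4 ens5
    # 802.3ad = LACP, matching "channel-group ... mode active" on the Nexus side
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate slow
    bond-xmit-hash-policy layer2
```

SRV2 would be identical except for the address (192.168.101.6/24). The miimon, LACP rate, and hash policy values here correspond to the /proc/net/bonding/bond0 output we will look at during verification.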
In SW-Master -
vlan 101
name Server-Vlan
int ethernet 1/1
channel-group 5 mode active
int ethernet 1/2
channel-group 6 mode active
interface port-channel5
switchport access vlan 101
!!! Bind po5 to vPC 5. The vPC number must match on both peers; keeping
!!! it equal to the port-channel number is common practice.
vpc 5
interface port-channel6
switchport access vlan 101
!!! Bind po6 to vPC 6; again, the vPC number must match on both peers.
vpc 6
interface port-channel256
!!! Allow the vlan over vPC peer-link
switchport trunk allowed vlan 101
In SW-Slave -
vlan 101
name Server-Vlan
int ethernet 1/1
channel-group 5 mode active
int ethernet 1/2
channel-group 6 mode active
interface port-channel5
switchport access vlan 101
vpc 5
interface port-channel6
switchport access vlan 101
vpc 6
interface port-channel256
switchport trunk allowed vlan 101
That's it, we are done with our vPC configuration. Let's verify it from the servers connected to the Nexus switches.
Let's run our favorite command again, this time with the vPC number/port-channel number as an extra parameter, to get just the status of the vPC interfaces -
Master# show vpc 5
vPC status
----------------------------------------------------------------------------
Id Port Status Consistency Reason Active vlans
-- ------------ ------ ----------- ------ ---------------
5 Po5 up success success 101
Master# show vpc 6
vPC status
----------------------------------------------------------------------------
Id Port Status Consistency Reason Active vlans
-- ------------ ------ ----------- ------ ---------------
6 Po6 up success success 101
Master# show port-channel summary
Flags: D - Down P - Up in port-channel (members)
I - Individual H - Hot-standby (LACP only)
s - Suspended r - Module-removed
b - BFD Session Wait
S - Switched R - Routed
U - Up (port-channel)
p - Up in delay-lacp mode (member)
M - Not in use. Min-links not met
--------------------------------------------------------------------------------
Group Port- Type Protocol Member Ports
Channel
--------------------------------------------------------------------------------
5 Po5(SU) Eth LACP Eth1/1(P)
6 Po6(SU) Eth LACP Eth1/2(P)
256 Po256(SU) Eth LACP Eth1/6(P) Eth1/7(P)
On the servers we have an aggregated interface called "bond0"; let's have a look at it -
root@SRV1:~# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 00:50:00:00:03:02
Active Aggregator Info:
Aggregator ID: 1
Number of ports: 2
Actor Key: 9
Partner Key: 32773
Partner Mac Address: 00:23:04:ee:be:0b
Even though we already know from the servers and switches that our interfaces are up and running, let's run a ping from SRV1 to SRV2. We can see that we are getting replies -
root@SRV1:~# ping 192.168.101.6
PING 192.168.101.6 (192.168.101.6) 56(84) bytes of data.
64 bytes from 192.168.101.6: icmp_seq=1 ttl=64 time=3.68 ms
64 bytes from 192.168.101.6: icmp_seq=2 ttl=64 time=3.57 ms
64 bytes from 192.168.101.6: icmp_seq=3 ttl=64 time=3.67 ms
64 bytes from 192.168.101.6: icmp_seq=4 ttl=64 time=3.56 ms
There are some variations of the "show vpc" command that we can use to get specific information about a single vPC component; feel free to try them -
Master# show vpc ?
<CR>
<1-16384> Enter a Virtual Port Channel number
brief Brief display of vPC status
consistency-parameters Show vPC Consistency Parameters
internal Commands for internal use
orphan-ports Show ports that are not part of vPC but have common VLANS
peer-keepalive VPC keepalive status
role VPC role status
statistics Statistics
Observation and enhancement
In this example, we have seen that with vPC/MLAG we need to configure the same things twice, once on each vPC peer switch - in our case, creating the vPC interfaces po5 and po6 and their related settings on both SW-Master and SW-Slave. In the next blog post, How to setup Cisco NX-OS switch-profile and config-sync, we will try to address that.