Cisco Bug: CSCus20438 - VEM losing connectivity to VSM due to vemdpa process going into a loop

Last Modified

Apr 06, 2018

Products (1)

  • Cisco Nexus 1000V Switch for VMware vSphere

Known Affected Releases


Description (partial)

On the VSM:

HH-NH11-SWW011# show mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    1022   Virtual Ethernet Module           NA                  offline
4    1022   Virtual Ethernet Module           NA                  offline
6    1022   Virtual Ethernet Module           NA                  offline
7    1022   Virtual Ethernet Module           NA                  offline
8    1022   Virtual Ethernet Module           NA                  ok
9    1022   Virtual Ethernet Module           NA                  ok
10   1022   Virtual Ethernet Module           NA                  offline  <<==================

On the VEM:

~ # vemcmd show card
Card UUID type  2: 4c4c4544-0054-3110-8059-b5c04f375331
Card name: itnvs85008
Switch name: HH-NH11-SWW011
Switch alias: DvsPortset-0
Switch uuid: dd 05 32 50 14 52 d5 7e-13 d1 92 a2 15 1d 99 fb
Card domain: 303
Card slot: 10
VEM Tunnel Mode: L2 Mode
VEM Control (AIPC) MAC: 00:02:3d:11:2f:09
VEM Packet (Inband) MAC: 00:02:3d:21:2f:09
VEM Control Agent (DPA) MAC: 00:02:3d:41:2f:09
VEM SPAN MAC: 00:02:3d:31:2f:09
Primary VSM MAC : 00:02:3d:71:34:0c
Primary VSM PKT MAC : 00:02:3d:71:34:0d
Primary VSM MGMT MAC : 00:02:3d:71:34:0b
Standby VSM CTRL MAC : 00:02:3d:71:34:8c
Management IPv4 address:
Management IPv6 address: 0000:0000:0000:0000:0000:0000:0000:0000
Primary L3 Control IPv4 address:
Secondary VSM MAC : 00:00:00:00:00:00
Upgrade : Default
Max physical ports: 32
Max virtual ports: 990
Card control VLAN: 3502
Card packet VLAN: 3503
Control type multicast: No
Card Headless Mode : No  <<=================
DPA Status : Up
       Processors: 32
  Processor Cores: 16
Processor Sockets: 2
  Kernel Memory:   268421904
Port link-up delay: 5s
Heartbeat Set: True
Card Type: vem
PC LB Algo: source-mac
Datapath portset event in progress : no
Licensed: Yes
Global BPDU Guard: Disabled
DP Initialized: Yes
Tag Native VLAN: No
L3Sec Mode: FALSE

~ # vem status -v
Package vssnet-esxesx2013-release
Build 1
Date Fri Aug 15 00:22:33 PDT 2014
VEM modules are loaded
Switch Name      Num Ports   Used Ports  Configured Ports  MTU     Uplinks
vSwitch0         5632        1           128               1500
DVS Name         Num Ports   Used Ports  Configured Ports  MTU     Uplinks
HH-NH11-SWW011   1024        52          1024              1500    vmnic5,vmnic3,vmnic2,vmnic1,vmnic0,vmnic4
VEM Agent (vemdpa) is running
~ #

This problem can occur if a vTracker command or 'show interface virtual pinning' is issued at about the same time a vMotion of a VM takes place. In this scenario there is a narrow window during which the port has already been cleaned up on the DP of the source host, but not yet on the DPA, because the DPA is still waiting for an acknowledgement from the VSM. If the vTracker or pinning show command is processed by the DPA during this interval, the vemdpa process gets stuck in a loop because of the bug in the code. Note that 'show interface virtual pinning' is also executed as part of 'show tech-support details' and 'show tech-support details interface'.
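The race described above can be sketched with a minimal model. This is not Cisco source code; the port numbers, data structures, and function name are illustrative assumptions. It shows the shape of the failure: the DP has dropped a port, the DPA still holds a stale entry pending VSM acknowledgement, and a show-command walk that only advances its cursor for DP-known ports spins forever on the stale entry.

```python
# Illustrative sketch only (hypothetical structures, not the Cisco
# implementation) of the DP/DPA race window described above.

dp_ports = {1, 2, 4}        # ports the datapath (DP) still knows about
dpa_ports = [1, 2, 3, 4]    # DPA view: port 3 is stale, awaiting VSM ack

def walk_pinning(max_steps=10):
    """Walk the DPA port list the way the buggy show command would.

    The loop advances its index only for ports the DP still knows
    about, so it never moves past the stale entry (port 3).
    max_steps caps the loop so the sketch terminates.
    """
    shown, i, steps = [], 0, 0
    while i < len(dpa_ports) and steps < max_steps:
        steps += 1
        port = dpa_ports[i]
        if port in dp_ports:
            shown.append(port)
            i += 1
        # BUG being illustrated: no advance in the stale-port case,
        # so the walk loops on port 3 until the step cap is hit.
    return shown, steps

shown, steps = walk_pinning()
```

With the step cap in place, the walk reports ports 1 and 2 and then burns all remaining iterations on port 3; without the cap (as in the real bug), vemdpa would never return, and the VEM loses connectivity to the VSM.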
Bug details contain sensitive information and therefore require an account to be viewed.

Bug Details Include

  • Full Description (including symptoms, conditions and workarounds)
  • Status
  • Severity
  • Known Fixed Releases
  • Related Community Discussions
  • Number of Related Support Cases
Bug information is viewable for customers and partners who have a service contract. Registered users can view up to 200 bugs per month without a service contract.