Archives For Nexus 7000

During the setup of a new vSphere cluster, I had to troubleshoot an issue that was causing latency for the NFS datastores. This new vSphere cluster was attached to a newly deployed NetApp storage array with essentially no workloads hosted on it yet.

One of the first symptoms of the latency was noticed when browsing the NFS datastores in the vSphere client GUI. After clicking on Browse Datastore, and then clicking on any VM folder, the window would display “Searching datastore…” and often take 40-50 seconds to list the files. Further testing with the NFS datastores confirmed that the slowness also appeared when performing file copy operations or certain disk-intensive operations in the guest OS.

Several weeks were spent troubleshooting and working with vendors to try to determine the cause. It was found that when the configuration on the NetApp storage array side (and the switches) was changed from trunk ports to access ports, the issue went away. In addition, the issue did not occur when one of the hosts and the NetApp storage array were connected to the same datacenter switch. Jumbo frames were not part of this equation.

The cause of the issue was found to be a conflict between the default behavior of NetApp when using VLAN tagging and the Nexus core switch QoS configuration. By default, NetApp assigns a CoS value of 4 when using VLAN tagging (trunk ports). This caused the NFS storage traffic to be placed in a bandwidth-limited queue on the Nexus core switch. A workaround was implemented on the switches for the storage array interfaces that essentially remarked the CoS value to fit the QoS configuration in the environment.
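To illustrate the general idea, a switch-side workaround of this kind can be sketched as an ingress QoS policy on the NetApp-facing ports that matches the CoS 4 marking and rewrites it. The class-map, policy-map, interface ID, and the remark value of 0 below are all placeholders and assumptions on my part; the exact commands and the right CoS value depend on the platform, NX-OS version, and the QoS design in your environment, so treat this as a minimal sketch rather than the exact configuration that was used.

    ! Hypothetical NX-OS sketch: names, interface, and CoS values are placeholders.
    ! Match the CoS 4 value that NetApp applies to VLAN-tagged (trunked) traffic.
    class-map type qos match-any CM-NETAPP-STORAGE
      match cos 4

    ! Remark it on ingress so the NFS traffic lands in the intended queue
    ! (use whatever CoS value your queuing policy actually expects).
    policy-map type qos PM-NETAPP-REMARK-COS
      class CM-NETAPP-STORAGE
        set cos 0

    ! Apply inbound on the switch ports that face the NetApp storage interfaces.
    interface Ethernet1/10
      service-policy type qos input PM-NETAPP-REMARK-COS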

Here are some links that helped to connect the dots when researching the issue:

https://kb.netapp.com/support/index?page=content&id=3013708&actp=RSS

http://keepingitclassless.net/2013/12/default-cos-value-in-netapp-cluster-mode/

https://communities.cisco.com/message/137658

http://jeffostermiller.blogspot.com/2012/03/common-nexus-qos-problem.html

http://www.beyondvm.com/2014/07/ucs-networking-adventure-a-tale-of-cos-and-the-vanishing-frames/


For those with networks that use Cisco OTV with Nexus 7Ks to extend Layer 2 connectivity between sites, be aware that there is a bug (CSCuq54506) that may cause brief network connectivity issues for VMs that are vMotioned between the sites. The first symptom you may notice is that a VM appears to drop pings or lose connectivity for 1-2 minutes after it is vMotioned between sites. Following a vMotion, the destination ESXi host sends RARP traffic to notify the switches and update their MAC address tables. When this bug occurs, the RARP traffic/updates basically don’t make it to all of the switches at the source site. (Note: Since not having portfast enabled on the source or destination host switch ports can cause symptoms that look a bit similar, it’s a good idea to confirm portfast is enabled on all of those ports.)

Troubleshooting steps that can be used from the VMware side to help identify whether you are hitting the bug:

  • Start running two continuous pings to a test VM: one continuous ping from the site you are vMotioning from, and one continuous ping from the site you are vMotioning to (the timestamped ping sketch after this list can help measure the outage window precisely).
  • vMotion the test VM from one site to the other.
  • If the continuous ping at the source site (the site the VM was vMotioned from) drops for 30-60 seconds, but the continuous ping at the destination site (the site the VM was vMotioned to) stays up or only drops a single ping, then you may want to work with Cisco TAC to determine if the root cause is this bug.
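To make the outage window easier to measure than with a plain continuous ping, here is a small Python sketch (my own addition, not from the original procedure) that pings a test VM once per second and prints a timestamp whenever reachability changes. Run one copy from a machine at the source site and one from a machine at the destination site while the test VM is vMotioned, then compare the DOWN/UP timestamps; the hostname shown is a placeholder.

    #!/usr/bin/env python3
    """Timestamped reachability monitor for a test VM during a vMotion."""
    import platform
    import subprocess
    import sys
    import time
    from datetime import datetime


    def is_reachable(host: str) -> bool:
        """Send a single ping with roughly a 1 second timeout; True on reply."""
        if platform.system() == "Windows":
            cmd = ["ping", "-n", "1", "-w", "1000", host]
        else:
            cmd = ["ping", "-c", "1", "-W", "1", host]
        return subprocess.run(cmd, stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL).returncode == 0


    def main() -> None:
        # Placeholder hostname; pass the test VM's name or IP as an argument.
        host = sys.argv[1] if len(sys.argv) > 1 else "test-vm.example.local"
        previous = None
        while True:
            state = is_reachable(host)
            if state != previous:  # log only when reachability changes
                stamp = datetime.now().strftime("%H:%M:%S")
                print(f"{stamp}  {host} is {'UP' if state else 'DOWN'}")
                previous = state
            time.sleep(1)


    if __name__ == "__main__":
        main()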