Ran into a networking-related issue the other day, and while it wasn’t anything new or necessarily exciting, it taught me a lesson about cut-through switching vs. store-and-forward switching.  Sharing this one for my fellow non-Network Engineers:

Symptoms:

We realized (from the Server Engineer perspective) that we were seeing frequent packet-receive CRC errors/drops on multiple server interfaces (ESXi hosts, HP Virtual Connects) in a particular datacenter row.  In this case, these devices were connected to Nexus 5Ks, either directly or via Nexus 2Ks.

Lesson Learned:

Depending on the switch, frames are forwarded using either the store-and-forward method or the cut-through method.  I believe Cisco Catalyst switches and the Cisco Nexus 7000 series typically use store-and-forward, and the Cisco Nexus 5000 and 2000 series use cut-through.  (Network Engineers: please correct me if that is incorrect.)

Store-and-Forward – When CRC errors are generated, e.g. by a device network interface with a bad cable, the corruption is detected before the frame is forwarded on, so corrupted frames do not get propagated across the network or “switching domain”.

Cut-Through – When CRC errors are generated, e.g. by a device network interface with a bad cable, the corrupted frames may get propagated, because a cut-through switch begins forwarding a frame before it can verify the frame’s integrity.  This reduces switching latency, but it can cause CRC errors to “spread like the plague.”
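To make the difference concrete, here's a toy Python sketch (not real switch code, just an illustration that uses CRC32 as a stand-in for the Ethernet FCS) of how the two forwarding modes treat a corrupted frame:

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC32 trailer to the payload, standing in for the Ethernet FCS."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def crc_ok(frame: bytes) -> bool:
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == fcs

def store_and_forward(frame: bytes):
    # The switch buffers the whole frame and verifies the CRC first,
    # so a corrupted frame is dropped here and never propagates.
    return frame if crc_ok(frame) else None

def cut_through(frame: bytes):
    # The switch starts forwarding after reading just the destination MAC,
    # before the CRC trailer has even arrived -- corruption propagates.
    return frame

good = make_frame(b"hello")
bad = bytes([good[0] ^ 0xFF]) + good[1:]  # flip bits, like a bad cable would

print(store_and_forward(bad))   # None: dropped at the first switch
print(cut_through(bad) == bad)  # True: forwarded downstream anyway
```

This is exactly the behavior seen in the symptoms above: the cut-through Nexus 5Ks/2Ks passed the bad frames along, so the CRC errors showed up on many downstream interfaces instead of only at the faulty cable.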

Resolution:

Worked with a friendly Network Engineer to identify the device and interface that was the source of these propagating errors, and then replaced the cable.

There were a couple of good sources of info that were very helpful for this one:

Hello IT folks,

It has been a while since I added new content to the blog.  This is partly due to the type of IT work I’ve been heavily focusing on for the past several months (making sure projects stay on track, finishing a few design-related diagrams, strengthening my PowerShell skills, and getting a better understanding of the [on-prem] CMP offerings currently on the market).  For whatever reason, this has given me a case of blogger’s block.

As I mentioned in a previous blog post, I enjoy geeking out to various podcasts during my work commute (when I’m not in the mood to listen to music).  I just updated my “Podcasts” page with a new one that I really like – Datanauts.  If your work involves anything IT datacenter related and you haven’t listened to it yet, definitely check it out!  Though I currently specialize in one or two areas of datacenter technology, I LOVE discussing, thinking, and learning about the “big picture”.

On another note – if you haven’t checked out TechReckoning, it’s definitely worth a visit.  The newsletter and the Geek Whisperers podcast are useful when you’re trying to keep up with the latest in the industry.

During the setup of a new vSphere cluster, I had to troubleshoot an issue that was causing latency for the NFS datastores. This new vSphere cluster was attached to a newly deployed NetApp storage array with essentially no workloads yet hosted on it.

One of the first symptoms of the latency was noticed when browsing the NFS datastores in the vSphere client GUI. After clicking on Browse Datastore, and then clicking on any VM folder, the window would display “Searching datastore…” and often take 40-50 seconds to display the files. Further testing with the NFS datastores confirmed that slowness was also seen when performing file copy operations, or certain disk intensive operations in the guest OS.

Several weeks were spent troubleshooting and working with vendors to try to determine the cause. It was found that when the configuration on the NetApp storage array side (and switches) was changed to use access ports instead of trunk ports, the issue went away. In addition, the issue did not occur when one of the hosts and the NetApp storage array were connected to the same datacenter switch. No jumbo frames were in this equation.

The cause of the issue was found to be a conflict between the default behavior of NetApp when using VLAN tagging and the Nexus core switch QoS configuration. By default, NetApp assigns a CoS value of 4 when using VLAN tagging (trunk ports). This caused the NFS storage traffic to get placed in a bandwidth-limited queue on the Nexus core switch. A workaround was implemented on the switches for the storage array interfaces that essentially changed the CoS value to fit the network configuration in the environment.
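As a rough illustration of the mismatch, here's a toy Python sketch; the CoS-to-queue mapping and bandwidth percentages are hypothetical, not taken from an actual Nexus QoS policy:

```python
# Toy model of the mismatch. The CoS-to-queue mapping and bandwidth
# percentages below are made up for illustration only.
COS_TO_QUEUE = {
    0: "class-default",  # untagged/best-effort traffic, bulk of the bandwidth
    4: "limited-queue",  # the queue the switch policy starved of bandwidth
}
QUEUE_BANDWIDTH_PCT = {"class-default": 80, "limited-queue": 10}

def queue_for_frame(cos: int) -> str:
    """Return the egress queue a frame lands in based on its CoS marking."""
    return COS_TO_QUEUE.get(cos, "class-default")

# On a trunk (VLAN-tagged) port, NetApp marks traffic with CoS 4 by
# default, so the NFS traffic lands in the starved queue:
print(queue_for_frame(4))  # limited-queue
# On an access port the traffic is untagged (effectively CoS 0):
print(queue_for_frame(0))  # class-default
```

That also explains why switching to access ports "fixed" it: untagged frames carry no CoS marking, so the traffic never hit the limited queue.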

Here are some links that helped to connect the dots when researching the issue:

https://kb.netapp.com/support/index?page=content&id=3013708&actp=RSS

http://keepingitclassless.net/2013/12/default-cos-value-in-netapp-cluster-mode/

https://communities.cisco.com/message/137658

http://jeffostermiller.blogspot.com/2012/03/common-nexus-qos-problem.html

http://www.beyondvm.com/2014/07/ucs-networking-adventure-a-tale-of-cos-and-the-vanishing-frames/

For those with networks that use Cisco OTV with Nexus 7Ks to extend Layer 2 connectivity between sites, be aware that there is a bug (CSCuq54506) that may cause brief network connectivity issues for VMs that are vMotioned between the sites. The first symptom you may notice is that a VM appears to drop pings or lose connectivity for 1-2 minutes after it is vMotioned between sites.  Following a vMotion, the destination ESXi host sends RARP traffic to notify switches and update their MAC tables. When this bug occurs, the RARP traffic/updates do not make it to all of the switches at the source site.  (Note: Since not having portfast enabled on the source or destination host switch ports can cause similar symptoms, it’s a good idea to confirm portfast is enabled on all of the ports.)

Troubleshooting steps that can be used from the VMware side to help identify whether you are hitting the bug:

  • Start running two continuous pings to a test VM:  one continuous ping from the site you are vMotioning from, and one continuous ping from the site you are vMotioning to.
  • vMotion the test VM from one site to the other.
  • If you see the continuous ping at the source site (site VM was vMotioned from) drop for 30-60 seconds, but the continuous ping at the destination site (site VM was vMotioned to) stays up or only drops a ping packet, then you may want to work with Cisco TAC to determine if the root cause is this bug.
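The steps above can be made less eyeball-driven by logging the reply timestamps from the two continuous pings and checking for the telltale pattern. This is a hypothetical helper, not part of any VMware or Cisco tooling:

```python
def longest_gap(reply_times: list) -> float:
    """Longest stretch, in seconds, between consecutive successful ping replies."""
    if len(reply_times) < 2:
        return 0.0
    return max(b - a for a, b in zip(reply_times, reply_times[1:]))

# Hypothetical reply timestamps (seconds) logged around a test vMotion.
source_site = [0, 1, 2, 3, 48, 49, 50]  # ~45 s hole at the source site
dest_site = [0, 1, 2, 4, 5, 6]          # at most a single lost packet

if longest_gap(source_site) > 30 and longest_gap(dest_site) < 5:
    print("Pattern matches the RARP bug; worth opening a case with Cisco TAC")
```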


Tips when preparing for the vCAC 6.1 upgrade:

  • You can typically find some good blog posts with step-by-step guides for this type of work; http://virtumaster.com/ is just one example.  It can also be helpful to do a quick search to see what KBs have come out so far for the new release. Get to googling!
  • There are some very friendly folks in the virtualization community that enjoy sharing their tech experiences with others, so keep an eye on social media chatter regarding the upgrade (ahem, Twitter).
  • Test the upgrade in the lab. (Though, unfortunately, if your lab is not identical to production, you may not be able to test for all potential issues. In my case, the biggest bug I hit did not occur in the lab but was seen in production.)
  • Snapshot, snapshot, snapshot. Especially the IaaS server. And back up the IaaS DB.

A few issues to be aware of when upgrading to 6.1:

Issue #1 –  If you try to upgrade the ID appliance and get “Error: Failed to install updates (Error while running installation tests…” and you see errors in the logs about conflicts with the certificate updates, try the following:

SSH to the ID appliance and run rpm -e vmware-certificate-client. Then try running the update again. Thanks to @stvkpln for sharing the fix.

Issue #2 – If you are going through the IaaS upgrade and get the following error near the end of the wizard (before the upgrade install even begins):

[Screenshot of the exception message]

Check to see how many DEM registry keys you have in the following location:  HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware vCloud Automation Center DEM

If you see extra DEM or DEO keys (e.g. you only have 2 DEM workers installed on the server but you see 3 DEM worker keys), this may be related to your issue.

Workaround:

Option 1 (remove duplicate keys):

  • Export the DEM registry key to back it up: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware vCloud Automation Center DEM
  • Check the contents of the registry keys that match your installed DEMs for version and install path information.
  • Remove the duplicate DEMInstance keys for the DEO and DEM.
  • Run the upgrade.

Option 2 (remove/reinstall):

  • Remove all DEMs from the machine.
  • Remove the DEM registry keys.
  • Run the upgrade.
  • Install the 6.1 DEMs with the same names as the 6.0 DEMs.

I would recommend going with Option 2 (especially if it is difficult to confirm which keys match the installed DEMs by looking at their contents). Thanks to @virtualjad and VMware engineering for sharing the workaround.
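If you do go with Option 1, the check boils down to diffing the DEMInstance key names against the DEMs you know are installed. A small Python sketch of that comparison (the key names here are made up for illustration):

```python
def find_stale_keys(registry_keys, installed_dems):
    """Return DEMInstance key names with no matching installed DEM/DEO."""
    return [k for k in registry_keys if k not in installed_dems]

# Hypothetical names for illustration; the real keys live under
# HKLM\SOFTWARE\Wow6432Node\VMware, Inc.\VMware vCloud Automation Center DEM
installed = {"DEM-Worker-01", "DEM-Worker-02", "DEO-01"}
in_registry = ["DEM-Worker-01", "DEM-Worker-02", "DEM-Worker-03", "DEO-01"]

print(find_stale_keys(in_registry, installed))  # ['DEM-Worker-03']
```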

Issue #3 – Make sure to import the vCO package update for vCAC 6.1, as mentioned in KB 2088838, especially if you use stub workflows.


If you are importing the HP Smart Array (hpsa) controller driver in Update Manager (VUM), be aware of the following issue, especially if your VUM repository already contains a previous version of the HP hpsa driver.

Recently, I imported the latest HP Smart Array driver (hpsa version 5.0.0.60-1) in VUM, created a new baseline, and attached it to hosts. I noticed the compliance for this new baseline changed to green/compliant right away, though I knew these hosts did not have the latest hpsa driver update. After scanning the hosts again, the baseline still showed as green and the details showed “Not Applicable”. When I took a closer look at the VUM repository, the old version of the hpsa driver, 5.0.0.44, which I recalled was in the repository before I imported the newer version, was no longer there. Only one HP hpsa host extension could be seen in the VUM repository GUI, and its release date was consistent with the hpsa 5.0.0.60-1 driver.  It was almost as if the two hpsa driver versions had merged in VUM.

The root cause? It appears that the HP hpsa patch name and patch ID in the metadata were the same across various hpsa driver releases and had not been made unique by the vendor (hpsa driver for ESX, patch ID hpsa-500-5.0.0 for multiple versions). In addition, VUM did not warn me that I was trying to import a patch with a patch name and patch ID that were already in the VUM repository.

[Screenshot]

Since Update Manager 5.0 does not let you remove items from the repository, the solution that is often proposed is to reinstall Update Manager and start clean. However, I was provided with a workaround for the issue that did not require an immediate reinstall of VUM, though eventually a reinstall will be required to clean up the repository. Hopefully, HP and VMware engineers will make the improvement needed to prevent this type of issue (and make it easier to remove items from the repository).

Workaround – Before you import the latest hpsa driver bundle, extract the files from the bundle, tweak the lines noted below in the two .xml files, and zip the files again.

For example, for the hpsa 5.0.0.60-1 bundle, modify the id tag in the following two files (i.e. add the exact version number to the end of the id):

• hpsa-500-5.0.0-1874739\hpsa-500-5.0.0-offline_bundle-1874739\metadata\vmware.xml
• hpsa-500-5.0.0-1874739\hpsa-500-5.0.0-offline_bundle-1874739\metadata\bulletins\hpsa-500-5.0.0.xml

[Screenshot]

As you import the modified bundle in VUM, you should be able to see in the import wizard whether this worked, because the patch ID shown in the wizard should match the patch ID you assigned in the steps above.  Try these steps in the lab, use at your own risk, and let me know if you find any issues with the workaround.
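The id tweak itself can be scripted. The sketch below assumes a heavily simplified element layout; inspect the actual vmware.xml and bulletin .xml from the bundle for the real structure before relying on anything like this:

```python
import xml.etree.ElementTree as ET

def append_version_to_id(xml_text: str, version: str) -> str:
    """Append the full driver version to every <id> element's text."""
    root = ET.fromstring(xml_text)
    for id_el in root.iter("id"):
        if id_el.text and not id_el.text.endswith(version):
            id_el.text = f"{id_el.text}-{version}"
    return ET.tostring(root, encoding="unicode")

# Simplified stand-in for the bulletin metadata; the real files have much
# more structure around the id element.
sample = "<bulletin><id>hpsa-500-5.0.0</id></bulletin>"
print(append_version_to_id(sample, "5.0.0.60-1"))
# <bulletin><id>hpsa-500-5.0.0-5.0.0.60-1</id></bulletin>
```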

Speaking of the HP Smart Array controller ESXi driver, make sure to check out this advisory if you have not seen it already: http://bit.ly/1mVFHQK

When migrating virtual or physical servers from one data center to another, especially if you are moving from Cisco Catalyst to Nexus switches, it’s helpful to be aware of the concept of Proxy ARP.  Here is a link to a Cisco article that explains Proxy ARP:

http://www.cisco.com/c/en/us/support/docs/ip/dynamic-address-allocation-resolution/13718-5.html

If Proxy ARP is enabled on a switch/router, it can hide or mask misconfigured default gateways/subnet masks on servers.  A switch/router with this setting enabled can help servers reach devices in other subnets, even if the configured default gateway on a server is incorrect. Once the misconfigured servers are moved to network equipment that has Proxy ARP disabled, the servers will no longer be able to communicate with devices in other subnets.  Proxy ARP is enabled by default on Catalyst switches and disabled by default on Nexus switches.  Make sure to review the Proxy ARP settings in both the originating and destination data center.  If this setting will be disabled at the destination site, run a script to check default gateways and subnet masks on servers before beginning a migration.
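The pre-migration check at the end of that paragraph is easy to script. A minimal Python sketch of the per-server test (you'd feed it the values collected from each server):

```python
import ipaddress

def gateway_ok(ip: str, mask_or_prefix: str, gateway: str) -> bool:
    """True if the configured default gateway is inside the host's own subnet."""
    network = ipaddress.ip_interface(f"{ip}/{mask_or_prefix}").network
    return ipaddress.ip_address(gateway) in network

print(gateway_ok("10.1.2.50", "255.255.255.0", "10.1.2.1"))  # True: sane config
print(gateway_ok("10.1.2.50", "255.255.255.0", "10.1.3.1"))  # False: Proxy ARP may be masking this
```

A server that fails this check but can still reach other subnets today is a likely candidate for breaking once it lands behind Nexus gear with Proxy ARP disabled.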

If you are a VMware user and have not checked out your local VMUG, I highly encourage you to attend an event in your area (www.vmug.com).  It’s a great way to connect with other users in the VMware community and hear about the latest products/solutions from vendors.

They have recently launched a new program called FeedForward.  Mike Laverick, who has been promoting this initiative in the community, has a blog post about it here:  http://www.mikelaverick.com/2014/04/coming-now-feedforward/.  It’s a mentoring program for users who are interested in sharing a presentation at a VMUG event.  While encouraging users to share with others in the VMUG, the program also gives them an opportunity to get feedback on their presentation before delivering it.  It’s great to see an initiative that helps support and drive user participation, since I found this to be lacking in my own limited experience attending VMUGs.  Sure, hearing from vendors is important, but it is extremely helpful to hear directly from other admins and engineers who are using the technology in the field (the good and the bad!).  After all, why spend unnecessary time trying to “reinvent the wheel” when there may be others out there who have encountered similar issues?

Full Disclosure: I’ve never presented or co-presented at a VMUG, and unfortunately due to upcoming obligations, it looks like I won’t be able to volunteer anytime soon. However, it’s definitely something I would consider in the future, and I’ll definitely blog here about my experience if that happens.

If you are interested in presenting at a VMUG or being a mentor, you can sign up at the following page:  http://www.vmug.com/feedforward.

…written from the perspective of a Virtualization Engineer.  A very special thanks to Networking Guru @cjordanVA for being a key contributor on this post.

Overlay Transport Virtualization (OTV), which is a Cisco feature that was released in 2010, can be used to extend Layer 2 traffic between distributed data centers.  The extension of Layer 2 domains across data centers may be required to support certain high availability solutions, such as stretched clusters or application mobility.  Instead of traffic being sent as Layer 2 across a Data Center Interconnect (DCI), OTV encapsulates the Layer 2 traffic in Layer 3 packets.  There are some benefits to using OTV for Layer 2 extension between sites, such as limiting the impact of unknown-unicast flooding.  OTV also allows for FHRP Isolation, which allows the same default gateway to exist in the distributed data centers at the same time.  This can help reduce traffic tromboning between sites.

When planning an OTV implementation in an enterprise environment with existing production systems, here are a few things to include in the testing phase when collaborating with other teams:

  • Set up a conference call for the OTV implementation day and share this information with the Infrastructure groups involved in the implementation and testing, e.g. Network, Storage, Server, and Virtualization engineers.  This will allow the staff involved to easily communicate when performing testing following the change.
  • Test pinging physical server interfaces by IP address at one datacenter from the other datacenter, and from various subnets.  Can you ping the interface from the same site, but not from the other site? (Make sure to establish a baseline before implementation day.)  Is your monitoring software at one site randomly alerting that it cannot ping devices at the other site?
  • If your vCenter Server manages hosts located in multiple data centers, was vCenter able to reconnect to ESXi hosts at the other datacenter (across the DCI) after OTV was enabled?
  • If you have systems that replicate storage/data between the data centers, test this replication after OTV is enabled and verify it completes successfully.


Be aware of a couple of gotchas:

ARP aging timer/CAM aging timer – Make sure to set the ARP aging timer lower than the CAM aging timer to prevent traffic from getting randomly blackholed.  This is an issue to watch out for if OTV is being implemented in a mixed Catalyst/Nexus environment, and it will not likely be an issue if the environment is all Nexus.  The default aging timers depend on the Cisco platform; the default for a Catalyst 6500 is different from the default for a Nexus 7000.
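The rule can be expressed as a one-line check. The values below are commonly cited defaults (a 4-hour ARP timeout on a Catalyst 6500 vs. a 5-minute CAM aging time); verify the actual timers on your own gear:

```python
def aging_timers_safe(arp_secs: int, cam_secs: int) -> bool:
    """The ARP entry must expire (and trigger a re-ARP) before the CAM
    entry it depends on ages out, or traffic can be blackholed."""
    return arp_secs < cam_secs

# Commonly cited defaults; confirm on your own platforms.
catalyst_arp_default = 14400  # 4-hour ARP timeout
cam_aging_default = 300       # 5-minute MAC/CAM aging time

print(aging_timers_safe(catalyst_arp_default, cam_aging_default))  # False: at risk
```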

Symptoms of an aging timer issue:  You will more than likely see failures during the ping tests mentioned above, or you may see intermittent issues establishing connectivity to certain hosts.

MTU Settings – Since OTV adds additional bytes to the IP header and also sets the do-not-fragment (“DF”) bit, a larger MTU will need to be configured on every interface along the path of an OTV-encapsulated packet.  Check the MTU settings prior to implementation, and again if issues arise when OTV is rolled out.  If the MTU settings were properly configured but issues are still encountered, consider rebooting the OTV edge devices as a troubleshooting step to verify the MTU settings actually applied and did not get stuck (it’s happened).
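For the math: the OTV encapsulation overhead is commonly cited as 42 bytes per packet; verify the exact figure for your platform and transport. A quick sketch:

```python
OTV_OVERHEAD_BYTES = 42  # commonly cited OTV encapsulation overhead per packet

def required_transport_mtu(payload_mtu: int = 1500) -> int:
    """Minimum MTU needed on every interface along the OTV packet's path.

    OTV sets the DF bit, so an undersized link drops the packet instead
    of fragmenting it.
    """
    return payload_mtu + OTV_OVERHEAD_BYTES

print(required_transport_mtu())      # 1542 for a standard 1500-byte MTU
print(required_transport_mtu(9000))  # 9042 if jumbo frames are in play
```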

Symptoms of an MTU-related issue:  If you have a vCenter server in one data center that manages hosts at the other datacenter, it may not be able to reconnect to the hosts at the other data center.  Storage replication may not complete successfully after OTV has been enabled.

The other day while logged into my vCenter Orchestrator 5.5 client, I noticed that there were several packages and workflows missing from the library.  The images below show how it looked with the missing packages/workflows.  For example, you can see most of the subfolders are missing from the vCenter folder, and the vCenter package is completely gone from the library.

[Images: the library with missing workflows and packages]

I logged into the Orchestrator configuration web interface and verified that everything looked correct and “green”, and then I tried rebooting the appliance to see if that would make the missing contents appear in the GUI.  After searching VMware communities and talking with GSS, I found out that this is apparently a common issue with vCO 5.5.

The following steps resolved the issue for me:

  • Log into the vCO configuration web interface.
  • Go to the ‘Troubleshooting’ tab and select ‘reset to current version’ to reinstall the plugins.
  • Go into ‘Startup Options’ and select ‘Restart the Configuration server’.
  • Log back into the vCO configuration web interface, go to ‘Startup Options’ again, and select ‘Restart Service’.

Link to communities thread related to this issue: https://communities.vmware.com/thread/468000

The images below show how it all looked after completing these steps.  Back to normal!

[Images: the library with workflows and packages restored]

Oh, and by the way, if you’re impatient like me and you try to log in to vCO immediately after completing the steps above, you may get the following error.  If you do, wait a few minutes and try again.

[Screenshot of the login error]