vSphere + Nimble: Standard vSwitch migration to vDS

More and more customers are asking about using the VMware vDS (vNetwork Distributed Switch) with Nimble Storage – let's spend a few minutes going over some common questions as well as usage best practices. This material will also be folded into the next version of our VMware best practices guide.

Is vDS supported by Nimble?
Yes, vDS is supported just like a standard vSwitch. In fact, we recommend using a vDS if you have the appropriate edition of vSphere and a large environment (10+ ESXi hosts).

Why vDS?
Very simple: instead of creating the same vSwitch and port groups on each and every ESXi host, you create a single vDS along with the required port groups (distributed port groups, if you will), then "attach" each ESXi host to the vDS to inherit the network configuration. And if you care about QoS features such as NetIOC, or LACP for VM traffic, those are only available on a vDS.
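To make that "configure once, attach many hosts" model concrete, here is a minimal pyVmomi sketch (the vCenter address and credentials are placeholders, and it assumes pyVmomi is installed) that lists every vDS known to vCenter, its uplink names, and the hosts attached to it:

```python
#!/usr/bin/env python
# Minimal sketch: list each vDS, its uplink names, and the hosts attached to it.
# Assumes pyVmomi is installed; the vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    uplinks = dvs.config.uplinkPortPolicy.uplinkPortName
    hosts = [h.name for h in dvs.summary.hostMember]
    print("%s (uplinks: %s) -> attached hosts: %s"
          % (dvs.name, ", ".join(uplinks), ", ".join(hosts) or "none"))
view.Destroy()
Disconnect(si)
```

The later sketches in this post take a `host` argument; that is a vim.HostSystem object obtained with the same kind of container view (just substitute vim.HostSystem for vim.DistributedVirtualSwitch).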

How do I use it with Nimble? What are the best practices?
Here's an example based on migrating from a standard vSwitch to a vDS.

The "before vDS" configuration follows the existing best practice: two standard vSwitches with one uplink each, and both iSCSI vmkernel interfaces bound to the software iSCSI initiator (port binding):

[Screenshot: standard vSwitch configuration before the vDS migration]
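If you want to capture that "before" picture from a script rather than a screenshot, here is a hedged pyVmomi sketch that prints the standard vSwitches, their uplinks, and the vmkernel ports currently bound to the software iSCSI adapter. `host` is a vim.HostSystem obtained as described after the earlier sketch; nothing else here is taken from the screenshots.

```python
from pyVmomi import vim

def show_before_state(host):
    """Print standard vSwitches, their uplinks, and the vmkernel ports bound to
    the software iSCSI adapter. host is a vim.HostSystem (see earlier sketch)."""
    for vsw in host.config.network.vswitch:
        # pnic entries are keys like 'key-vim.host.PhysicalNic-vmnic1'; keep the device name.
        uplinks = [p.split("-")[-1] for p in (vsw.pnic or [])]
        print("vSwitch %s  uplinks: %s  portgroups: %d"
              % (vsw.name, ", ".join(uplinks), len(vsw.portgroup or [])))
    for hba in host.config.storageDevice.hostBusAdapter:
        # Only the software iSCSI adapter uses vmkernel port binding here.
        if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
            bound = host.configManager.iscsiManager.QueryBoundVnics(
                iScsiHbaName=hba.device)
            print("%s bound vmkernel ports: %s"
                  % (hba.device, ", ".join(b.vnicDevice for b in bound)))
```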
Let's start by adding a vDS, following the wizard in vCenter.

NOTE: This is by no means a step-by-step guide; I only highlight key considerations specific to iSCSI connectivity.  Everything else was left at the defaults except for the number of uplinks; we specify "2" because that's the number of vmnics we have dedicated to iSCSI storage:

[Screenshot: New vDS creation wizard]
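The wizard step above can also be scripted. Below is a pyVmomi sketch of the same thing, assuming `network_folder` is the datacenter's network folder (datacenter.networkFolder); the switch and uplink names are only illustrations, not required values.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def create_iscsi_vds(network_folder, dvs_name="iSCSI-vDS"):
    """Create a vDS with two uplinks, mirroring the wizard settings above.
    network_folder is the datacenter's network folder; names are placeholders."""
    config = vim.DistributedVirtualSwitch.ConfigSpec()
    config.name = dvs_name
    # Two uplinks: one per vmnic dedicated to iSCSI storage.
    config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["dvUplink1", "dvUplink2"])
    spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=config)
    task = network_folder.CreateDVS_Task(spec)
    WaitForTask(task)
    return task.info.result  # the new vim.DistributedVirtualSwitch object
```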

Before adding ESXi hosts to the vDS, I recommend creating two iSCSI distributed port groups with meaningful names.  That helps keep track of which vmnic to set as "active" in the NIC teaming policy.  For my configuration, I simply use vmkiscsi1 and vmkiscsi2.  NOTE: Be sure to check "Customize default policies configuration" in the wizard so you can override the NIC teaming policy (remember, when a single switch carries both iSCSI vmkernel port groups, each port group should have only one active adapter, with the other set to "Unused", not even "Standby").

[Screenshots: new distributed port group wizard with custom NIC teaming settings]
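For reference, here is what that port group step looks like scripted with pyVmomi. It is only a sketch: the port group and uplink names mirror my screenshots, and the number of ports is arbitrary. The important part is the teaming override, with exactly one active uplink and nothing in standby, matching the iSCSI best practice above.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def add_iscsi_portgroups(dvs):
    """Create vmkiscsi1/vmkiscsi2 on the given vDS; each is pinned to a single
    active uplink and the other uplink is left unused (not standby)."""
    specs = []
    for name, active_uplink in (("vmkiscsi1", "dvUplink1"),
                                ("vmkiscsi2", "dvUplink2")):
        order = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
            inherited=False,
            activeUplinkPort=[active_uplink],
            standbyUplinkPort=[])  # nothing in standby; the other uplink stays unused
        teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
            inherited=False, uplinkPortOrder=order)
        port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
            uplinkTeamingPolicy=teaming)
        specs.append(vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
            name=name,
            type=vim.dvs.DistributedVirtualPortgroup.PortgroupType.earlyBinding,
            numPorts=8,
            defaultPortConfig=port_config))
    WaitForTask(dvs.AddDVPortgroup_Task(specs))
```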

Now you are ready to add the ESXi host to the newly created vDS.  I personally recommend migrating one vmkernel port at a time.  It is certainly not mandatory, but I like the peace of mind of having a second, working path that I have not touched; if anything goes wrong, I still have storage access.  Sure enough, I screwed up the first time around.  Here's my story:

During the migration, I made the mistake of not assigning vmnic1 to an uplink for the vmkiscsi1 dvPortGroup, and as a result, the following warning came up during the impact analysis:

[Screenshot: impact analysis warning from the first migration attempt]


Had I read the warning messages carefully, I would have gone back and corrected the mistake of not assigning vmnic1 to uplink 1 of the vDS.  However, I didn't, and below was the consequence: the vmkernel interface lost its port group because it was migrated to a vDS with no physical uplink!

[Screenshot: iSCSI port binding failure on the migrated vmkernel interface]

To correct this, I had to manually assign the vmnic to the uplink of the vDS, followed by manually removing the old vmk1 port, adding a new one, and rescanning the adapter to get back to a happy, healthy state.  All of these steps could have been avoided had I not made the mistake and ignored the warning!
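For the record, the rebind-and-rescan part of that cleanup can be scripted too. This is a hedged sketch only: the adapter name vmhba33 and the vmk name are examples, `host` is a vim.HostSystem as in the earlier sketches, and it assumes the replacement vmkernel port has already been re-created on the correct dvPortGroup.

```python
from pyVmomi import vim

def rebind_iscsi_vmk(host, hba_name="vmhba33", vmk_name="vmk1"):
    """Re-bind a repaired vmkernel port to the software iSCSI adapter and rescan.
    hba_name/vmk_name are illustrative; host is a vim.HostSystem (see earlier)."""
    iscsi_mgr = host.configManager.iscsiManager
    storage = host.configManager.storageSystem
    # Bind the re-created vmkernel port to the software iSCSI adapter again.
    iscsi_mgr.BindVnic(iScsiHbaName=hba_name, vnicDevice=vmk_name)
    # Rescan the adapter so the paths through the repaired port come back.
    storage.RescanHba(hbaDevice=hba_name)
```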

The second time around, for the second vmkernel port, I learned my lesson and made sure that vmnic2 (used for vmk2) was assigned to dvUplink2 on the vDS, along with the correct dvPortGroup assignment:

[Screenshot: migration of the second vmkernel port with the correct uplink and dvPortGroup assignments]
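If you want to double-check that uplink assignment without clicking through the UI, a small sketch like this (again, `host` is a vim.HostSystem from the earlier connection code) prints which physical NICs back each vDS proxy switch on the host; the properties used are standard vSphere API fields, nothing Nimble-specific.

```python
def show_proxy_switch_uplinks(host):
    """Print each vDS proxy switch on the host and the vmnics backing its uplinks."""
    for proxy in host.config.network.proxySwitch:
        # pnic entries are keys like 'key-vim.host.PhysicalNic-vmnic2'; keep the device name.
        nics = [p.split("-")[-1] for p in (proxy.pnic or [])]
        print("%s: uplinks backed by %s" % (proxy.dvsName, ", ".join(nics) or "none"))
```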

The impact analysis this time around looks much happier!

[Screenshot: impact analysis passing for the second vmkernel port]

That's it!  You have successfully migrated from a standard vSwitch to a vDS without causing any downtime for VM storage.  If you are still not used to the web client in vSphere 5.5, you can still take a peek at the vDS from the thick client after migration:

[Screenshot: the completed vDS configuration viewed from the vSphere (thick) client]
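And if you'd rather verify from a script than from the thick client, this final sketch (same assumptions as before: `host` is a vim.HostSystem, and the software iSCSI adapter is found by type) confirms that each vmkernel port now sits on a distributed port and is still bound to the software iSCSI adapter:

```python
from pyVmomi import vim

def verify_migration(host):
    """Confirm vmkernel ports moved to the vDS and stayed bound to software iSCSI."""
    for vnic in host.config.network.vnic:
        if vnic.spec.distributedVirtualPort is not None:
            print("%s is on a distributed port (portgroup key %s)"
                  % (vnic.device, vnic.spec.distributedVirtualPort.portgroupKey))
        else:
            print("%s is still on standard portgroup '%s'"
                  % (vnic.device, vnic.portgroup))
    for hba in host.config.storageDevice.hostBusAdapter:
        if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
            bound = host.configManager.iscsiManager.QueryBoundVnics(
                iScsiHbaName=hba.device)
            print("%s still bound to: %s"
                  % (hba.device, ", ".join(b.vnicDevice for b in bound)))
```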
