Another deeper look at deploying Nimble with Cisco UCS

We continue to get customer inquiries on the specifics of deploying Nimble with Cisco UCS – particularly on what the service profile should look like for iSCSI vNICs.  So here we go, we will dive straight into that bad boy:

We will start with the Fabric Interconnects, then move to the vNICs, then the Nimble array, and last but not least, the vSphere vSwitch.

1.  Fabric Interconnect

  • Configure the cluster for the FIs
    • The FIs should be configured in cluster mode, with a primary and a subordinate (clustering the FIs does NOT mean data traffic flows between the two – it is an active/passive cluster with only management traffic flowing between the pair)
  • Configure appliance ports
    • The ports connected to the Nimble data interfaces should be configured in appliance port mode – why, you may ask?  Prior to the UCSM 1.4 release, ports on the FI were plain Ethernet ports that would receive broadcast/multicast traffic from the Ethernet fabric.  Appliance ports are designed specifically to accommodate Ethernet-based storage devices such as Nimble, so the array's ports don't get treated as just another host/VM hanging off an Ethernet uplink port
    • Here's what ours look like for each FI (under "Physical Ports" for each FI in the "Equipment" tab):
      • FI-A (connect one 10G port from each controller to FI-A)
      • FI-B (connect the remaining 10G port from each controller to FI-B)
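For reference, appliance ports can also be created from the UCSM CLI. This is a rough sketch only – the fabric, slot, and port numbers below are placeholders, not our actual ports (we did ours in the GUI):

```shell
UCS-A# scope eth-storage
UCS-A /eth-storage # scope fabric a
UCS-A /eth-storage/fabric # create interface 1 21
UCS-A /eth-storage/fabric/interface # commit-buffer
```

Repeat on fabric B for the port wired to the other Nimble controller interface.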
2.  vNICs (it's important to pin the iSCSI vNICs to a specific FI)

In our service profile, we have two vNICs defined for iSCSI traffic, and each vNIC is pinned to a specific FI.

Here’s what the vNIC setting looks like for each vNIC dedicated for iSCSI (under “General” tab):

We use VLANs 27 & 28, representing the two subnets we have.

Why didn't we check "Enable Failover"?  Simply put, we let the ESX SATP/PSP handle failover for us.  More on this topic is discussed in my joint presentation with Mostafa Khalil from VMware.
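If you want a feel for what that looks like on the host, here is a hedged sketch using ESXi 5.x esxcli syntax – the SATP name and device ID below are placeholders you would verify on your own host, not values from our setup:

```shell
# List the SATPs so you can see which one is claiming the Nimble volumes
esxcli storage nmp satp list

# Make Round Robin the default PSP for that SATP (SATP name is a placeholder)
esxcli storage nmp satp set --satp VMW_SATP_DEFAULT_AA --default-psp VMW_PSP_RR

# Or set Round Robin on one device at a time (device ID shortened here)
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR
```

With the PSP handling path failover, the fabric-level "Enable Failover" checkbox stays off.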

3.  Nimble Array

Notice we have subnets 127 & 128?  Why, you may ask – that is so we can leverage both FIs for iSCSI data traffic.

4.  vSphere vSwitch

We will need two VMkernel ports for data traffic, each configured on a separate subnet to match our design.  You could use either a single vSwitch or two vSwitches.  Note that if you use a single vSwitch, the NIC teaming policy for each VMkernel port must be overridden so that each port has only one active uplink – the vmnic pinned to that subnet's fabric.
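Here is a sketch of the single-vSwitch layout from the command line (ESXi 5.x esxcli syntax; the vSwitch/port group names, IP addresses, and vmhba number are assumptions for illustration, not our exact values):

```shell
# One port group + VMkernel port per iSCSI subnet
esxcli network vswitch standard portgroup add --portgroup-name iSCSI-A --vswitch-name vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name iSCSI-B --vswitch-name vSwitch1
esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-A
esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-B
esxcli network ip interface ipv4 set --interface-name vmk1 --ipv4 10.0.127.11 --netmask 255.255.255.0 --type static
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 10.0.128.11 --netmask 255.255.255.0 --type static

# Override teaming: iSCSI-A rides only vmnic1 (FI-A), iSCSI-B only vmnic2 (FI-B)
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-A --active-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-B --active-uplinks vmnic2

# Bind each VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2
```

The one-active-uplink override is what makes port binding legal here: each VMkernel port has exactly one uplink, so the iSCSI stack sees two independent paths, one per fabric.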

How the hell do I know vmnic1 & vmnic2 are the correct vNICs dedicated for iSCSI?  Please don't share this secret :-)  If you click on "vNICs" under your service profile/service profile template, you get to see the "Desired Order" in which they will show up in ESX – remember, ESX assigns vmnic numbering based on the PCI bus number.  A desired order of "1" will show up as vmnic0, so our vNIC iSCSI-A with a desired order of "2" will show up as vmnic1, and so forth with vNIC iSCSI-B (vmnic2).
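If you want to double-check the mapping from the host side (my suggestion, not part of the original ordering trick), you can match MAC addresses – ESXi reports each vmnic's MAC, and UCSM shows the MAC it assigned to each vNIC:

```shell
# Lists each vmnic with its PCI address, driver, link state, and MAC address;
# compare the MACs against the vNIC MACs shown in UCSM
esxcli network nic list
```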

That’s it, wish I could make it more complicated.  If you missed my post on booting ESXi from SAN on UCS, check out my previous post here.
