Nimble 2.0 PSP Integration with VMware vSphere Part I (VIB Install)

I finally got a chance to sit down and play with ESXi 5.5 and a Nimble array running 2.0 code – instead of installing yet another Windows VM and SQL Server for vCenter, I decided to try out the Linux vCenter VA.  It was actually quite fast and easy.  Total deployment time for vCenter was 10 minutes (including the time to download the VA!).  I simply did the following:

-install ESXi 5.5 + the vSphere 5.5 Client (you’ll need an ESXi host to place your vCenter VA on, and you’ll need the vSphere Client to import the OVF)

-import the Linux vCenter VA via the vSphere Client connected to the ESXi 5.5 host

The only odd thing I ran into was the inability to log in to the vCenter Server via the Web Client – it couldn’t authenticate the root login.  It turned out I had to manually enter a password for the Single Sign-On administrator account.  To do that, remember to stop the vCenter Server service (vpxd) first, then enter the new password in the SSO tab and click “Save Settings”.


One of the key new features we introduced in the Nimble 2.0 software release is the connection manager (NCM) for iSCSI multipathing – before you use it, make sure the ESXi host is configured per recommended practice (my environment has the Nimble connected directly to the UCS Fabric Interconnects, so we have to create two VMkernel ports, one on each subnet).

For simplicity, create two vSwitches, each with one physical uplink adapter:
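In the vSphere Client this is just a few clicks; if you prefer the CLI, here is a rough sketch of the same setup (the vSwitch/portgroup names, vmk numbers and IP addresses are placeholders from my lab – substitute your own):

# esxcli network vswitch standard add --vswitch-name=vSwitch1
# esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
# esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
# esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
# esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.17.1.11 --netmask=255.255.255.0 --type=static

Repeat for vSwitch2 with vmnic2, a second portgroup, and vmk2 on the other subnet.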

How do I know for sure vmnic1 & vmnic2 are the ones to choose for iSCSI traffic?  Very simple – all my ESXi hosts are deployed through a standard service profile that is cloned every time.  So the NIC ordering does not change from one host to another:

Notice that ESXi starts NIC naming at vmnic0, so a vNIC in UCS with a desired order of “1” would be vmnic0.  In my configuration, the iSCSI NICs have a desired order of “2” & “3”, so they are represented as vmnic1 & vmnic2.
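If you want to double-check the mapping from the host side, list the NICs from the ESXi shell and match the MAC addresses against the vNICs in the UCS service profile:

# esxcli network nic list

The output shows each vmnic with its driver, link state and MAC address.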

The next step after the creation of the VMkernel ports is iSCSI port binding – always remember, configuring NIC teaming on the vSwitch does not give you multipathing capability for iSCSI storage.  NIC teaming simply provides NIC failover for your virtual machine/management portgroup traffic.  By binding both iSCSI VMkernel NICs to the software iSCSI adapter, we enable the PSA to control multipathing (and that is only half the story!)
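You can do the binding from the vSphere Client (iSCSI adapter properties > Network Configuration) or from the CLI.  A rough sketch – the software iSCSI adapter name (vmhba33 here) varies from host to host, so list the adapters first:

# esxcli iscsi adapter list
# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
# esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2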

Your setup should look something like this after port binding:
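From the CLI, you can confirm the bindings with:

# esxcli iscsi networkportal list --adapter=vmhba33

Both vmk1 and vmk2 should show up as bound network portals on the software iSCSI adapter.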


Now we are ready to complete the other half of the multipathing story by installing the Nimble PSP plugin to choose the best available path for I/O:

  • Before you install anything, the ESXi host should enter maintenance mode
  • Install the VIB using esxcli, as shown below (you can of course use Update Manager to create a baseline and apply it to all hosts in your cluster)
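Roughly, from the ESXi shell it looks like this (the depot path/filename is a placeholder – point esxcli at the actual NCM bundle you downloaded, stored on a datastore the host can reach):

# esxcli system maintenanceMode set --enable true
# esxcli software vib install -d /vmfs/volumes/datastore1/nimble-ncm-bundle.zip
# esxcli system maintenanceMode set --enable false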

 

  • Always a good practice to check that the VIBs got installed properly:
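A quick way to do that from the shell is:

# esxcli software vib list | grep -i nimble

You should see the Nimble VIB(s) listed with their version and install date.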

 

  • Now every LUN you have presented, or will present, to ESXi will default to the Nimble_PSP_Directed plugin.  Here are some ways you can check:
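One CLI check is to look at the SATP rules the install registers – the default PSP assignment for Nimble volumes normally comes from an SATP claim rule, so the Nimble entry here should name Nimble_PSP_Directed as the default PSP:

# esxcli storage nmp satp rule list | grep -i nimble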


  • #esxcli storage nmp device list  will return the following info about each and every one of your volumes:

Notice Nimble_PSP_Directed as the PSP, the path switch policy is set to switch based on iops, and the iops value is set to ‘0’ (that’s our recommended practice starting with 1.4.x, and the nice thing is you don’t have to set it manually!).  And lastly, both VMkernel ports that we bound earlier show up as working paths.  If you are a UI person, this is what you would see in the vSphere Client:
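If you would rather check a single volume than scroll through the whole list, you can point the same command at one device (the naa ID below is a placeholder for one of your Nimble volumes):

# esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx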

That’s it for part 1 – a deeper dive into the intelligence we inject into the Nimble PSP is coming in the next post.

2 thoughts on “Nimble 2.0 PSP Integration with VMware vSphere Part I (VIB Install)”

  1. Excited for part 2!
    May I know what kind of switch you are using in your lab?
    Do you create VLANs to separate the subnets and VMkernel ports?

    1. Hi Mohd, we connect the Nimble directly to the UCS Fabric Interconnects, and since the pair of FIs doesn’t pass data traffic between each other, we need to configure the Nimble to operate on dual subnets. If you have an access-layer switch and the Nimble is connected at that layer, you don’t need to configure dual subnets. If you do decide to go with a dual-subnet configuration, you don’t need to use VLAN tagging. Hope this helps – part II is getting published on 10/16.
