Nimble 2.0 PSP Integration with VMware vSphere Part II (NCS + PSP deep dive)

Now that we have seen the Nimble PSP VIB get installed on the ESXi host, let’s peel the onion on what happens behind the scenes. When you list the VIB for more information, you will notice two components are installed on the ESXi host:

“NCS” stands for Nimble Connection Service; its job is to govern the number of iSCSI connections to establish between ESXi and the Nimble array(s).

“PSP” stands for Path Selection Plugin; its job is to find the best path on which to issue I/O.
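
If you want to poke at the VIB yourself, both components are visible from the CLI (the VIB name below is illustrative; check the list output for the exact name on your host):

    # List installed VIBs and filter for the Nimble one
    esxcli software vib list | grep -i nimble

    # Show details for the VIB; use the exact name from the list above
    esxcli software vib get -n nimble-ncm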

NCS

Nimble NCS is a CIM provider that runs as an ESXi userworld process (in case you are curious, that means it runs directly on the hypervisor; the VMkernel provides services to both virtual machines (VMX) and userworld processes).  The NCS CIM provider performs the following chores:

1) Monitor the number of iSCSI connections

2) Add new iSCSI connections as needed

3) Remove iSCSI connections that are no longer needed

The following config file dictates how often NCS wakes up to perform the chores above: /etc/nimble/ncm.conf

  • interval:120 –> by default, NCS wakes up every 2 minutes (120 seconds)
  • min_volsessions:2 –> minimum number of sessions to establish per volume is 2
  • max_volsessions:8 –> maximum number of sessions to establish per volume is 8
  • worker_stop:0
  • log_level:1 –> our support team may ask you to raise this to a higher log level for troubleshooting purposes
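
Putting those defaults together, a stock /etc/nimble/ncm.conf would look something like this (a sketch based on the values above):

    # cat /etc/nimble/ncm.conf
    interval:120
    min_volsessions:2
    max_volsessions:8
    worker_stop:0
    log_level:1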

Remember, the total number of iSCSI connections (paths) per ESXi host, irrespective of the number of volumes you have, is 1024.  The max_volsessions setting comes into play when you have a scale-out cluster of four arrays.  The good news for you as a customer is that you don’t have to worry about a thing – simply let NCS handle the iSCSI path connections for you.
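
If you’re curious how close a host is to that 1024-path ceiling, counting the “Runtime Name” lines in the path list output is one quick way to tally paths (any counting method you prefer works just as well):

    # Count the total number of storage paths claimed on this host
    esxcli storage core path list | grep -c "Runtime Name"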

PSP

Next up is the PSP: as you may recall, a PSP is a critical component within the ESXi PSA (Pluggable Storage Architecture).  It works hand-in-hand with the SATP (Storage Array Type Plugin) to ensure ESXi chooses the best path on which to issue I/O.  The Nimble PSP not only sets the optimal config parameters for load distribution (path switching based on IOPS, with the IOPS value set to 0), it also gets bin map information from the Nimble array so ESXi knows which path to send I/O down for it to land on the right array.  Unlike other “2.0” hybrid storage products, Nimble 2.0 is a true scale-out architecture with volume striping across array disk pools.  You no longer have to worry about editing the /etc/esx.conf file to add a PSP rule for Nimble devices, or manually setting the PSP with optimal parameters.  Again, simply install the VIB and let “Nimble_PSP_Directed” handle path selection and load distribution for you.
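
After the VIB is installed, you can confirm the plugin is registered and claiming Nimble volumes from the CLI (the naa device ID below is a placeholder; substitute one of your own Nimble devices):

    # List the registered path selection plugins; the Nimble PSP should show up
    esxcli storage nmp psp list

    # Show the claim rules the VIB added for Nimble devices
    esxcli storage nmp satp rule list | grep -i nimble

    # Check which SATP and PSP a given Nimble volume is claimed by
    esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx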

Enough theory; let’s see it in action.  After the VIB is installed, there’s one little switch to toggle on the array side: the “iSCSI Host Connection Method” setting.


When “iSCSI Host Connection Method” is set to Manual, the ESXi server will discover static paths based on the actual IP addresses of the storage controller data interfaces.
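
You can see these per-interface targets from the CLI as well (adapter names vary from host to host; vmhba33 below is just an example):

    # List static iSCSI targets on the software iSCSI adapter;
    # in Manual mode, expect one entry per controller data interface IP
    esxcli iscsi adapter discovery statictarget list --adapter=vmhba33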

If you stick with the Manual setting for host iSCSI connections on the array side, expect additional paths to be discovered as you scale out to more Nimble arrays.  For example, by adding three additional arrays, each with two data interfaces, you could see six additional paths in a dual-subnet configuration, or 12 additional paths in a single-subnet configuration.  Remember the 1024-path limit in ESXi?  You could end up with more paths than you need, and be unable to provision more volumes to the ESXi cluster.  This is where the Nimble Connection Service comes in.  By enabling “Automatic” mode for host iSCSI connections, we abstract the individual storage controller data interfaces behind a “Virtual Target IP” for the entire group of Nimble arrays.

Now that we have enabled the “Automatic” host connection mode, let’s take a look at the static targets tab for the ESXi software iSCSI initiator.  Note the static targets are no longer tied to the individual controller data interfaces, but rather to the “Virtual Target IP” address.
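
Running the same static target listing from the CLI makes the change obvious; the target addresses should now be the group’s Virtual Target IP rather than the per-controller interface IPs:

    # After switching to Automatic mode, the static targets should all
    # point at the group's Virtual Target IP
    esxcli iscsi adapter discovery statictarget list --adapter=vmhba33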

Here’s a magical “gotcha” you could also check out.  After selecting “Automatic” mode on the Nimble array, click on one of the Nimble datastores and bring up the “Manage Paths” screen.  You will see two additional paths discovered for the volume, then those paths go “dead” and eventually disappear!  Don’t panic!  It simply means NCS is doing its job of cleaning up the paths so that all paths are directed at the virtual target IP.
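
You can watch the same cleanup happen from the CLI (again, the device ID is a placeholder):

    # Show path states for one Nimble volume; during cleanup some paths
    # will briefly report "dead" before NCS removes them
    esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx | grep -E "Runtime Name|State"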

In conclusion, for Nimble 2.0, we recommend that customers:

1) Install the Nimble PSP VIB on every ESXi host connected to Nimble array(s)

2) Enable the “Automatic” iSCSI Host Connection Method on the array side

In the final post, I will dig deeper into how NCS and the PSP perform their magic when we have more than one Nimble array, along with some CLI madness.
