

How to get massive cache for Cisco UCS blades

Quite a few customers have been asking how they could get massive cache on their Cisco UCS blades.  It is really quite simple, with three steps to follow in UCSM:

1.  Configure “Local Storage Configuration Policy”

The B200 M4 blades are equipped with “FlexFlash”, dual SD slots for installing ESXi.  Ensure “FlexFlash” is enabled in the local storage configuration policy; two SD cards are recommended for redundancy.  Leveraging FlexFlash to install and boot ESXi eliminates the need to configure RAID for the local SSDs, so “No RAID” can be selected as the mode of operation for the local SCSI controller.  There is absolutely no need to log into the SCSI controller BIOS during blade boot-up: just configure the policy once, and use it across the board with each blade server in the chassis.

[Screenshot: Local Storage Configuration Policy settings in UCSM]
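If you would rather script this than click through UCSM, below is a minimal sketch using Cisco’s open-source ucsmsdk Python library.  The hostname, credentials, and policy name are placeholders, and the attribute names/values reflect my reading of the SDK, so verify them against your UCSM release before relying on it.

```python
# Minimal sketch: create the local storage policy via ucsmsdk.
# Placeholders: UCSM hostname, credentials, and the policy name.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.storage.StorageLocalDiskConfigPolicy import (
    StorageLocalDiskConfigPolicy,
)

handle = UcsHandle("ucsm.example.com", "admin", "password")
handle.login()

# "no-raid" skips RAID on the local SCSI controller; FlexFlash is
# enabled so ESXi installs/boots from the (mirrored) SD cards.
policy = StorageLocalDiskConfigPolicy(
    parent_mo_or_dn="org-root",
    name="flexflash-no-raid",
    mode="no-raid",
    flex_flash_state="enable",
    flex_flash_raid_reporting_state="enable",
)
handle.add_mo(policy)
handle.commit()
```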

2.  Attach policy to service profile template

Two simple settings to apply:

a) Ensure the template is created as an “updating template”: any updates to the template will automatically roll out to all blades that are attached to the service profile template

[Screenshot: service profile template created as an “Updating Template”]

b) Ensure the local storage configuration policy defined in step 1 above is selected within the service profile template (a scripted sketch follows the screenshot below)

[Screenshot: local storage policy selected in the service profile template]
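Continuing the same hypothetical ucsmsdk session from step 1, the two settings above map to two attributes on the service profile template object (again, the org, template name, and attribute values are placeholders based on my reading of the SDK):

```python
# Sketch: bind the policy to an updating service profile template.
# Reuses the logged-in `handle` from the previous snippet.
from ucsmsdk.mometa.ls.LsServer import LsServer

template = LsServer(
    parent_mo_or_dn="org-root",
    name="esxi-host-template",
    type="updating-template",          # changes roll out to bound blades
    local_disk_policy_name="flexflash-no-raid",  # policy from step 1
)
handle.add_mo(template, modify_present=True)  # create or update in place
handle.commit()
handle.logout()
```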

3.  Power on the blades and enjoy massive cache for your VMs

Two blade slots with a 1.6TB SSD drive each give your VMs ~3.2TB of raw flash capacity.  With DVX data reduction (inline dedupe and compression) of 5X, each ESXi host can enjoy over 15TB of effective cache.  Don’t worry about setting application policies to enable/disable cache, or pinning certain application volumes/LUNs to flash: just use more, and think less.
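For the curious, the back-of-the-envelope math (the 5X reduction is the assumption stated above):

```python
raw_tb = 2 * 1.6          # two blade slots, one 1.6TB SSD each
reduction = 5             # assumed 5X inline dedupe + compression
effective_tb = raw_tb * reduction
print(f"{raw_tb:.1f}TB raw -> {effective_tb:.0f}TB effective cache")
# 3.2TB raw -> 16TB effective cache (hence "over 15TB")
```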

Want to learn more?  Come visit our booth @VMworld!


iSCSI Booting UCS blade/rack server with Nimble Storage

More and more customers are inquiring about how to do boot-from-SAN with Nimble Storage (and save a few bucks by not having to buy local SAS/SSD disks).  Fortunately, we have both in our playground, so here is the step-by-step checklist…

*NOTE*

*The setup attaches the Nimble array directly to the fabric interconnects, with two subnets defined (one for each 10G interface of the controller).  If you instead attach the Nimble to a pair of access-layer switches such as Nexus 5k with vPC, then dual subnets are NOT needed.  Remember, even though the FIs are configured as a cluster pair, the cluster interconnect interfaces between the FIs DO NOT carry data traffic; hence the need for dual subnets so that both FI connections to the Nimble array stay active.  A hypothetical addressing example follows.
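To make the dual-subnet point concrete, here is one hypothetical addressing plan (the VLANs, subnets, and names are invented for illustration, not taken from the actual setup):

```python
# Hypothetical direct-attach iSCSI layout: one subnet per fabric,
# because the FI cluster links carry no data traffic and Fabric A
# can never forward to a target reachable only via Fabric B.
iscsi_paths = {
    "fabric-a": {"vlan": 101, "subnet": "10.10.1.0/24",
                 "nimble_port": "10.10.1.10", "vnic": "iscsi-a"},
    "fabric-b": {"vlan": 102, "subnet": "10.10.2.0/24",
                 "nimble_port": "10.10.2.10", "vnic": "iscsi-b"},
}
# Behind Nexus 5k switches with vPC, both controller ports share one
# L2 domain, so a single subnet covers both paths.
```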
