Red Hat OpenStack up and running in minutes with Nimble Cinder Driver? The “Why”, “What” and “How”….

There is a lot of buzz around OpenStack these days – I'm sure you want to give it a spin and see what the hype is all about.  Here's a quick way to try it out using your existing resources (seriously, most of us don't have the freaking budget to buy new servers for experimental/eval projects; if you do, more power to you!).  NOTE: NOTHING below is officially supported by VMware, Red Hat, or Nimble Storage.  Why am I doing it?  Because I can :)  Wouldn't you rather leverage what you already have in your infrastructure and check it out for yourself?  If you answered yes, then read on. (*Special thanks to Jay Wang and Brady Zuo on our R&D team for making this happen!)
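To give you a feel for the "minutes" part before we get into the 101: once the base Red Hat OpenStack install is done, pointing Cinder at a Nimble array is basically a handful of lines in cinder.conf. Treat the snippet below as a rough sketch only – the driver module path and option names depend on your OpenStack release, and the IP and credentials are placeholders:

    # /etc/cinder/cinder.conf – illustrative sketch; verify option names for your release
    [DEFAULT]
    enabled_backends = nimble

    [nimble]
    volume_driver = cinder.volume.drivers.nimble.NimbleISCSIDriver
    san_ip = 192.168.1.50          # array management IP (placeholder)
    san_login = admin              # array admin account (placeholder)
    san_password = changeme        # placeholder
    volume_backend_name = nimble

Restart the cinder-volume service afterwards so the backend gets picked up, and map a volume type to volume_backend_name so users can actually consume it from Nova/Horizon.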

OpenStack 101

For those of you who don't know what OpenStack is, here's a quick 101 introduction, with key buzzwords to use so you sound like a pro:

Continue reading Red Hat OpenStack up and running in minutes with Nimble Cinder Driver? The “Why”, “What” and “How”….

Another deeper look at deploying Nimble with Cisco UCS

We continue to get customer inquiries about the specifics of deploying Nimble with Cisco UCS – particularly what the service profile should look like for iSCSI vNICs.  So here we go – we'll dive straight into that bad boy:

We will start with the Fabric Interconnects, then move to the vNICs, then the Nimble array, and last but not least, the vSphere vSwitch.
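To set the stage for that last piece, here's a minimal sketch of what the ESXi side ends up looking like once the iSCSI vNICs exist. It assumes a software iSCSI adapter at vmhba33, VMkernel ports vmk1/vmk2 on port groups iSCSI-A/iSCSI-B, and one iSCSI vNIC per fabric showing up as vmnic2/vmnic3 – every one of those names is a placeholder for your environment:

    # pin each iSCSI port group to a single uplink (one per fabric);
    # the other uplink should be moved to "unused" for port binding to be valid
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic2
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic3

    # bind both VMkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # point the adapter at the Nimble discovery IP (placeholder address)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50

With that in mind, let's work through the stack top-down: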

1.  Fabric Interconnect
Continue reading Another deeper look at deploying Nimble with Cisco UCS

iSCSI Booting UCS blade/rack server with Nimble Storage

More and more customers are asking how to do boot-from-SAN with Nimble Storage (and save a few bucks by skipping local SAS/SSD disks) – fortunately, we have both in our playground.  Here it is, a step-by-step checklist with instructions…

*NOTE*

*The setup attaches the Nimble directly to the fabric interconnects, with two subnets defined (one for each 10G interface of the controller).  If you instead attach the Nimble to a pair of access-layer switches such as Nexus 5Ks with vPC, dual subnets are NOT needed.  Remember, even though the FIs are configured as a cluster pair, the cluster interconnect links between the FIs DO NOT carry data traffic – hence the need for dual subnets so that both FI connections to the Nimble array stay active.

Continue reading iSCSI Booting UCS blade/rack server with Nimble Storage

vSphere + Nimble Best Practices in 2 Minutes

 

In a super hurry and bored at the same time?  Check out a two-minute video summarizing key best practices for deploying vSphere on Nimble Storage – it was created with a subscription-based tool called Sparkol Videoscribe… I personally find it pretty easy to use (at least the basic shape/flow features), whereas a true whiteboard RSA-style production would have taken me days to create.  The whole video took about 90 minutes to make, including 30 minutes of ramp-up time learning the tool for the first time.  Enjoy!  If folks find it useful, I'll create additional ones for Site Recovery Manager & VDI.

VMW + Nimble Best Practices RSA Style

 

To LACP or NOT to LACP?

We have gotten quite a few customer inquiries lately about using LACP with Nimble Storage – and I was fortunate enough to secure an HP Procurve network switch to dig deeper.  If you have LACP in mind for your virtualized environment, this post could be useful to you, from both the VM traffic and iSCSI storage traffic perspectives.
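One piece of context up front: a standard vSwitch doesn't speak LACP at all – a dynamically negotiated LAG requires the vSphere Distributed Switch (5.1 or later).  On a standard vSwitch, the closest equivalent is a static port channel on the physical switch paired with IP-hash teaming on the host.  Here is a rough sketch of what that pairing looks like, with the port numbers, trunk name and vSwitch name as placeholders:

    # HP Procurve side: static trunk across two ports
    # (substitute 'lacp' for the trailing 'trunk' keyword to negotiate the LAG via LACP instead)
    trunk 21-22 trk1 trunk

    # ESXi side: switch the vSwitch teaming policy to IP hash to match the port channel
    esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash

Whether any of this actually buys you something for iSCSI traffic – as opposed to VM traffic – is exactly what the rest of the post digs into.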

First of all, let's discuss the use cases/benefits of LACP for VM traffic:
Continue reading To LACP or NOT to LACP?

VMware vSphere on Nimble Best Practice Express Edition

This is a vSphere on Nimble best practices express guide – for those who are on the go or too busy to read our 20+ page best practices guide, here's a quick and dirty express edition.  Keep in mind this one is specific to VMDK ONLY; we will have an in-guest-attached edition in the near future.

Here we go – feedback and additional questions are welcome.
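To give you a taste of the format, here's one example of the kind of item the guide covers – setting the path selection policy on a Nimble volume to Round Robin from the ESXi command line.  The device identifier below is a placeholder, and the exact IOPS tuning (or whether to touch it at all) is spelled out in the full guide, so treat this as illustration only:

    # set the path selection policy for a Nimble volume to Round Robin
    esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR

    # optionally tune how often Round Robin rotates paths (value shown is illustrative)
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 0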

Continue reading VMware vSphere on Nimble Best Practice Express Edition

Storage Performance 102 – 100,000 IOPS with 0.1 ms latency?!?! PROVE IT!

No, this post is not about how to get 100,000 IOPS with sub-millisecond latency from your storage solution.  Instead, I will show you some tips on how to make sure your storage vendor lives up to its performance claims.  We discussed the fundamental concepts of storage performance in the part 1 post; now it's time to dig deeper into storage I/O performance monitoring and measurement.  Knowing this will help you make sure whatever you end up buying is actually better than the legacy crap you are replacing.  Isn't that the whole point anyway? :)  NOTE: Physical server measurement techniques are out of scope – we are focusing only on the VMware ESX server in this post.

The million-dollar question is – how do I get a rough idea of how many IOPS my ESX server is pushing?  And what is the average latency?
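I'll give away part of the answer right here: esxtop.  A quick sketch of how I'd use it – the view keys and counters below are standard esxtop fields:

    # interactive: run esxtop on the ESXi shell, then press
    #   d = disk adapter view, u = disk device view, v = per-VM disk view
    # CMDS/s is your total IOPS; DAVG/cmd is device latency (array + fabric) in ms,
    # KAVG/cmd is kernel latency, and GAVG/cmd (DAVG + KAVG) is what the guest actually sees
    esxtop

    # batch mode: 60 samples at 5-second intervals, dumped to CSV for offline analysis
    esxtop -b -d 5 -n 60 > esxtop-capture.csv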

Continue reading Storage Performance 102 – 100,000 IOPS with 0.1 ms latency?!?! PROVE IT!

Storage Performance 101 (part 1a Quick tip on ‘esxtop’)

In my quest to write part 2 of Storage Performance 101 (so folks can be equipped to combat bullshit storage vendor claims), I came across a pretty neat esxtop command line option.  So I decided to do a mini post so folks could try it out.  It's a pretty sweet option for those of you with large environments who want performance stats for a specific VM/LUN/vSwitch port group.

If you are not yet a fan of 'esxtop', I highly encourage you to read my prior post on basic performance health checks with esxtop.  This experimentally supported option is quite useful, especially when you 1) have a large environment with a large number of VMs/LUNs and 2) only care about a subset – one specific VM, port group or LUN.  The options are "-export-entity" and "-import-entity".  They go hand in hand: you first export the list of objects available for display, then run esxtop again to import the modified list.
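The workflow looks roughly like this (the file name is arbitrary):

    # 1. dump the full list of entities (VMs, LUNs, port groups, ...) that esxtop can display
    esxtop -export-entity entities.txt

    # 2. edit entities.txt and delete every line except the VM/LUN/port group you care about

    # 3. run esxtop again against the trimmed list – only those objects will be shown
    esxtop -import-entity entities.txt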

Continue reading Storage Performance 101 (part 1a Quick tip on ‘esxtop’)

Storage Performance 101 Part 1 – Back to Basics

I have recently spoken to a number of prospective customers who are evaluating new storage solutions (from emerging storage vendors building flash-enabled storage arrays) – what I find, not to my surprise, is that a good number of vendors are touting ambiguous/misleading marketing claims.  If you are evaluating storage solutions out there, be sure to challenge and validate the vendor's claims, and equip yourself to smell the B.S. and push the bullshit button.  This will be a two-part blog post on storage performance.  Here are some examples of the ridiculous bullshit that I have heard:
Continue reading Storage Performance 101 Part 1 – Back to Basics