Several customers have asked about space reclamation in a vSphere environment, so let’s dive into this interesting topic for a bit.
Why does space reclamation matter?
The reason is fairly straightforward: after a VM is deleted or Storage vMotioned from one datastore to another, it’d be a good idea to free up the space on the storage array so it can be used by others.
How does one perform space reclamation and keep track of the status?
The simple answer is to issue ‘esxcli storage vmfs unmap --volume-label=<label of VMFS datastore>’. VMware KB 2057513 has very good details on the command usage, as well as examples. That’s it: simply issue the command and let the Nimble array find the best time to free up the space through garbage collection.
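As a quick sketch, here is how one might wrap that command to reclaim space on several datastores in one pass. The datastore names are made up for illustration, and the command itself must be run from the ESXi host’s shell:

```shell
#!/bin/sh
# Hypothetical helper: build the unmap command for a given VMFS datastore label.
# The datastore names in the loop below are examples, not real ones.
build_unmap_cmd() {
    label="$1"
    echo "esxcli storage vmfs unmap --volume-label=${label}"
}

for ds in unmaptest datastore01; do
    cmd=$(build_unmap_cmd "$ds")
    echo "$cmd"
    # Uncomment the next line when running on a real ESXi host:
    # $cmd
done
```

Since the array garbage-collects in the background, there is no need to schedule this during a maintenance window; it can simply be run after large deletions or Storage vMotion operations.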
Why doesn’t the space get freed up immediately by the array?
Certain legacy vendors, eager to get a quick check box for VAAI support, clean up space immediately on the array, resulting in performance degradation. How bad is the degradation? Bad enough for this feature to be disabled by default in vSphere. For Nimble, space cleanup/garbage collection was well thought out from day one: it is an ongoing process that gets triggered when the controller is not busy processing I/O. The commands are sent over to the array and processed when the Nimble OS finds an opportune time.
NOTE: Customers are encouraged to upgrade to Nimble OS 1.4.11 or 2.0.7 to take advantage of the latest enhancements made specifically to space reclamation (SCSI UNMAP).
To track the command’s success, we can leverage our favorite tool, esxtop. First, my test setup:
- Windows 2008 R2 VM
- 40GB provisioned space with an eager-zeroed thick VMDK for the OS
- array compression ratio of 1.66x, saving 12.68GB of space, plus an extra 10GB saved by zero-block compression
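As a back-of-the-envelope sanity check on those numbers: space saved by compression is roughly logical size × (1 − 1/ratio). The ~31.9GB logical figure below is my assumption, chosen only to show it is consistent with the stated 1.66x ratio and 12.68GB saved:

```shell
#!/bin/sh
# Assumption for illustration: ~31.9GB of logical (pre-compression) data.
# saved_GB = logical_GB * (1 - 1/compression_ratio)
saved=$(awk 'BEGIN { printf "%.2f", 31.9 * (1 - 1/1.66) }')
echo "approx space saved: ${saved}GB"
```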
To reclaim space, we can simply follow VMware KB 2057513 for vSphere 5.5. I am glad to see the improvements they have made to simplify the command (no more crazy math to calculate the percentage of space to reclaim)!
#esxcli storage vmfs unmap --volume-label=<volume_name>
If you are curious about tracking the status/stats of the SCSI_UNMAP commands, you could follow the steps below:
1) Find the EUI for the corresponding volume under the software iSCSI path: note the ‘unmaptest’ volume corresponds to vmhba32:2:33 (the canonical name for the volume)
2) Back in the “Devices” tab for the software iSCSI HBA, identify the corresponding EUI identifier for target 33 (given Nimble is single-volume, single-target, target #33 is unique)
3) Here’s a tip to reduce the number of objects on the esxtop screen (--export-entity allows one to select only the objects of interest when viewing stats in esxtop):
#esxtop --export-entity unmap.txt
#vi unmap.txt (go to the “Device” section and delete every volume’s EUI except the one of interest)
4) Issue #esxtop --import-entity unmap.txt, then type “u” to get to the device view; you should only see the volume of interest. Next, type “f” to select the fields to display, and enable the VAAI stats by typing “O”
5) Given the large number of stats that get displayed, it’s a good idea to move the VAAI stats to the front (unless you have a 50” widescreen monitor). Type “o” to bring up the stats ordering, then type capital “O” until the VAAI field is close to the front. My example below shows “O” (VAAI stats) at the forefront, next to fields A & B:
6) Press Enter and you will see all the VAAI primitive stats. The ones of interest to us are “DELETE”, “DELETE_F” and “MBDEL/s”. “_F” means failed, so if you see a non-zero number there after executing the esxcli unmap command, it’s worth contacting our support team to see what’s up
7) Now issue ‘esxcli storage vmfs unmap --volume-label=unmaptest’ from another PuTTY session; the esxtop screen should show incremental commands being executed against the Nimble array to free up space. Each command frees up 200MB of space by default.
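Given the 200MB-per-command default, you can roughly estimate how many UNMAP commands esxtop will count for a given volume. For the 40GB test volume above, the upper bound (if every block were reclaimed) works out like this:

```shell
#!/bin/sh
# Rough upper bound on UNMAP commands for a fully reclaimed 40GB volume,
# at the default 200MB reclaimed per command.
total_mb=$((40 * 1024))
per_cmd_mb=200
ops=$(( (total_mb + per_cmd_mb - 1) / per_cmd_mb ))   # ceiling division
echo "expected at most ~${ops} unmap commands"
```

In practice the count will be lower, since only blocks the filesystem has actually freed need to be reclaimed.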
Last but not least, go grab a cup of coffee, then come back to the Nimble UI to check the space usage for the given volume. You will not receive a call from the application team complaining about array performance going down the drain (unlike with certain storage vendors; you know who you are out there).
Happy volume with space freed up after a nice cup of coffee