I would like to maximize performance and make use of the available hardware, but without increasing risk at the same time. We have a host server with two local drives: one SATA disk and one SSD with roughly 80 GB of free space. However, the environment uses shared storage, with LUNs presented as datastores for the VMs.

From my understanding of vSphere and how vswap works, this SSD can be used for the "swap to host cache" option. As I understand it, that lets a VM use this local storage essentially as overflow memory, so you get better performance because swapped pages hit the fast local SSD rather than shared storage or a regular hard drive.

My concern is: how will this work in an HA environment? What happens when the host the VM is running on goes down? A few of the hosts in our clustered environment do NOT have SSD drives. How will those handle it if a host that does have an SSD goes down? Is this only used for memory overcommitment? How does this work?
My assumption is as follows:
1) You enable the option on the datastore:
Select the Host > 'Configuration' tab > 'Host Cache Configuration' > right-click the available SSD datastore > 'Properties...' > check 'Allocate space for host cache' and configure the size
2) Select a VM > right-click > 'Edit Settings...' > 'Options' tab > select 'Swapfile Location' > select 'Store in the host's swapfile datastore'
[That option states that "If a swapfile datastore is specified for the host, use that datastore. Otherwise store the swapfile with the virtual machine."]
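For what it's worth, the same per-VM swapfile setting from step 2 can be driven through the vSphere API instead of the GUI. Below is a minimal sketch, assuming pyVmomi is installed and `vm` is a `vim.VirtualMachine` object already retrieved from a live vCenter/ESXi connection; the function names here are my own, and I'm assuming the `swapPlacement` values ('inherit', 'vmDirectory', 'hostLocal') from the VirtualMachineConfigSpec API:

```python
def swap_placement_spec(policy="hostLocal"):
    """Build the kwargs for a VM reconfigure spec controlling swapfile placement.

    Assumed vSphere API values:
      'hostLocal'   - store the swapfile in the host's swapfile datastore
                      (i.e. "Store in the host's swapfile datastore" in the GUI)
      'vmDirectory' - keep the swapfile with the VM's files
      'inherit'     - follow the cluster/host default
    """
    if policy not in ("inherit", "vmDirectory", "hostLocal"):
        raise ValueError("unknown swap placement policy: %r" % policy)
    return {"swapPlacement": policy}


def reconfigure_vm_swap(vm, policy="hostLocal"):
    """Apply the swapfile placement policy to a live VM (requires pyVmomi)."""
    from pyVmomi import vim  # imported here so the helper above stays stdlib-only

    spec = vim.vm.ConfigSpec(**swap_placement_spec(policy))
    # ReconfigVM_Task returns a vim.Task you can wait on.
    return vm.ReconfigVM_Task(spec=spec)
```

This only sets the per-VM placement policy; enabling host cache on the SSD datastore (step 1) is still a host-level setting.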
What I get from this ^^ is that if the host the VM is running on (the one with the local SSD) goes down, and the VM must be restarted on a different host, the swapfile is kept with (or recreated alongside) the VM on whichever datastore applies on the new host. Is this correct?
Has anyone ever successfully implemented this technique?