However, I don’t think VMware really knows what a hugely transformative technology VSAN could be. There have been a few posts speculating on the future of VSAN, and I for one am looking forward to it with great anticipation. I was lucky enough to attend VMworld 2015, and luckier still to be invited to the VSAN pioneer summit, which gave us a real in-depth look at the future of VSAN. I liked what I was seeing, but about an hour before the end of the allotted time I put my hand up and asked why there were no NAS features planned for the future release. I know a Linux architect who would love to see this come in. Where’s NFS? Where’s SMB? I mean, it makes sense, doesn’t it? If you really want to do the software-defined storage thing, then really go for it.

It’s being pushed everywhere, including a presence in “competitors” such as AWS. Push this technology and it will really change the datacenter. So where’s the love for VSAN?

What if VMware made a VSAN-only cluster: no VMs allowed, only storage exports? This would put them in direct competition with storage vendors, would greatly reduce the cost of storage in the datacenter, and would allow a huge amount of flexibility for businesses of all sizes. Let’s explore this idea more!

Folders (native on the file system) or VMDKs?

I would think that using VMDKs instead of folders would be a much better idea. VMDK wins. There would be no real changes needed to the VMFS file system to accommodate the much more granular permission structure that would be required by SMB; ESXi could simply mount the VMDK and write any file system in there. VMDKs can also be accessed by multiple ESXi hosts. We already know that NFS4 and SMB3 can take advantage of multiple IP addresses (hosts) to provide multi-channel, and VMware clusters are, quite frankly, an incredible implementation of clustering technology. Mounting the VMDK on multiple ESXi hosts would allow the data to be taken advantage of by NFS4- and SMB3-compliant clients. SMB2.x and NFS3, on the other hand, prefer to access data through a single IP address or hostname.

Now, this is easy to implement immediately, but if you want to add a bit more intelligence around it, you could use some kind of construct that has a virtual IP able to move between hosts, something like the virtual IP address technology from Log Insight clusters. Easier said than done, I know, but it should still be considered.

Kinda obvious, I know, but redundancy would be taken care of by VMware clustering technology: three or four hosts and that’s that taken care of. Performance, on the other hand, could be a very interesting topic, a complex one, but still interesting: network speed, controller card, SSD speed, SSD size, and so on and so forth. As this is only intended to be a storage service, the licensing should be one ESXi-VSAN license per host (I’ve guessed it at £1,500, but it could be as high as £2,000, which I’ve also given as a cost per TB below). In a future post I’ll look at this again.

So this is interesting, and I’ve decided to look at a couple of real-world examples below. I have a quote from a major vendor for £198,409.45. This figure gives us 48TB of HDD storage in 64 SAS disks and 9TB of SSD storage in 8 SSD disks (these figures are usable). For this project we decided to use the SSD as a caching layer. As you would expect from an enterprise storage system, it has a good deal of redundancy built in, with 4 nodes to manage the storage and 8 x 10Gb Ethernet ports. All in, not bad for the price point, and a good system all round.

Putting together our VSAN-only node, to compete on numbers, I would size it like this: looking at an HP DL380 Gen9 with one CPU (E5-2623) and 32GB of RAM, two disk pools with 1 x 800GB SSD and 7 x 1.2TB SAS disks each, giving us 1.6TB of SSD cache and 7.5TB of SAS storage (again, these figures are usable, based on a default VSAN storage policy of 2n). To get the equivalent amount of usable storage as the popular storage vendor’s array, we’d need 7 VSAN nodes.
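The floating virtual IP construct discussed earlier is not something the post implements; outside the VMware stack, the same idea is typically done with a VRRP daemon such as keepalived. A minimal sketch, with the interface name and addresses made up for illustration:

```
# /etc/keepalived/keepalived.conf (illustrative values only)
vrrp_instance NAS_VIP {
    state MASTER          # BACKUP on the standby host
    interface eth0        # NIC carrying the NFS/SMB traffic
    virtual_router_id 51
    priority 100          # set lower on the standby host
    advert_int 1
    virtual_ipaddress {
        192.168.10.50/24  # the single IP that NFS3/SMB2.x clients mount
    }
}
```

If the host owning the VIP fails, the standby takes over the address, which gives NFS3 and SMB2.x clients the single stable IP they prefer.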
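To sanity-check the node sizing, here is a quick sketch using only the figures quoted in the post. It treats the 2n policy as simple mirroring (usable ≈ raw / 2); the post quotes 7.5TB usable per node rather than the naive 8.4TB, the difference presumably being slack space and TB-vs-TiB rounding.

```python
import math

# Per-node VSAN layout from the post: two disk groups,
# each with 1 x 800GB SSD (cache) + 7 x 1.2TB SAS (capacity).
disk_groups = 2
ssd_per_group_tb = 0.8
sas_disks_per_group = 7
sas_disk_tb = 1.2

cache_tb = disk_groups * ssd_per_group_tb                      # 1.6TB of SSD cache
raw_sas_tb = disk_groups * sas_disks_per_group * sas_disk_tb   # 16.8TB raw SAS

# "2n" policy: every object is mirrored, so roughly half the raw
# capacity is usable (before slack space and formatting overheads).
usable_sas_tb = raw_sas_tb / 2

# Vendor array offers 48TB usable; nodes needed at the post's
# quoted 7.5TB usable per node.
nodes_needed = math.ceil(48 / 7.5)

print(f"cache: {cache_tb}TB, raw: {raw_sas_tb}TB, "
      f"mirrored usable: {usable_sas_tb}TB, nodes needed: {nodes_needed}")
```

This reproduces the post's figure of 7 nodes to match the vendor array's 48TB usable.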
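The post promises a cost-per-TB comparison; with only the numbers quoted in this chunk, a rough licensing-only version looks like this (licence prices are the post's guesses, and hardware cost is excluded because it isn't quoted here):

```python
# Vendor quote from the post: £198,409.45 for 48TB usable.
vendor_quote_gbp = 198_409.45
vendor_usable_tb = 48
vendor_cost_per_tb = vendor_quote_gbp / vendor_usable_tb  # roughly £4,133.53/TB

# VSAN licensing guess from the post: one ESXi-VSAN licence per host,
# somewhere between £1,500 and £2,000, across 7 nodes of 7.5TB usable each.
nodes = 7
usable_per_node_tb = 7.5
for licence_gbp in (1_500, 2_000):
    licence_per_tb = (nodes * licence_gbp) / (nodes * usable_per_node_tb)
    print(f"licence £{licence_gbp}/host: £{licence_per_tb:.2f}/TB")

print(f"vendor array: £{vendor_cost_per_tb:.2f}/TB")
```

At £1,500 per host the licence works out to £200/TB usable (£266.67/TB at £2,000), against roughly £4,133/TB all-in for the vendor quote, which is why the hardware cost left out here is the interesting missing number.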