I feel bad about this, but I’ve just realised I never blogged about vSAN 7 when it was released earlier this year. VMware vSAN is one of my favourite storage technologies to work with, and seeing the product’s rapid development over the last six years has been an amazing journey. Now, with the release of vSAN 7 Update 1, there are even more features to discuss, so in this post I’ll pick my top five.
HCI Mesh
Prior to vSphere 7 U1, the boundary for storage in vSAN was the cluster: any host outside the cluster could not consume vSAN storage resources. Yes, you can publish iSCSI storage externally, but that is not supported for ESXi hosts and is targeted at Windows Server Failover Clustering (WSFC).
With vSAN 7 U1, you can now mount vSAN storage resources across vSAN clusters that share the same vCenter instance. This helps address the issue of “stranded capacity”, where your environment may have sufficient capacity overall, but not in the cluster where it’s needed. In this release, up to five clusters can mount remote storage, and Enterprise licensing or higher is required.
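To make “stranded capacity” concrete, here’s a minimal sketch. The cluster names and capacity figures are entirely hypothetical; the point is simply that without cross-cluster mounting a placement must fit in the VM’s own cluster, whereas with it any cluster under the same vCenter can supply the space.

```python
# Conceptual sketch with hypothetical numbers: "stranded capacity" is free
# space locked inside one cluster that another cluster cannot reach.

clusters = {
    "cluster-a": {"capacity_tb": 100, "used_tb": 95},  # nearly full
    "cluster-b": {"capacity_tb": 100, "used_tb": 40},  # plenty of free space
}

def free_tb(c):
    return c["capacity_tb"] - c["used_tb"]

def can_place(vm_tb, home, clusters, cross_mount=False):
    """Without cross-cluster mounting, a VM's storage must fit locally.
    With it, free space in any cluster under the same vCenter will do."""
    if free_tb(clusters[home]) >= vm_tb:
        return True
    if cross_mount:
        return any(free_tb(c) >= vm_tb
                   for name, c in clusters.items() if name != home)
    return False

print(can_place(10, "cluster-a", clusters))                    # False: 5 TB free locally
print(can_place(10, "cluster-a", clusters, cross_mount=True))  # True: cluster-b has 60 TB free
```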
File Services with SMB Support
When vSAN 7 launched, it included native file services for the first time, but only supported NFS v3 and NFS v4.1. vSAN 7 U1 builds on this and adds support for SMB v2.1 and v3, along with Active Directory integration and Kerberos support. For anyone out there thinking “what about SMB v1?”, it shouldn’t be a surprise to hear that it’s not supported.
While it may be too early to replace dedicated file server appliances, there are plenty of good use cases for this. For example, vSAN is very popular in VDI environments, and with vSAN 7 U1 it is now possible to store user data natively in vSAN alongside the desktops in a single solution.
Compression-Only Option
Data efficiency services have been available in vSAN for quite some time, but they have always been enabled as a cluster-wide feature combining deduplication and compression on all-flash clusters. It’s great to see there’s now a compression-only option, which gives the same level of flexibility as most primary storage arrays; there’s little point burning CPU cycles on data you know won’t deduplicate well. Another advantage is that the failure domain is now a single storage device rather than the entire disk group, which reduces rebuild requirements.
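The failure-domain change is easiest to see with some back-of-the-envelope numbers. The disk group sizing below is an assumption for illustration only; the underlying behaviour (a device failure taking out the whole group with dedup and compression, versus just that device with compression only) is from the feature description above.

```python
# Hypothetical sizing: one vSAN disk group with 7 x 4 TB capacity devices.
# With dedup + compression, a single capacity device failure takes the whole
# disk group offline; with compression only, the failure domain shrinks to
# the one failed device.

devices_per_group = 7
device_tb = 4

rebuild_dedup_compress_tb = devices_per_group * device_tb  # whole group resyncs
rebuild_compression_only_tb = device_tb                    # only the failed device

print(rebuild_dedup_compress_tb)    # 28 TB worst-case resync
print(rebuild_compression_only_tb)  # 4 TB worst-case resync
```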
Shared Witness for 2-Node Clusters
2-node clusters have been extremely popular with businesses that have many remote or branch offices; retail is a great example, with 2-node clusters in the shops connecting back to a central location. The challenge until now has been that each cluster required its own witness appliance running at the central location. While a witness appliance does not take a huge amount of resource, it soon stacks up when you have tens or potentially hundreds of remote sites. vSAN 7 U1 introduces a shared witness that can support up to 64 two-node clusters while requiring only 6 vCPU and 32 GB of RAM. This reduces the overhead required at the central site to support many two-node deployments, and I’m not just talking about hardware resources: deployment time and ongoing management also become far more efficient.
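A quick bit of arithmetic shows why this matters at scale. The shared-witness figures (64 clusters, 6 vCPU, 32 GB) come from the release notes above; the per-site dedicated witness sizing is an assumption purely for illustration.

```python
# Back-of-the-envelope savings at the central site. The dedicated witness
# sizing (2 vCPU / 8 GB each) is an assumed small-appliance profile, not an
# official figure; the shared witness numbers are from the text.

sites = 64
dedicated_vcpu, dedicated_ram_gb = 2, 8   # assumed per-site witness appliance
shared_vcpu, shared_ram_gb = 6, 32        # single shared witness (from the text)

saved_vcpu = sites * dedicated_vcpu - shared_vcpu
saved_ram_gb = sites * dedicated_ram_gb - shared_ram_gb

print(saved_vcpu)    # 122 vCPUs freed at the central site
print(saved_ram_gb)  # 480 GB of RAM freed
```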
Enhanced Durability During Maintenance
When running a vSAN cluster, it is important to consider the impact that maintenance operations will have on workloads. When performing host maintenance, there is a choice to be made before rebooting a host. The most common option is “Ensure Accessibility”, as it minimises data movement while keeping data available, but depending on your storage policy there is a chance of reduced compliance while the host is down. vSAN 7 U1 introduces an additional layer of protection by writing any new data to another host while a host is in maintenance mode. Rather than me butchering words trying to explain this, have a look at the YouTube video in this link to see how the process works.
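The idea can also be sketched in a few lines of code. This is a conceptual model only, not vSAN’s actual implementation: while one replica’s host is in maintenance, incremental writes are also committed to a delta copy on another host so the data stays redundant, and on exit only that delta is resynced back.

```python
# Conceptual sketch only -- not vSAN's real implementation. Writes made while
# a host is in maintenance also land on a "delta" copy on another host; when
# the host returns, just the delta is merged back into its stale replica.

class MirroredObject:
    def __init__(self):
        self.replica = {}          # copy on the host entering maintenance
        self.active = {}           # copy on a host that stays online
        self.delta = {}            # extra copy on a third host
        self.in_maintenance = False

    def write(self, key, value):
        self.active[key] = value
        if self.in_maintenance:
            self.delta[key] = value    # keeps new writes redundant
        else:
            self.replica[key] = value

    def exit_maintenance(self):
        self.replica.update(self.delta)  # resync only the delta, then discard
        self.delta = {}
        self.in_maintenance = False

obj = MirroredObject()
obj.write("a", 1)
obj.in_maintenance = True
obj.write("b", 2)          # host is down, but this write still has two copies
obj.exit_maintenance()
print(obj.replica)         # {'a': 1, 'b': 2}
```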
In summary, vSAN 7 U1 is now a very feature-rich storage platform that is ready for all kinds of production workloads. And don’t forget that VMware Cloud on AWS is powered by vSAN as well, so it’s clear to see where the future lies.