VMware Storage Appliance (VSA) – Deep Dive!

I had the opportunity to play around with VMware's VSA – the company's primary shared-storage offering for SMBs and remote offices. It brings enterprise features without the costly SAN component.

In a nutshell, it uses the servers' local storage to act as "shared" storage, which enables cluster features that would otherwise need a SAN – vMotion, Storage vMotion, and DRS, to name a few.

For a complete market description, I suggest you head over to the VMware site – as this post doesn't cover that 🙂


NOTE: I won't discuss the step-by-step process of how the cluster is set up, as there are already plenty of resources on the interwebs describing this. I'll mainly focus on the technical side – what happens in the background, from what I've observed.

Now, let's get straight to the technical part, as the title of this post indicates. VSA has two setups: a 2-node config and a 3-node config. They are basically the same, but I'll discuss the 3-node config first as it is easier to understand.

3-Node Cluster

What happens when you click Create VSA Cluster? The VSA Manager installed in vCenter pushes the VSA vApp to each member of the VSA cluster you have chosen. This VSA vApp is based on a SuSE Linux distribution. It reads the local storage of the ESXi host and creates VMDKs that it then mounts in the OS as its own "local disks". The VMDKs are 7GB each – this size may have been chosen for performance reasons.
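To picture the carving-up step, here's a minimal sketch (my own illustration, not VSA code) of how many whole 7GB VMDK chunks would fit on a host's local storage:

```python
CHUNK_GB = 7  # fixed VMDK chunk size observed in the VSA setup


def vmdk_count(local_storage_gb: int) -> int:
    """Number of whole 7 GB VMDK chunks that fit on the host's local disk."""
    return local_storage_gb // CHUNK_GB


print(vmdk_count(2048))  # a 2 TB local datastore -> 292 whole chunks
```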

After this, it subdivides the storage into two – one half to be used as the VSA datastore (VSAD), the other for a replica. So if you have 2TB of local storage, you would have roughly 1TB of "shared" storage in your cluster per ESXi host member.
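The capacity math is simple enough to sketch (again, my own illustration – the appliance reserves some overhead in practice, so treat the result as a rough ceiling):

```python
def usable_shared_gb(local_storage_gb: float) -> float:
    """Roughly half the local storage is usable as the shared VSAD;
    the other half holds a replica of another member's datastore."""
    return local_storage_gb / 2


print(usable_shared_gb(2048))  # 2 TB local -> ~1 TB shared per host
```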

The VSA vApp then creates an NFS export on the VSAD partition, using the VSAD IP you supplied during setup. This export then gets mounted on each member of the cluster as NFS storage.
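Since the appliance is Linux-based, the export plausibly looks like a standard NFS export. Here's a hypothetical fragment for illustration – the path and options are my assumptions, not taken from an actual appliance:

```
# Hypothetical /etc/exports entry on the SuSE-based VSA appliance
# (path and options are assumptions for illustration only)
/exports/VSA-0  *(rw,sync,no_root_squash)
```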

What's the replica storage partition for? It becomes the "backup" of another member's VSAD storage in the VSA cluster.

What about the 2-node setup? It's basically the same as the 3-node one; the only difference is that a VSA Cluster Service is used to prevent a split-brain scenario among the VSA cluster members.
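Why does a 2-node cluster need that extra service? Here's a toy majority-vote sketch (my own illustration of the general quorum idea, not VSA's actual algorithm) showing why two voters alone can't break a tie:

```python
def has_quorum(votes_alive: int, total_votes: int) -> bool:
    """A partition survives only if it holds a strict majority of votes."""
    return votes_alive > total_votes // 2


# Two nodes alone: if the link between them drops, each side holds
# 1 of 2 votes -> neither has a majority, so neither can safely continue.
print(has_quorum(1, 2))  # False

# Add the cluster service as a third, tie-breaking vote: whichever node
# can still reach it holds 2 of 3 votes and keeps running.
print(has_quorum(2, 3))  # True
```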

Here's a quick Visio diagram I made to better illustrate this:

VSA-2 Node Simple Diagram

Takeaway

Since the NFS mount point (the VSAD's IP) is exposed on the same subnet as the VSA and management IPs, I would strongly advise putting it on a different VLAN from your production VLANs. This keeps storage traffic from mixing with production VM traffic.


The technology used by VSA is fairly common in the IT world. In the Windows world, we have DFS and Windows database mirroring. In Linux, it's called DRBD.

Basically, the licensing amount you'll be paying is mainly for the convenience and tight integration with vCenter.

With a bit of effort, you can set up a similar solution for free! Take a look at Linux DRBD (http://www.drbd.org/) to get started.
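To give a feel for what that looks like, here's a minimal DRBD resource definition that mirrors a local block device between two hosts – the hostnames, device paths, and IPs are placeholders for illustration:

```
# /etc/drbd.d/r0.res - minimal two-node mirrored resource (placeholder values)
resource r0 {
  protocol  C;            # synchronous replication, closest to VSA's behavior
  device    /dev/drbd0;   # the replicated block device presented to the OS
  disk      /dev/sdb1;    # local backing disk on each node
  meta-disk internal;

  on node1 {
    address 10.0.0.1:7789;
  }
  on node2 {
    address 10.0.0.2:7789;
  }
}
```

You'd then export the mounted DRBD device over NFS, much like the VSA does with its VSAD partition.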

