Typically, in a highly-available setup, we need shared storage, i.e. disk space that is accessible by 2 or more servers at the same time, as depicted in the picture below.
That shared storage is usually provided by a storage area network (SAN) appliance, as depicted in the picture below.
And as indicated in the picture above, SAN appliances tend to be very expensive.
Storage Spaces Direct allows us to use 2 or more servers to do something similar to a SAN appliance, as depicted in the picture below. And servers are usually cheaper than SANs.
Storage Spaces Direct also has features like failover and fault-tolerance. If 1 disk or 1 server dies, the shared storage will still be up and running. This means we can use Storage Spaces Direct for some of our highly-available setups.
 |
| A highly-available setup using servers, not a SAN, for shared storage. |
In this demonstration, we will be using 2 virtual machines to represent the 2 servers that will be providing the shared storage. These 2 virtual machines will form a cluster, and we will build Storage Spaces Direct on top of the cluster, as depicted below.
We will then build our Scale-Out File Server on top of Storage Spaces Direct...
At this point, Storage Spaces Direct is considered done. The shared storage, accessed via the Scale-Out File Server, can be used in a variety of ways.
In this demonstration, we configure a Hyper-V cluster to use the Scale-Out File Server to store the files of its virtual machines. (In this demonstration, there will be only 1 node in the Hyper-V cluster. This article is about Storage Spaces Direct, not Hyper-V clustering.)
Thus, we have a highly-available setup, with shared storage, but without using any SANs.
The rest of the article will be about actually executing the above setup. All the commands and GUIs can be executed from a server that is part of the domain, logged in as the domain administrator, and with the necessary Remote Server Administration Tools installed. It can be, but need not be, any of the servers above. (Click on the images to get a more detailed view.)
 |
| Use "Get-Disk" command to make sure that the disks can be seen. |
 |
| Install the needed Windows features (File-Services) on all servers. |
 |
| Install the needed Windows features (Failover-Clustering) on all servers. |
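A minimal PowerShell sketch covering both installation steps above, assuming nodes DS1 and DS2:

```powershell
# Sketch only: install the File Services and Failover Clustering features on each node.
foreach ($node in "DS1", "DS2") {
    Install-WindowsFeature -Name File-Services, Failover-Clustering `
                           -IncludeManagementTools -ComputerName $node
}
```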
 |
"Test-Cluster" on DS1 and DS2, to make sure that later on when forming the cluster, it will not encounter fatal errors.
Important parameter is '-Include "Storage Spaces Direct". |
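A minimal sketch of the validation command. The screenshot only calls out the "Storage Spaces Direct" category; the other test categories shown here are an assumption (they are commonly included, but not shown above):

```powershell
# Sketch only: validate DS1 and DS2 before forming the cluster.
# "Storage Spaces Direct" is the category called out above; the rest are assumed extras.
Test-Cluster -Node DS1, DS2 `
             -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
```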
 |
| Forming the cluster with "New-Cluster". (To get a better idea of what cluster we are forming, please refer to the images at the beginning of this article.)
In this demonstration, we use the parameter "-NoStorage" because we want to create the storage manually later. |
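A minimal sketch of the command; the cluster name "S2DCluster" is a placeholder, not the name used in the screenshots:

```powershell
# Sketch only: form the 2-node cluster without claiming any storage yet.
# "S2DCluster" is a placeholder name; -NoStorage lets us create the storage manually later.
New-Cluster -Name S2DCluster -Node DS1, DS2 -NoStorage
```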
 |
| Issue the "Enable-ClusterStorageSpacesDirect" command to use all 6 disks (which are spread across the 2 nodes in the cluster) to form a storage pool. (To get a better idea of what pool we are forming, please refer to the images at the beginning of this article.) |
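A minimal sketch of this step:

```powershell
# Sketch only: claim all eligible local disks on DS1 and DS2 and build the S2D storage pool.
# Run on DS1 or DS2, or add -CimSession <cluster name> to run it remotely.
Enable-ClusterStorageSpacesDirect
```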
 |
| When the previous command is done, we can use "Failover Cluster Manager" to look at the cluster we have created.
Notice that all 6 disks are listed under DS2. (The same 6 disks are also listed under DS1.) |
 |
| The storage pool that we have just created, made up of 6 disks spread across DS1 and DS2. |
 |
Using "New-Volume" command to create the Cluster Shared Volume from the storage pool we just created.
Note that the fault-tolerance setting is specified here. |
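A minimal sketch of the command. The volume friendly name "CSV1" is hypothetical; the 150GB size and the mirror resiliency match what the later screenshots show, and "S2D*" matches the default friendly name of the pool created above:

```powershell
# Sketch only: carve a 150GB two-way mirrored Cluster Shared Volume out of the S2D pool.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV1" `
           -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 150GB
```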
 |
| "New-Volume" command completed. |
 |
Using "Failover Cluster Manager" to look at the Cluster Shared Volume that we have just created.
Note the "Health Status", "Operational Status", and "Resiliency".
Also note that it is 150GB in size. |
 |
| If we look at the storage pool again, notice that it says 302GB is used up, even though we just created a 150GB volume.
This is because the data is mirrored between the disks in DS1 and the disks in DS2: a 150GB two-way mirror consumes roughly 2 × 150GB = 300GB of raw pool space, plus a small amount of metadata. |
 |
| At this point, our cluster does not have a Witness. A Witness is important for proper failover operations.
The next few screenshots show the process of creating a Witness. The process is not specific to Storage Spaces Direct; it is the same for any failover cluster. In this demonstration, we will be creating a File Share Witness, using a File Share on HVnode1. |
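For reference, the whole wizard sequence below has a one-line PowerShell equivalent, shown here as a sketch. "S2DCluster" is the placeholder cluster name from the earlier sketch, and "\\HVnode1\S2DWitness" is a hypothetical path for the share created (and permissioned) in the screenshots that follow; the cluster computer object still needs full access to that share first:

```powershell
# Sketch only: point the cluster quorum at a file share witness.
Set-ClusterQuorum -Cluster S2DCluster -FileShareWitness "\\HVnode1\S2DWitness"
```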
 |
| First, note the computer object of the cluster. This computer object needs to be given full access to the witness file share that we will be creating later. |
 |
| The folder for the File Share Witness. |
 |
| Creating the file share using Server Manager... |
 |
| Choose "SMB Share - Quick". |
 |
| Choose "Type a custom path" and key in the local path of the shared folder. |
 |
| A descriptive share name is recommended. |
 |
| Click "Next". |
 |
| This is the important part. We will be configuring the cluster computer object to have full access to the file share.
Click "Customize permissions..." |
 |
| Click "Add". |
 |
| Click "Select a principal". |
 |
(Click "Object Types" to ensure that "Computer" type objects can be selected.)
Enter the computer object of the cluster, click "OK". |
 |
| Select "Full control". Click "OK". |
 |
| Click "OK". |
 |
| (Notice that the cluster computer object now has "Full Control".)
Click "Next". |
 |
| Click "Create". |
 |
| Click "Close". |
 |
| In "Failover Cluster Manager", right click the cluster, mouseover "More Actions", click "Configure Cluster Quorum Settings". |
 |
| Click "Next". |
 |
| Choose "Select the quorum witness". Click "Next". |
 |
| Choose "Configure a file share witness". Click "Next". |
 |
| Key in the full network path of the file share that we just created. Click "Next". |
 |
| Click "Next". |
 |
| Click "Finish". |
 |
| Check at "Failover Cluster Manager" that the cluster now has a Witness. |
 |
| We are now configuring the "Scale-Out File Server" role in our cluster.
In "Failover Cluster Manager", right-click the cluster, click "Configure Role". |
 |
| Click "Next". |
 |
| Select "File Server". Click "Next". |
 |
| Choose "Scale-Out File Server for application data". Click "Next". |
 |
| A descriptive name for the File Server is recommended. Click "Next". |
 |
| Click "Next". |
 |
| Click "Finish". |
 |
| In "Failover Cluster Manager", click "Roles", right-click the Scale-Out File Server role that we have just created, click "Add File Share". |
 |
| Choose "SMB Share - Applications", click "Next". |
 |
| Choose "Select by volume", Select the "CSVFS", click "Next". |
 |
| Assign a Share name. A descriptive Share name is recommended. Click "Next". |
 |
| Click "Next". |
 |
| This is an important step. In this demonstration, later on, we will be configuring a Hyper-V cluster to use this file share. All the nodes in the Hyper-V cluster need full access to this file share. We will skip the screens showing the process of giving the nodes full access to this file share. |
 |
Notice that "HVnode1" has full access. "HVnode1" will be in the Hyper-V cluster later.
Click "Next". |
 |
| Click "Create". |
 |
| Click "Close". |
At this point, Storage Spaces Direct is considered done. Any file or folder placed or created in the above file share, i.e. \\SOFS1\CSVFolder, will be highly available.
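A sketch of the PowerShell equivalent of the share-creation wizard above. The CSV mount point (C:\ClusterStorage\Volume1) and the domain name (CONTOSO) are assumptions, not values taken from the screenshots; the share and SOFS names match the ones used in this article:

```powershell
# Sketch only: create the continuously available share, scoped to the SOFS1 role,
# and give the Hyper-V node's computer account full access.
New-Item -Path "C:\ClusterStorage\Volume1\CSVFolder" -ItemType Directory
New-SmbShare -Name CSVFolder -Path "C:\ClusterStorage\Volume1\CSVFolder" `
             -ScopeName SOFS1 -ContinuouslyAvailable $true -FullAccess "CONTOSO\HVnode1$"
```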
 |
| Anything in this folder is highly available. |
We will skip the screens showing the process of creating the Hyper-V cluster. In this demonstration, our Hyper-V cluster has only 1 node, "HVnode1". From this point onwards, our Hyper-V cluster is formed. We will now create the virtual machine that will run on this Hyper-V cluster and use the disk space from the highly available file share backed by Storage Spaces Direct.
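A sketch of the PowerShell equivalent of the VM-creation steps that follow. The VM name "TestVM" and the memory and disk sizes are hypothetical; \\SOFS1\CSVFolder is the highly available file share created above:

```powershell
# Sketch only: create a VM whose configuration and VHDX both live on the highly
# available file share. Run on HVnode1, the single Hyper-V cluster node.
New-VM -Name TestVM -MemoryStartupBytes 2GB -Generation 2 `
       -Path "\\SOFS1\CSVFolder" `
       -NewVHDPath "\\SOFS1\CSVFolder\TestVM\TestVM.vhdx" -NewVHDSizeBytes 60GB
# Make the VM a clustered role, matching the use of "Failover Cluster Manager"
# rather than "Hyper-V Manager" in the screenshots.
Add-ClusterVirtualMachineRole -VMName TestVM
```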
 |
| First, we create a folder in the highly available file share, for storing all the virtual machine's files. |
 |
| We begin creating the virtual machine. Notice that it is done via "Failover Cluster Manager", not "Hyper-V Manager". |
 |
| We have only 1 node. So we select that node, click "OK". |
 |
| Click "Next". |
 |
| Note the location of the folder to store the virtual machine. It is the highly available file share. (We will skip the next few screens to go to the important part.) |
 |
| Note the location of the vhdx file for the virtual machine. It is the highly available file share again. (We will skip the next few screens again, and go to the part where the virtual machine is created and ready to be started.) |
 |
| Starting virtual machine... |
 |
| Installing OS in virtual machine... |
 |
| OS installed in virtual machine. |
We will now run a simple test. We will open Notepad in the virtual machine, do some editing, turn off one of the nodes running Storage Spaces Direct, and see whether our virtual machine dies.
 |
| Editing a file in our virtual machine... |
 |
| Notice that our cluster shared volume is "Healthy" and "Max". |
 |
| Turning off one of the nodes. ("Turn Off" is similar to cutting off power supply suddenly. "Shut Down" is similar to doing a clean shutdown. We are emulating an unexpected event. So we "turn off" the node.) |
 |
| Node is now off... |
 |
| The cluster shared volume is now "degraded" because... |
 |
| ... one of the nodes is down. |
 |
| The virtual machine is still running, and we can edit files normally. |
One of the nodes holding the data of the virtual machine is down. But the virtual machine is still running normally. We will now power up the node that was down.
 |
| Powering up the node that was down. |
 |
| Node is now up. |
 |
| Cluster shared volume is now "regenerating". |
 |
| Virtual machine is still running. Files can be modified. |
 |
| Cluster shared volume is now "healthy". |
 |
| Virtual machine runs as per normal. |
In this demonstration, we took 2 servers, put 3 disks in each server, combined all the raw space in the 6 disks (using Storage Spaces Direct), and produced a highly available file share. We used the space in that file share to run a virtual machine. When one of the servers went down, the virtual machine kept running. Thus, with Storage Spaces Direct, we are able to build a highly available storage system without using traditional SANs.