When it comes to data access for users, faster is always better. While many factors affect how efficiently the data is delivered to users, including networking and processing, the most important may be storage architecture efficiency. Many storage architectures are complex, layering multiple storage protocols along with virtual appliances simply to present block storage to virtual machines. Scale Computing HyperCore’s storage architecture delivers high efficiency and streamlined simplicity.
The Scale Computing Reliable Independent Block Engine (SCRIBE) is the core component of SC//HyperCore, combining the storage drives from each node into a single, logical, cluster-wide storage pool. The pooling occurs automatically with no user configuration required. Blocks are stored redundantly across the cluster to allow for the loss of individual drives or an entire node.
The SCRIBE storage pool is available to all nodes of the cluster and presented without any file systems, traditional storage protocols like iSCSI/NFS, or virtual appliances. SCRIBE is embedded directly in the SC//HyperCore operating system. When a VM is created on SC//HyperCore, the virtual disks provide direct block access between the virtual machines and the SCRIBE storage pool. The only file systems created in SC//HyperCore are the file systems used by the guest operating systems in the VMs to address the virtual disks.
Other storage architectures tend to emulate the SAN or NAS devices traditionally used in virtualization. These begin with a storage pool at the lowest level and layer a file system on top, present that to the hypervisor, where another file system is layered on and managed by a virtual storage appliance (VSA), and finally present the result to a VM, where the guest operating system adds yet another file system. Aside from the multiple protocol layers that every I/O operation must traverse, the VSAs managing the storage can consume a large portion of the RAM and CPU that could otherwise be used to run more VMs.
As a comparison, the route of an I/O operation in a VSA architecture may look something like:
Application -> RAM -> Hypervisor -> RAM -> VSA -> RAM -> Hypervisor -> RAM -> Write-cache SSD -> Erasure code (SW R5/6) -> Disk -> Network to next node -> RAM -> Hypervisor -> RAM -> VSA -> RAM -> Hypervisor -> RAM -> Write-cache SSD -> Erasure code (SW R5/6) -> Disk
On SC//HyperCore, the same I/O operation route would look more like this:
Application -> RAM -> Disk -> RAM -> Network to next node -> RAM -> Disk
The example above includes an SSD cache as part of the VSA architecture. SCRIBE can also incorporate SSD, but rather than using it merely as a cache to mask an inefficient design, SCRIBE uses SSD as a full storage tier within the storage pool, increasing both the capacity and the overall speed of the pool. With SCRIBE, a hybrid storage architecture moves data dynamically between the SSD and HDD tiers using HyperCore Enhanced Automated Tiering (HEAT) technology.
In addition to higher performance, the efficiency of SCRIBE is what allows SC//HyperCore to run on smaller form factors like the HE100 family of Scale Computing hardware. Using fewer resources means customers can consider platforms that were never an option before, lowering the TCO of the infrastructure.
Data sets are most commonly characterized by only a small percentage of data being actively accessed. In a hybrid, tiered storage system, efficiency comes from the active data residing on SSD and the inactive data being stored on slower HDD storage. The HEAT technology in SCRIBE monitors data access and creates a dynamic mapping of active data blocks and moves these blocks to the SSD storage tier while moving inactive storage blocks off of SSD onto HDD.
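SCRIBE's internals are not public, so purely as an illustration of the access-frequency mapping described above, a toy tiering heuristic might look like the following sketch. All names, thresholds, and the decay scheme here are hypothetical, not HEAT's actual implementation:

```python
from collections import Counter

class TieringSketch:
    """Toy model of access-frequency tiering: the hottest blocks live on
    a capacity-limited SSD tier, everything else falls back to HDD.
    Illustrative only -- not SCRIBE/HEAT internals."""

    def __init__(self, ssd_capacity_blocks):
        self.ssd_capacity = ssd_capacity_blocks
        self.access_counts = Counter()
        self.ssd_blocks = set()

    def record_access(self, block_id):
        # Track how often each block is read or written.
        self.access_counts[block_id] += 1

    def rebalance(self):
        # Rank blocks by access count; the hottest ones (up to SSD
        # capacity) are mapped to SSD, the rest remain on HDD.
        hot = [b for b, _ in self.access_counts.most_common(self.ssd_capacity)]
        self.ssd_blocks = set(hot)
        # Halve counts so the mapping stays dynamic: blocks that go
        # cold eventually age out of the SSD tier.
        for b in list(self.access_counts):
            self.access_counts[b] //= 2
            if self.access_counts[b] == 0:
                del self.access_counts[b]

    def tier_of(self, block_id):
        return "SSD" if block_id in self.ssd_blocks else "HDD"
```

The key idea the sketch captures is that tier placement is driven by observed access, not by static assignment, so the hot set can shift over time as workloads change.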
By default, every virtual disk is treated equally, with data blocks moved between the SSD and HDD tiers based solely on access patterns. Sometimes one virtual disk demands higher performance than another; in those cases, SC//HyperCore administrators can prioritize those virtual disks.
For each virtual disk, the relative priority of SSD utilization can be adjusted on a scale of 0 to 11. The default for all disks is 4, so if no disk is ever changed, no disk has priority over any other. Any disk's priority can be adjusted dynamically, up or down, to increase or decrease that disk's SSD utilization. Priority 0 bypasses SSD completely and uses only HDD for that disk. Priority 11 attempts to place the disk's data entirely on SSD, or as much of it as possible given the SSD available in the cluster, taking all other disk priorities into account.
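Scale Computing does not publish how priorities translate into SSD placement, but the behavior described above (0 bypasses SSD, 11 claims as much SSD as capacity allows, intermediate values share proportionally) could be modeled by a weighted-allocation sketch like this. The function name and allocation scheme are assumptions for illustration:

```python
def ssd_targets(disk_sizes, priorities, ssd_total):
    """Hypothetical sketch (not SCRIBE's actual algorithm) of turning
    per-disk priorities 0-11 into SSD allocation targets, in blocks."""
    targets = {}
    remaining = ssd_total
    # Priority-11 disks claim SSD first, as far as capacity allows.
    for d, p in sorted(priorities.items(), key=lambda kv: -kv[1]):
        if p == 11:
            take = min(disk_sizes[d], remaining)
            targets[d] = take
            remaining -= take
    # Remaining SSD is split among priorities 1-10 in proportion
    # to their priority values.
    weighted = {d: p for d, p in priorities.items() if 0 < p < 11}
    total_weight = sum(weighted.values())
    for d, p in weighted.items():
        share = remaining * p // total_weight if total_weight else 0
        targets[d] = min(disk_sizes[d], share)
    # Priority-0 disks never touch SSD.
    for d, p in priorities.items():
        if p == 0:
            targets[d] = 0
    return targets
```

For example, with 150 blocks of SSD and three 100-block disks at priorities 11, 4, and 0, the priority-11 disk is placed fully on SSD, the priority-4 disk gets the remainder, and the priority-0 disk gets none.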
SC//HyperCore meters real-time IOPS for each virtual disk, so the effect of changing a disk's priority is visible immediately. Because priorities can be changed dynamically, they can be adjusted as often as needed to achieve the right balance of IOPS across virtual disks and VMs.
SCRIBE and HEAT are both designed to automatically incorporate new storage from additional SC//HyperCore nodes in a cluster. When a cluster is scaled out with new nodes, the storage from the new nodes is seamlessly added to the SCRIBE pool and immediately visible and usable from every node in the cluster. Administrators don’t need to configure storage, but they have the option to adjust SSD priority as needed.
By design, embedding the storage system in the hypervisor enables SC//HyperCore’s extreme efficiency and simplicity. With SC//HyperCore, users experience the speed benefits of SSD, the cost benefits of HDD, and the peace of mind of running a highly available cluster that absorbs the loss of disks or entire nodes.
This is the SC//HyperCore storage advantage. This storage system works specifically with modern virtualization, with none of the complexity of SAN and NAS that were designed to work with physical servers. Efficiency, hyperconvergence, and user experience have been simplified in one system.