What is Hyperconvergence?

Hyperconvergence (or Hyperconverged Infrastructure), noun – The integration of the storage, compute, and virtualization layers of infrastructure into a single solution architecture.

The idea behind hyperconvergence is to eliminate complexity from the modern datacenter. Virtualization has already brought IT a long way in consolidating and managing server workloads, but the underlying infrastructure is still built on a traditional hardware component architecture. Let's set the stage by describing the state of traditional infrastructure design.

Traditional Infrastructure

Build Your Own / DIY

The tried-and-true IT infrastructure design is a combination of technologies that pre-date server virtualization, with a virtualization hypervisor layered on top. The process involves installing hypervisors on several brand-name servers acting as hosts and adding a SAN or NAS to create a cluster. While this architecture offers a great deal of flexibility in choosing hardware and combining solutions from multiple vendors, it comes with the complexity of maintaining and supporting those disparate solutions within the architecture.

Monolithic Storage Single Point of Failure

The DIY architecture relies on multiple servers and hypervisors sharing a common storage system, which makes that storage a critical single point of failure for the entire infrastructure. This is commonly referred to in the industry as 3-2-1 architecture, with the 1 representing the single shared storage system that all servers and VMs depend on (also called the inverted pyramid of doom). "Scale-out" storage systems have become available to distribute storage processing and redundancy across multiple independent "nodes," but this technology only adds cost and complexity to the solution.

Reference Architecture

The market reacted to the complexity of the DIY approach by creating the reference architecture – a set of "pre-certified" components proven to work together. This approach gives the IT generalist the ability to quickly implement a known solution based on well-documented use cases, but it still requires each layer to be managed independently. It is a step toward convergence, but it falls short of reducing the complexity of implementation, ongoing maintenance, and support.

Converged Solutions

Converged solutions usually combine just two of the "pre-certified" hardware components into a single system that is easily consumable by the midmarket IT admin. These solutions generally use software-defined architecture to eliminate some of the complexity of melding hardware components from different vendors. Still, they are only part of a larger solution that includes components left unconverged, such as the virtualization hypervisor. Many of these solutions call themselves "hyperconverged" even though they rely on a third-party hypervisor. Don't be fooled. Because of their limited scope, converged solutions solve only part of the complexity problem.

The Turning Point

The turning point for infrastructure architecture was the ability to combine all of the components needed to simply plug in the infrastructure, power it on, and start creating virtual machines. The key component missing from convergence, the hypervisor, is the "hyper" in hyperconvergence. This model converges the hypervisor, storage, and compute into a single solution stack, providing a highly available infrastructure without the complexity associated with traditional multi-vendor architectures. Delivered as a cluster of three or more appliance nodes, hyperconvergence connects to your network as a full infrastructure solution.

Virtualization: Old vs. New

Key Benefits

Simplicity

Hyperconvergence is an appliance-based approach combining servers, storage, and virtualization in a single-vendor solution. Purchasing and support are handled through a single vendor. The simplicity of the solution includes rapid deployment, automated management capabilities, and a single pane of management. Scaling out can be as simple as adding cluster nodes.

High Availability

Hyperconverged appliances are designed with integrated clustering for automatic failover of your applications and data among the nodes of a cluster. Applications can be migrated between nodes in a live state without service interruption. In the event of a complete node failure, applications are automatically restarted on other nodes to minimize downtime.

Scalability

The resources of each node are aggregated into a cluster, creating a single computing platform for your applications and data. If more computing power is needed, simply add a node. If more storage is needed, simply add a node. Nodes of different types can be mixed and matched in a single cluster, providing the flexibility to build out the right infrastructure for your applications.

Lower Costs

By combining so many components into a single solution, a hyperconverged solution costs significantly less than traditional solutions. The reduction in cost may appear in the initial purchase price compared to a traditional virtualization infrastructure design, and it is certainly realized in ongoing management costs. The simplicity and scalability eliminate the need for expensive IT training and certifications. The built-in high availability features reduce downtime and the high costs of lost productivity and service outages.

The Scale Computing Difference

At Scale Computing, we challenge traditional thinking on infrastructure design and architecture with the idea that IT infrastructure should be so simple that anyone, at any level of experience, could manage it. This idea goes against decades of IT managers and administrators being put through weeks of training and certification to manage servers, storage, and most recently, virtualization and cloud. We believe infrastructure should not need multiple control panels filled with nerd knobs that must be monitored ad nauseam. Rather, the expertise of IT administrators should be focused on applications and business processes.