HC3 Edge FAQ

What is Edge Computing?

Edge computing describes physical computing infrastructure intentionally located outside the four walls of the datacenter, so that storage and compute resources can be placed where they are needed. Running on a small hardware footprint, infrastructure at the edge collects, processes, and reduces vast quantities of data, which can then be uploaded to either a centralized datacenter or the cloud. Edge computing acts as a high-performance bridge from local compute to both private and public clouds.

What Makes Edge Different from Ordinary On-Premises (Branch/ROBO/Other) Deployments?

Branch and ROBO deployments certainly fall into the broad definition of edge computing. What is different today is that these small, on-premises locations are now running core, mission-critical applications. What was once simple infrastructure to run a remote DNS or print server may now be housing applications collecting IoT data, making intelligent decisions on manufacturing processes, or enhancing a retail experience. As these types of critical applications are deployed locally, the infrastructure they reside on must evolve to match the criticality of the workloads.

What is the Challenge of Edge Computing?

The core challenge of edge computing can be summed up as follows: the applications running at the edge are just as critical as the applications running in the datacenter, yet the resiliency, scalability, security, high availability, and human IT resources that exist in the datacenter do not inherently exist at the edge. This creates a mismatch between the criticality of the applications and the infrastructure and IT support available to them.

Automation and Machine Intelligence are Critical

Among the many challenges of edge computing, none is more apparent than the lack of human IT resources in remote locations. Sometimes staff is limited; sometimes there is none at all. In the datacenter it is possible to “make it work” by applying human IT intelligence to problems. For example, when a system fault occurs, an IT professional can be alerted, identify the problem, and then follow the steps required to bring the system back online. At the edge, the luxury of alerting-followed-by-human-action is not a practical reality, as there may be no IT professionals on site.

What is needed is an infrastructure based around machine intelligence, which can supplement and augment a central IT staff, providing the local intelligence which otherwise is not able to be physically present. The infrastructure must be able to identify complex problems, and then take the necessary steps to correct those problems while maintaining maximum uptime and reliability for the applications.
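The identify-then-correct loop described above can be sketched in a few lines. This is a purely illustrative example, not Scale Computing's implementation; the fault names, health fields, and remediation steps are all hypothetical.

```python
def check_health(node):
    """Return a list of detected faults on a node (hypothetical checks)."""
    faults = []
    if not node["disk_ok"]:
        faults.append("disk_failure")
    if node["heartbeat_age_s"] > 30:
        faults.append("node_unresponsive")
    return faults

# Hypothetical mapping of each fault to the ordered corrective steps
# an automated system could take without waiting for a human operator.
REMEDIATIONS = {
    "disk_failure": ["isolate_disk", "rebuild_data_from_replicas"],
    "node_unresponsive": ["fence_node", "restart_vms_on_healthy_node"],
}

def remediate(faults):
    """Translate detected faults into corrective actions, escalating
    to central IT only for faults with no known remediation."""
    actions = []
    for fault in faults:
        actions.extend(REMEDIATIONS.get(fault, ["alert_central_it"]))
    return actions
```

The point of the sketch is the division of labor: detection and remediation run locally and automatically, while central IT staff are involved only for faults the system cannot handle on its own.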

Flexibility, Overhead, and Cost are All Key Considerations

Because edge locations have previously run only a few critical applications, the existing infrastructure may be a mishmash of different point solutions, server types, and infrastructure software components deployed, modified, and added to over time. No well-run datacenter would deploy in such a chaotic fashion, and for good reason: delivering high availability and efficiency with such a deployment is impossible.

A good edge deployment can be thought of as a micro-datacenter combined with intelligent automation. Datacenter functions such as compute, storage, backup, disaster recovery, and application virtualization can be consolidated into a single, integrated platform. Infrastructure silos that are difficult to manage in a centralized datacenter become unmanageable at the edge; converging them into a single platform is both efficient and cost effective.

IT resource overhead is also a key consideration. A good edge infrastructure strategy is built around flexibility; after all, new applications, devices, data sources, and needs emerge continuously. Some edge deployments may be resource- and storage-heavy, while others may only need to run a few very lightweight applications. For example, say a new deployment needs to run a few small applications which collectively consume just 16 GB of RAM. Would it make sense to deploy infrastructure that itself consumes 10x those resources just to function? Of course not. Furthermore, the added cost of such excessive overhead is a significant barrier, especially when multiplied across dozens, hundreds, or thousands of locations.
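The overhead arithmetic above can be made concrete. The figures below (the 10x overhead ratio and a 100-site fleet) are illustrative assumptions taken from the example, not measurements of any particular product.

```python
def fleet_overhead_gb(app_ram_gb, infra_overhead_ratio, num_sites):
    """RAM consumed by the infrastructure stack alone, fleet-wide.

    app_ram_gb          -- RAM the applications actually need per site.
    infra_overhead_ratio -- multiplier for what the infrastructure
                            itself consumes (10x in the example above).
    num_sites           -- number of edge locations in the fleet.
    """
    per_site_overhead = app_ram_gb * infra_overhead_ratio
    return per_site_overhead * num_sites

# 16 GB of applications, infrastructure consuming 10x that, 100 sites:
# 16 * 10 * 100 = 16,000 GB of RAM bought just to run the infrastructure.
```

Even before hardware pricing is considered, the multiplication shows why per-site overhead dominates the cost picture as the number of locations grows.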

Instead, what is needed is a solution that delivers the core functions of a datacenter but can scale both up and down in size. Edge computing brings with it the need to deploy many micro-datacenters of varying sizes, and a proper platform should be able to scale in both directions to accommodate them.

Changes in Physical Size, Deployments, and Scalability

While centralized datacenters are measured in rows of full racks, edge deployments are most often far smaller: single racks, partial racks, or entirely different form factors such as tower systems or IoT-sized platforms no larger than a Raspberry Pi. It is quite possible to deploy a micro-datacenter of three servers, collectively no larger than a shoebox, that can run dozens of applications while delivering 10,000 IOPS. This radical change in form factor affects where systems are deployed, the power they require, the heat they generate, and the cost of the hardware itself. Furthermore, as needs change, resources must be added without disrupting existing workloads. Taking a 3-hour maintenance window for each of hundreds of edge deployments with no local IT staff is simply not a viable option.

Find Out More About HC3 Edge - Request A Demo

Scale Computing is at the forefront of making edge computing more accessible and more affordable for organizations of any size. We’d like to help you succeed with your current and future edge computing projects.

For more information or to request a demo, contact us at: 877.SCALE.59