Edge computing is getting a great deal of attention as businesses across a range of industries begin in earnest to invest in putting applications and processing power in close proximity to where they’re needed most.
However, as the edge comes into greater focus for organizations looking to drive deeper value from their applications and data, IT leaders should take care to learn from those who blazed the path before them. Here are four common mistakes to be wary of as you begin your journey to the edge:
1. Complexity is the Enemy of Reliability
The more complexity that exists in a system, the more likely it is that a small issue in one area will cascade into a catastrophic failure. At the edge, complexity means more things to monitor, patch, and update, and therefore more potential points of failure.
As Dave Demlow, Scale Computing’s Vice President of Product Strategy, says in this webinar: “In some of our more successful edge deployments, complexity was the number one thing they set out to solve. They specifically wanted to go from an environment where they had a bunch of different networking appliances, one physical server for each critical application and its own software stack – and sometimes its own support organization that cared about that one application – and instead move to a consolidated architecture that could run multiple applications, that could be monitored, managed, and patched independently, that had built-in resiliency and redundancy so that when something did go wrong such as a power failure or hard drive failure, they didn’t have to shut everything down.”
It’s important to remember that the entire point of an edge deployment should be to help the business succeed. That means avoiding excessively complex technologies and requirements, and instead ensuring that the edge delivers on its promise of cost-effectiveness and business enablement beyond the data center.
2. Edge Systems Must be Purpose-Built
Life at the edge is quite different from the highly regulated environment of the data center, where everything is tightly controlled: temperatures are constant, skilled IT personnel are always on hand to troubleshoot issues, and power and connectivity are never in doubt.
The edge, meanwhile, is more like the ‘organized chaos’ you might find in a classroom. Conditions must be managed site by site, and the reality of the edge is that there are often no highly trained personnel to turn to when things go awry.
Unlike conventional data center equipment, which has specific cooling and power requirements, a purpose-built edge system is designed with the unknowns in mind. Whether your edge is situated in the depths of an ocean freighter or in the middle of a sweltering factory floor, modern edge equipment is designed to work in the messy world we actually inhabit.
Edge components and systems need to be thought of as “universal” products that can be deployed when and where they’re needed, with few operational constraints, and made appropriately secure in any given environment.
3. Edge Deployments Shouldn’t Require Downtime
Resiliency and automated failover are hallmarks of the modern data center. But at the edge, there are no such guarantees. When a server or application goes down in a remote retail branch, it’s back to writing receipts in longhand and recording credit card transactions with the old carbon imprinter.
Downtime has become an unfortunate yet unavoidable fact of life for traditional edge locations. Whether it’s the initial deployment of a new application, the migration of a finicky legacy application, or the rollout of a critical security patch, some unknowable amount of downtime has come to be expected.
However, just because an older application wasn’t designed with resiliency in mind doesn’t mean that your platform shouldn’t be able to bring resiliency to the application.
A modern hyperconverged infrastructure means not having to schedule planned downtime for simple updates such as adding more capacity or boosting compute to run a new application. Ask yourself: when you need to make these types of routine updates to your edge infrastructure, is it plug-and-play or plug-and-pray?
4. Manual Management is a Recipe for Disaster
A surprising number of edge deployments are still being managed with some degree of manual intervention. Whether it’s relying on a set of Bash or PowerShell scripts to configure endpoint devices or tuning a load balancer by hand, the more you need to physically touch a piece of infrastructure, the greater the chance that mistakes will be made.
The problem with manual management of edge infrastructure becomes all the more acute as you scale out the number of edge sites. While it might be feasible to rely on some level of manual processes for managing a few separate locations, it quickly becomes untenable when that number balloons to hundreds or thousands of locations.
Because edge deployments are so dynamic, it’s critical that they can be managed in a highly programmatic and repeatable manner. A standardized approach that requires little or no customization and can be overseen by someone with minimal technical skills should be the goal. When possible, a modern edge system should also offer or embrace infrastructure as code (IaC), which greatly simplifies the change control process and enables a fleet of edge deployments to be managed and maintained in a unified fashion.
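To make the desired-state idea concrete, here is a minimal sketch in Python of what “programmatic and repeatable” management looks like in practice. All of the names here (`Site`, `apply_state`, `DESIRED_STATE`, the example store names) are hypothetical illustrations, not part of any real product or IaC tool; a real deployment would express the same pattern through an IaC platform’s own configuration language and APIs.

```python
# Hypothetical sketch: declarative, idempotent fleet management.
# The desired state is declared once as data -- not as a sequence of
# manual steps -- and a single reconciliation routine is reused for
# every site, whether there are three of them or three thousand.

from dataclasses import dataclass, field

# Desired state for every edge site (illustrative keys and values).
DESIRED_STATE = {
    "app_version": "2.4.1",
    "monitoring_agent": "enabled",
    "patch_level": "2024-06",
}

@dataclass
class Site:
    name: str
    state: dict = field(default_factory=dict)  # what the site currently runs

def apply_state(site: Site, desired: dict) -> list:
    """Reconcile one site toward the desired state; return the changes made."""
    changes = []
    for key, value in desired.items():
        if site.state.get(key) != value:
            # In reality this would push a config, install a patch, etc.
            site.state[key] = value
            changes.append(f"{site.name}: set {key} -> {value}")
    return changes

if __name__ == "__main__":
    fleet = [Site("store-0001"), Site("store-0002", {"app_version": "2.4.1"})]

    # One standardized pass over the whole fleet; no per-site customization.
    for site in fleet:
        for change in apply_state(site, DESIRED_STATE):
            print(change)

    # A second pass makes no further changes: re-running is always safe,
    # which is what makes the approach repeatable at scale.
    assert all(apply_state(site, DESIRED_STATE) == [] for site in fleet)
```

The key property is idempotency: because the routine only acts on drift between the actual and desired state, it can be re-run on the entire fleet at any time without fear of side effects, which is precisely what manual scripts and hand-tuning cannot guarantee.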
Want to see edge computing and HCI in action? Request a demo today!