Facing Challenges at the Edge
November 10, 2020

Tenry Fu
Spectro Cloud

Edge applications are here ... edge platforms and apps no longer present a chicken-and-egg problem, but platform and app manageability (and just about everything else) is still a challenge. We won't see mainstream adoption until a proper enterprise management platform exists that addresses these concerns. Here I'll lay out some of the challenges, along with potential designs and solutions, in more detail.

First, let's compare edge environments to a traditional enterprise's multi-cluster and multi-cloud environments.

What is unique about edge environments? Edge may have many locations (think smart restaurants, retail shops, airports, cruise ships, etc.), and these locations will not necessarily have dedicated IT staff to operate and maintain hardware and software; low-touch, plug-and-play management is required.

Edge also may suffer from unreliable networks: the internet connection may temporarily drop or be very slow, and workloads still have to keep working in those conditions.

Because of the lack of IT administration on site, the edge should ideally run directly on bare metal hardware, with less management and cost overhead than a full-blown datacenter virtualization layer.

And unlike a typical datacenter or cloud environment, each edge location will generally need only one Kubernetes cluster. These edge clusters should be centrally managed from a datacenter or cloud, and the management solution at the edge should be part of an end-to-end edge stack, without requiring extra footprint for a separate management appliance.

Assuming you agree with these basic assumptions and requirements, let's get into the details of potential solutions to address these challenges.

HW Discovery, Registration, and Infrastructure Provisioning

Because edge compute would optimally run on bare metal without a virtualization layer, and be as low-touch and plug-and-play as possible, one potential approach is to flash each compute node with a lightweight bootstrap image. This image is very similar to a mobile phone's recovery partition: it has only the basic functionality needed to interact with the central management system and flash the rest of the node's partitions. This is a pretty clean and well-understood approach.

How might this work? Upon boot-up, the compute node could automatically register with the central management system along with some metadata, such as its site location, capacity, and hardware info, and the management system will put it into a group of newly discovered/unassigned nodes.
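
To make this concrete, here is a minimal sketch of what the bootstrap agent's registration step could look like. The management endpoint, the EDGE_SITE_ID variable, and the metadata schema are all hypothetical, invented for illustration; this is not any specific product's API.

```go
// Minimal sketch of a bootstrap agent registering a node with a central
// management system. Endpoint and field names are illustrative assumptions.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"runtime"
)

type nodeInfo struct {
	Hostname string `json:"hostname"`
	Site     string `json:"site"` // site location, stamped at flash time
	CPUs     int    `json:"cpus"`
	Arch     string `json:"arch"`
}

func main() {
	host, _ := os.Hostname()
	info := nodeInfo{
		Hostname: host,
		Site:     os.Getenv("EDGE_SITE_ID"), // hypothetical boot-config variable
		CPUs:     runtime.NumCPU(),
		Arch:     runtime.GOARCH,
	}
	body, _ := json.Marshal(info)

	// Register with the central management system; the node then lands in
	// the "newly discovered/unassigned" pool until an admin assigns it.
	resp, err := http.Post("https://mgmt.example.com/v1/nodes/register",
		"application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Fprintln(os.Stderr, "registration failed, will retry:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("registered, status:", resp.Status)
}
```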

Later, during the infrastructure provisioning process, the central management system admin can create an edge cluster with a full-stack cluster definition composed of selected stack components: base OS (e.g., Ubuntu 20.04), Kubernetes distribution (e.g., K3s), CSI and CNI integrations, any necessary add-ons such as logging and monitoring, and the additional edge applications (e.g., facial recognition, a smart retail ordering system).
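
As a sketch, such a cluster definition could be modeled as simple structured data. Every field, component name, and version below is an illustrative assumption, not any specific product's schema:

```go
// A sketch of a full-stack cluster definition modeled as data; all names
// and versions are examples only.
package main

import "fmt"

type ClusterDefinition struct {
	Name       string
	BaseOS     string   // e.g., "ubuntu-20.04"
	Kubernetes string   // e.g., "k3s"
	CNI        string   // container network integration
	CSI        string   // container storage integration
	AddOns     []string // logging and monitoring agents, etc.
	Apps       []string // Helm chart references for the edge applications
}

func main() {
	smartRetail := ClusterDefinition{
		Name:       "store-042",
		BaseOS:     "ubuntu-20.04",
		Kubernetes: "k3s",
		CNI:        "calico",
		CSI:        "local-path-provisioner",
		AddOns:     []string{"fluent-bit", "prometheus-agent"},
		Apps:       []string{"charts/ordering-system", "charts/facial-recognition"},
	}
	fmt.Printf("%+v\n", smartRetail)
}
```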

Then the admin can assign nodes to the edge cluster in different roles, such as master or worker. If things are not working properly, a node can be booted back into the bootstrap partition to start over. This approach requires centralized management and internet connectivity during this initial phase, but gives admins more fine-grained control and flexibility.

Another approach could be a fully baked image for offline provisioning: the central management system generates a full-stack offline provisioning image based on the cluster definition the admin has created.

The first compute node to come online becomes both a master and a worker node, and registers itself with the central management system.

The second node to come online within the local network will automatically discover the first node through broadcast and join the cluster as a worker node, and so on.

Once the cluster has at least three nodes, the initial master node can pick two worker nodes to perform dual duty as master/worker, expanding the cluster into a multi-master HA setup.
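
A minimal sketch of the local discovery step might look like the following, assuming an invented UDP broadcast protocol on port 8889. A real agent would retry, authenticate the reply, and fall back to electing itself the first master if nobody answers:

```go
// Sketch of plug-and-play local discovery over UDP broadcast; the probe
// string, port, and reply format are invented for illustration.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Open an ephemeral UDP socket and broadcast a discovery probe.
	conn, err := net.ListenPacket("udp4", ":0")
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	bcast := &net.UDPAddr{IP: net.IPv4bcast, Port: 8889}
	if _, err := conn.WriteTo([]byte("EDGE_DISCOVER"), bcast); err != nil {
		panic(err)
	}

	// Wait briefly for an existing master to answer with its join address.
	conn.SetReadDeadline(time.Now().Add(3 * time.Second))
	buf := make([]byte, 256)
	n, addr, err := conn.ReadFrom(buf)
	if err != nil {
		// Nobody answered: this node bootstraps as the first master/worker.
		fmt.Println("no master found; becoming the first master/worker node")
		return
	}
	fmt.Printf("found master at %s, joining with payload %q\n", addr, buf[:n])
}
```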

The first approach is more flexible, but its two-step provisioning process (bootstrap + infrastructure provisioning) takes more time and requires an internet connection.

The second approach works in a plug-and-play fashion without requiring an internet connection, but an admin has less control over each node's role, and since the full image can be pretty big, it is probably better to preflash the nodes before shipping them to the edge site. Personally, I prefer the second approach when possible, but it's likely that some form of both will be supported.

Manageability

Remote, centralized management is really the key to making edge solutions successful. It should cover both infrastructure stack and application updates.

For infrastructure updates, the cleanest approach is immutable, rolling updates: start a new infrastructure node with the updated infra image, have it join the cluster, tear down an old node, and repeat. This is common practice for Kubernetes upgrades in virtualized environments, whether in private or public clouds.

However, on bare metal it is not so easy, since there is no spare node to rotate in. One possible solution is an A/B dual-partition scheme: if the current infrastructure stack is on the A partition, the updated stack can be written to the B partition and the boot table flipped to boot from B. If anything is wrong after reboot, it is fairly easy to revert to the A partition without losing data.
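
Here is a rough sketch of that flow on a UEFI system, assuming hypothetical device paths, image locations, and boot entry numbers. Using the firmware's BootNext variable means an unhealthy B boot naturally falls back to A on the following restart:

```go
// Sketch of an A/B update: write the new image to the inactive partition,
// then boot it exactly once via BootNext. Device names, paths, and boot
// entry numbers (0000 = A, 0001 = B) are illustrative assumptions.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("%s %v: %s\n", name, args, out)
	return err
}

func main() {
	// 1. Write the updated infra image to the inactive (B) partition.
	if err := run("dd", "if=/var/cache/infra-update.img", "of=/dev/sda3", "bs=4M"); err != nil {
		panic(err)
	}
	// 2. Point the *next* boot only at the B entry; if the node never comes
	//    up healthy, the firmware falls back to A on the boot after that.
	if err := run("efibootmgr", "--bootnext", "0001"); err != nil {
		panic(err)
	}
	// 3. Reboot into the new stack. A post-boot health check would make the
	//    switch permanent (efibootmgr --bootorder 0001,0000) or revert.
	_ = run("systemctl", "reboot")
}
```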

Once infrastructure lifecycle management is resolved, the rest of add-on and application lifecycle management is pretty easy. After all, most of these components are just Helm charts, and rolling updates are easy in a Kubernetes environment.
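
For example, a management plane could fan an application update out to every edge cluster by running a Helm upgrade against each cluster's kubeconfig; `helm upgrade --install` rolls the chart's workloads using Kubernetes' normal rolling-update machinery. The release name, chart path, and kubeconfig locations below are illustrative:

```go
// Sketch of fanning a Helm-based app update out to many edge clusters,
// assuming the management plane holds one kubeconfig per cluster.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeconfigs := []string{
		"/etc/edge/store-001.kubeconfig",
		"/etc/edge/store-002.kubeconfig",
	}
	for _, kc := range kubeconfigs {
		cmd := exec.Command("helm", "upgrade", "--install",
			"ordering-system", "charts/ordering-system",
			"--kubeconfig", kc)
		if out, err := cmd.CombinedOutput(); err != nil {
			// A real system would queue the failure and retry once the
			// site's connection comes back.
			fmt.Printf("%s: update failed: %v\n%s", kc, err, out)
			continue
		}
		fmt.Printf("%s: updated\n", kc)
	}
}
```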

Visibility and monitoring are also very important when dealing with hundreds to thousands of edge clusters. Each cluster should embed logging and monitoring solutions, periodically ship its logs to a central system, and send system status and metrics to a central time-series database. Because of the scalability requirements, a multi-cluster monitoring system is recommended; Prometheus alone may not scale to monitor a large number of clusters.
Also, because the network is not necessarily reliable at edge locations, the system should be able to cache logs and metrics locally until it can send them to the central management system.
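
A store-and-forward loop is enough to sketch the idea: samples accumulate in a local backlog and are drained only when the central endpoint is reachable. The endpoint URL and payload format are invented for illustration:

```go
// Sketch of a store-and-forward metrics shipper that tolerates an
// unreliable uplink; endpoint and payload are illustrative assumptions.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	var backlog [][]byte // local cache; a real agent would bound this and spill to disk

	for range time.Tick(15 * time.Second) {
		// Sample a metric every tick, regardless of connectivity.
		sample := []byte(fmt.Sprintf(`{"metric":"node_up","value":1,"ts":%d}`,
			time.Now().Unix()))
		backlog = append(backlog, sample)

		// Try to drain the backlog oldest-first; on failure, keep everything
		// cached locally and retry on the next tick.
		for len(backlog) > 0 {
			resp, err := http.Post("https://mgmt.example.com/v1/metrics",
				"application/json", bytes.NewReader(backlog[0]))
			if err != nil {
				fmt.Println("uplink down, caching locally:", err)
				break
			}
			resp.Body.Close()
			backlog = backlog[1:]
		}
	}
}
```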

It is also worth noting that the central management system should be just a management plane, not a traditional control plane. In Kubernetes, the master nodes are the local control plane, so in the event of a management plane failure, all application workloads can still run locally without issue. The same applies if an edge cluster loses its internet connection to the management plane due to network issues.

Workload Placement

For edge micro datacenters, because the application workload can be part of a cluster profile's stack definition, workload placement is generally not an issue.

However, if different edge locations need to run different kinds of workloads, the central management system, which holds all the edge cluster metadata, can use a rule-based or machine-learning-based placement engine to perform intent-driven placement. The user specifies intent such as SLA, location, latency, cost, or performance requirements, and the placement engine decides which edge location(s) to deploy the workload to.
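
A toy version of such a rule-based engine might apply hard constraints first and then rank the feasible sites by a soft preference such as cost. All fields, values, and weights below are invented for illustration:

```go
// Toy rule-based placement: filter sites by hard intent constraints, then
// rank survivors by cost. Fields and sample data are illustrative.
package main

import (
	"fmt"
	"sort"
)

type Site struct {
	Name      string
	Region    string
	LatencyMS int     // measured RTT to the site
	CostScore float64 // relative infrastructure cost, lower is cheaper
}

type Intent struct {
	Region       string // required region; "" means any
	MaxLatencyMS int
}

func place(sites []Site, in Intent) []Site {
	var feasible []Site
	for _, s := range sites {
		// Hard rules: drop sites that violate the intent outright.
		if in.Region != "" && s.Region != in.Region {
			continue
		}
		if s.LatencyMS > in.MaxLatencyMS {
			continue
		}
		feasible = append(feasible, s)
	}
	// Soft rule: among feasible sites, prefer the cheapest.
	sort.Slice(feasible, func(i, j int) bool {
		return feasible[i].CostScore < feasible[j].CostScore
	})
	return feasible
}

func main() {
	sites := []Site{
		{"store-001", "us-west", 40, 1.2},
		{"store-002", "us-west", 90, 0.8},
		{"store-003", "eu-west", 30, 1.0},
	}
	fmt.Println(place(sites, Intent{Region: "us-west", MaxLatencyMS: 60}))
}
```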

Service Mesh and Data Sharing

Because edge networks are unreliable and each edge cluster runs independently, workloads at the edge probably will not have as many multi-cluster service mesh use cases as their datacenter and cloud counterparts. However, there can be use cases where some data needs to be shared and some services need to be accessible from all edge locations. Such data and services will more likely be hosted in a public cloud or datacenter, while edge clusters keep some locally cached data for a period of time. These will be dealt with on a case-by-case basis and are more likely to be handled in the application layer than in the edge infrastructure.

In conclusion, Kubernetes will be ideal for the edge, but edge cluster management is still challenging. As of today, no commercial solution has addressed all of these challenges, but several building-block technologies have emerged. Given the interest in edge computing, we should see robust edge management solutions in the very near future.

Tenry Fu is Co-Founder and CEO of Spectro Cloud