A Guide to Stateful Kubernetes: Federation and Multi-Cluster Explained
February 09, 2021

Cyril Plisko
Replix

So, you've finally decided to use Kubernetes for stateful applications? Congrats! (And good luck.)

But first, let's put the Champagne back on ice and talk about data: the chain that binds your stateful architecture to a single location. If you're only using a single region, you're in luck, but what happens when the same application needs to run in multiple regions? Or, even worse, across multiple clouds?

Stateless applications use service meshes so that the application layer can communicate across clusters. But stateful applications are a different animal: they require synchronized data to be available wherever the application runs.

Now you are faced with some tough questions. How can you ensure that your application is running consistently if the distance between the application and its data varies? How would you solve that issue? Perhaps it’s better to not venture into that mess at all?

Stateful Challenges Come at Scale. Remember the CAP Theorem?

If I'm running a database on a Kubernetes cluster, all the pods require access to a local volume to store and read data. In other words, any write made through one pod should be visible to the rest of the pods.
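As a minimal sketch (the names postgres and pg-data, the image tag, and the storage size are all placeholders), this is roughly what such a database looks like on a single cluster: a StatefulSet in which each pod gets its own volume, and the database itself is responsible for keeping those volumes in sync.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: pg-data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    # Each replica gets its own PersistentVolumeClaim, bound to storage
    # in whatever region this cluster happens to run in. Those volumes
    # are the chain that ties the application to a single location.
    - metadata:
        name: pg-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```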

Consistency, the requirement that every read returns the latest write, sounds simple. But if your goal is to enjoy the true benefit of a distributed, highly available system, limiting yourself to applications that run close to their data, with as little room for error as possible, is not enough.

Not a problem, you might say. I'll set up a centralized database to take care of all of my pods' and clusters' stateful requests.

Congrats, you've just introduced one single point of failure to unite them all. If anything happens to that central database, none of your pods will have access to data; and if the network splits, every cluster on the wrong side of the partition is cut off. It's a double-edged sword that breaks both availability and partition tolerance.

Balance is key, and the tradeoff between consistency, availability, and partition tolerance is of paramount importance. Could we solve this by simply adding another cluster?

What is Multi-Cluster, and What to Do with It?

Once you've designed and coded your application and built your containers, in theory all that is left is the simple task of running them. But getting from code to up and running is not nearly that simple, as anyone who has ever built a containerized application will attest.

Before deploying to the production environment, you need to run various dev/test/stage cycles. You also need to think of scale — your production application may need to run in many different places for reasons like horizontal scalability, resiliency, or close proximity to end-users.

Multi-cluster is a deployment strategy that runs multiple Kubernetes clusters. Running multiple clusters is common, but the issues start when you need pods to communicate with one another.

Multi-cluster is a strategy to deploy containerized applications across multiple Kubernetes clusters.

Multi-cluster use cases:

Improved application availability: A single cluster is a single point of failure. Running multiple cloned clusters that can fail over when the main cluster is damaged keeps the application available across regions.

Support for large organizations: Large organizations run many clusters across different environments. A multi-cluster approach consolidates them into a single management portal, giving the ability to deploy applications across multiple availability zones and clusters. Standardizing cluster creation across environments reduces overhead and shortens time to market for features and updates. In addition, multi-cluster deployments scale easily.

Isolation: Each cluster is its own fault domain, so a failure can be contained to a single cluster. Cluster updates can be phased to reduce the impact of faulty versions or malicious code.

Performance: The closer the application runs to the end-user, the lower the latency and the lower the risk to data in transit.

Compliance: Many countries have laws that govern where you can store users' data. Depending on the regulations, you might have to store data from users in China within the country, and a system that spans multiple regions enables you to do just that. If you only have a data center in the US, you're going to have a tough time serving a global user base.
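To make the baseline concrete before we get to federation, here is a hypothetical sketch of the naive multi-cluster approach: an ordinary Deployment manifest (the name web-app and the nginx image are placeholders) that is applied separately to each cluster, for example once per kubeconfig context, with every copy then managed on its own.

```yaml
# Applied once per cluster, e.g. with a separate kubeconfig context
# for each cluster; every copy is then tracked and updated by hand.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.21
          ports:
            - containerPort: 80
```

Keeping those per-cluster copies in sync by hand is exactly the overhead that federation, covered next, aims to remove.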

Federating Stateful Applications

The idea behind federation is to provide a single configuration to manage the application across multiple clusters or regions.
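As a hedged sketch of what that single configuration can look like using KubeFed, one common implementation of Kubernetes federation (assuming KubeFed is installed and two clusters, named cluster-east and cluster-west here purely for illustration, have been joined to the federation):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: web-app
spec:
  template:
    # An ordinary Deployment spec; KubeFed stamps it out in every
    # cluster listed under placement.
    metadata:
      labels:
        app: web-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web-app
      template:
        metadata:
          labels:
            app: web-app
        spec:
          containers:
            - name: web
              image: nginx:1.21
  placement:
    # One configuration, many clusters: the workload is propagated to
    # every cluster named here.
    clusters:
      - name: cluster-east
      - name: cluster-west
  overrides:
    # Optional per-cluster tweaks, e.g. fewer replicas in the
    # secondary region, without duplicating the whole manifest.
    - clusterName: cluster-west
      clusterOverrides:
        - path: "/spec/replicas"
          value: 1
```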

Federation use cases:

Reduced Configuration Management Complexity: A single place to consolidate cluster management. In this use case, data is not shared across clusters, so it works well for stateless applications.

High Availability (HA): Add cluster redundancy for business continuity planning (BCP); this, too, is a good solution for stateless applications.

Stateless applications enjoy the true benefits of multi-cluster and federated Kubernetes; stateful applications are a different story.

The portability of stateless applications lets them run anywhere. Not all applications are stateless, though; most depend on data, and data does not play by the same rule book as stateless workloads.

Data binds the application to its storage location. A physical location becomes an application dependency, and every request for that data incurs latency proportional to its distance from the application, resulting in inconsistent service.

When it comes to stateful applications, you can solve those problems by treating your state just as you treat your containers. Instead of forcing the application to run where the data happened to be originally provisioned, the data needs to follow the application.

Cyril Plisko is Founder and CTO of Replix