Navigating the Complexities of Operating Large-Scale Kubernetes Environments - 1
July 13, 2022

Sayandeb Saha
NetApp

As containers become the default choice for developing and distributing modern applications, and Kubernetes (K8s) the de facto platform for deploying, running, and scaling them, enterprises need to scale their Kubernetes environments rapidly to keep up. Rapid scaling, however, creates complexities that are difficult to resolve without a clear strategy. This blog describes a few common techniques you can use to navigate the complexities of managing scaled-out Kubernetes environments.

Operating Clusters as Fleets

Most scaled-out Kubernetes environments contain hundreds, if not thousands, of clusters, because Kubernetes at its core commoditizes clusters: it makes them extremely easy to create, run, and scale.

Consequently, many large Kubernetes environments experience cluster sprawl. A best practice is to operate these clusters as a fleet, applying consistent configuration, security, governance, and other policies across all of them so that they are easy to manage, monitor, upgrade, and migrate.

Also, reduce the blast radius of your fleets of K8s clusters by isolating them in different geographies or public cloud regions, so that a service-impacting failure in one fleet does not cascade to others, which could be catastrophic. Commercial software tools are available that can help with such tasks.
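As one illustration of fleet-wide policy management (using the open source Rancher Fleet project; the repository URL and labels below are hypothetical), a GitOps-style fleet manager can push the same baseline policies to every cluster matching a label selector, which also pairs naturally with one-fleet-per-region isolation:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: baseline-policies
  namespace: fleet-default       # Fleet's default workspace for downstream clusters
spec:
  repo: https://github.com/example/fleet-policies   # hypothetical policy repo
  paths:
  - baseline                     # manifests applied to every targeted cluster
  targets:
  - clusterSelector:
      matchLabels:
        region: us-east          # hypothetical fleet label; one fleet per region

With this pattern, registering a new cluster with the right labels is all it takes to bring it under the fleet's shared configuration, security, and governance policies.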

Auto Scaling Infrastructure

Large Kubernetes environments need highly elastic infrastructure that provides compute, storage, and networking resources consumed on demand to keep the environment humming. Kubernetes clusters scale up and down automatically to support application needs, and a resource-constrained cluster can impact the availability of the services its applications provide. Over-provisioning is always an option, but an expensive one.
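Much of this elastic demand originates at the pod level: a HorizontalPodAutoscaler grows and shrinks a workload, and a node-level autoscaler (such as the Kubernetes Cluster Autoscaler) then adds or removes worker nodes to fit the result. A minimal sketch, with hypothetical workload names:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout               # hypothetical service name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout             # the Deployment being scaled
  minReplicas: 3
  maxReplicas: 50              # headroom the underlying infrastructure must absorb
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds ~70%

Multiply that maxReplicas headroom across hundreds of workloads and the infrastructure-elasticity requirement described above becomes clear.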

In public clouds, auto-scaling infrastructure is easier to realize, provided you watch your spending and instrument cost-optimization tools to keep it in check. On-premises, it is much harder to build truly auto-scaling infrastructure: it means being able to provision (and potentially de-provision) thousands of virtual or bare-metal worker nodes, terabytes of storage, and networking resources in minutes to keep up with the dynamic nature of Kubernetes workloads. To mitigate this auto-scaling requirement for large Kubernetes deployments, you may want to adopt the "Namespace-as-a-Service" operating model described in the next section, which has many advantages.

Namespace-as-a-Service Operating Model

As enterprises grapple with the many challenges of managing and maintaining large-scale Kubernetes estates, they have adopted an operating model called "Namespace-as-a-Service" for such environments. In this model, you use a small number of very large Kubernetes clusters. You then onboard application teams onto these clusters, allocate each team one or more namespaces (virtual clusters) based on its needs, and add worker nodes, storage, and other cluster resources as required. You can then use role-based access control (RBAC), network policies, and ResourceQuotas at the namespace level to securely limit and share the aggregate resources of the cluster in a multi-tenant environment.
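A minimal sketch of these namespace-level guardrails, assuming a hypothetical tenant namespace team-a and an identity-provider group team-a-developers:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a          # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "20"       # caps the team's aggregate consumption
    requests.memory: 64Gi
    limits.cpu: "40"
    limits.memory: 128Gi
    persistentvolumeclaims: "30"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: team-a
subjects:
- kind: Group
  name: team-a-developers    # hypothetical group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                 # built-in ClusterRole, scoped here to one namespace
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}            # selects every pod in the namespace
  policyTypes:
  - Ingress                  # no ingress traffic until explicitly allowed

Together, the quota bounds what a tenant can consume, the RoleBinding confines what its developers can touch, and the default-deny network policy isolates its traffic from other tenants.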

As new application teams, or new applications from existing teams, need cluster real estate, you repeat this process, achieving controlled scaling of a Kubernetes estate that is easier to manage and maintain. This operating model mitigates cluster sprawl and enables policy-based control over resource consumption.

Well-Architected Horizontally-Scaled Apps

Properly architecting the apps that run on Kubernetes also goes a long way towards scaling your Kubernetes environment. With Kubernetes, it is essential to design applications that scale horizontally, by adding replicas, so the environment can grow as the applications do. This design pattern is distinct from vertical scaling, where ever more resources (CPU, memory, disk I/O) are allocated to a single application stack, which can hit hard limits and destabilize the environment.
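As a minimal sketch of the horizontal pattern (the service name and image are hypothetical), a stateless service is deployed as a Deployment whose replica count can grow, rather than as a single pod given ever-larger resource limits:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api           # hypothetical stateless microservice
spec:
  replicas: 3                # scale out by raising replicas, not by resizing one pod
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
      - name: orders-api
        image: example.com/orders-api:1.0   # hypothetical image
        resources:
          requests:          # small, fixed per-replica footprint
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

Because each replica's footprint stays small and fixed, the scheduler can spread load across the cluster instead of chasing ever-larger nodes for one pod.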

Ideally, Kubernetes applications should be implemented as a set of microservices that communicate with each other through APIs. This is distinct from traditional monolithic applications, where the subsystems of an application communicate through internal mechanisms. Your developers can then let Kubernetes optimize the placement of these microservices on nodes that are right-sized for their resource requirements. Designing your applications in this manner offloads the complexity of managing them to the operational realm, where Kubernetes can manage it for you.
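As a hedged illustration of right-sized placement (the workload, image, and node label are all hypothetical), resource requests tell the scheduler what each microservice needs, and a nodeSelector can steer it onto a node pool sized for that profile:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-resizer        # hypothetical CPU-heavy microservice
spec:
  replicas: 2
  selector:
    matchLabels:
      app: image-resizer
  template:
    metadata:
      labels:
        app: image-resizer
    spec:
      nodeSelector:
        node-pool: compute-optimized   # hypothetical label on right-sized nodes
      containers:
      - name: image-resizer
        image: example.com/image-resizer:1.2   # hypothetical image
        resources:
          requests:
            cpu: "2"         # the scheduler places the pod only where this fits
            memory: 1Gi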

Go to: Navigating the Complexities of Operating Large-Scale Kubernetes Environments - 2

Sayandeb Saha is Sr. Director, Product Management, at NetApp
