Navigating the Complexities of Operating Large-Scale Kubernetes Environments - 1
July 13, 2022

Sayandeb Saha
NetApp

As containers become the default choice for developing and distributing modern applications, and Kubernetes (K8s) the de facto platform for deploying, running, and scaling them, enterprises need to scale their Kubernetes environments rapidly to keep up. However, rapid scaling can be challenging and can create complexities that are difficult to resolve without a clear strategy. This blog describes a few common techniques you can use to navigate the complexities of managing scaled-out Kubernetes environments.

Operating Clusters as Fleets

Most scaled-out Kubernetes environments contain hundreds, if not thousands, of clusters, because Kubernetes is, at its core, also a cluster commoditization technology that makes it extremely easy to create, run, and scale clusters.

Consequently, many large Kubernetes environments experience cluster sprawl. A best practice is to operate these clusters as a fleet of compute clusters to which you apply consistent configuration, security, governance, and other policies, so that they are easy to manage, monitor, upgrade, and migrate.
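As a minimal sketch of pushing one consistent piece of configuration across a fleet, the example below uses the Python Kubernetes client and assumes each context in your local kubeconfig points to one fleet member; the fleet label values are hypothetical.

```python
# Sketch: stamp a consistent set of labels onto every cluster in the fleet
# so that monitoring, governance, and policy tools can select fleet members
# uniformly. Assumes one kubeconfig context per cluster; labels are hypothetical.
from kubernetes import client, config

FLEET_LABELS = {"fleet": "prod-us-east", "managed-by": "platform-team"}

contexts, _ = config.list_kube_config_contexts()
for ctx in contexts:
    # Build an API client bound to this specific cluster/context.
    api_client = config.new_client_from_config(context=ctx["name"])
    core = client.CoreV1Api(api_client=api_client)

    # Apply the fleet-wide labels to the default namespace as one example
    # of applying identical configuration to every member of the fleet.
    core.patch_namespace("default", {"metadata": {"labels": FLEET_LABELS}})
    print(f"Labeled cluster behind context {ctx['name']}")
```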

Also, reduce the blast radius of your fleets of K8s clusters by isolating them in different geographies or public cloud regions, so that a service-impacting failure in one fleet does not cascade to others, which could be catastrophic. Commercial software tools are available that can help with such tasks.

Auto Scaling Infrastructure

Large Kubernetes environments need highly elastic infrastructure that provides compute, storage, and networking resources consumed on demand to keep the environment humming. Kubernetes clusters scale up and down automatically to support application needs, and a resource-constrained cluster can impact the availability of the services its applications provide. Over-provisioning is always an option, but it is expensive.

In public clouds, auto-scaling infrastructure is easier to realize, provided you watch your spending and instrument cost-optimization tooling. On-premises, it is much harder to build truly auto-scaling infrastructure: it means being able to provision (and potentially de-provision) thousands of virtual or bare-metal worker nodes, terabytes of storage, and networking resources in minutes to keep up with the dynamic nature of Kubernetes workloads. To mitigate the auto-scaling requirement for large Kubernetes deployments, you may want to adopt the "Namespace-as-a-Service" operating model described in the next section, which has many advantages.
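As a rough illustration of what infrastructure auto-scaling has to respond to (this is not the cluster autoscaler itself, and it assumes kubeconfig access to the target cluster), the signal node auto-scalers act on is pods that cannot be scheduled anywhere:

```python
# Sketch: count pods stuck in Pending because no node has room for them.
# This is the condition a node auto-scaler reacts to by adding worker nodes;
# on-premises you would need equivalent automation (or headroom) to respond.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pending = core.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
unschedulable = [
    pod for pod in pending.items
    if any(cond.reason == "Unschedulable" for cond in (pod.status.conditions or []))
]
print(f"{len(unschedulable)} pod(s) are unschedulable; more node capacity is needed.")
```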

Namespace-as-a-Service Operating Model

As enterprises grapple with the many challenges of managing and maintaining large-scale Kubernetes estates, many have adopted an operating model called "Namespace-as-a-Service." In this model, you use a small number of very large Kubernetes clusters. You then onboard application teams onto these clusters, allocate one or more namespaces (virtual clusters) to each team based on its needs, and add worker nodes, storage, and other cluster resources as required. You can then use role-based access control (RBAC), network policies, and ResourceQuotas at the namespace level to securely limit and share the aggregate resources of the cluster in a multi-tenant environment.
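A minimal sketch of onboarding one team under this model with the Python Kubernetes client (the team namespace, group name, and quota values below are hypothetical): create the namespace, cap its aggregate consumption with a ResourceQuota, and grant the team's group the built-in "edit" ClusterRole inside that namespace only.

```python
# Sketch: Namespace-as-a-Service onboarding for one application team.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()
rbac = client.RbacAuthorizationV1Api()

team_ns = "team-payments"  # hypothetical team namespace

# 1. The namespace acts as the team's "virtual cluster".
core.create_namespace({"apiVersion": "v1", "kind": "Namespace",
                       "metadata": {"name": team_ns}})

# 2. A ResourceQuota caps the team's share of aggregate cluster resources.
core.create_namespaced_resource_quota(team_ns, {
    "apiVersion": "v1", "kind": "ResourceQuota",
    "metadata": {"name": "team-quota"},
    "spec": {"hard": {"requests.cpu": "20",
                      "requests.memory": "64Gi",
                      "pods": "200"}},
})

# 3. RBAC scopes the team's access to its own namespace only.
rbac.create_namespaced_role_binding(team_ns, {
    "apiVersion": "rbac.authorization.k8s.io/v1", "kind": "RoleBinding",
    "metadata": {"name": "team-edit"},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "ClusterRole", "name": "edit"},
    "subjects": [{"kind": "Group", "name": "payments-devs",
                  "apiGroup": "rbac.authorization.k8s.io"}],
})
```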

As new application teams, or new applications from existing teams, need cluster real estate, you repeat this process, achieving controlled scaling of a Kubernetes estate that is easier to manage and maintain. This operating model mitigates cluster sprawl and enables policy-based control over resource consumption.

Well-Architected Horizontally-Scaled Apps

Properly architecting the apps that run on Kubernetes also goes a long way toward scaling your Kubernetes environment. With Kubernetes, it is essential to design applications that scale horizontally, so that your Kubernetes environment can scale as your applications scale. This design pattern is distinct from vertical scaling, where more resources (CPU, memory, disk I/O) are allocated to a single application stack, which can hit limits and make the environment unstable.
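In practice, a horizontally scalable service is typically paired with a HorizontalPodAutoscaler so that load is absorbed by adding replicas rather than by growing a single instance. A minimal sketch follows; the "checkout" Deployment and the namespace are hypothetical.

```python
# Sketch: scale out by adding replicas (horizontal), not by resizing one pod.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = {
    "apiVersion": "autoscaling/v1",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "checkout-hpa"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment",
                           "name": "checkout"},
        "minReplicas": 2,
        "maxReplicas": 50,
        "targetCPUUtilizationPercentage": 70,  # add replicas above 70% CPU
    },
}
autoscaling.create_namespaced_horizontal_pod_autoscaler("team-payments", hpa)
```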

Ideally, Kubernetes applications should be implemented as a set of microservices that communicate with each other through APIs. This is distinct from traditional monolithic applications, where subsystems communicate with each other using internal mechanisms. Your developers can leverage Kubernetes to optimize the placement of each microservice on nodes that are right-sized for its resource requirements. Designing your applications in this manner offloads the complexity of managing them to the operational realm, where Kubernetes can manage them for you.
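One way to let the scheduler do that placement work is to declare resource requests and limits for each microservice, so Kubernetes can bin-pack it onto nodes with matching capacity. A minimal sketch follows; the image, names, and sizes are hypothetical.

```python
# Sketch: declare per-container requests/limits so the scheduler can place
# this microservice on a right-sized node and autoscaling has clear signals.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "catalog-api"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "catalog-api"}},
        "template": {
            "metadata": {"labels": {"app": "catalog-api"}},
            "spec": {
                "containers": [{
                    "name": "catalog-api",
                    "image": "registry.example.com/catalog-api:1.0",
                    "resources": {
                        "requests": {"cpu": "250m", "memory": "256Mi"},
                        "limits": {"cpu": "500m", "memory": "512Mi"},
                    },
                }],
            },
        },
    },
}
apps.create_namespaced_deployment("team-payments", deployment)
```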

Go to: Navigating the Complexities of Operating Large-Scale Kubernetes Environments - 2

Sayandeb Saha is Sr. Director, Product Management, at NetApp