Navigating the Complexities of Operating Large-Scale Kubernetes Environments - 2
July 14, 2022

Sayandeb Saha
NetApp

As containers become the default choice for developing and distributing modern applications, and Kubernetes (K8s) becomes the de facto platform for deploying, running, and scaling such applications, enterprises need to scale their Kubernetes environments rapidly to keep up. However, rapidly scaling Kubernetes environments creates complexities that are hard to resolve without a clear strategy. Part 2 of this blog covers a few more common techniques that you can use to navigate the complexities of managing scaled-out Kubernetes environments.

Start with: Navigating the Complexities of Operating Large-Scale Kubernetes Environments - 1

Keeping Up with Kubernetes Updates

Kubernetes is a thriving open-source project that delivers rapid innovation, with releases three times a year. If you use fully managed Kubernetes from public cloud providers, be prepared for aggressive service life cycles. Test your applications with newer versions of Kubernetes as they are released to minimize upgrade-related downtime. If possible, avoid in-place upgrades of Kubernetes clusters: create new clusters, clone your applications to the new clusters, divert traffic to them, and retire the old clusters. Proactively adopt recent versions of Kubernetes for your business-critical applications so that public cloud providers do not force-upgrade your control plane when the version you are running reaches end of life.
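To plan such upgrades, it helps to know how far each cluster has drifted from your target version. The following is a minimal sketch using the official Python Kubernetes client; it assumes a valid kubeconfig for the cluster being audited, and the target minor version is a hypothetical placeholder.

```python
# Minimal sketch: report control-plane and node kubelet versions so you can
# spot version skew before planning a blue/green cluster upgrade.
# Assumes the official `kubernetes` Python client and a valid kubeconfig.
from kubernetes import client, config

TARGET_MINOR = "1.25"  # hypothetical target version for the new cluster

def report_versions():
    config.load_kube_config()                      # use the current kubeconfig context
    server = client.VersionApi().get_code()        # control-plane version info
    print(f"Control plane: {server.git_version}")

    for node in client.CoreV1Api().list_node().items:
        kubelet = node.status.node_info.kubelet_version
        print(f"Node {node.metadata.name}: kubelet {kubelet}")

    if not server.git_version.startswith(f"v{TARGET_MINOR}"):
        print(f"Cluster is not yet on {TARGET_MINOR}; plan a new cluster and migrate.")

if __name__ == "__main__":
    report_versions()
```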

For self-managed Kubernetes platforms, vendors also release aggressively to keep up with upstream innovation. You have more control over when to upgrade, but you do not want to fall behind: upgrading becomes difficult once you are several versions back, and vendors eventually discontinue support for the versions you are running.

Most Kubernetes providers document their life cycle. Read, understand, and take the necessary actions to keep up with rapid releases and subsequent end-of-life schedules.

Reduce or Eliminate Application/Cluster Downtime

Like all other applications and environments, Kubernetes applications and clusters can experience service-impacting disasters or outages, whether self-inflicted or accidental. To keep up with the rapid upgrades explained in the previous section and to recover from unplanned outages, use commercially licensed or open-source Kubernetes data protection solutions that provide backup, DR, and mobility for Kubernetes applications. When adopting such solutions, look for ones that can handle scaled-out multi-cluster environments and provide a single pane of glass for your K8s protection needs.
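As one illustration of automating such protection, the sketch below requests a namespace backup by creating a Backup custom resource for Velero, a popular open-source Kubernetes backup tool. It assumes Velero is installed in the cluster's "velero" namespace; the application namespace and backup names are hypothetical.

```python
# Minimal sketch: request a backup of one application namespace by creating a
# Velero Backup custom resource. Assumes Velero is installed in the "velero"
# namespace; the backup and application namespace names are hypothetical.
from kubernetes import client, config

def backup_namespace(app_namespace: str, backup_name: str):
    config.load_kube_config()
    body = {
        "apiVersion": "velero.io/v1",
        "kind": "Backup",
        "metadata": {"name": backup_name, "namespace": "velero"},
        "spec": {
            "includedNamespaces": [app_namespace],  # protect just this application
            "ttl": "720h",                          # keep the backup for 30 days
        },
    }
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="velero.io", version="v1", namespace="velero",
        plural="backups", body=body,
    )

if __name__ == "__main__":
    backup_namespace("my-app", "my-app-nightly")
```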

GitOps for Application Life-Cycle Management

Releasing applications on Kubernetes can be challenging, and even more daunting in scaled-out environments. GitOps is a best practice you should consider adopting in large Kubernetes environments: it leverages Git, a popular software version control tool, to provide both revision and change control for applications running on the Kubernetes platform.

This model stores the system's desired state in a software version control system such as Git. Developers make changes to the configuration files representing the desired state instead of using the CLI or GUI to make changes directly on the K8s clusters. A delta between the desired state stored in Git and the system's actual state indicates the changeset that needs to be deployed. These changesets can be reviewed and approved (or rejected) through standard Git processes such as pull requests, code reviews, and merges. Changesets approved and merged to the main branch are applied to the K8s clusters, moving the system's current state to the desired state defined by the configuration stored in Git.
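To make the desired-versus-actual comparison concrete, here is a minimal sketch that diffs the replica count declared in a Git-tracked Deployment manifest against the live cluster. The manifest path, Deployment, and namespace names are hypothetical; in practice, a GitOps operator such as Argo CD or Flux performs this reconciliation for you continuously.

```python
# Minimal sketch of the GitOps "delta" idea: compare the desired state checked
# into Git (a Deployment manifest) with the actual state in the cluster.
# The manifest path and namespace are hypothetical; a GitOps operator such as
# Argo CD or Flux performs this reconciliation continuously in practice.
import yaml                       # PyYAML, to read the Git-tracked manifest
from kubernetes import client, config

def replica_drift(manifest_path: str, namespace: str) -> bool:
    with open(manifest_path) as f:
        desired = yaml.safe_load(f)                # desired state from the Git checkout

    config.load_kube_config()
    live = client.AppsV1Api().read_namespaced_deployment(
        name=desired["metadata"]["name"], namespace=namespace
    )                                              # actual state from the cluster

    want = desired["spec"]["replicas"]
    have = live.spec.replicas
    if want != have:
        print(f"Drift detected: Git wants {want} replicas, cluster has {have}.")
        return True
    print("No drift: cluster matches the desired state in Git.")
    return False

if __name__ == "__main__":
    replica_drift("deploy/frontend-deployment.yaml", "production")
```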

You can quickly and easily release applications using this practice and roll back as needed if things don't go according to plan. Using GitOps for change control leverages Kubernetes' core functionality as a reconciliation engine. This process also provides an implicit audit trail of the actions taken while releasing applications, enabling easier troubleshooting and root cause analysis in large K8s environments.

Comprehensive Observability

Rich observability is essential for maintaining large Kubernetes environments so that you can proactively and reactively mitigate issues before they become revenue- and/or productivity-impacting outages. Kubernetes observability is complex because Kubernetes comprises multiple layers of infrastructure and several distinct, highly distributed services, each producing its own set of monitoring data with no single master source or log.

To maintain large Kubernetes environments, you must implement:

■ Monitoring of K8s infrastructure (cluster, nodes, namespaces, pods, etc.) and application resources (CPU, memory, storage, networking)

■ Log collection and management for all Kubernetes services and infrastructure

■ Alerts and notifications

Monitoring data generated from these various sources needs to be collected separately, correlated, and sometimes analyzed to provide the full context of each event or change to an admin, who can then take corrective action as needed to keep your environment humming without disruption.
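As a small example of the monitoring bullet above, the sketch below pulls pod CPU usage from the metrics.k8s.io API and flags pods above a threshold. It assumes the metrics-server add-on is installed; the namespace and threshold are hypothetical, and in practice a full monitoring stack (for example, Prometheus with an alert manager) would perform such checks continuously.

```python
# Minimal sketch: pull pod CPU usage from the metrics.k8s.io API and flag pods
# above a threshold. Assumes the metrics-server add-on is installed; the
# namespace and threshold are hypothetical.
from kubernetes import client, config

CPU_ALERT_NANOCORES = 500_000_000  # alert above ~0.5 CPU cores (hypothetical)

def nanocores(cpu: str) -> int:
    # The metrics API reports CPU as strings such as "250m" or "1200000n".
    if cpu.endswith("n"):
        return int(cpu[:-1])
    if cpu.endswith("m"):
        return int(cpu[:-1]) * 1_000_000
    return int(float(cpu) * 1_000_000_000)

def check_pod_cpu(namespace: str):
    config.load_kube_config()
    metrics = client.CustomObjectsApi().list_namespaced_custom_object(
        group="metrics.k8s.io", version="v1beta1",
        namespace=namespace, plural="pods",
    )
    for pod in metrics["items"]:
        usage = sum(nanocores(c["usage"]["cpu"]) for c in pod["containers"])
        if usage > CPU_ALERT_NANOCORES:
            print(f"ALERT: {pod['metadata']['name']} using {usage / 1e9:.2f} cores")

if __name__ == "__main__":
    check_pod_cpu("production")
```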

Summary

If you have started dabbling in Kubernetes or run small or medium K8s environments, it is only a matter of time before you are managing a large K8s environment as developers embrace containers and Kubernetes for new apps and refactor existing apps. Adopting a few of the strategies outlined here can reduce some of the pain associated with large K8s estates. Seek solutions that can help with your data management needs for large-scale Kubernetes environments, making upgrades easier, recovering from disasters faster, and backing up your precious application data, with support for the "Namespace-as-a-Service" operating models commonly used in such environments.

Sayandeb Saha is Sr. Director, Product Management, at NetApp