6 Kubernetes Pain Points and How to Solve Them - Part 2
March 06, 2018

Kamesh Pemmaraju
ZeroStack

With more than 30 Kubernetes solutions in the marketplace, it's tempting to think Kubernetes and the vendor ecosystem have solved the problem of operationalizing containers at scale. Far from it. There are six major pain points that companies experience when they try to deploy and run Kubernetes in their complex environments, and there are also some best practices companies can use to address those pain points.

Start with 6 Kubernetes Pain Points and How to Solve Them - Part 1

Pain Point 3 - Development teams are often distributed in multiple sites and geographies

Companies do not build a single huge Kubernetes cluster for all of their development teams spread around the world. Concentrating such a cluster in one location has disaster recovery (DR) implications, not to mention latency and country-specific data-regulation challenges. Typically, companies want to build separate local clusters based on location, type of application, data-locality requirements, and the need for separate development, test, and production environments. In this situation, a single pane of glass for management becomes crucial for operational efficiency and for simplifying the deployment and upgrade of these clusters. Strict isolation and role-based access control (RBAC) are often security requirements as well.

IT administrators should implement a central way to manage diverse infrastructure across multiple sites, with the ability to deploy and manage multiple Kubernetes clusters within those sites. Access rights to each of these environments should be governed by strict business-unit (BU)-level and project-level RBAC and security controls.
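Within each cluster, project-level isolation of this kind is typically expressed with a namespace per team or environment plus namespaced RBAC objects. The sketch below is illustrative only — the namespace and group names (team-a-dev, bu-emea-dev) are hypothetical, not from the article:

```yaml
# Illustrative sketch: confine a hypothetical BU group to one project namespace,
# so its members cannot see or modify other teams' resources.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: project-developer
  namespace: team-a-dev            # one namespace per project/environment
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bu-emea-dev-binding
  namespace: team-a-dev
subjects:
- kind: Group
  name: bu-emea-dev                # group name mapped from the corporate directory
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: project-developer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced rather than cluster-wide, the same pattern can be stamped out per site and per cluster, which is what makes centralized, BU-level management of many clusters tractable.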

Pain Point 4 - Container Orchestration is just one part of running cloud-native applications and infrastructure operations

Developing, deploying, and operating large-scale enterprise cloud-native applications requires more than just container orchestration. For example, IT operations teams still need to set up firewalls, load balancers, DNS services, and possibly databases, to name a few. They still need to manage infrastructure operations such as physical host maintenance, disk additions/removals/replacements, and physical host additions/removals/replacements. They still need to do capacity planning, and they still need to monitor the utilization, allocation, and performance of compute, storage, and networking. Kubernetes does not help with any of this.

The IT team should have full manageability for all the underlying infrastructure that runs Kubernetes. IT operations teams should be provided with all the intelligence they need to optimize sizing, perform predictive capacity planning, and implement seamless failure management.

Pain Point 5 - Enterprises have policy-driven security and customization requirements

Enterprises have policies requiring the use of their specifically hardened and approved gold images of operating systems. These operating systems often need security configurations, databases, and other management tools installed before they can be used. Running such images on a public cloud may not be allowed, or they may run very slowly.

The solution is to enable an on-premises data center image store where enterprises can create customized gold images. Using fine-grained RBAC, the IT team can share these images selectively with various development teams around the world, based on the local security, regulatory, and performance requirements. The local Kubernetes deployments are then carried out using these gold images to provide the underlying infrastructure to run containers.

Pain Point 6 - Enterprises need a DR strategy for container applications

Any critical application and its associated data need to be protected from natural disasters, regardless of whether the application is based on containers. None of the existing solutions provides an out-of-the-box disaster recovery feature for critical Kubernetes applications, so customers are left to cobble together their own DR strategy.

As part of a platform's multi-site capabilities, IT teams should be able to perform remote data replication and disaster recovery between geographically separated sites. This protects the persistent data and databases used by the Kubernetes cluster. In addition, the underlying VMs that run the Kubernetes clusters can be brought up at another site to provide an active-passive failover scenario.
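As one hedged illustration of what teams cobble together today, a cluster backup tool such as Velero (formerly Heptio Ark) can declare scheduled off-site backups of a namespace's objects and persistent volume snapshots. The schedule, namespace, and storage location names below are hypothetical:

```yaml
# Illustrative sketch using Velero's Schedule resource: nightly backup of the
# "production" namespace, with volume snapshots, shipped to an off-site location.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-dr-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"            # cron syntax: every night at 02:00
  template:
    includedNamespaces:
    - production
    snapshotVolumes: true          # snapshot persistent volumes, not just objects
    storageLocation: offsite-s3    # hypothetical backup location in another region
    ttl: 720h0m0s                  # retain backups for 30 days
```

A declarative schedule like this covers the data side of DR; failing over the underlying VMs to another site, as described above, still has to be handled by the infrastructure platform itself.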

Kubernetes has been a godsend for developers of cloud-native applications, but IT teams often scramble to provision and manage a Kubernetes infrastructure. As we have seen, however, these pain points can be managed.

Kamesh Pemmaraju is VP of Product at ZeroStack