With more than 30 Kubernetes solutions in the marketplace, it's tempting to think Kubernetes and the vendor ecosystem have solved the problem of operationalizing containers at scale. Far from it. There are six major pain points that companies experience when they try to deploy and run Kubernetes in their complex environments, and there are best practices companies can use to address those pain points.
Pain Point 3 - Development teams are often distributed in multiple sites and geographies
Companies do not build a single huge Kubernetes cluster for all of their development teams spread around the world. Building such a cluster in one location has disaster recovery (DR) implications, not to mention latency and country-specific data regulation challenges. Typically, companies want to build separate local clusters based on location, type of application, data locality requirements, and the need for separate development, test, and production environments. In this situation, a single pane of glass for management becomes crucial: it improves operational efficiency and simplifies deploying and upgrading these clusters. Strict isolation and role-based access control (RBAC) are often security requirements as well.
IT administrators should implement a central way to manage diverse infrastructures across multiple sites, with the ability to deploy and manage multiple Kubernetes clusters within those sites. Access rights to each of these environments should be managed through strict business-unit-level and project-level RBAC and security controls.
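The BU- and project-level access model described above can be sketched in a few lines of Python. Everything here is a hypothetical illustration (the user names, business units, projects, and role names are invented, not from any real product): the key idea is that a grant must match both the business unit and the project before access is allowed.

```python
# Minimal sketch of BU- and project-scoped RBAC checks for multi-cluster
# management. All names (users, BUs, projects, roles) are hypothetical.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Grant:
    business_unit: str   # e.g. "retail"
    project: str         # e.g. "checkout"; "*" grants every project in the BU
    role: str            # e.g. "admin", "developer", "viewer"

@dataclass
class User:
    name: str
    grants: list = field(default_factory=list)

def can_access(user: User, business_unit: str, project: str,
               required_role: str) -> bool:
    """Allow access only when a grant matches both the BU and the project."""
    for g in user.grants:
        if g.business_unit != business_unit:
            continue
        if g.project not in ("*", project):
            continue
        # "admin" implies every other role within its scope
        if g.role == required_role or g.role == "admin":
            return True
    return False

alice = User("alice", [Grant("retail", "checkout", "developer")])
bob = User("bob", [Grant("retail", "*", "admin")])

print(can_access(alice, "retail", "checkout", "developer"))   # True
print(can_access(alice, "retail", "inventory", "developer"))  # False
print(can_access(bob, "retail", "inventory", "developer"))    # True
```

In a real deployment this check would be delegated to Kubernetes RBAC objects (Roles and RoleBindings) per cluster, with the management plane translating BU/project grants into per-cluster bindings.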
Pain Point 4 - Container orchestration is just one part of running cloud-native applications and infrastructure operations
Developing, deploying, and operating large-scale enterprise cloud-native applications requires more than just container orchestration. For example, IT operations teams still need to set up firewalls, load balancers, DNS services, and possibly databases, to name a few. They still need to manage infrastructure operations such as physical host maintenance and the addition, removal, and replacement of disks and physical hosts. They still need to do capacity planning, and they still need to monitor the utilization, allocation, and performance of compute, storage, and networking. Kubernetes does not help with any of this.
The IT team should have full manageability for all the underlying infrastructure that runs Kubernetes. IT operations teams should be provided with all the intelligence they need to optimize sizing, perform predictive capacity planning, and implement seamless failure management.
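One piece of the predictive capacity planning mentioned above can be sketched concretely: fit a linear trend to recent peak-utilization samples and project when the cluster will cross a capacity threshold. The data, capacity figure, and 85% threshold below are purely illustrative assumptions, not from any real cluster or tool.

```python
# Sketch of predictive capacity planning: fit a least-squares trend to daily
# peak utilization and estimate days until a threshold of capacity is crossed.
# Sample data and thresholds are illustrative.

def days_until_capacity(samples, capacity, threshold=0.85):
    """samples: daily peak utilization (same unit as capacity), oldest first.
    Returns estimated days until utilization crosses threshold * capacity,
    0 if already crossed, or None if the trend is flat or decreasing."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # least-squares slope of utilization over time (units per day)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None                      # no growth: no projected exhaustion
    limit = threshold * capacity
    current = samples[-1]
    if current >= limit:
        return 0                         # already over the comfort threshold
    return (limit - current) / slope

# 10 days of peak CPU usage (cores) on a hypothetical 128-core cluster
usage = [70, 72, 71, 74, 76, 75, 78, 80, 79, 82]
print(days_until_capacity(usage, capacity=128))  # roughly 3 weeks out
```

A production system would use a more robust forecast (seasonality, confidence intervals) and feed alerts into the procurement workflow, but the shape of the calculation is the same.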
Pain Point 5 - Enterprises have policy-driven security and customization requirements
Enterprises have policies that require using their specifically hardened and approved gold images of operating systems. These operating systems often need security configurations, databases, and other management tools installed before they can be used. Running these images on a public cloud may not be allowed, or they may run very slowly.
The solution is to enable an on-premises data center image store where enterprises can create customized gold images. Using fine-grained RBAC, the IT team can share these images selectively with various development teams around the world, based on the local security, regulatory, and performance requirements. The local Kubernetes deployments are then carried out using these gold images to provide the underlying infrastructure to run containers.
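The selective, region-aware sharing of gold images described above can be sketched as a small catalog: an image is only visible to a team if it has been explicitly shared with that team and is approved for the target region. All image names, teams, and regions below are hypothetical examples.

```python
# Sketch of an on-prem gold-image store with selective, region-aware sharing.
# Image names, teams, and regions are hypothetical.

class ImageStore:
    def __init__(self):
        self._images = {}   # name -> metadata
        self._shares = {}   # name -> set of teams allowed to use the image

    def add_image(self, name, os, hardened=True, regions=("*",)):
        self._images[name] = {"os": os, "hardened": hardened,
                              "regions": set(regions)}
        self._shares[name] = set()

    def share(self, name, team):
        """Fine-grained grant: share one image with one team."""
        self._shares[name].add(team)

    def images_for(self, team, region):
        """Images a team may deploy in a region: the image must be shared
        with the team AND approved for that region (or for all regions)."""
        return sorted(
            name for name, meta in self._images.items()
            if team in self._shares[name]
            and ("*" in meta["regions"] or region in meta["regions"])
        )

store = ImageStore()
store.add_image("rhel8-hardened-v3", os="RHEL 8",
                regions=("us-east", "eu-west"))
store.add_image("ubuntu22-base-v1", os="Ubuntu 22.04", regions=("*",))
store.share("rhel8-hardened-v3", "payments-team")
store.share("ubuntu22-base-v1", "payments-team")

print(store.images_for("payments-team", "eu-west"))
print(store.images_for("payments-team", "ap-south"))  # EU-only image filtered out
```

Local Kubernetes deployments would then pull only from this filtered view, so a cluster in a regulated region can never be built from an unapproved image.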
Pain Point 6 - Enterprises need a DR strategy for container applications
Any critical application and the data associated with it need to be protected from natural disasters, regardless of whether or not the apps are based on containers. None of the existing solutions provide an out-of-the-box disaster recovery feature for critical Kubernetes applications; customers are left to cobble together their own DR strategy.
As part of a platform's multi-site capabilities, IT teams should be able to perform remote data replication and disaster recovery between geographically separated sites. This protects the persistent data and databases used by the Kubernetes clusters. In addition, the underlying VMs that run the Kubernetes clusters can be brought up at another site to provide an active-passive failover scenario.
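The active-passive failover decision described above can be sketched as a simple policy function. Health checks are simulated with booleans and the 300-second replication-lag budget is an invented example; a real setup would probe the replicated storage and cluster API endpoints before promoting the passive site.

```python
# Sketch of an active-passive failover decision between two sites.
# Site states and the lag budget are illustrative assumptions.

def choose_active(primary_healthy, secondary_healthy,
                  replication_lag_s, max_lag_s=300):
    """Return (site, reason): which site should serve traffic and why."""
    if primary_healthy:
        return "primary", "primary site healthy"
    if not secondary_healthy:
        return "none", "both sites down; manual intervention required"
    if replication_lag_s > max_lag_s:
        # Promoting a stale replica means transactions since the last
        # replicated write are lost; surface that in the decision.
        return "secondary", ("failover with possible data loss: "
                             f"replication lag {replication_lag_s}s")
    return "secondary", "clean failover: replica within lag budget"

print(choose_active(True, True, 12))
print(choose_active(False, True, 12))
print(choose_active(False, True, 900))
```

The point of separating the decision from the mechanics is that the same policy applies whether failover means redirecting DNS, promoting a database replica, or booting the standby VMs that host the passive Kubernetes cluster.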
Kubernetes has been a godsend for developers of cloud-native applications, but IT teams often scramble to provision and manage the Kubernetes infrastructure. As we have seen, however, these pain points can be managed.