Optimizing Kubernetes Costs with Multi-Tenancy and Virtual Clusters
October 16, 2024

Cliff Malmborg
Loft Labs

The cost of running Kubernetes at scale with a large number of users quickly becomes untenable for cloud-native organizations. Monitoring costs, either via public cloud providers or with external tools such as Kubecost, is the first step to identifying important cost drivers and areas for improvement. Setting sensible resource limits with Resource Quotas and Limit Ranges, and enabling horizontal and vertical autoscaling, can also help reduce costs and inform an optimization strategy.
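
As a hedged illustration, the sketch below uses the official Kubernetes Python client to apply a ResourceQuota and a LimitRange to a team namespace; the namespace name ("team-a") and every value are placeholders to adapt to your own workloads.

    from kubernetes import client, config

    # Minimal sketch: cap the total CPU/memory a single team's namespace can
    # request, and set per-container defaults. Namespace name and all values
    # below are illustrative placeholders.
    config.load_kube_config()  # assumes a kubeconfig with access to the cluster
    core = client.CoreV1Api()

    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={
                "requests.cpu": "4",
                "requests.memory": "8Gi",
                "limits.cpu": "8",
                "limits.memory": "16Gi",
            }
        ),
    )
    core.create_namespaced_resource_quota(namespace="team-a", body=quota)

    limit_range = client.V1LimitRange(
        metadata=client.V1ObjectMeta(name="team-a-defaults"),
        spec=client.V1LimitRangeSpec(
            limits=[
                client.V1LimitRangeItem(
                    type="Container",
                    default={"cpu": "500m", "memory": "512Mi"},          # default limits
                    default_request={"cpu": "100m", "memory": "128Mi"},  # default requests
                )
            ]
        ),
    )
    core.create_namespaced_limit_range(namespace="team-a", body=limit_range)

Autoscaling then works within those boundaries: the Horizontal and Vertical Pod Autoscalers adjust replica counts and requests, while the quota keeps the namespace's total footprint bounded.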

However, these traditional methods are not enough given today's complex distributed systems, with many organizations spinning up huge numbers of underutilized clusters. To truly reduce Kubernetes costs and simplify management in the long term, teams should consider a new approach: multi-tenancy with virtual Kubernetes clusters.

Reducing the Number of Clusters

Implementing multi-tenancy helps cut costs because the Kubernetes control plane and computing resources can be shared by several users or applications, which also reduces the management burden. Many organizations deploy too many clusters, even one for every developer, and stand to save significantly by relying on a multi-tenant architecture.

Reducing the number of clusters improves resource utilization and reduces redundancy: API servers, etcd instances, and other control plane components are no longer duplicated unnecessarily but are shared by workloads in the same cluster. Multi-tenancy also reduces the cluster management fees charged by public cloud providers. At roughly $70 per month per cluster, these fees add up quickly; an organization running 50 small clusters pays about $3,500 per month in management fees alone, before any compute costs.

In traditional multi-tenant architectures, engineers might receive self-service namespaces on a shared cluster. Given the limited utility of namespaces and the poor isolation between them, opting for virtual clusters instead preserves all the benefits of "real" clusters in a more efficient, more secure multi-tenant setup. Virtual clusters are fully functional Kubernetes clusters running within an underlying host cluster. Unlike namespaces, virtual clusters have their own Kubernetes control planes and storage backends. Only core resources like pods and services are shared with the physical cluster, while everything else, such as StatefulSets, Deployments, and webhooks, exists only inside the virtual cluster.
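
One rough way to see this split in practice is to ask the host cluster what it actually contains. The sketch below assumes a virtual cluster is already running in a host namespace called "team-a" (all names are placeholders): synced pods and services appear in the host namespace, while tenant-created Deployments and other higher-level objects live only in the virtual cluster's own API server.

    from kubernetes import client, config

    # Sketch: inspect what a virtual cluster creates on the host cluster.
    # Assumes a virtual cluster already runs in host namespace "team-a".
    config.load_kube_config()  # context must point at the *host* cluster
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    pods = core.list_namespaced_pod(namespace="team-a").items
    services = core.list_namespaced_service(namespace="team-a").items
    deployments = apps.list_namespaced_deployment(namespace="team-a").items

    print(f"pods synced to host:     {len(pods)}")        # tenant workloads show up here
    print(f"services synced to host: {len(services)}")
    print(f"deployments on host:     {len(deployments)}")  # tenant Deployments do not appear here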

Virtual clusters thus solve the "noisy neighbor" problem: they provide better workload isolation than namespaces, and developers can configure their virtual clusters independently, tailoring them to their specific requirements. Because configuration changes and new installations happen inside the virtual clusters themselves, the underlying host cluster can stay simple, with only the basic components, which improves stability and reduces the chance of errors. While virtual clusters may not completely replace the need for separate regular clusters, implementing multi-tenancy with virtual clusters makes it possible to greatly reduce the number of real clusters needed to operate at scale.

The Case for Virtual Clusters to Reduce Cost

Virtual clusters are an exciting new alternative to both namespaces and separate clusters: cheaper and easier to deploy than regular clusters, with much better isolation than namespaces. Crucially, shifting to virtual clusters is a simple process that in most cases will not disrupt development workflows. For example, a large organization with developers distributed across 25 teams might provision 25 separate Kubernetes clusters for development and testing. To switch to virtual clusters, it would instead create a single Kubernetes cluster and deploy 25 virtual clusters within it. From the developers' viewpoint, nothing changes: teams can use all the services they need within their virtual clusters, deploying their own resources like Prometheus and Istio without affecting the host cluster.
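
To make the example concrete, here is a rough sketch of what that provisioning step could look like with the open source vcluster CLI; the team names are placeholders and exact flags can vary between CLI versions.

    import subprocess

    # Sketch: provision one virtual cluster per team inside a single shared
    # host cluster with the vcluster CLI. Assumes the CLI is installed and
    # the current kubectl context points at the host cluster; team names
    # are placeholders and flags may differ slightly between versions.
    teams = [f"team-{i:02d}" for i in range(1, 26)]  # the 25 teams from the example

    for team in teams:
        subprocess.run(
            ["vcluster", "create", f"{team}-vcluster",
             "--namespace", team,
             "--connect=false"],  # skip switching kube context to each new cluster
            check=True,
        )

Each virtual cluster lands in its own host namespace, so quotas and limit ranges like the ones shown earlier can still bound what any single team consumes.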

Further, since virtual clusters and their workloads run as pods in the host cluster, teams can take full advantage of the Kubernetes scheduler. If a team is not using its virtual cluster for a period of time, no pods need to be scheduled in the host cluster consuming resources, and the improved node utilization drives down costs. Automating the scale-down of unused resources can also eliminate the cost of idle virtual clusters. With this "sleep mode", the environment's state is preserved and can be spun up again quickly as soon as a developer needs it. Teams can implement sleep mode with their own scripts or with tools that provide it out of the box.
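
As one hedged example of such a script, the sketch below assumes each team's virtual cluster control plane runs as a StatefulSet named "<team>-vcluster" in that team's host namespace (a common vcluster layout; all names are placeholders). It scales the control plane to zero and clears the synced workload pods so they stop consuming node resources; the virtual cluster's own state is kept, so everything is re-synced on wake-up. Purpose-built sleep-mode features in commercial tooling do roughly the same thing with more safeguards.

    from kubernetes import client, config

    # Sketch of a "sleep mode" script for idle virtual clusters.
    # Assumes the virtual cluster's control plane is a StatefulSet named
    # "<team>-vcluster" in the team's host namespace (placeholder names).
    config.load_kube_config()
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    def sleep_vcluster(team: str) -> None:
        # Stop the virtual cluster's control plane...
        apps.patch_namespaced_stateful_set_scale(
            name=f"{team}-vcluster",
            namespace=team,
            body={"spec": {"replicas": 0}},
        )
        # ...then remove the workload pods it synced into the host namespace
        # so they release node resources. The virtual cluster's datastore is
        # untouched, so the pods are re-created when it wakes up.
        core.delete_collection_namespaced_pod(namespace=team)

    def wake_vcluster(team: str) -> None:
        apps.patch_namespaced_stateful_set_scale(
            name=f"{team}-vcluster",
            namespace=team,
            body={"spec": {"replicas": 1}},
        )

    sleep_vcluster("team-01")   # e.g. run from a nightly cron job
    # wake_vcluster("team-01")  # run when the team needs the environment again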

Another key benefit is that infrastructure teams can centralize services like ingress controllers, service meshes, and logging tools, installing them just once in the host cluster and letting all virtual clusters share access. When organizations trust their tenants, such as internal teams, CI/CD pipelines, and even select customers, replacing underutilized clusters with virtual ones can significantly cut infrastructure and operational costs.

Future-Proofing Systems with Virtual Cluster Multi-Tenancy

Traditional Kubernetes cost management techniques, like autoscaling and monitoring tools, are a good first step toward reining in runaway cloud spend. But as companies rush to deploy artificial intelligence workloads, the added complexity and resource demands will quickly render typical Kubernetes setups unmanageable and prohibitively expensive. Making the shift to virtual clusters now provides the same levels of security and functionality while drastically reducing the operational and financial burden, because organizations will need far fewer clusters. A virtualized, multi-tenant Kubernetes architecture is well positioned to scale to the demands of modern applications.

Cliff Malmborg is Director of Product Marketing at Loft Labs