Kubernetes is a relatively new technology, and there is a limited pool of engineers capable of configuring and managing a deployment. A large percentage of Kubernetes deployments fail because organizations underestimate the complexity of Kubernetes and overestimate their ability to implement and manage it. In one recent study, 100% of the companies surveyed reported challenges with their Kubernetes deployments.
As Kubernetes adoption continues to rise, so do the issues that can cause deployments to fail. Thankfully, there are a few early warning signs that, if caught in time, can save your organization time, resources, and money, and steer the project toward success rather than failure.
Let's dive into 5 early indicators that a Kubernetes project is destined to fail.
1. You're assembling your own Kubernetes distribution from open-source components
You're reinventing the wheel and likely underestimating the ongoing maintenance required to keep the components working together, updated, and secure. A production-grade Kubernetes stack consists of more than a dozen open-source components that need to be integrated, tested, secured, and updated. The cloud-native space is moving incredibly fast, and maintaining your own distribution means staying on top of hundreds of version upgrades every year across disparate open-source projects that don't always maintain compatibility. In the same way that almost nobody compiles their own Linux distribution anymore, most people shouldn't build their own Kubernetes distribution.
2. You're writing custom scripts to stand up and manage Kubernetes
There is a better way. Custom scripts run by humans are error-prone and can quickly become a maintenance nightmare, especially if you need to run in a hybrid or multi-cloud environment. Kubernetes championed declarative APIs, and with the addition of Cluster API, declarative APIs can be used to manage your entire stack, from the apps at the top down to the underlying cluster itself, giving you a portability and automation layer based on vendor-neutral open standards. Another highly recommended approach is GitOps, which combines Git-based version control with continuous deployment to deploy and manage your entire cloud-native stack; a minimal sketch of the declarative approach follows below.
GitOps + declarative APIs = Kubernetes done right.
Custom scripts run by humans = Kubernetes done wrong.
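To make the contrast concrete, here is a minimal sketch of the declarative approach using the official Kubernetes Python client: the desired state is expressed as data and handed to the API server, rather than scripted imperatively. The deployment name, image, and namespace are illustrative assumptions; in a real GitOps setup the same manifest would live in Git and be reconciled continuously by a tool such as Argo CD or Flux, not applied by hand from a workstation.

```python
# Minimal sketch of the declarative approach: desired state as data, not as an
# imperative script. Deployment name, image, and namespace are illustrative only.
from kubernetes import client, config, utils

DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-web", "labels": {"app": "demo-web"}},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "demo-web"}},
        "template": {
            "metadata": {"labels": {"app": "demo-web"}},
            "spec": {
                "containers": [
                    {
                        "name": "web",
                        "image": "nginx:1.27",
                        "ports": [{"containerPort": 80}],
                    }
                ]
            },
        },
    },
}

def main() -> None:
    # Load credentials from the local kubeconfig (e.g. ~/.kube/config).
    config.load_kube_config()
    api_client = client.ApiClient()
    # Submit the declared state; Kubernetes reconciles pods to match it.
    utils.create_from_dict(api_client, DEPLOYMENT, namespace="default")

if __name__ == "__main__":
    main()
```

In a GitOps pipeline, a commit to the manifest repository, rather than a human running a script like this, is what triggers the same reconciliation.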
3. You've spent months standing up Kubernetes and still don't have a single app running
This is a clear sign that you've spent too much time in the wrong place, playing with technology instead of focusing on your project outcomes, whether because you're building your own Kubernetes stack or because you went with a complex Kubernetes vendor platform. Either way, you're spending all of your time building a custom platform instead of getting apps to production. A Kubernetes distribution that focuses on simplifying and accelerating Kubernetes applications to Day 2 and beyond can get you up and running, production-ready, with your first app in minutes or hours.
4. Everyone on your team is a cluster admin
The Kubernetes cluster admin role is the equivalent of root on Unix and should be protected accordingly. Kubernetes has a sophisticated role-based access control (RBAC) system, and combined with additional tools like Gatekeeper and Dex, it can be made incredibly secure. But doing this properly requires advanced knowledge of all the components involved, and many people get it wrong.
Kubernetes distributions come with all of these components preconfigured, including secure defaults, and provide a user experience that is both developer-friendly and secure.
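As a rough illustration of what "not everyone is a cluster admin" looks like in practice, the sketch below creates a namespaced, read-only Role and binds it to a developer group instead of handing out cluster-admin. The namespace and group name are assumptions made up for the example; in a real deployment the group would typically come from an identity provider integrated through Dex/OIDC, and these manifests would be managed declaratively like everything else.

```python
# Minimal sketch of least-privilege RBAC: a Role that can only read pods in one
# namespace, bound to a developer group. Namespace and group name are illustrative.
from kubernetes import client, config, utils

READ_ONLY_ROLE = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "team-a"},
    "rules": [
        {
            "apiGroups": [""],
            "resources": ["pods", "pods/log"],
            "verbs": ["get", "list", "watch"],
        }
    ],
}

ROLE_BINDING = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "team-a-pod-readers", "namespace": "team-a"},
    "subjects": [
        # Group membership would normally come from an identity provider via Dex/OIDC.
        {
            "kind": "Group",
            "name": "team-a-developers",
            "apiGroup": "rbac.authorization.k8s.io",
        }
    ],
    "roleRef": {
        "kind": "Role",
        "name": "pod-reader",
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

def main() -> None:
    config.load_kube_config()
    api_client = client.ApiClient()
    for manifest in (READ_ONLY_ROLE, ROLE_BINDING):
        utils.create_from_dict(api_client, manifest)

if __name__ == "__main__":
    main()
```

Developers in this group can inspect workloads in their namespace but cannot modify cluster-wide resources, which is exactly the kind of boundary the cluster admin role erases when everyone has it.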
5. Your Kubernetes vendor keeps adding more consulting hours to get your project back on track
When Kubernetes is done right and properly automated, it doesn't take a whole lot of people to operate it. The original idea behind Kubernetes (and its Google-internal ancestor, Borg) was to use software automation to operate large-scale production systems in an efficient, secure, and resilient manner. So if your vendor tells you that you need to buy a lot of consulting hours to be successful, there's something fundamentally wrong with their approach.
The bottom line is that you can save your organization from wasting time, resources, and money on a failed Kubernetes infrastructure deployment by avoiding the paths described in this pre-mortem. The best way to achieve the agility you seek is to enable your DevOps team to focus on business results rather than on configuring a complex Kubernetes infrastructure that is outside their core expertise. This is best achieved by deploying an automated, integrated Kubernetes platform that sets up in minutes and gives you production-ready capabilities and simplified management out of the box.
Industry News
Red Hat announced the general availability of Red Hat Enterprise Linux (RHEL) AI across the hybrid cloud.
Jitterbit announced its unified AI-infused, low-code Harmony platform.
Akuity announced the launch of KubeVision, a feature within the Akuity Platform.
Couchbase announced Capella Free Tier, a free developer environment designed to empower developers to evaluate and explore products and test new features without time constraints.
Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, announced the general availability of AWS Parallel Computing Service, a new managed service that helps customers easily set up and manage high performance computing (HPC) clusters so they can run scientific and engineering workloads at virtually any scale on AWS.
Dell Technologies and Red Hat are bringing Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform built on an AI-optimized operating system that enables users to more seamlessly develop, test and deploy artificial intelligence (AI) and generative AI (gen AI) models, to Dell PowerEdge servers.
Couchbase announced that Couchbase Mobile is generally available with vector search, which makes it possible for customers to offer similarity and hybrid search in their applications on mobile and at the edge.
Seekr announced the launch of SeekrFlow as a complete end-to-end AI platform for training, validating, deploying, and scaling trusted enterprise AI applications through an intuitive and simple to use web user interface (UI).
Check Point® Software Technologies Ltd. unveiled its innovative Portal designed for both managed security service providers (MSSPs) and distributors.
Couchbase officially launched Capella™ Columnar on AWS, which helps organizations streamline the development of adaptive applications by enabling real-time data analysis alongside operational workloads within a single database platform.
Mend.io unveiled the Mend AppSec Platform, a solution designed to help businesses transform application security programs into proactive programs that reduce application risk.
Elastic announced that it is adding the GNU Affero General Public License v3 (AGPL) as an option for users to license the free part of the Elasticsearch and Kibana source code that is available under Server Side Public License 1.0 (SSPL 1.0) and Elastic License 2.0 (ELv2).
Progress announced the latest release of Progress® Semaphore™, its metadata management and semantic AI platform.
Elastic, the Search AI Company, announced the Elasticsearch Open Inference API now integrates with Anthropic, providing developers with seamless access to Anthropic’s Claude, including Claude 3.5 Sonnet, Claude 3 Haiku and Claude 3 Opus, directly from their Anthropic account.