Progress announced the early-access release of Progress® MarkLogic® Server 12.
Driven by the demand to deliver applications at the locations of their choice, enterprises are increasingly deploying distributed infrastructure, with 67% of cloud deployments distributed across on-premises, hybrid, and edge clouds. At the same time, our recent research found that developers are facing new challenges as they expand their use of cloud computing in ways that increasingly span multiple locations.
For example, public clouds are walled gardens that lock customers into a single vendor's ecosystem, while DIY solutions are time-consuming and increasingly complex. The best way for developers to address these concerns is to implement and operate an open distributed platform that offers the best of both worlds, combining the power of the public cloud with the infrastructure of your choice.
Still, there are some considerations to address in adopting an open distributed cloud solution — ranging from deployment delays to an increase in silos and operational complexity. Additionally, building a cloud environment rooted in diverse infrastructure requires expertise in technologies like virtualization, Kubernetes and cloud-native applications.
As you consider your options, there are four main components to a successful, open distributed cloud solution:
1. Recruit and train successfully
Talent-constrained IT teams have long struggled with the complexity of running large-scale private clouds. The ratio of servers managed per admin or automation architect can be as low as 40:1 in private clouds. In contrast, hyperscale public-cloud providers have invested significantly in automating the management of their environments, which greatly improves their admin efficiency; server-to-admin ratios of 4,000:1 or more are not unheard of in public clouds. So it's no surprise that, looking at 2022, we see DevOps engineers, cloud-platform engineers, and cloud-native developers as the top hiring priorities for both advanced and early-stage adopters.
2. Deploy a SaaS control plane that will enable you to build and operate clouds anywhere
With cost optimization, data management, and high availability among the top concerns for executives, a SaaS control plane provides operational automation for the consumption of infrastructure and is the heart of a distributed cloud service, providing benefits such as:
■ Low management overhead using a highly automated hyperscale operational model.
■ Reduced maintenance costs by aggregating all distributed infrastructure behind a single management pane.
■ Rapid and repeatable remote deployments to 100s or 1000s of distributed cloud locations with consistent, template-based configuration and policy control (see the sketch after this list).
■ An operational SLA through automated health monitoring, runbook-driven resolution of common problems, and streamlined upgrades.
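To make the template-based configuration idea concrete, here is a minimal Python sketch. The field names and `render_site_config` helper are hypothetical, not any particular control plane's API; the point is that one shared template drives identical, policy-controlled configurations across many sites.

```python
from dataclasses import dataclass

# Hypothetical template for a distributed cloud site; a real control plane
# would expose a much richer schema (network, storage, policy bundles, etc.).
@dataclass
class SiteTemplate:
    kubernetes_version: str
    monitoring_enabled: bool
    allowed_registries: list

def render_site_config(template: SiteTemplate, site_name: str, region: str) -> dict:
    """Produce a concrete per-site configuration from a shared template."""
    return {
        "site": site_name,
        "region": region,
        "kubernetes_version": template.kubernetes_version,
        "monitoring_enabled": template.monitoring_enabled,
        "allowed_registries": list(template.allowed_registries),
    }

# One template, many sites: every location gets a consistent configuration.
base = SiteTemplate(kubernetes_version="1.29", monitoring_enabled=True,
                    allowed_registries=["registry.example.com"])
sites = [("store-0001", "us-east"), ("store-0002", "eu-west")]
for name, region in sites:
    print(render_site_config(base, name, region))
```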
3. Implement managed open-source services that use 100% open-source stacks and components to deliver bare-metal, container, virtualization, and supporting platform services
Modern cloud services such as Kubernetes are actually composite services that are themselves highly distributed and therefore require more complex orchestration capabilities. Kubernetes is becoming a leading tool for enabling cloud-native transformation, with 85% either using Kubernetes today or planning to deploy it in the next six months. However, deploying control plane components such as the Kubernetes master node components (API server, etcd) requires addressing the redundancy and high availability these components need.
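As a rough illustration of that redundancy point, the sketch below uses the official `kubernetes` Python client (and assumes a reachable kubeconfig) to count control-plane nodes and flag clusters that cannot tolerate a node failure. A production HA check would also verify etcd quorum and API server endpoints directly.

```python
from kubernetes import client, config

# Assumes a kubeconfig is available locally (e.g., ~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

# Control-plane nodes normally carry this well-known label.
nodes = v1.list_node(label_selector="node-role.kubernetes.io/control-plane")
count = len(nodes.items)

# etcd needs a quorum, so fewer than three members means the cluster
# cannot tolerate the loss of a single control-plane node.
if count >= 3:
    print(f"{count} control-plane nodes: can tolerate a node failure")
else:
    print(f"only {count} control-plane node(s): no HA for API server/etcd")
```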
4. Integrate support for diverse, distributed infrastructure
Deploy out-of-the-box plugins and integrations for public clouds (AWS, GCP, Azure), public-cloud Kubernetes services (EKS, AKS, GKE), and multiple operating systems such as CentOS and Ubuntu.
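A minimal sketch of what such pluggable integrations might look like in code (all names here are hypothetical, not a specific product's API): each provider plugin implements the same small interface, so the control plane can treat EKS, AKS, GKE, or bare-metal targets uniformly.

```python
from abc import ABC, abstractmethod

class ProviderPlugin(ABC):
    """Hypothetical interface every infrastructure plugin implements."""

    @abstractmethod
    def provision_cluster(self, name: str, node_count: int) -> str: ...

class EKSPlugin(ProviderPlugin):
    def provision_cluster(self, name: str, node_count: int) -> str:
        # Real code would call the AWS/EKS APIs here.
        return f"EKS cluster '{name}' with {node_count} nodes requested"

class BareMetalPlugin(ProviderPlugin):
    def provision_cluster(self, name: str, node_count: int) -> str:
        # Real code would provision and configure physical hosts here.
        return f"bare-metal cluster '{name}' with {node_count} nodes requested"

# The control plane picks a plugin per target without special-casing each cloud.
plugins = {"eks": EKSPlugin(), "baremetal": BareMetalPlugin()}
print(plugins["eks"].provision_cluster("edge-site-42", node_count=3))
```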
Monitoring is critical
Concerns about security and operational complexity, including challenges like high availability, observability, and troubleshooting, are felt by 91% of enterprises deploying cloud-native technologies, so it's imperative to have a monitoring, diagnostic, and troubleshooting process in place at all times.
Since modern cloud services are highly distributed, it is imperative to monitor performance constantly. Even a small degradation in certain components can, over time, lead to a larger, system-wide degradation. To simplify troubleshooting (whether automated or human), and to mitigate the likelihood of larger problems, the health probes used to monitor these components must be highly granular.
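As an illustration of granular probing, the sketch below checks each component individually rather than treating the stack as one black box. The endpoint list is hypothetical; real services expose their own health URLs, such as the Kubernetes API server's /readyz.

```python
import urllib.request
import urllib.error

# Hypothetical per-component health endpoints; probing each one separately
# reveals *which* component degraded, not just that "something" did.
PROBES = {
    "kube-apiserver": "https://127.0.0.1:6443/readyz",
    "etcd": "http://127.0.0.1:2381/health",
    "ingress": "http://127.0.0.1:8080/healthz",
}

def probe(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

for component, url in PROBES.items():
    status = "healthy" if probe(url) else "DEGRADED"
    print(f"{component}: {status}")
```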
Having good health metrics provides a basis to codify the resolution of common problems via automated runbooks. These runbooks can be built for problems that occur during normal system operation, such as a control plane going offline because of an infrastructure failure. They are also effective when there are problems in new versions of cloud services, or interoperability issues that are found only in the field after deployment at some scale. Since a runbook can be implemented without requiring a new version of the cloud service, it can provide immediate mitigation while a bug fix or a new version of the cloud service in question is developed. In this way, customers can stay operational despite the complex, ever-evolving nature of modern open-source cloud technologies.
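A toy sketch of how such a runbook could be codified (the condition names and remediation steps are invented examples, not a specific vendor's runbook engine): the mapping from a detected condition to a mitigation lives outside the cloud service itself, so it can be updated without shipping a new service version.

```python
def restart_control_plane(site: str) -> None:
    # Placeholder remediation; a real runbook might redeploy a VM,
    # fail over to a standby, or re-run a provisioning job.
    print(f"[{site}] restarting control-plane components...")

def rotate_certificates(site: str) -> None:
    print(f"[{site}] rotating expiring certificates...")

# Runbooks: map a detected condition to ordered remediation steps.
RUNBOOKS = {
    "control_plane_offline": [restart_control_plane],
    "certificate_expiring": [rotate_certificates],
}

def handle_alert(condition: str, site: str) -> None:
    """Execute the remediation steps registered for a condition, if any."""
    for step in RUNBOOKS.get(condition, []):
        step(site)

handle_alert("control_plane_offline", site="mumbai-dc-1")
```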
Keep upgrading
The breadth of developers and vendors participating in modern open-source ecosystems means that new versions are constantly being developed, with bug fixes as well as security and feature enhancements.
The SaaS control plane makes it easy for customers to stay up to date by fully automating the upgrade to new versions of various cloud services. These upgrades are typically offered on a granular basis (for example, upgrading Service A should be independent of upgrading Service B), which makes change control easier for large-scale enterprise deployments.
Finally, these upgrades are ideally offered in a self-service manner that enables customers to schedule their own upgrades at a time that is convenient for them and at a scope of their choosing (for example, upgrade the Virginia datacenter at 3 a.m. on Saturday but leave Mumbai untouched for now).
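In code terms, the self-service, scoped-upgrade idea might look something like the sketch below; the scheduling API and field names are invented for illustration. Each request targets one service at one scope inside a customer-chosen maintenance window.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UpgradeRequest:
    """Hypothetical self-service upgrade: one service, one scope, one window."""
    service: str          # upgrade Service A independently of Service B
    scope: str            # e.g., a single datacenter or region
    target_version: str
    window_start: datetime

def schedule(requests: list) -> None:
    """Print the plan; a real control plane would queue and execute it."""
    for req in requests:
        print(f"{req.scope}: {req.service} -> {req.target_version} "
              f"at {req.window_start.isoformat()}")

# Upgrade the Virginia datacenter early Saturday morning; leave Mumbai untouched.
schedule([
    UpgradeRequest(service="kubernetes", scope="virginia-dc",
                   target_version="1.29.4",
                   window_start=datetime(2024, 9, 14, 3, 0)),
])
```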
SaaS management will be the de facto standard
For the emerging categories of distributed and edge cloud computing, the geographic distribution of infrastructure and workloads limits the reach of the public cloud. Similarly, distributed edge environments need to be managed centrally with little to no touch. It is clear that SaaS management will be the de facto standard for distributed cloud management.
Industry News
Red Hat announced the general availability of Red Hat Enterprise Linux (RHEL) AI across the hybrid cloud.
Jitterbit announced its unified AI-infused, low-code Harmony platform.
Akuity announced the launch of KubeVision, a feature within the Akuity Platform.
Couchbase announced Capella Free Tier, a free developer environment designed to empower developers to evaluate and explore products and test new features without time constraints.
Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, announced the general availability of AWS Parallel Computing Service, a new managed service that helps customers easily set up and manage high performance computing (HPC) clusters so they can run scientific and engineering workloads at virtually any scale on AWS.
Dell Technologies and Red Hat are bringing Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform built on an AI-optimized operating system that enables users to more seamlessly develop, test and deploy artificial intelligence (AI) and generative AI (gen AI) models, to Dell PowerEdge servers.
Couchbase announced that Couchbase Mobile is generally available with vector search, which makes it possible for customers to offer similarity and hybrid search in their applications on mobile and at the edge.
Seekr announced the launch of SeekrFlow as a complete end-to-end AI platform for training, validating, deploying, and scaling trusted enterprise AI applications through an intuitive and simple to use web user interface (UI).
Check Point® Software Technologies Ltd. unveiled its innovative Portal designed for both managed security service providers (MSSPs) and distributors.
Couchbase officially launched Capella™ Columnar on AWS, which helps organizations streamline the development of adaptive applications by enabling real-time data analysis alongside operational workloads within a single database platform.
Mend.io unveiled the Mend AppSec Platform, a solution designed to help businesses transform application security programs into proactive programs that reduce application risk.
Elastic announced that it is adding the GNU Affero General Public License v3 (AGPL) as an option for users to license the free part of the Elasticsearch and Kibana source code that is available under Server Side Public License 1.0 (SSPL 1.0) and Elastic License 2.0 (ELv2).
Progress announced the latest release of Progress® Semaphore™, its metadata management and semantic AI platform.
Elastic, the Search AI Company, announced the Elasticsearch Open Inference API now integrates with Anthropic, providing developers with seamless access to Anthropic’s Claude, including Claude 3.5 Sonnet, Claude 3 Haiku and Claude 3 Opus, directly from their Anthropic account.