The software-defined data center (SDDC) is crucial to the long-term evolution of an agile digital business, according to Gartner, Inc. It is not, however, currently the right choice for all IT organizations.
"Infrastructure and operations (I&O) leaders need to understand the business case, best use cases and risks of an SDDC," said Dave Russell, VP and Distinguished Analyst at Gartner. "Due to its current immaturity, the SDDC is most appropriate for visionary organizations with advanced expertise in I&O engineering and architecture."
An SDDC is a data center in which all the infrastructure is virtualized and delivered "as-a-service." This enables increased levels of automation and flexibility that will underpin business agility through the increased adoption of cloud services and enable modern IT approaches such as DevOps. Today, most organizations are not ready to begin adoption and should proceed with caution.
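To make the "as-a-service" idea concrete, here is a minimal sketch of provisioning infrastructure programmatically through an API rather than through a manual request. It uses the AWS SDK for Python (boto3) purely as an illustration; the region, image ID, instance type and tags are placeholder values, and an SDDC could expose equivalent capabilities through any vendor's or private cloud's API.

```python
# Illustrative only: request a virtual machine through an API call
# instead of a manual ticket. Region, AMI ID, instance type and tags
# are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",   # placeholder machine image
    InstanceType="t3.micro",           # placeholder instance size
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "environment", "Value": "dev"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

Because a call like this can be scripted, version-controlled and run from a pipeline, it is this programmability that underpins the automation, cloud adoption and DevOps practices described here.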
By 2020, however, Gartner predicts the programmatic capabilities of an SDDC will be considered a requirement for 75 percent of Global 2000 enterprises that seek to implement a DevOps approach and a hybrid cloud model.
"I&O leaders can't just buy a ready-made SDDC from a vendor,” said Russell. “First, they need to understand why they need it for the business. Second, they need to deploy, orchestrate and integrate numerous parts, probably from different vendors." Moreover, aside from a lot of deployment work – new skills and a cultural shift in the IT organization are needed to ensure this approach delivers results for the business.
Gartner recommends that I&O leaders take a realistic view of the risks and benefits, and make plans to mitigate the top risks of an SDDC project failure:
Assess skills and culture
Simply swapping a legacy infrastructure for a set of software-defined products is unlikely to yield the desired benefits. Before an activity is automated and self-service is implemented, the process associated with the IT service needs to be completely rethought and optimized. This may require new skills and a culture different from what currently exists within many IT organizations. "A broken process is still a broken process, no matter how well it is automated," said Mr. Russell. "Build the right skills in your organization by enabling top infrastructure architects to experiment with public cloud infrastructure in small projects, as well as giving them the opportunity to get out and learn what their peers in other organizations and visionaries in this field are doing."
Know when the time is right
The right time to move to an SDDC may be years away for most organizations, but for many it will arrive sooner than they are prepared for. "The first step is understanding the core concepts of the SDDC," said Mr. Russell. "Then, I&O leaders should examine the available solutions starting with one component, process or software-defined domain that can benefit. The final stage is to plan a roadmap to full deployment if and when SDDC solutions are appropriate."
Moreover, I&O leaders must realize that the technology is still nascent. Even the more established software-defined areas like networking and storage are still gelling and are experiencing early stage adoption levels. Implementing in phases is recommended, once it's been established that the solutions in the market deliver enough functionality, interoperability and production-proven deployment history to be viable. "Storage can be a compelling starting point as the capabilities often stack up favorably against traditional solutions," said Mr. Russell.
Beware of vendor lock-in
Open-source standards or a cloud management platform may help IT organizations reduce vendor lock-in, but lock-in cannot be eliminated altogether. There are also no universal standards in place for infrastructure APIs, so adopting and coding to a particular API results in a degree of lock-in. It's vital to understand the trade-offs at work and the costs of migration or exit when choosing vendors and technologies.
"Recognize that adopting an SDDC means trading a hardware lock-in for a software lock-in," Russell concluded. "Choose the most appropriate kind of lock-in consciously and with all the facts at hand."
Industry News
JFrog announced the addition of JFrog Runtime to its suite of security capabilities, empowering enterprises to seamlessly integrate security into every step of the development process, from writing source code to deploying binaries into production.
Kong unveiled its new Premium Technology Partner Program, a strategic initiative designed to deepen its engagement with technology partners and foster innovation within its cloud and developer ecosystem.
Kong announced the launch of the latest version of Kong Konnect, the API platform for the AI era.
Oracle announced new capabilities to help customers accelerate the development of applications and deployment on Oracle Cloud Infrastructure (OCI).
JFrog and GitHub unveiled new integrations.
Opsera announced its latest platform capabilities for Salesforce DevOps.
Progress announced it has entered into a definitive agreement to acquire ShareFile, a business unit of Cloud Software Group that provides SaaS-native, AI-powered, document-centric collaboration, focusing on industry segments including business and professional services, financial services, healthcare and construction.
Red Hat announced the general availability of Red Hat Enterprise Linux (RHEL) AI across the hybrid cloud.
Jitterbit announced its unified AI-infused, low-code Harmony platform.
Akuity announced the launch of KubeVision, a feature within the Akuity Platform.
Couchbase announced Capella Free Tier, a free developer environment designed to empower developers to evaluate and explore products and test new features without time constraints.
Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, announced the general availability of AWS Parallel Computing Service, a new managed service that helps customers easily set up and manage high performance computing (HPC) clusters so they can run scientific and engineering workloads at virtually any scale on AWS.
Dell Technologies and Red Hat are bringing Red Hat Enterprise Linux AI (RHEL AI), a foundation model platform built on an AI-optimized operating system that enables users to more seamlessly develop, test and deploy artificial intelligence (AI) and generative AI (gen AI) models, to Dell PowerEdge servers.