Cloud computing has become the cornerstone for businesses looking to scale infrastructure up or down quickly according to their needs, with a lower total cost of ownership. But far from simplifying the IT executive's life, the expanded use of the cloud is introducing a whole new level of complexity.
Research has found that the average enterprise uses 2.6 public clouds and 2.7 private clouds. Meanwhile, the average business used 110 different cloud applications in 2021, up from 80 in 2020. Digital transformation has exacerbated the problem: organizations of all sizes now face an abundance of technology 'choice' which is actually only serving to hinder cloud transformations.
It gets even more challenging when IT teams need to communicate crucial aspects of an organization's cloud infrastructure to non-technical decision makers. And as more and more workloads are migrated across different clouds, the unique requirements of different processes, and the effects of configuration "drift," become increasingly apparent.
Put simply, cloud infrastructure is becoming harder to control manually.
The question is twofold: how can organizations establish visibility into these increasingly complex, harder-to-track environments? And, with cloud configurations changing all the time, how can teams develop repeatable processes that let them take back control of their infrastructure and catch drift before it gets out of hand?
The Rise of Automation and Infrastructure as Code
First, a brief history lesson. Those of a certain vintage might remember the halcyon days when you had to buy and maintain your own servers and machines. We evolved from this era of computing around 2005 with the widespread adoption of something called virtualization, a way of running multiple virtual machines on a single physical server.
Virtualization not only created infrastructure that was more efficient and easier to manage, it also allowed for the development of new technologies, such as cloud computing — something which has revolutionized the way businesses operate. But alongside all the benefits of the cloud — the flexibility, scalability, and cost efficiencies — organizations soon found themselves encountering scaling problems.
This is because provisioning, deploying, and managing applications across the hybrid cloud is costly, time-consuming, and complex. For one thing, manually deploying instances of an application to multiple clouds with different configurations is prone to human error. Scale too quickly and you might miss key configurations and have to start all over again. Fail to configure an instance correctly and it could prove too costly to ever fix.
These problems necessitated the development of Infrastructure as Code (or IaC). With IaC, it is possible to provision and manage infrastructure using code instead of manual processes, allowing for greater speed, agility, and repeatability when provisioning infrastructure, enabling IT teams to automate cloud infrastructure deployment and processes further than ever before.
With IaC, you write a script that will automatically handle infrastructure tasks for you, saving not only time but also reducing the potential for human error. Simple, right? Problem solved! Well yes and no …
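To make the idea concrete, here is a minimal Python sketch of the principle behind IaC: infrastructure is described as data, and a repeatable script drives provisioning. The resource definitions and the `provision` helper are invented for illustration; real IaC tools such as Terraform or Pulumi work on the same idempotent principle against actual provider APIs.

```python
# Hypothetical sketch: infrastructure described as data, provisioned by code.
# The resource shapes and provider behavior here are invented for illustration.

DESIRED = [
    {"type": "network", "name": "app-net", "cidr": "10.0.0.0/16"},
    {"type": "vm", "name": "web-1", "size": "small", "network": "app-net"},
    {"type": "vm", "name": "web-2", "size": "small", "network": "app-net"},
]

def provision(desired, existing_names):
    """Create only the resources that do not already exist (idempotent)."""
    created = []
    for resource in desired:
        if resource["name"] not in existing_names:
            # A real implementation would call the cloud provider's API here.
            created.append(resource["name"])
    return created

# First run: the network already exists, so only the two VMs are created.
print(provision(DESIRED, existing_names={"app-net"}))  # ['web-1', 'web-2']
```

Because the script creates only what is missing, running it twice produces the same infrastructure as running it once — the repeatability that manual provisioning lacks.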
Managing IaC at Scale
Expectations often have a habit of not aligning with reality. If you've started using IaC to manage your infrastructure, you're already on your way to making cloud provisioning processes more manageable. But there's a second piece to the infrastructure lifecycle: how do you know what resources are not yet managed by IaC in your cloud? And of the managed resources, do they remain the same in the cloud as when you defined them in code?
Changes to cloud workloads happen all the time. Increasing the number of workloads running in the cloud means an increasing number of people and authenticated services interacting with infrastructure across several cloud environments. As IaC becomes more widely adopted and IaC codebases grow larger, it becomes more and more difficult to manually track whether configuration changes are being accounted for, which is why policies are required to keep on top of everything.
Misconfigurations are, of course, rife across cloud computing environments, but most organizations are not prepared to manually address the issue. If there are differences between the IaC configuration and the actual infrastructure, this is when cloud or configuration drift can occur.
Drift is when your IaC state, or configurations in code, differs from your cloud state, or configurations of running resources. For example, suppose you use Terraform — one of the leading IaC tools — to provision a database without encryption turned on, and a site reliability engineer then adds encryption using the cloud console. The Terraform state file is no longer in sync with the actual cloud infrastructure, and the code, in turn, becomes less useful and harder to manage.
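The drift in that example can be caught mechanically by diffing the declared state against the observed state. Below is a hedged Python sketch of that comparison; the state shapes are invented for illustration (Terraform itself surfaces this kind of difference when you run `terraform plan`).

```python
# Hypothetical sketch of drift detection: compare declared (IaC) state with
# observed cloud state and report every attribute that differs.

declared = {"db-1": {"engine": "postgres", "encrypted": False}}
# An SRE enabled encryption through the cloud console, bypassing the code:
observed = {"db-1": {"engine": "postgres", "encrypted": True}}

def find_drift(declared, observed):
    """Return {resource: {attribute: (declared_value, observed_value)}}."""
    drift = {}
    for name, want in declared.items():
        have = observed.get(name, {})
        changed = {k: (v, have.get(k)) for k, v in want.items() if have.get(k) != v}
        if changed:
            drift[name] = changed
    return drift

print(find_drift(declared, observed))
# {'db-1': {'encrypted': (False, True)}}
```

Run on a schedule, a check like this turns drift from a surprise discovered during an outage into a routine report.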
Now imagine this volume of code, at scale, across multiple clouds. Tricky. So what can be done?
Prescriptive vs Declarative: Establishing an IaC Baseline
The issue lies in the approach.
Today, the majority of organizations are operating in what could be described as a prescriptive way, building environments where processes are still manual, even with the introduction of IaC. People are still needed to connect to platforms to patch, enhance, and configure infrastructure. The issue, as we've seen, is managing the delta between the current automation of a platform and the rest of its life cycle.
A shift is needed to catch the drift: a shift towards a declarative approach that removes people from the equation completely, so that what is versioned is what is in production. This removes one of the biggest concerns for organizations — how to react to problems when they occur and how to reproduce infrastructure. Establishing an IaC baseline of an environment that is deemed to be the known state means that what is versioned becomes the constant source of truth and the desired state of an organization's infrastructure.
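In code terms, a declarative baseline implies a reconcile step: compute the delta between the versioned baseline and the running state, and apply only that delta — never hand edits. A minimal Python sketch, with an invented `apply_change` callback standing in for real provider calls:

```python
# Hypothetical sketch of reconciliation: drive running state back toward
# the versioned baseline, which serves as the source of truth.

def reconcile(baseline, running, apply_change):
    """Apply only the delta between baseline and running state."""
    for name, want in baseline.items():
        if running.get(name) != want:
            apply_change(name, want)  # a real tool would call the provider API
            running[name] = want
    return running

baseline = {"db-1": {"encrypted": True}}
running = {"db-1": {"encrypted": False}}  # drifted

applied = []
reconcile(baseline, running, lambda name, want: applied.append(name))
print(running == baseline)  # True
```

The point of the loop is that remediation is derived from the baseline rather than decided ad hoc: fixing drift and provisioning from scratch become the same operation.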
Once this baseline has been established, it opens the door to bigger and better automation and an improved all-round experience for developers, who no longer need to take time away from developing to configure infrastructure and troubleshoot issues. Retro-engineering IaC in a way that lets organizations define declaratively what infrastructure needs to look like would enable them to start taking back control of their IaC and create visibility into drift long before it gets out of hand.
What often first started in many organizations as just one cloud instance has now spiralled and mushroomed into a hydra's head of cloud applications that often lacks coherence or defined procedures. The only two certainties are that cloud applications are here to stay and that companies that fail to manage their cloud portfolio are going to face some difficult times. The 100-plus cloud applications used by businesses today require active and planned management that eliminates human intervention and provides a consistent approach to the processes involved. The good news is that, through IaC, the tools are at hand to do the job.