Operant AI announced the launch of MCP Gateway, an expansion of its flagship AI Gatekeeper™ platform that delivers comprehensive security for Model Context Protocol (MCP) applications.
Over the past two years, code assistants based on generative AI have transformed software development, accelerating the generation of code at an unprecedented pace. Developers are deploying more code than ever, but at a cost: exponential growth in security vulnerabilities. New research points to a 3X increase in repositories containing Personally Identifiable Information (PII) and payment data, a 10X increase in APIs without authorization and input validation, and more sensitive API endpoints exposed, all threats driven by AI-generated code. Though AI code assistants boost productivity, they possess no understanding of organizational risk, compliance policies, or security best practices, leaving companies more exposed.
The Velocity-Security Tradeoff
Ever since the launch of ChatGPT in late 2022, AI-supercharged development has taken off. According to Microsoft, over 150 million developers now use GitHub Copilot, a 50% increase over two years ago. Similarly, pull requests have grown by 70%, outpacing the 30% growth in repositories and the 20% increase in developers. While these figures indicate a stupendous improvement in productivity, they also signal an imminent security concern: AI-generated code has no inherent security awareness.
This AI-driven development boom has created an unexpected side effect: a widening gap between security review capacity and development tempo. Security teams aren't growing nearly as fast as code output, so organizations can't keep up. The outcome? More sensitive data lands in repositories, and insecure code finds its way into production unchecked.
The Security Implications of AI-Generated Code
Perhaps the most critical problem with AI-driven development is the explosion of sensitive information exposure. Developers using AI code assistants are inadvertently introducing sensitive information (such as personally identifiable information (PII), API keys, and payment data) into codebases at a growing pace. Without tight governance and automated monitoring in effect, these exposures can lead to serious security breaches. The sheer volume of code produced by AI makes it practically impossible for human security audits to identify all vulnerabilities and exposed data, leaving organizations open to attacks and compliance violations.
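Automated monitoring of the kind described above typically starts with pattern-based secret scanning over source files. The sketch below is a minimal, hypothetical illustration of the idea; real scanners such as gitleaks or TruffleHog use far larger rule sets plus entropy analysis, and the patterns here are simplified stand-ins:

```python
import re

# Illustrative patterns only; a production scanner would maintain
# hundreds of rules and add entropy checks for random-looking strings.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[=:]\s*['\"][A-Za-z0-9]{16,}['\"]"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_source(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

sample = 'API_KEY = "abcd1234efgh5678ijkl"\nprint("hello")\n'
print(scan_source(sample))  # → [('generic_api_key', 1)]
```

Running a check like this in CI on every commit is what turns "tight governance" from a policy document into an enforced control.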
Another critical vulnerability is the lack of adequate authorization and input validation in AI-generated APIs. Studies report a 10X increase in APIs lacking these critical security controls, making it easy for attackers to access data without permission or inject malicious payloads. While AI code assistants help developers write working code faster, they pay less attention to necessary security considerations, leaving organizations with insecure APIs that throw open the gates to data breaches, fraud, and other cyber threats.
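The two missing controls are easy to picture in code. The framework-free sketch below (all names hypothetical) shows an endpoint handler that performs exactly the checks the cited studies find absent: an authorization check before any data access, and validation of the caller-supplied parameter before it is used:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    user_token: Optional[str]
    params: dict

# Stand-in for a real auth backend (session store, OAuth introspection, etc.)
VALID_TOKENS = {"token-alice"}

def get_account(req: Request) -> dict:
    # Authorization: reject callers without a recognized token.
    if req.user_token not in VALID_TOKENS:
        return {"status": 401, "error": "unauthorized"}
    # Input validation: account_id must be a plain positive integer,
    # never passed through raw (blocks injection-style payloads).
    raw_id = req.params.get("account_id", "")
    if not raw_id.isdigit():
        return {"status": 400, "error": "invalid account_id"}
    return {"status": 200, "account_id": int(raw_id)}

print(get_account(Request(None, {"account_id": "42"})))            # 401
print(get_account(Request("token-alice", {"account_id": "42'"})))  # 400
print(get_account(Request("token-alice", {"account_id": "42"})))   # 200
```

An AI-generated handler typically ships only the last three lines of `get_account`; the first two checks are the ones that have to be demanded by policy and verified by tooling.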
The rapid growth of AI-facilitated development has also expanded the attack surface. More code means more endpoints, and more endpoints mean more opportunities for attackers to exploit vulnerabilities. Applications developed with AI often carry undetected risks because systematic risk detection and governance are minimal. Without methods for detecting and mitigating these vulnerabilities, organizations subject themselves to increased operational risk, compliance risk, and financial loss. As the world embraces AI-based development, organizations must identify and address these risks so that security is an underlying foundation and not an afterthought.
Balancing AI Productivity Gains with Security
The challenge today is no longer whether to use AI code assistants, it's how best to govern their output. Organizations require automated, intelligent security tools that integrate easily into development pipelines, subjecting AI-generated code to rigorous risk assessment before deployment.
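One common shape for such pipeline integration is a pre-deployment "gate" that runs a set of checks over a proposed change and blocks the release on any finding. The sketch below is a toy illustration under assumed names; the two string-match checks are deliberately naive placeholders for real scanners and static analyzers a team would wrap here:

```python
import sys
from typing import Callable, List

# Placeholder checks: each returns a list of offending lines from a diff.
# In a real pipeline these would invoke secret scanners, SAST tools, etc.
def check_no_hardcoded_secrets(diff: str) -> List[str]:
    return [line for line in diff.splitlines() if "API_KEY =" in line]

def check_no_debug_endpoints(diff: str) -> List[str]:
    return [line for line in diff.splitlines() if "/debug" in line]

def run_gate(diff: str, checks: List[Callable[[str], List[str]]]) -> bool:
    """Return True only if every check passes (i.e., the change may ship)."""
    ok = True
    for check in checks:
        for finding in check(diff):
            print(f"{check.__name__}: {finding}", file=sys.stderr)
            ok = False
    return ok

risky = 'API_KEY = "abc123"\napp.route("/debug")\n'
print(run_gate(risky, [check_no_hardcoded_secrets, check_no_debug_endpoints]))  # False
```

Wiring `run_gate` into CI so a `False` result fails the build is what makes the risk assessment automatic rather than dependent on a human reviewer keeping pace with AI-generated volume.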
Development and security teams must work together to develop new AI governance policies, integrating security best practices into AI-enabled development pipelines. By putting AI risk detection and prevention first, organizations can enjoy the benefits of increased development speed without compromising on security.
As AI redefines the practice of software development, businesses must rethink traditional security approaches. Old methods, such as manual code review and post-mortem security analysis, simply won't scale. By incorporating proactive security into AI-enabled workflows, businesses can catch security problems before they reach production and build a safer, more secure future for software development.
Industry News
Oracle has expanded its collaboration with NVIDIA to help customers streamline the development and deployment of production-ready AI, develop and run next-generation reasoning models and AI agents, and access the computing resources needed to further accelerate AI innovation.
Datadog launched its Internal Developer Portal (IDP) built on live observability data.
Azul and Chainguard announced a strategic partnership that will unite Azul’s commercial support and curated OpenJDK distributions with Chainguard’s Linux distro, software factory and container images.
SmartBear launched Reflect Mobile featuring HaloAI, expanding its no-code, GenAI-powered test automation platform to include native mobile apps.
ArmorCode announced the launch of AI Code Insights.
Codiac announced the release of Codiac 2.5, a major update to its unified automation platform for container orchestration and Kubernetes management.
Harness Internal Developer Portal (IDP) is releasing major upgrades and new features built to address challenges developers face daily, ultimately giving them more time back for innovation.
Azul announced an enhancement to Azul Intelligence Cloud, a breakthrough capability in Azul Vulnerability Detection that brings precision to detection of Java application security vulnerabilities.
ZEST Security announced its strategic integration with Upwind, giving DevOps and Security teams real-time, runtime powered cloud visibility combined with intelligent, Agentic AI-driven remediation.
Google announced an upgraded preview of Gemini 2.5 Pro, its most intelligent model yet.
iTmethods and Coder have partnered to bring enterprises a new way to deploy secure, high-performance and AI-ready Cloud Development Environments (CDEs).
Gearset announced the expansion of its new Observability functionality to include Flow and Apex error monitoring.
Check Point® Software Technologies Ltd. announced that U.S. News & World Report has named the company among its 2025-2026 list of Best Companies to Work For.
Postman announced new capabilities that make it dramatically easier to design, test, deploy, and monitor AI agents and the APIs they rely on.