The Rise of GenAI Code Assistants and the Security Risks Lurking Beneath the Surface
May 29, 2025

Itay Nussbaum
Apiiro

Over the past two years, code assistants based on generative AI have transformed software development, accelerating code generation at an unprecedented pace. Developers are shipping more code than ever, but at a cost: exponential growth in security vulnerabilities. New research points to a 3X increase in repositories containing Personally Identifiable Information (PII) and payment data, a 10X increase in APIs lacking authorization and input validation, and more sensitive API endpoints exposed, all threats proliferated by AI-generated code. Though AI code assistants boost productivity, they have no understanding of organizational risk, compliance policies, or security best practices, leaving companies more exposed.

The Velocity-Security Tradeoff

Ever since ChatGPT's launch in late 2022, AI-supercharged development has taken off. According to Microsoft, GitHub is now home to over 150 million developers, 50% more than two years ago. Pull requests have grown by 70%, outpacing the 30% growth in repositories and the 20% increase in developers. While these figures indicate a stupendous improvement in productivity, they also signal an imminent security concern: AI-generated code has no inherent security awareness.

This AI-driven development boom has created an unexpected side effect: a widening gap between security review capacity and development tempo. Organizations can't keep up, since security teams aren't growing nearly as fast as code output. The outcome? More sensitive data sits in repositories, and insecure code finds its way into production unchecked.

The Security Implications of AI-Generated Code

Perhaps the most critical problem with AI-driven development is the explosion of sensitive information exposure. Developers using AI code assistants are inadvertently introducing sensitive information, such as personally identifiable information (PII), API keys, and payment data, into codebases at a growing pace. Without tight governance and automated monitoring in place, these exposures can lead to serious security incidents. The sheer volume of code produced by AI makes it practically impossible for human security audits to identify every vulnerability and exposed secret, leaving organizations open to attacks and compliance breaches.
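The automated monitoring described above typically starts with pattern-based secret scanning. Below is a minimal sketch of the idea in Python; the pattern names and rules are illustrative assumptions, and production tools such as gitleaks or trufflehog ship far larger, vetted rule sets.

```python
import re

# Hypothetical detection rules for illustration only -- real scanners
# maintain much broader, battle-tested pattern libraries.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]", re.IGNORECASE
    ),
}

def scan_for_secrets(source: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) for every suspected exposure."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

snippet = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\nuser = "alice"\n'
print(scan_for_secrets(snippet))  # -> [('aws_access_key', 1)]
```

Running a check like this on every commit, before code reaches the shared repository, is what keeps hardcoded keys out of history in the first place.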

Another critical weakness is the lack of adequate authorization and input validation in AI-generated APIs. Studies report a 10X increase in APIs lacking these critical security controls, making it easy for attackers to access data without permission or inject malicious payloads. While AI assistants can help write working code faster, they pay little attention to necessary security considerations, leaving organizations with insecure APIs that throw open the gates to data breaches, fraud, and other cyber threats.

The rapid growth of AI-facilitated development has also resulted in an expanding attack surface. More code means more endpoints, and more endpoints mean more opportunities for attackers to find and exploit vulnerabilities. Applications developed with AI often carry undetected risks because systematic risk detection and governance are minimal. Without methods to detect and mitigate these vulnerabilities, organizations expose themselves to increased operational, compliance, and financial risk. As the world embraces AI-based development, organizations must identify and address these risks so that security is an underlying foundation, not an afterthought.

Balancing AI Productivity Gains with Security

The challenge today is no longer whether to use AI code assistants; it's how best to govern their output. Organizations require automated, intelligent security tools that integrate easily into development pipelines, subjecting AI-generated code to rigorous risk assessment before deployment.
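Such a pipeline integration can be as simple as a gate step that consumes scanner findings and blocks the build above a severity threshold. The sketch below assumes a hypothetical findings format; in practice the input would be a real tool's JSON report, and the returned code would be passed to `sys.exit` so CI fails the job.

```python
def risk_gate(findings: list[dict], fail_on: str = "high") -> int:
    """Return a CI exit code: nonzero if any finding meets the severity bar."""
    order = {"low": 0, "medium": 1, "high": 2}
    blocking = [f for f in findings if order[f["severity"]] >= order[fail_on]]
    for f in blocking:
        print(f"BLOCKED: {f['rule']} ({f['severity']}) in {f['file']}")
    return 1 if blocking else 0

# Hypothetical findings, as a scanner step earlier in the pipeline might emit.
findings = [
    {"rule": "missing-auth-check", "severity": "high", "file": "api/users.py"},
    {"rule": "verbose-logging", "severity": "low", "file": "app.py"},
]
print("exit:", risk_gate(findings))  # -> exit: 1
```

The threshold is a policy decision: teams often start by blocking only high-severity findings to avoid stalling delivery, then tighten the bar as noise drops.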

Development and security teams must work together to define AI governance policies, integrating security best practices into AI-enabled development pipelines. By placing AI risk detection and prevention at the forefront, organizations can enjoy the benefits of increased development speed without compromising security.

As AI redefines the practice of software development, businesses must rethink traditional security approaches. Old methods, such as manual review and post-mortem security analysis, simply won't scale. By incorporating proactive security into AI-enabled workflows, businesses can catch security problems before they reach production and build a safer, more secure future for software development.

Itay Nussbaum is a Product Manager at Apiiro
