Whether it's building an SDK, launching a new application title, or knocking out a version update with advanced capabilities, speed remains a primary competitive driver for development organizations and their customers. To that end, ultra-fast and frictionless mobile application development increasingly depends on automation.
More specifically, DevOps teams are readily embracing modern tools that utilize large language models (LLMs), generative AI (GenAI), and the very buzzy agentic AI to accelerate their continuous integration/continuous delivery (CI/CD) pipelines. An estimated 70% of professional developers will be using AI-powered coding tools by 2027; Google claims that more than a quarter of its new code is already generated by AI.
But AI's tremendous potential business value is currently outshining some very real risks to mobile applications and the broader software supply chain.
Code Flaws and Opaque Dependencies
To start with, AI tools are prone to making common mistakes in DevOps environments, including generating hardcoded secrets in code, misconfiguring infrastructure-as-code (IaC) with open permissions, and overlooking secure CI/CD pipeline configurations.
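These mistakes are exactly the kind that a lightweight automated gate can catch before a merge. Below is a minimal sketch of a pre-commit or CI check that scans changed files for hardcoded secrets; the regex patterns are illustrative assumptions, not a complete ruleset, and dedicated scanners such as gitleaks or trufflehog go much further.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only -- purpose-built secret scanners ship far
# more comprehensive rulesets than this sketch.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(
        r"(?i)(api|secret)[_-]?key\s*=\s*['\"][A-Za-z0-9/+=]{16,}['\"]"
    ),
    "Private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_file(path: Path) -> list[str]:
    """Return a description of every suspected secret found in one file."""
    text = path.read_text(errors="ignore")
    return [
        f"{path}: possible {label}"
        for label, pattern in SECRET_PATTERNS.items()
        if pattern.search(text)
    ]

if __name__ == "__main__":
    # Scan the files passed in by a pre-commit hook or CI step.
    findings = [f for arg in sys.argv[1:] for f in scan_file(Path(arg))]
    for finding in findings:
        print(finding)
    sys.exit(1 if findings else 0)  # non-zero exit blocks the commit/merge
```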
AI-based development tools also increase risks stemming from dependency chain opacity in mobile applications. Blind spots in the software supply chain will increase as AI agents and coding assistants are tasked with autonomously selecting and integrating dependencies. Since AI simultaneously pulls code from multiple sources, traditional methods of dependency tracking will prove insufficient.
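Regaining visibility starts with knowing what has actually been pulled into the build. As a minimal sketch (assuming a Python project), the snippet below dumps an inventory of every dependency resolved into the environment so a CI step can diff it against what the team believes it asked for; a real SBOM tool such as CycloneDX or Syft would capture far more detail, including transitive provenance.

```python
import json
from importlib.metadata import distributions

def dependency_inventory() -> list[dict]:
    """Enumerate every distribution resolved into the current environment."""
    return sorted(
        (
            {"name": dist.metadata["Name"], "version": dist.version}
            for dist in distributions()
        ),
        key=lambda d: d["name"].lower(),
    )

if __name__ == "__main__":
    # Emit the inventory as JSON so a CI step can diff it against the
    # dependencies declared in the project's manifest or lockfile.
    print(json.dumps(dependency_inventory(), indent=2))
```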
To mitigate the risks to mobile applications, any AI-generated code should undergo rigorous review to identify potential security vulnerabilities and quality issues early on, before they lead to costly problems downstream. Unfortunately, the responsibility for ensuring this kind of review before a release is often overlooked, and these simple, unforced errors are only the first of the potential hazards.
Slopsquatting, Hallucinations, and Bad Vibes
Any tool that brings positive benefits can also be abused or misused, and GenAI is no different. The term "slopsquatting" has emerged to describe instances where a threat actor registers a malicious package under a name that corresponds to no legitimate project. Similar to "typosquatting" (where malicious actors count on human spelling errors), slopsquatting anticipates a developer's misplaced trust in AI suggestions, counting on an AI assistant to recommend the fabricated name. If a developer installs one of these fake packages without first verifying it, malicious code can be introduced into the project.
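That verification step can be partly automated. The sketch below checks a suggested package name against the public PyPI JSON API before anything is installed; an existence check catches purely hallucinated names, while the comment flags the additional vetting (release history, maintainers, download counts) that a slopsquatted package would require. The policy shown is an illustrative assumption, not a complete defense.

```python
import json
import sys
import urllib.error
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"  # public PyPI JSON API

def package_exists(name: str) -> bool:
    """Return True only if the package is actually published on PyPI."""
    try:
        with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # likely a hallucinated (or not-yet-registered) name
        raise
    # A fuller check would also inspect release history, maintainers, and
    # download counts before trusting a package that does exist.
    return bool(data.get("releases"))

if __name__ == "__main__":
    for name in sys.argv[1:]:
        status = "found" if package_exists(name) else "NOT FOUND -- do not install"
        print(f"{name}: {status}")
```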
Another issue is that many large frontier LLMs are trained on open-source software rather than on proprietary databases of secure code. As such, these LLMs are susceptible to replicating common open-source vulnerabilities, as well as data poisoning and malware attacks by malicious actors. Researchers recently discovered a specific instance where threat actors exploited machine learning (ML) models using the Pickle file format to conceal malware inside seemingly legitimate AI-related software packages.
Perhaps even more concerning, LLMs may independently recommend vulnerable, insecure, or non-existent open-source libraries. These package hallucinations can lead to a novel form of package confusion attack against careless developers. The hallucination problem is also predictably pervasive. A recent university study of over 500,000 LLM-generated code samples found that nearly 1 in 5 packages suggested by AI didn't exist. The researchers identified 205,474 unique hallucinated package names; code generated by commercial models included at least one hallucinated package 5.2% of the time, a rate that jumped to 21.7% for open-source models.
While these vulnerabilities may seem isolated, they can have far-reaching downstream implications for software supply chains. A prompt injection vulnerability might allow an LLM to be manipulated through malicious inputs to generate incorrect or insecure code that spreads through connected systems. One such prompt injection vulnerability was discovered in OpenAI's ChatGPT late last year.
The developer trend of intuitive "vibe coding" may take package hallucinations into serious bad-trip territory. The term refers to developers using casual AI prompts to describe a desired mobile app outcome in general terms; the AI tool then generates code to achieve it. Counter to the common wisdom of zero trust, vibe coding leans heavily on trust: developers very often copy and paste the generated code without any manual review. Any hallucinated packages that get carried over can become easy entry points for threat actors.
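One practical backstop is to refuse any dependency that has not been through human review. Below is a minimal sketch of a CI gate that compares a project's requested packages against a reviewed allowlist; the requirements.txt-style manifest and the plain-text allowlist file are illustrative assumptions about how a team might store these.

```python
import sys
from pathlib import Path

def load_names(path: str) -> set[str]:
    """Read one package name per line, ignoring comments, blanks, and version pins."""
    names = set()
    for line in Path(path).read_text().splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            # Keep only the project name: strip extras and version specifiers.
            name = line.split("[")[0]
            for sep in ("==", ">=", "<=", "~=", ">", "<"):
                name = name.split(sep)[0]
            names.add(name.strip().lower())
    return names

if __name__ == "__main__":
    # Usage: python check_allowlist.py requirements.txt approved-packages.txt
    requested = load_names(sys.argv[1])
    approved = load_names(sys.argv[2])
    unapproved = sorted(requested - approved)
    for name in unapproved:
        print(f"blocked: '{name}' is not on the reviewed allowlist")
    sys.exit(1 if unapproved else 0)  # fail the CI job if anything slipped in
```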
Agentic AI Amplifies the Chances for Trouble
According to OWASP, agentic AI represents an advancement in autonomous systems. Integration with LLMs and GenAI has significantly expanded the scale and capabilities of these tools, along with the associated risks. Relying on these complex multi-agent systems not only intensifies dependency opacity and multiplies the chances for error generation, it also creates opportunities for malicious actors to misuse code generation tools. OWASP specifically calls out the potential for new attack vectors involving Remote Code Execution (RCE) and other code-based attacks.
While some predict that agentic AI will disrupt the mobile application landscape by ultimately replacing traditional apps, other modes of disruption seem more immediate. For instance, researchers recently discovered an indirect prompt injection flaw in GitLab's built-in AI assistant Duo that could allow attackers to steal source code or inject untrusted HTML into Duo's responses, directing users to malicious websites.
Build Security into the Mobile App SDLC
While the advertised efficiency, cost, and time-to-market advantages of AI-assisted development are all tantalizing, those savings would prove to be only short-term gains if they ultimately lead to a security incident. The associated challenges and risks to development organizations are not going unnoticed. A recent Gartner survey of software engineering/application development leaders in the US and UK found that the use of AI tools to augment software engineering workflows was a significant or moderate pain point for 71% of respondents.
To actualize the potential value of AI in DevOps, organizations need to treat these powerful tools like any other user, device, or application within the Zero Trust framework. Developers need to de-risk AI adoption by embracing effective solutions for testing, protection, and monitoring. A secure software development lifecycle (SDLC) for mobile applications is one that integrates security across every phase, including solutions for:
■ Mobile application security testing (MAST) that maintains development speed without compromising security.
■ Code hardening and obfuscation tools to make reverse engineering significantly more difficult for threat actors.
■ Runtime application self-protection (RASP) to detect and block tampering attempts while the app is running.
■ App attestation to ensure that only legitimate, trusted apps can interact with your APIs and protect your application from bots, malware, fraud, and targeted attacks (a minimal server-side verification sketch follows this list).
■ Real-time threat monitoring to continuously observe the app in the field as the threat landscape evolves.
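To make the attestation item concrete, here is a minimal server-side sketch, assuming the mobile app forwards its attestation result to the API as a signed JWT and that the PyJWT library (with its cryptography extras) is available. The key file, audience value, and the app_integrity claim are illustrative assumptions; the actual claims and keys come from whichever attestation service the app integrates with.

```python
import jwt  # PyJWT; assumed available in the API service's environment
from jwt import InvalidTokenError

# Illustrative values -- the real public key and expected claims come from
# the attestation service the app integrates with.
ATTESTATION_PUBLIC_KEY = open("attestation_public_key.pem").read()
EXPECTED_AUDIENCE = "api.example.com"

def is_attested(token: str) -> bool:
    """Accept the API call only if the attestation token verifies."""
    try:
        claims = jwt.decode(
            token,
            ATTESTATION_PUBLIC_KEY,
            algorithms=["RS256"],        # pin the algorithm; never accept "none"
            audience=EXPECTED_AUDIENCE,  # token must be minted for this API
        )
    except InvalidTokenError:
        return False
    # Illustrative claim: the attestation service indicates whether the app
    # binary and device passed its integrity checks.
    return claims.get("app_integrity") == "verified"
```

In practice this check sits in front of every sensitive API route, so requests from tampered builds, emulators, or scripted clients are rejected before they reach business logic.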
Industry News
Oracle has expanded its collaboration with NVIDIA to help customers streamline the development and deployment of production-ready AI, develop and run next-generation reasoning models and AI agents, and access the computing resources needed to further accelerate AI innovation.
Datadog launched its Internal Developer Portal (IDP) built on live observability data.
Azul and Chainguard announced a strategic partnership that will unite Azul’s commercial support and curated OpenJDK distributions with Chainguard’s Linux distro, software factory and container images.
SmartBear launched Reflect Mobile featuring HaloAI, expanding its no-code, GenAI-powered test automation platform to include native mobile apps.
ArmorCode announced the launch of AI Code Insights.
Codiac announced the release of Codiac 2.5, a major update to its unified automation platform for container orchestration and Kubernetes management.
Harness Internal Developer Portal (IDP) is releasing major upgrades and new features built to address challenges developers face daily, ultimately giving them more time back for innovation.
Azul announced an enhancement to Azul Intelligence Cloud, a breakthrough capability in Azul Vulnerability Detection that brings precision to detection of Java application security vulnerabilities.
ZEST Security announced its strategic integration with Upwind, giving DevOps and Security teams real-time, runtime powered cloud visibility combined with intelligent, Agentic AI-driven remediation.
Google announced an upgraded preview of Gemini 2.5 Pro, its most intelligent model yet.
iTmethods and Coder have partnered to bring enterprises a new way to deploy secure, high-performance and AI-ready Cloud Development Environments (CDEs).
Gearset announced the expansion of its new Observability functionality to include Flow and Apex error monitoring.
Check Point® Software Technologies Ltd. announced that U.S. News & World Report has named the company among its 2025-2026 list of Best Companies to Work For.
Postman announced new capabilities that make it dramatically easier to design, test, deploy, and monitor AI agents and the APIs they rely on.