As part of DEVOPSdigest's annual list of DevOps predictions, DevSecOps experts — from analysts and consultants to the top vendors — offer thoughtful, insightful, and often controversial predictions on how DevSecOps and related risks and tools will evolve in 2025. Part 3 covers AI security risks.
AI SECURITY THREAT IN 2025
AI as a Double-Edged Sword in Software Security: AI will increasingly help coders, defenders, and attackers accelerate their work. By integrating AI with automated tooling and CI/CD pipelines, developers will be able to quickly identify and fix coding flaws. Defenders can leverage AI's ability to analyze massive amounts of data and identify patterns, accelerating the work of SOC teams and other blue-team operations. Unfortunately, attackers may also use AI to craft sophisticated social engineering attacks, review public code for vulnerabilities, and employ other tactics that will complicate cybersecurity in the near future. We need to learn how to secure AI before broadly deploying it for security purposes.
Christopher Robinson
Chief Security Architect, OpenSSF
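As an editor's illustration of the automated-tooling half of this prediction, here is a minimal sketch of a CI gate, assuming a Python codebase and the open source Bandit static analyzer; the AI-assisted fix step is a hypothetical hook, not a real API.

```python
# ci_sast_gate.py - minimal sketch of a CI step that blocks merges on
# static-analysis findings. Assumes the open source Bandit scanner is
# installed (pip install bandit). The "ask an AI assistant for a fix"
# step is a hypothetical placeholder, not a real API.
import json
import subprocess
import sys

def run_bandit(src_dir: str) -> list[dict]:
    """Run Bandit over src_dir and return its findings as a list."""
    proc = subprocess.run(
        ["bandit", "-r", src_dir, "-f", "json"],
        capture_output=True, text=True,
    )
    report = json.loads(proc.stdout)
    return report.get("results", [])

if __name__ == "__main__":
    findings = run_bandit(sys.argv[1] if len(sys.argv) > 1 else "src")
    for f in findings:
        print(f"{f['filename']}:{f['line_number']} "
              f"[{f['issue_severity']}] {f['issue_text']}")
        # Hypothetical hook: send each finding to an AI assistant for a
        # suggested patch, then open a review rather than auto-merging.
    sys.exit(1 if findings else 0)  # non-zero exit fails the pipeline
```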
Significant increase in software that's developed with AI: By January 2023, 92% of US-based developers were already using AI coding tools, so AI code generation is already here. Developers are becoming more comfortable with it and will use it more. However, study after study has found that AI-generated code tends to have more vulnerabilities than human-written code, which makes sense — the model can't fully understand the code, and much of the code it learned from is itself vulnerable. The most likely solution will come in two parts. First, automation: projects like the AIxCC competition are working to develop AI tools that find and fix vulnerabilities. Second, we need humans who better understand how to develop secure software so that they can supervise AI systems. We encourage software developers to take a course such as our "Developing Secure Software" (LFD121) to learn how to develop secure software.
David A. Wheeler
Director of Open Source Supply Chain Security, OpenSSF
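To make the "more vulnerable code" claim above concrete, here is an editor's sketch of a flaw that code assistants frequently reproduce from their training data (fast, unsalted hashing for passwords) next to a safer rewrite; the function names are illustrative, not from any quoted source.

```python
# Illustrative only: a pattern commonly flagged in generated code,
# followed by a safer equivalent. Function names are hypothetical.
import hashlib
import os

def hash_password_insecure(password: str) -> str:
    # Anti-pattern: fast, unsalted MD5 invites offline cracking.
    return hashlib.md5(password.encode()).hexdigest()

def hash_password_safer(password: str) -> bytes:
    # Safer: salted, deliberately slow key derivation (PBKDF2).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + digest  # store the salt alongside the derived key
```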
AI Governance Will Emerge as a Sprawling Security Challenge: Different regions are enforcing cybersecurity regulations at varying speeds, and this complex global landscape will force software providers to invest heavily in compliance. Existing AI regulations focus predominantly on ethical guidelines, bias, safety, and disinformation rather than security. In the coming year, AI governance will become a critical concern for both cybersecurity professionals and regulators, particularly as US-based software regulators grapple with drafting standards for this ever-evolving technology.
Sohail Iqbal
Chief Information Security Officer, Veracode
2025 will be the year we really see the challenges of securing AI, both from a technology perspective and from a business risk management perspective, forcing industry and governments to address them. Right now, the industry has only a baseline understanding of how to use AI safely and therefore lacks a full understanding of its risks. The most important action we'll need to take in the coming year is to gain a deeper understanding of AI/ML engines and their journey into production usage. These systems could represent an organization's most vulnerable point, and attackers are already exploring how they can be exploited.
Paul Davis
Field CISO, JFrog
AI SECURITY THREAT: LLM-DRIVEN CODING
In 2025, the rise of LLM-driven development will fundamentally reshape decision-making in coding, prioritizing efficiency and functionality. Developers will rely on LLMs to provide the "best" answer to prompts, often overlooking vulnerabilities in favor of immediate usability. As a result, packages with known vulnerabilities may increasingly find their way into production, as security becomes a smaller factor in the decision-making process. This trend underscores the urgent need for AppSec solutions to proactively identify risks and ensure secure code paths without slowing down innovation. The challenge will be balancing AI-powered speed with robust security, preventing a surge in exploitable vulnerabilities.
Yossi Pik
CTO and Co-Founder, Backslash Security
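One way AppSec tooling can catch the known-vulnerable packages described above is a dependency audit in CI. A minimal sketch, assuming a Python project with pinned requirements and the open source pip-audit tool:

```python
# deps_gate.py - minimal sketch: fail the build when pinned
# dependencies carry known vulnerabilities. Assumes pip-audit is
# installed (pip install pip-audit) and requirements.txt is pinned.
import subprocess
import sys

def audit(requirements: str = "requirements.txt") -> int:
    # pip-audit exits non-zero when it finds known-vulnerable packages,
    # so its return code can gate the pipeline directly.
    proc = subprocess.run(["pip-audit", "-r", requirements])
    return proc.returncode

if __name__ == "__main__":
    sys.exit(audit())
```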
AI SECURITY THREAT: GENAI-DRIVEN CODING
GenAI-driven Coding Will Saddle Organizations with More Security Debt: As AI-fueled code velocity increases, the number of vulnerabilities and the level of critical security debt will also grow. With more code created at a rapid pace, developers will become inundated with compliance risks, security alerts, and quality issues, and identifying a solution to help will be key. As security debt grows, so too will the demand for automated security remediation; however, using GenAI to write code is still two years ahead of using the same technology for security hardening and remediation. This is why, in 2025, we can expect a rapid increase in the adoption of AI-powered remediation to fix vulnerabilities faster and materially reduce security debt.
Chris Wysopal
Co-Founder and Chief Security Evangelist, Veracode
In 2025, the pressure to develop software faster will continue, but speed has become a serious security risk, one that GenAI is only amplifying. The more we accelerate development and release cycles, whether with GenAI or by other means, the more vulnerabilities are introduced into code. Next year, organizations must start balancing software development momentum with security. They will need to slow down enough to embed security at every stage of development, not just shift left, to reduce risk and close potential entry points for attackers.
Karthik Swarnam
Chief Security and Trust Officer, ArmorCode
AI SECURITY THREAT: INJECTION ATTACKS
Injection Attacks Resurface as AI-Generated Code Opens New Vulnerabilities: As AI-driven coding tools become mainstream in 2025, injection attacks are set to make a strong comeback. While AI accelerates development, it frequently generates code with security weaknesses, especially in input validation, creating new vulnerabilities across software systems. This resurgence of injection risks marks a step back to familiar threats, as AI-based tools produce code that may overlook best practices. Organizations must stay vigilant, reinforcing security protocols and validating AI-generated code to mitigate the threat of injection attacks in an increasingly AI-powered development environment.
Randall Degges
Head of Developer & Security Relations, Snyk
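For readers who have not seen the input-validation failure mode described above, here is a minimal editor's sketch: the string-built query is the shape generated code often takes, and the parameterized version closes the hole. The table and in-memory database are hypothetical.

```python
# Illustrative only: classic SQL injection flaw vs. parameterized
# query. The "users" table and in-memory database are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name: str):
    # Anti-pattern: attacker-controlled input is spliced into SQL, so
    # name = "' OR '1'='1" returns every row in the table.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Placeholder binding keeps input as data, never as SQL syntax.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))        # returns []
```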
AI SECURITY THREAT: OPEN SOURCE
The rise of AI-driven threats in open source: In 2025, open source software threats will shift from traditional vulnerabilities to AI-generated backdoors and malware embedded in open source packages. As attackers leverage AI tools to develop and disguise malware within open source code, defenders will need significantly more advanced security tooling to stay ahead of these quickly evolving threats.
Idan Plotnik
Co-Founder and CEO, Apiiro
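Checksum pinning is one existing defense against the tampered packages described above. A minimal editor's sketch of verifying an artifact's SHA-256 digest before use follows; the file path and pinned digest are hypothetical placeholders (pip's own --require-hashes mode applies the same idea to Python dependencies).

```python
# Illustrative only: refuse to use a downloaded artifact unless its
# SHA-256 digest matches a pinned value. The pinned digest below is a
# placeholder, not a real artifact's hash.
import hashlib

PINNED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def verify_artifact(path: str, expected: str = PINNED_SHA256) -> None:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large artifacts don't load into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != expected:
        raise RuntimeError(f"digest mismatch for {path}: refusing to install")
```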
AI SECURITY THREAT: API
The API economy is set to experience massive changes by 2025, with AI leading the charge. Simply put, there's no AI without APIs — they're the foundation that makes AI integration possible. As developers continue to explore AI and large language models (LLMs) for innovation, the number of APIs will grow exponentially. In fact, the value of APIs enabling AI is expected to skyrocket by 170% by 2030. But with this growth comes challenges, especially in security. The more advanced technologies like AI become, the more sophisticated attackers get. Over the past year, 55% of organizations dealt with API security incidents, and for 20% of them, remediation costs topped $500,000. What's more, 25% of companies have already faced AI-enhanced API threats, and 75% are worried about what's to come. Tackling these risks will require organizations to focus on complete visibility into their API endpoints and adopt centralized management platforms to stay ahead of attackers.
Marco Palladino
CTO and Co-Founder, Kong
Hardening API Security Must Be a CISO Priority in 2025:
APIs form the backbone of computer-to-computer communications, powering nearly every generative AI application and workplace tool. But as APIs fuel this innovation, they also open the door to increasingly sophisticated cyberattacks. Gartner reports that API breaches result in 10 times more data exposure than the average security incident, underscoring the importance of securing APIs as a top priority, especially as organizations adopt generative AI into workflows.
Prioritizing secure APIs in DevOps not only ensures healthy software development but also reduces the risk of reputational or financial damage. To keep up with the AI revolution and stay ahead of a rapidly expanding threat landscape, organizations must critically evaluate their API security strategy and ensure it is a core component of their DevSecOps mandate.
Rupesh Chokshi
SVP and GM of Application Security, Akamai
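As a small illustration of the kind of baseline control an API security review should confirm, here is a hedged editor's sketch of constant-time bearer-token validation; the header layout and secret handling are simplified assumptions, not vendor guidance.

```python
# Illustrative only: reject API requests without a valid bearer token,
# comparing in constant time. Secret sourcing is simplified; a real
# deployment would use a secrets manager and token rotation.
import hmac
import os

API_TOKEN = os.environ.get("API_TOKEN", "")  # assumed injected at deploy time

def is_authorized(headers: dict[str, str]) -> bool:
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth[len("Bearer "):]
    # hmac.compare_digest avoids leaking the token through timing.
    return bool(API_TOKEN) and hmac.compare_digest(presented, API_TOKEN)
```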
AI SECURITY THREAT: KUBERNETES
With flexibility at the forefront, Kubernetes is quickly becoming the de facto platform for deploying GenAI applications. Organizations can run Kubernetes for GenAI across various workloads, including virtual machines (VMs), containers, or bare metal servers — or a mixture of all three. Against this backdrop, in 2025 there will be a heightened focus on Kubernetes security.
Ratan Tipirneni
President and CEO, Tigera