Why Secure Code Knowledge Matters for Developers
May 15, 2025

John Campbell
Security Journey

Artificial intelligence (AI) remains a transformative force in organizations, providing decision-makers with an efficient and cost-effective way to enhance daily operations and drive business growth. This disruptive technology is making waves across all business sectors, but its influence is especially pronounced in software and product development. Developers are leveraging AI to accelerate the software development lifecycle, enabling them to automate repetitive coding tasks and generate substantial amounts of code in a fraction of the usual time.

However, despite the production advantages AI has brought to organizations, it has also made it easier for less skilled attackers to infiltrate company systems with AI-generated malicious code. This increased accessibility has drastically heightened security risk, and developers, who find themselves at the forefront of corporate innovation and responsibility, must fully understand the evolving threat landscape and know how to identify and "sniff out" insecure code. The need for this knowledge is more pressing than ever: recent studies show that AI-driven attacks have affected 87% of organizations worldwide.

Developers play a pivotal role in designing and maintaining systems that are secure, ethical, and resilient. While AI is an incredible assistant, it is the developer who ensures systems are built with integrity and aligned with human values.

The Right Education Empowers Developers

Just 1 in 5 organizations is confident in its ability to detect a vulnerability before an application is released, which means the security knowledge in most development lifecycles is insufficient. In fact, few developers are ever taught how to code securely during their formal education: none of the top 50 undergraduate computer science programs in the US requires it for majors.

Developers must adopt the principle of "trust no one, verify everything," which requires a thorough understanding of AI-generated code and of the tools they use, so that they can proactively interrogate vulnerabilities, validate source code before deployment, and leverage AI responsibly.
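
As a hedged illustration of what "verify everything" can look like in practice, the sketch below contrasts a pattern that code assistants commonly produce, a SQL query assembled with string formatting, with a parameterized rewrite. The function and table names are hypothetical; the point is the review habit, not a specific API.

```python
import sqlite3

# Pattern frequently produced by code assistants: the user-supplied value is
# interpolated straight into the SQL text, which allows SQL injection.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Reviewed version: the value is passed as a bound parameter, so the database
# driver handles escaping and the input can never alter the query structure.
def find_user_secure(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Spotting the difference takes seconds for a developer trained to look for it, and far longer for one who is not.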

This requires the right education and ongoing, context-based learning around secure-by-design principles, common vulnerabilities, and secure coding practices. Given how quickly AI is evolving, developers' secure code knowledge must be consistently updated and reinforced so that they stay one step ahead of the latest threats.

This approach also helps developers understand the ethical implications of AI and equips them to question biases and consider the broader societal impact of the technologies they create. Without this depth of education, AppSec and security teams carry an unnecessary security burden, which ultimately means more time, more spend, and greater business risk.

Tailored and Measurable Knowledge

Surface-level coding knowledge is insufficient for developers who want to write code securely, and there is no one-size-fits-all model. Training must go beyond the basics, be tailored to the organization and its daily operations, and be relevant to a developer's specific role and the language they use every day. Hands-on practice spotting vulnerabilities in code and writing code securely also bridges the gap between theory and real-world application, as in the exercise sketched below. When training is delivered this way, developers are more likely to embed vital architectural and technological knowledge, leading to more confident decision-making and applications that are hardened against attacks.
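
A hands-on exercise of this kind might look like the hypothetical review task below: a small file-serving helper with a path traversal flaw, followed by a hardened version. The directory layout and function names are illustrative assumptions, not a prescribed curriculum.

```python
from pathlib import Path

UPLOAD_ROOT = Path("/srv/app/uploads")

# Exercise: spot the flaw. A filename such as "../../etc/passwd" escapes
# the upload directory because the joined path is never validated.
def read_upload_insecure(filename: str) -> bytes:
    return (UPLOAD_ROOT / filename).read_bytes()

# Hardened version: resolve the path and confirm it still sits inside the
# allowed directory before reading it.
def read_upload_secure(filename: str) -> bytes:
    candidate = (UPLOAD_ROOT / filename).resolve()
    if not candidate.is_relative_to(UPLOAD_ROOT.resolve()):
        raise ValueError("path escapes the upload directory")
    return candidate.read_bytes()
```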

Given the potential damage from malicious AI, developers' ability to write secure code and detect flaws should always be measured, and organizations need data that shows whether training is working. One approach is to compare the number of vulnerabilities present in a developer's code before and after training; another is to track how many vulnerabilities a developer can detect and fix. These metrics highlight whether the developer is improving and help keep them engaged with training.
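
A minimal sketch of that before-and-after comparison, assuming the team already exports scan findings per developer (the counts here are made up for illustration), might be no more than:

```python
# Hypothetical scan results: vulnerabilities found in each developer's code
# before and after secure coding training.
findings_before = {"dev_a": 14, "dev_b": 9, "dev_c": 11}
findings_after = {"dev_a": 5, "dev_b": 7, "dev_c": 3}

def improvement_report(before: dict[str, int], after: dict[str, int]) -> dict[str, float]:
    """Percentage reduction in findings per developer after training."""
    report = {}
    for dev, count in before.items():
        if count == 0:
            report[dev] = 0.0
        else:
            report[dev] = round(100 * (count - after.get(dev, 0)) / count, 1)
    return report

print(improvement_report(findings_before, findings_after))
# {'dev_a': 64.3, 'dev_b': 22.2, 'dev_c': 72.7}
```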

We will most likely see a significant number of GenAI projects abandoned after proof of concept by the end of 2025 due to inadequate risk control. However, by taking the necessary steps to foster and maintain fundamental security principles through continuous security training and education, development teams can successfully balance risk and reward, ensuring the secure deployment of AI in development and beyond.

John Campbell is Director of Content Engineering at Security Journey