Exploring the Power of AI in Software Development - Part 5: More Challenges
November 01, 2024

Pete Goldin
DEVOPSdigest

In Part 5 of this series, the experts warn of even more limitations, challenges and risks associated with using AI to help develop software.

BIAS

AI bias is a significant known issue, where biases in training data may paint inaccurate pictures that negatively impact application experiences.
Shomron Jacob
Head of Applied Machine Learning & Platform, Iterate.ai

When AI models are trained on biased datasets, the code they generate can perpetuate those biases, creating discriminatory or unfair outcomes for certain groups of users.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies

There is a risk of AI tools being trained on biased or incomplete datasets, which could lead to biased or suboptimal code generation. Ensuring that AI tools are trained on diverse and comprehensive datasets is critical to mitigating this risk.
Jobin Kuruvilla
Head of the DevOps Practice, Adaptavist

HALLUCINATIONS

The biggest risk is hallucination, or AI simply getting it wrong. If you ask a question, AI will always give an answer, but it's not always right, and it can be trained with biased data that will impact the code and decision-making. Since it's not always accurate and you need to be extremely specific, it's important to prep it with as much info as possible by fine-tuning the prompt. Think of AI as a very smart person on their first day at the company: it's up to you to give it the proper training and context to be successful.
Udi Weinberg
Director of Product Management, Research and Development, OpenText
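
"Prepping the model" happens largely in the prompt itself. As a minimal sketch of the difference context makes, compare a bare request with one that carries project specifics; every project detail below (Python 3.11, Pydantic v2, the field rules) is hypothetical, purely for illustration:

```python
# A bare prompt gives the model nothing to anchor on, so it fills the gaps
# with whatever continuation is statistically plausible.
bare_prompt = "Write a function to validate user signups."

# A context-rich prompt narrows what "plausible" means. The framework,
# version, and validation rules here are invented for this example.
context_prompt = """You are assisting on a Python 3.11 service that uses Pydantic v2.
Write a Pydantic model for a signup form with these rules:
- email: a valid email address
- age: integer, 18 to 120 inclusive
- username: 3 to 30 characters, alphanumeric and underscores only
Return only the model definition, no prose."""
```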

An LLM hallucination occurs when a large language model generates a response that is factually incorrect, nonsensical, or disconnected from the input prompt. Hallucinations are a byproduct of language models' probabilistic nature: they generate responses based on patterns learned from vast datasets rather than factual understanding.
Michael Webster
Principal Software Engineer, CircleCI
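
That probabilistic nature is easy to see in a toy model of decoding. The sketch below (using assumed scores, not real model output) samples a next token from a temperature-scaled softmax; the draw is weighted by learned plausibility, not by truth, which is exactly where hallucinations come from:

```python
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token from a softmax over scores: a toy model of LLM decoding."""
    scaled = [s / temperature for s in scores.values()]
    max_s = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - max_s) for s in scaled]
    # The draw is weighted by learned plausibility, not factual truth,
    # so a wrong-but-common continuation can easily win.
    return random.choices(list(scores), weights=weights, k=1)[0]

# Assumed scores for continuations of "The first person on the moon was ...":
scores = {"Armstrong": 4.0, "Aldrin": 2.5, "Gagarin": 1.5}
print(sample_next_token(scores))
```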

AI hallucinations are often unintentional: AI models with access to large amounts of public information will take every input as factual. This highlights one of AI's weak points and underscores why a human element is still required in the development of code. AI cannot easily differentiate between fact and opinion, leading to compelling outputs that are factually incorrect.
Chetan Conikee
Co-Founder and CTO, Qwiet AI

By definition, all large language models are always "hallucinating" the next word. The implication is that any use of AI will produce answers delivered with the confidence of a 16-year-old who thinks they have the world figured out, even with the knowledge of a 3-year-old on the topic. The risk, therefore, is getting answers that look right but aren't at all. In the next few years we're going to see code that promises things it can't deliver, documentation claiming features exist that don't, and even companies claiming they meet compliance standards they don't actually meet, all because AI is going to be leveraged, poorly, in development.
Kendall Miller
Chief Extrovert, Axiom

I wouldn't blindly trust the code, just as Microsoft itself advises against using LLMs to create legally binding materials.
Jon Collins
Analyst, Gigaom

AI DOESN'T SOLVE PROBLEMS

One misconception about software development is that writing code is the task or obstacle to overcome. In reality, software development is about solving problems for very specific business goals and objectives. While AI boosts developer productivity, it still requires people who understand a business's domain, how the software relates to its goals, and the problems the software aims to solve.
Ed Charbeneau
Developer Advocate, Principal, Progress

In the developer world, writing code isn't necessarily the hardest part of the job; it's the art of problem solving. What is my technical or business problem, and what is the solution? How do I achieve a positive outcome? AI doesn't do that. AI can efficiently generate the code, but the developer still needs to understand what's been generated and "where" it's going.
Robert Rea
CTO, Graylog

AI LACKS CREATIVITY

If we want to understand how AI is going to change software development, it's helpful to first recognize what AI can't do. Actually writing code is a small part of an engineer's job; good software development involves far more thinking than doing. Sure, AI can generate code, but it can't think for you, which means it can't think before executing code, and that brings great risk. This is the fundamental gap between humans and machines, and humans will always have the edge here. Humans will increasingly focus on more novel and creative work, while AI takes on the routine, undifferentiated heavy lifting. AI isn't going to reinvent the wheel of software development.
Shub Jain
Co-Founder and CTO, Auquan

AI DOESN'T UNDERSTAND INTENT BEHIND THE CODE

Using AI to support software development comes with several challenges, such as misinterpreting outliers as actual problems developers should care about. Discerning between "weird" and "bad" is a nuance that still needs the attention of a human developer. This reflects AI's lack of contextual understanding: it cannot fully grasp the intent behind the code.
Phil Gervasi
Director of Technical Evangelism, Kentik

DEPENDENCY MANAGEMENT

In large applications, there is often a complex taxonomy of libraries and frameworks. AI in its current form is not very sophisticated at adapting to an organization's frameworks.
Chris Du Toit
Head of Developer Relations, Gravitee

VOLUME OF OUTPUT

Code was never the biggest issue; complexity was. A significant risk is that we correctly create large quantities of code and applications, all of which will need to be managed: they may solve certain problems, but could duplicate effort, require securing, diverge from the original need, and so on. We need to avoid becoming the sorcerer's apprentice from the outset. Success should be measured in outcomes, not in volume of outputs.
Jon Collins
Analyst, Gigaom

LACK OF DOCUMENTATION

AI-generated code tends to lack proper documentation and readability, making development and debugging more challenging.
Todd McNeal
Director of Product Management, SmartBear

ETHICAL QUESTIONS

Using AI to support development raises important ethical considerations. Ensuring AI-generated code is free from biases is crucial to prevent unintended consequences. Balancing AI's efficiency with ethical practices is essential to maintain trust and integrity in software development.
Pavan Belagatti
Technology Evangelist, SingleStore

There is a huge question of professional responsibility and ethics with AI-generated code. We expect developers to be professional and, in some cases, legally liable for negligence in the software they build. However, this system depends a lot on human and organizational safeguards like separation of duties, two-person rules, and continuing education. With AI, it might seem tempting to use it to maximum capacity, but it is harder to scale these organizational safeguards. It opens a lot of potential issues when it comes to standards, reviews, and the safety and security of software.
Michael Webster
Principal Software Engineer, CircleCI

INTELLECTUAL PROPERTY VIOLATIONS

Intellectual property violations pose a risk, as AI tools might inadvertently reproduce or suggest code that closely resembles proprietary work, which could result in legal complications.
Faiz Khan
CEO, Wanclouds

One of the troubling aspects of generative AI is the potential reuse of intellectual property during the creation process, and code generation is no exception. Unknowingly, developers may use a coding assistant that generates code violating intellectual property law, exposing the organization to legal risk.
David Brault
Product Marketing Manager, Mendix

AI can reproduce code that already exists on the internet without identifying it as such, leading to concerns around copyright infringement or license violations. It's important to know what data and sources the AI used in code generation.
Shourabh Rawat
Senior Director, Machine Learning, SymphonyAI

INCREASED COSTS

AI in development still needs human oversight, because LLMs specifically, and AI generally, can hallucinate and create code that might break or perform actions that create risk for the business. AI-generated code can have bugs and security vulnerabilities, be inefficient, and fail to properly address the functional requirements of the task. Without proper software engineering practices and quality standards in place, increased use of AI-generated code can raise a project's long-term maintenance costs.
Shourabh Rawat
Senior Director, Machine Learning, SymphonyAI

SKILLS GAP

Integrating AI solutions into existing workflows may require significant adjustments and skill development, posing a potential barrier to adoption. This can stunt development cycles (when the intention was to speed production), so leaders should ensure the AI tools they invest in integrate easily with their existing tech stack and are compatible with existing team talent.
Rahul Pradhan
VP of Product and Strategy, Couchbase

AI raises significant questions about skill gaps and job displacement. The widespread adoption of AI in development can concern employees, particularly developers performing repetitive or easily automatable tasks. Addressing the potential skill gap and providing opportunities for upskilling will be critical in integrating AI successfully with existing processes and teams.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies

DEMOCRATIZATION

As generative AI tools make code creation more accessible, extending software development even to non-technical users, there is a growing risk to the quality and security of our software systems. Non-technical users may not grasp the intricacies of coding and may create code without understanding its potential long-term consequences.
Todd McNeal
Director of Product Management, SmartBear

TRAINING NEW DEVELOPERS

AI is a challenge for new developers who need training. Traditionally, new developers learned from seniors by doing all of the grunt work. If AI does all of the grunt work, then how will new developers learn?
David Brooks
SVP of Evangelism, Copado

The overuse of these tools can hinder the growth and learning curve of junior engineers who rely heavily on them instead of sitting and thinking through the problem at hand. While it's important for people earlier in their careers to use newer tools, it's also important to go through the motions of doing many things manually, to understand what automations can offer and, ultimately, where they might go wrong.
Phillip Carter
Principal Product Manager, Honeycomb

An over-reliance on AI negatively impacts the learning curve for younger engineers because it prevents them from understanding the root causes of issues and from developing problem-solving skills. We must balance AI usage with traditional learning methods to ensure newer engineers gain a deep understanding of software development fundamentals.
Shub Jain
Co-Founder and CTO, Auquan

What worries me more than wholesale automation is a scenario where senior developers learn to use AI well enough to automate beginner-level development tasks. If this happens in enough places, companies might not invest as much in junior talent. This would disrupt the mentorship model in which senior developers pass down knowledge to junior developers, creating a generational skills gap. Limits on current model capabilities, and the many use cases where AI doesn't yet operate, make this a less imminent scenario, but it is something to consider. The current situation with COBOL, where developers are entering retirement while many mission-critical systems still depend on the language and are getting harder to support, is a good analogy for this scenario.
Michael Webster
Principal Software Engineer, CircleCI

I see AI widening the gap between junior and senior developers. Seniors will know how to use AI well; juniors won't, and in addition to becoming competent programmers, they will also need to learn how to use AI well. The difficulty of using AI well is almost always understated. Let's say that AI outputs a function you need. It works, but it's very inefficient. Do you just pass it on, or do you know enough to realize that it's inefficient? Do you know enough to know whether or not you care that it's inefficient? If you just check the function into your source repo, you're never going to learn. That's a big issue.
Mike Loukides
VP of Emerging Tech Content, O'Reilly Media
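
To make that scenario concrete, here is a hypothetical example of the kind of function an assistant might produce: correct, passing tests, and quietly quadratic. Whether you check it in as-is or rewrite it depends on knowing the difference:

```python
# Hypothetical AI-generated helper: it works, but both membership tests inside
# the loop scan lists, making it O(n^2) on large inputs.
def find_duplicates_slow(items: list[str]) -> list[str]:
    dupes = []
    for i, item in enumerate(items):
        if item in items[i + 1:] and item not in dupes:
            dupes.append(item)
    return dupes

# What a reviewer who notices the inefficiency might write instead:
# one pass with constant-time set lookups, O(n) on average.
def find_duplicates(items: list[str]) -> list[str]:
    seen: set[str] = set()
    dupes: set[str] = set()
    for item in items:
        if item in seen:
            dupes.add(item)
        seen.add(item)
    return sorted(dupes)
```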

Go to: Exploring the Power of AI in Software Development - Part 6: Security Challenges

Pete Goldin is Editor and Publisher of DEVOPSdigest