In Part 5 of this series, the experts warn of even more limitations, challenges and risks associated with using AI to help develop software.
BIAS
AI bias is a significant known issue: biases in training data can paint inaccurate pictures that negatively impact application experiences.
Shomron Jacob
Head of Applied Machine Learning & Platform, Iterate.ai
When AI models are trained on biased datasets, the code they generate can perpetuate those biases, creating discriminatory or unfair outcomes for certain groups of users.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies
There is a risk of AI tools being trained on biased or incomplete datasets, which could lead to biased or suboptimal code generation. Ensuring that AI tools are trained on diverse and comprehensive datasets is critical to mitigating this risk.
Jobin Kuruvilla
Head of the DevOps Practice, Adaptavist
HALLUCINATIONS
The biggest risk is a hallucination, or AI generally getting it wrong. If you ask a question, AI will always give an answer, but it's not always right, and it can be trained on biased data that will impact the code and decision-making. Since it's not always accurate and you need to be extremely specific, it's important to prep it with as much info as possible by fine-tuning the prompt. You need to think of AI as a very smart person, but it's their first day at the company. It's up to you to give it the proper training and context to be successful.
Udi Weinberg
Director of Product Management, Research and Development, OpenText
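Weinberg's advice about prepping the model translates directly into how prompts are constructed. Below is a minimal sketch in Python of the difference between a bare request and one primed with project context. It assumes the OpenAI Python client (openai>=1.0); the model name, conventions, and prompt text are illustrative placeholders, not a recommendation of any particular tool.

```python
# A minimal sketch of "prepping the model with context," assuming the
# OpenAI Python client (openai>=1.0); model name and project details
# are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Bare prompt: the model has to guess conventions, stack, and constraints.
bare = "Write a function that validates a user record."

# Context-rich prompt: treat the model like a smart new hire on day one
# and hand it the onboarding material up front.
system_context = (
    "You are assisting on a Python 3.11 service. "
    "Conventions: type hints everywhere, pydantic v2 models for validation, "
    "no external I/O inside validators, raise ValueError with a clear message."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_context},
        {"role": "user", "content": bare},
    ],
)
print(response.choices[0].message.content)
```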
An LLM hallucination occurs when a large language model generates a response that is factually incorrect, nonsensical, or disconnected from the input prompt. Hallucinations are a byproduct of language models' probabilistic nature: they generate responses based on patterns learned from vast datasets rather than on factual understanding.
Michael Webster
Principal Software Engineer, CircleCI
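A toy sketch of the probabilistic mechanism Webster describes: the model samples each next token from a learned distribution, so a fluent but wrong continuation can win. The prompt and token probabilities below are invented for illustration.

```python
# Toy illustration of probabilistic next-token generation: the model
# samples from a learned distribution, so a plausible-but-wrong answer
# can be produced with the same fluency as the correct one.
# Probabilities here are invented for illustration.
import random

# Prompt: "Python was first released in ..."
next_token_probs = {
    "1991": 0.40,   # correct continuation
    "1989": 0.35,   # plausible but wrong
    "1995": 0.15,
    "2001": 0.10,
}

def sample(probs: dict[str, float]) -> str:
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Roughly 60% of samples will complete the sentence incorrectly,
# delivered with exactly the same confidence as the correct answer.
print(sample(next_token_probs))
```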
AI hallucinations are often unintentional: AI models with access to large amounts of public information will take every input as factual. This highlights one of the weak points of AI and accentuates why a human element is still required in the development of code. AI cannot easily differentiate between fact and opinion, leading to compelling outputs that are factually incorrect.
Chetan Conikee
Co-Founder and CTO, Qwiet AI
By definition, all large language models are always "hallucinating" the next word. The implication is that any use of AI is going to produce answers that have the confidence of a 16-year-old who thinks they have the world figured out, even if they have the knowledge of a 3-year-old on the topic. The risk, therefore, is getting answers that look right but aren't at all. In the next few years we're going to see code that promises things it can't deliver, documentation claiming features exist that don't, and even companies claiming they meet compliance standards they don't actually meet — all because AI is going to be leveraged, poorly, in development.
Kendall Miller
Chief Extrovert, Axiom
I wouldn't blindly trust the code; after all, Microsoft itself advises against using LLMs to create legally binding materials.
Jon Collins
Analyst, Gigaom
AI DOESN'T SOLVE PROBLEMS
One misconception about software development is that writing code is the task or obstacle to overcome. In reality, software development is more about solving problems for very specific business goals and objectives. While AI boosts developer productivity, it still requires people who understand a business's domain, how the software relates to its goals, and the problems the software aims to solve.
Ed Charbeneau
Developer Advocate, Principal, Progress
In the developer world, writing code isn't necessarily the hardest part of the job. It's the art of problem solving. What is my technical or business problem and what is the solution? How do I achieve a positive outcome? AI doesn't do that. AI can efficiently generate the code, but the developer still needs to understand what's been generated and "where" it's going.
Robert Rea
CTO, Graylog
AI LACKS CREATIVITY
If we want to understand how AI is going to change software development, it's helpful to first recognize what AI can't do. Actually writing code is a small part of an engineer's job — good software development involves far more thinking than doing. Sure, AI can generate code, but it can't think for you, which means it can't think before executing code, and that brings great risk. This is the fundamental gap between humans and machines, and humans will always have the edge here. Humans will increasingly focus on more novel and creative work, while AI takes on the routine, undifferentiated heavy lifting. AI isn't going to reinvent the wheel of software development.
Shub Jain
Co-Founder and CTO, Auquan
AI DOESN'T UNDERSTAND INTENT BEHIND THE CODE
Using AI to support software development comes with several challenges, like misinterpreting outliers as actual problems developers should care about. Discerning between "weird" and "bad" is a nuance that still needs the attention of a human developer. This reflects AI's lack of contextual understanding: it can't fully grasp the intent behind the code.
Phil Gervasi
Director of Technical Evangelism, Kentik
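To make Gervasi's "weird" versus "bad" distinction concrete, here is a minimal sketch of a naive statistical check: it flags any outlier, but only a human with context can say whether a 2 a.m. spike is an incident or just a scheduled backup. The traffic numbers are invented for illustration.

```python
# A simple z-score check flags anything statistically "weird," but it
# cannot tell whether the anomaly is actually "bad"; that judgment
# still needs a human who knows the environment. Data is invented.
import statistics

hourly_mbps = [52, 48, 55, 50, 47, 53, 49, 51, 310]  # last point: 02:00 spike

mean = statistics.mean(hourly_mbps)
stdev = statistics.stdev(hourly_mbps)

for hour, value in enumerate(hourly_mbps):
    z = (value - mean) / stdev
    if abs(z) > 2:
        # "Weird" for certain; "bad" only if no benign explanation
        # (like a nightly backup job) exists.
        print(f"hour {hour}: {value} Mbps flagged (z={z:.1f})")
```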
DEPENDENCY MANAGEMENT
In large applications, there is often a complex taxonomy of libraries and frameworks. AI in its current form is not very sophisticated at adapting to an organization's frameworks.
Chris Du Toit
Head of Developer Relations, Gravitee
VOLUME OF OUTPUT
Code was never the biggest issue; rather, it was complexity. A significant risk is that we correctly create large quantities of code and applications, which will all need to be managed; they may solve for certain problems, but could duplicate effort, require securing, diverge from the original need and so on. We need to avoid becoming the sorcerer's apprentice from the outset. Success should be measured in outcomes, not in volume of outputs.
Jon Collins
Analyst, Gigaom
LACK OF DOCUMENTATION
AI-generated code tends to lack proper documentation and readability, making development and debugging more challenging.
Todd McNeal
Director of Product Management, SmartBear
ETHICAL QUESTIONS
Using AI to support development raises important ethical considerations. Ensuring AI-generated code is free from biases is crucial to prevent unintended consequences. Balancing AI's efficiency with ethical practices is essential to maintain trust and integrity in software development.
Pavan Belagatti
Technology Evangelist, SingleStore
There is a huge question of professional responsibility and ethics with AI-generated code. We expect developers to be professional and, in some cases, legally liable for negligence in the software they build. However, this system depends a lot on human and organizational safeguards like separation of duties, two-person rules, and continuing education. With AI, it might seem tempting to use it to maximum capacity, but it is harder to scale these organizational safeguards. It opens a lot of potential issues when it comes to standards, reviews, and the safety and security of software.
Michael Webster
Principal Software Engineer, CircleCI
INTELLECTUAL PROPERTY VIOLATIONS
Intellectual property violations pose a risk, as AI tools might inadvertently reproduce or suggest code that closely resembles proprietary work, which could result in legal complications.
Faiz Khan
CEO, Wanclouds
One of the troubling aspects of generative AI is the potential reuse of intellectual property during the creation process, and code generation is no exception. Developers may unknowingly use a coding assistant that generates code violating intellectual property law, exposing the organization to legal risk.
David Brault
Product Marketing Manager, Mendix
AI can reproduce code that already exists on the internet without identifying it as such, raising concerns around copyright infringement and license violations. It's important to know what data and sources the AI used in code generation.
Shourabh Rawat
Senior Director, Machine Learning, SymphonyAI
INCREASED COSTS
AI in development still needs human oversight, because LLMs specifically, and AI generally, can hallucinate and create code that might break or perform actions that create risk for the business. AI-generated code can have bugs and security vulnerabilities, be inefficient, and fail to properly address the functional requirements of the task. Without proper software engineering practices and quality standards in place, increased use of AI-generated code can raise a project's long-term maintenance costs.
Shourabh Rawat
Senior Director, Machine Learning, SymphonyAI
SKILLS GAP
Integrating AI solutions into existing workflows may require significant adjustments and skill development, posing a potential barrier to adoption. This can slow development cycles (when the intention was to speed production), so leaders should ensure the AI tools they invest in integrate easily with their existing tech stack and are compatible with existing team talent.
Rahul Pradhan
VP of Product and Strategy, Couchbase
AI raises significant questions about skill gaps and job displacement. The widespread adoption of AI in development can concern employees, particularly developers performing repetitive or easily automatable tasks. Addressing the potential skill gap and providing opportunities for upskilling will be critical in integrating AI successfully with existing processes and teams.
Dotan Nahum
Head of Developer-First Security, Check Point Software Technologies
DEMOCRATIZATION
As generative AI tools make code creation more accessible and open software development to even non-technical users, there is a growing risk to the quality and security of our software systems. Non-technical users may not grasp the intricacies of coding and may create code without understanding its potential long-term consequences.
Todd McNeal
Director of Product Management, SmartBear
TRAINING NEW DEVELOPERS
AI is a challenge for new developers who need training. Traditionally, new developers learned from seniors by doing all of the grunt work. If AI does all of the grunt work, then how will new developers learn?
David Brooks
SVP of Evangelism, Copado
Overuse of these tools can hinder the growth and learning curve of junior engineers who rely heavily on them instead of sitting and thinking through the problem at hand. While it's important for people earlier in their careers to use newer tools, it's equally important to go through the motions of doing many things manually to understand what automations can offer and, ultimately, where they might go wrong.
Phillip Carter
Principal Product Manager, Honeycomb
An over-reliance on AI negatively impacts the learning curve for younger engineers because it prevents them from understanding the root causes of issues and from developing problem-solving skills. We must balance AI usage with traditional learning methods to ensure newer engineers gain a deep understanding of software development fundamentals.
Shub Jain
Co-Founder and CTO, Auquan
What worries me more than wholesale automation is a scenario where senior developers learn to use AI well enough to automate beginner-level development tasks. If this happens in enough places, companies might not invest as much in junior talent. This would disrupt the mentorship model in which senior developers pass down knowledge to junior developers, creating a generational skills gap. Limits on current model capabilities, and the number of use cases where AI doesn't yet work well, make this a less imminent scenario, but it is something to consider. The current situation with COBOL, where developers are entering retirement while many mission-critical systems still rely on the language and are getting harder to support, is a good analogy for this scenario.
Michael Webster
Principal Software Engineer, CircleCI
I see AI widening the gap between junior and senior developers. Seniors will know how to use AI well; juniors won't — and in addition to becoming competent programmers, they will also need to learn how to use AI well. The difficulty of using AI well is almost always understated. Let's say that AI outputs a function you need. It works — but it's very inefficient. Do you just pass it on, or do you know enough to realize that it's inefficient? Do you know enough to know whether or not you care that it's inefficient? If you just check the function into your source repo, you're never going to learn. That's a big issue.
Mike Loukides
VP of Emerging Tech Content, O'Reilly Media
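Loukides' scenario is easy to reproduce. Below is a hypothetical pair of functions: both deduplicate a list correctly, but the first, written in a style assistants often produce, is O(n²), while the reviewed version is O(n). Spotting the difference is exactly the judgment he's describing.

```python
# A hypothetical example of Loukides' scenario: both functions "work,"
# but one is quadratic while the other is linear. Would a junior
# developer notice before checking it in?

def dedupe_naive(items: list) -> list:
    """Plausible AI output: correct, but rescans the result for every item."""
    result = []
    for item in items:
        if item not in result:   # O(n) membership test on a list
            result.append(item)
    return result

def dedupe_reviewed(items: list) -> list:
    """Same behavior, order-preserving, with O(1) set lookups."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:
            seen.add(item)
            result.append(item)
    return result
```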
Go to: Exploring the Power of AI in Software Development - Part 6: Security Challenges