Developers: Pick Your LLM Carefully
March 25, 2024

Peter Schneider
Qt Group

Software developers probably don't need to worry as much as they think about GenAI taking their jobs. But they do need to think twice about which language model they use. In fact, the Large Language Model (LLM) space is seeing something of a code generation arms race.

How do you know which one's right for you?

The size of your LLM matters

The hope behind LLMs is that they might help transform coders into architects. I say "hope" because mainstream models like GPT-4 can barely solve 5% of real-world development issues.

My own experience with chatbots for AI-assisted coding has been a frustrating endeavour. From hallucinating fake variables to recommending concepts that were deprecated a decade ago, they produce a lot of nonsense that can go unnoticed by the untrained eye. Even meticulous "prompt engineering" can sometimes only do so much. There's a sweet spot for how much context actually helps; beyond it, you just get more confused and random results at the cost of more processing power.

The pool that mainstream LLMs draw training data from has typically been too large, and that should be a huge concern for developers and organisations, not just on quality grounds. It's about trust. If the LLM you're using functions like a digital vacuum cleaner, hoovering up data without telling you where it's sourced from, that's a problem. You don't want to ship a product, only to then find out that a chunk of the code you generated actually came from another organisation's copyrighted code. Even a small snippet that an LLM accidentally reproduces verbatim from its training data could land a company in extremely hot legal water.
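As a toy illustration of that copying risk, here is a minimal sketch in Python that flags verbatim n-gram overlap between generated code and a reference corpus. The function names and the window size are hypothetical, and real provenance tooling is far more sophisticated than whitespace tokenisation, but the idea is the same: exact reuse of long-enough token runs is detectable.

```python
# Toy sketch: flag verbatim overlap between generated code and a known corpus.
# Not a real compliance tool; names and thresholds are illustrative only.

def ngrams(tokens, n=8):
    """Return the set of n-token windows in a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(generated: str, corpus: str, n: int = 8) -> float:
    """Fraction of the generated code's n-grams that also appear in the corpus."""
    gen = ngrams(generated.split(), n)
    if not gen:
        return 0.0
    ref = ngrams(corpus.split(), n)
    return len(gen & ref) / len(gen)

# Identical snippets overlap completely; unrelated ones do not.
snippet = "for (int i = 0; i < n; ++i) { total += data[i]; }"
print(overlap_ratio(snippet, snippet))          # 1.0
print(overlap_ratio(snippet, "def f(): pass"))  # 0.0
```

A high ratio would be a cue for a human to investigate the provenance of the generated code before shipping it.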

Want to use an LLM for coding? Use one that was built for coding

We're finally seeing LLMs from both Big Tech and small tech players that clearly demonstrate an effort to acknowledge the challenge developers face with AI-generated coding. Some are even trained on billions of tokens that pertain to specific languages like Python.

It's an exciting hint at where LLMs could yet go in terms of hyper-specialised relevance to coders. Looking more broadly at LLMs beyond code generation, we're seeing models as small as two billion parameters, small enough to run locally on a laptop. Such granular fine-tuning is great, but judging by how developers are responding to some market offerings, we need even more of it. Ask developers about their LLM pet peeves and you'll still hear a familiar pattern: complicated prompt formats, strict guardrails, and hallucinations, a reminder that any model is only as good as the data it's trained on.

Still, this tailored approach has drawn important attention to the fact that large language models are not the only way to succeed in AI-assisted code generation. There's more momentum than ever for smaller LLMs that focus exclusively on coding. Some are better at certain tasks than others, but if you want safety, go small. If you're just programming in C++, do you need extraneous "guff" knowledge on German folklore like, "who was the Pied Piper of Hamelin?" When you have a small data pool, it's easier for data to stay relevant, cheaper to train the model, and you're also far less likely to accidentally use another company's copyrighted data.

Research all your LLM options thoroughly, because there will no doubt be even more choice next year, and even more than that in five years. Don't pick what's popular because it's popular.

Development Means More Than Just Coding

Unless models reach 98-100% accuracy in their coding answers, I don't suspect GenAI will wholly replace humans for coding. But if it did, some are questioning whether software engineers would transition into "code reviewers" who simply verify AI-generated code instead of writing it.

Would they, though? They might if an organization has poor internal risk control processes. Good risk control involves using the four-eyes principle, which says that any activity of material risk (like shipping software) should be reviewed and double-checked by a second, independent, and competent individual. For the time being at least, I think we're a long way off from AI being reclassified as an independent and competent lifeform.
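As a toy illustration, the four-eyes principle reduces to a tiny check: an activity of material risk is only approvable when at least one reviewer other than the author has signed off. The function name and shape below are hypothetical, not from any real tooling.

```python
def approve_release(author: str, reviewers: set[str]) -> bool:
    """Four-eyes principle: a material-risk activity (like shipping
    software) needs sign-off from at least one independent, competent
    reviewer who is not the author."""
    independent_reviewers = reviewers - {author}
    return len(independent_reviewers) > 0

# The author approving their own work does not count.
print(approve_release("alice", {"alice"}))         # False
print(approve_release("alice", {"alice", "bob"}))  # True
```

The open question in the article is whether an AI model could ever count as that second, independent, competent pair of eyes.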

There's also the fact that end-to-end development, and things like building Human-Machine Interfaces, involve so much more than just coding. LLMs can respectably interact with text and elements in an image, and more tools are popping up that convert web designs into frontend code. But AI single-handedly assuming competent control of graphical and UI/UX design workflows? That's much harder than coding, though perhaps not impossible. And coding is only one part of development. The rest is inventing something novel, figuring out who the audience is, translating ideas into something buildable, and polishing. That's where the human element comes in.

Regardless of how good LLMs ever get, every programmer should treat every piece of code as if it were their own. Always do the peer review and ask a colleague, "is my code good?" Blind trust gets you nowhere.

Peter Schneider is Senior Product Manager at Qt Group
