Developers: Pick Your LLM Carefully
March 25, 2024

Peter Schneider
Qt Group

Software developers probably don't need to worry as much as they think about GenAI taking their jobs. But they do need to think twice about which language model they use. In fact, the Large Language Model (LLM) space is seeing something of a code generation arms race.

How do you know which one's right for you?

The size of your LLM matters

The hope behind LLMs is that they might help transform coders into architects. I say "hope" because mainstream models like GPT-4 can barely solve 5% of real-world development issues.

My own experience with chatbots for AI-assisted coding has been frustrating. From invented variables to concepts deprecated a decade ago, there's a lot of nonsense that can slip past the untrained eye. Even meticulous "prompt engineering" only goes so far. There's a sweet spot to how much context actually helps before it just produces more confused and random results at the cost of more processing power.
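That context sweet spot can be made concrete with a simple budgeting helper: keep only the most recent snippets that fit under a token cap, rather than stuffing the whole history into the prompt. This is a hypothetical sketch; the four-characters-per-token estimate is a crude heuristic, not any particular model's tokenizer.

```python
# Rough illustration of a prompt-context budget: keep the newest
# snippets whose estimated token cost fits under the cap.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate (~4 characters per token)."""
    return max(1, len(text) // 4)

def trim_context(snippets: list[str], budget: int) -> list[str]:
    """Keep the most recent snippets that fit the token budget."""
    kept: list[str] = []
    used = 0
    for snippet in reversed(snippets):  # newest entries come last
        cost = estimate_tokens(snippet)
        if used + cost > budget:
            break
        kept.append(snippet)
        used += cost
    return list(reversed(kept))  # restore chronological order

history = ["old setup notes " * 50, "recent stack trace", "current question"]
print(trim_context(history, budget=20))
# -> ['recent stack trace', 'current question']
```

The point is not the heuristic itself but the discipline: deciding what the model does not need to see is as much a part of prompting as deciding what it does.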

The pool that mainstream LLMs draw data from has typically been too large, and that should be a huge concern for developers and organisations, not just out of concern for quality. It's about trust. If the LLM you're using functions like a digital vacuum cleaner, without telling you where it's sourcing data from, that's a problem. You don't want to ship a product, only to then find out that a chunk of the code you generated is actually from another organisation's copyrighted code. Even a small bit of code that an LLM has accidentally generated as a copy of its training data could land a company in extremely hot legal water.
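One pragmatic mitigation is to screen generated code against an index of snippets you know you cannot ship. Real provenance tools do token-level fingerprinting and fuzzy matching; the sketch below is a deliberately minimal exact-match check after whitespace normalisation, with a hypothetical restricted-snippet index.

```python
import hashlib

def normalize(code: str) -> str:
    """Strip whitespace and case so trivial edits don't hide a copy."""
    return "".join(code.split()).lower()

def fingerprint(code: str) -> str:
    """Stable hash of the normalised snippet."""
    return hashlib.sha256(normalize(code).encode()).hexdigest()

# Hypothetical index of snippets under licences you cannot ship under.
restricted = {fingerprint("int secret_sauce(int x) { return x * 42; }")}

def looks_copied(generated: str) -> bool:
    """Flag generated code that matches a restricted snippet verbatim."""
    return fingerprint(generated) in restricted

print(looks_copied("int  secret_sauce(int x)  {  return x*42; }"))  # -> True
print(looks_copied("int helper(int x) { return x + 1; }"))          # -> False
```

A check like this catches only verbatim reproduction; it is a seatbelt, not a substitute for knowing where your model's training data came from.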

Want to use an LLM for coding? Use one that was built for coding

We're finally seeing LLMs from both Big Tech and small tech players that clearly demonstrate an effort to acknowledge the challenge developers face with AI-generated coding. Some are even trained on billions of tokens that pertain to specific languages like Python.

It's an exciting hint at where LLMs could yet go in terms of hyper-specialised relevance to coders. Looking more broadly at LLMs beyond code generation, we're seeing models as small as two billion parameters — so small you can run them locally on a laptop. Such granular fine-tuning is great, but judging by how some developers are responding to current market offerings, we need even more of it. Ask developers about their pet peeves with LLMs and you'll still hear a familiar pattern: complicated prompt formats, strict guardrails, and hallucinations — a reminder that any model is only as good as the data it's trained on.

Still, this tailored approach has drawn attention to an important fact: large language models are not the only way to succeed in AI-assisted code generation. There's more momentum than ever behind smaller LLMs that focus exclusively on coding. Some are better at certain tasks than others, but if you want safety, go small. If you're just programming in C++, do you really need extraneous "guff" knowledge of German folklore, like who the Pied Piper of Hamelin was? With a small data pool, it's easier to keep the data relevant, cheaper to train the model, and far less likely that you'll accidentally use another company's copyrighted data.

Research all your LLM options thoroughly, because there will no doubt be even more choice next year, and even more than that in five years. Don't pick what's popular because it's popular.

Development Means More Than Just Coding

Unless models reach coding-answer accuracy in the 98-100% range, I don't expect GenAI to wholly replace humans for coding. But if it did, some are questioning whether software engineers would transition into "code reviewers" who simply verify AI-generated code instead of writing it.

Would they, though? They might if an organisation has poor internal risk-control processes. Good risk control involves using the four-eyes principle, which says that any activity of material risk (like shipping software) should be reviewed and double-checked by a second, independent, and competent individual. For the time being at least, I think we're a long way off from AI being reclassified as an independent and competent lifeform.
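The four-eyes principle reduces to a simple invariant that a merge gate can enforce: at least one approval must come from someone other than the author. A minimal sketch, with illustrative names rather than any particular platform's API:

```python
def four_eyes_approved(author: str, approvals: set[str]) -> bool:
    """A material change ships only if someone other than the
    author has approved it; self-approval doesn't count."""
    return bool(approvals - {author})

# Self-review alone fails the gate; an independent reviewer passes it.
assert not four_eyes_approved("alice", {"alice"})
assert four_eyes_approved("alice", {"alice", "bob"})
```

Under the article's argument, an AI assistant would not qualify as the second pair of eyes, since it is neither independent of the code it generated nor accountable for it.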

There's also the fact that end-to-end development, including things like building Human-Machine Interfaces, involves far more than just coding. LLMs can respectably interact with text and elements in an image, and more tools are popping up that convert web designs into frontend code. But AI single-handedly assuming competent control of graphical and UI/UX design workflows? That's much harder than coding, though perhaps not impossible. And coding is only one part of development. The rest is investing in something novel, figuring out who the audience is, translating ideas into something buildable, and polishing. That's where the human element comes in.

Regardless of how good LLMs ever get, every programmer should treat every piece of code as if it were their own. Always do the peer review and ask a colleague, "is my code good?" Blind trust gets you nowhere.

Peter Schneider is Senior Product Manager at Qt Group