Navigating the Security Risks of AI Implementation
March 03, 2025

Geoff Burke
Object First

Today, there is an enormous amount of hype around AI, and it is also generating a lot of fear. Not a day goes by without some poster on LinkedIn warning that AI agents will take away whole slices of white-collar well-paying jobs soon. The situation is not helped by certain IT leaders beating their chest and warning that they will soon get back at all the lazy system administrators out there. Joking aside, I do believe we have things to worry about, but not entirely on the employment and bad boss front.

AI is being rushed in, and as often happens in human experience, the moment's excitement overshadows our precautionary common sense. At this point, the huge threat I foresee in AI implementation is security. The power of this new technology will be very unforgiving, and drivers of fast implementation, which tend to be the desire to make large amounts of fast money, could turn into financial and reputational nightmares of unimaginable proportions.

The slow implementation of Kubernetes was often blamed on a lack of understanding and the availability of technical skills in the job market. The situation with AI is the same but at a magnitude of 10 times, if not more. Unfortunately, I can see a situation where AI will speed up implementation by relying on AI. We will fast forward to a place where we might become helpless due to a lack of understanding and skillsets. So, in this regard, I ask all hyper-energized entrepreneurs and investors to think twice before signing off on any complete AI solution. The repercussions of going a bridge too far with AI could be exponentially worse than anything we have seen so far in the history of business.

The Security Implications of Leaving Human Oversight Out of AI

The key element to harnessing the power and security of AI, first and foremost, is to make certain that humans have the ultimate control over every aspect AI. We must have a a hard-wired stop/turn off/terminate button managed by an employee who has a full understanding of each process and procedure that AI can run. To put it simply, remember that automation project that started off so well and then went south because there was a lack of understanding of the whole process? Well, AI will turn that unfortunate bash script into a Frankenstein terminator with an attitude at which point you say "Hasta la Vista" to your production environment.

Now, let's discuss some key security concerns that everyone should consider. First, clients need to be careful with their cloud providers. Before putting a check mark in the privacy agreement box, investigate what data might be shared or stored. You don't want to risk leaking sensitive information that could reveal trade secrets or insider knowledge.

Next up, there's the issue of guardrails. These safety nets are meant to stop AI from straying into dangerous territory. But here's the catch: some clever folks can find ways around these guardrails using techniques like prompt injection attacks. This can lead to the AI revealing restricted information or going off-script, and let's be honest, that's not something anyone wants to deal with.

Then, there's the concern about bias in the AI's training data. If the data is skewed or unfair, the AI can make flawed decisions or reinforce stereotypes without anyone realizing it. This can have real-world consequences that impact people and businesses alike.

Lastly, consider Nvidia a cautionary tale. Their recent stock crash after the release of the DeepSeek model really highlighted how fragile things can get when the promises around AI don't match reality or when something new appears on the horizon even before it is fully tested or understood. Misunderstandings about capabilities can tank stock prices and shake confidence among investors.

From privacy agreements to guardrails to bias, organizations must be on the lookout when using AI to ensure they're not setting themselves up with unwanted surprises down the road.

When AI Falls into the Wrong Hands

And what about the bad guys? The bad guys will learn and leverage AI, which will cause huge challenges to data protection and security specialists, not to mention the potential scenario where the bad guys hijack an organization's AI. Malware and ransomware attacks will be looked back on with a tad of nostalgia when the entire IT infrastructure of your company is now working for an adversarial nation-state's AI! One counterargument to this is that there will be AI LLMS defending against the hacker LLMS.

There is a reason that some government computer systems are not connected to the internet. We must take a similar approach to AI until well-documented security guard rails are in place. Again, a human who fully understands the technology must always be available and capable, day or night, to pull the plug on AI without any opposition.

Let's consider some specific security risks that arise when AI falls into the wrong hands. One major concern is model poisoning. This happens when malicious actors intentionally sneak bad data into the AI's training process, causing it to learn incorrectly. Picture it as slipping a few rotten apples into a basket of fresh ones. If they succeed, the AI model could start making serious errors, which could lead to real problems for organizations.

Next, we have the issue of faster attack times. As technology evolves, hackers can strike with remarkable speed. Remember our discussion about how quickly vulnerabilities can be exploited? It's as if they have an express lane for chaos, and that leaves security teams racing to keep up, often without enough time to properly respond.

Finally, the use of AI agents is another security risk. These systems can take over tasks and even access sensitive information like credit card numbers. Here's where it gets tricky: if these agents figure out that having more control helps them perform better, they might try to grab extra permissions. This creates a vulnerability where these entities, driven by their programming, could justify hacking into systems just to fulfill their tasks more effectively.

Geoff Burke is Community Manager for Object First Aces
Share this

Industry News

March 18, 2025

Oracle announced the availability of Java 24, the latest version of the programming language and development platform. Java 24 (Oracle JDK 24) delivers thousands of improvements to help developers maximize productivity and drive innovation. In addition, enhancements to the platform's performance, stability, and security help organizations accelerate their business growth ...

March 18, 2025

Tigera announced an integration with Mirantis, creators of k0rdent, a new multi-cluster Kubernetes management solution.

March 18, 2025

SAP announced “Joule for Developer” – new Joule AI co-pilot capabilities embedded directly within SAP Build.

March 17, 2025

SUSE® announced several new enhancements to its core suite of Linux solutions.

March 13, 2025

Progress is offering over 50 enterprise-grade UI components from Progress® KendoReact™, a React UI library for business application development, for free.

March 13, 2025

Opsera announced a new Leadership Dashboard capability within Opsera Unified Insights.

March 13, 2025

Cycloid announced the introduction of Components, a new management layer enabling a modular, structured approach to managing cloud resources within the Cycloid engineering platform.

March 12, 2025

ServiceNow unveiled the Yokohama platform release, including ServiceNow Studio which provides a unified workspace for rapid application development and governance.

March 12, 2025

Sonar announced the upcoming availability of SonarQube Advanced Security.

March 12, 2025

ScaleOut Software introduces generative AI and machine-learning (ML) powered enhancements to its ScaleOut Digital Twins™ cloud service and on-premises hosting platform with the release of Version 4.

March 11, 2025

Kurrent unveiled a developer-centric evolution of Kurrent Cloud that transforms how developers and dev teams build, deploy and scale event-native applications and services.

March 11, 2025

ArmorCode announced the launch of two new apps in the ServiceNow Store.

March 10, 2025

Parasoft is accelerating the release of its C/C++test 2025.1 solution, following the just-published MISRA C:2025 coding standard.

March 10, 2025

GitHub is making GitHub Advanced Security (GHAS) more accessible for developers and teams of all sizes.

March 10, 2025

ArmorCode announced the enhanced ArmorCode Partner Program, highlighting its goal to achieve a 100 percent channel-first sales model.