Ever experience a serverless nightmare?
Hacker News contributor "huksley" has, and it's been a pricey wake-up call about the need to understand the complexities of configuration parameters in a serverless environment.
According to the tale of woe they posted earlier this year, huksley wound up DDoSing themselves: they had accidentally created a serverless function that called itself in a recursive loop, and it ran for 24 hours before it was caught, consuming over 70 million Gbps and running up a "shocking" bill of $4,600.
That's just one of the top serverless mistakes, and it was caused by not knowing that AWS billing alerts lag behind usage on the AWS CloudFront content delivery network, which collects charge information from all regions. That collection takes time and thus delays the billing alert, as huksley detailed.
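Because the alert itself can lag, a billing alarm is a backstop rather than a fix, but it is still worth having. Below is a minimal sketch of a CloudWatch alarm on the EstimatedCharges metric, which AWS publishes only in us-east-1. It assumes boto3 and configured credentials; the request parameters are built separately so the actual API call (shown commented out) stays optional.

```python
# Sketch: alarm when estimated monthly charges exceed a threshold.
# Billing metrics are published only in the us-east-1 region.

def billing_alarm_params(threshold_usd: float) -> dict:
    """Build put_metric_alarm parameters for an estimated-charges alarm."""
    return {
        "AlarmName": f"estimated-charges-over-{threshold_usd:g}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # billing data only updates a few times per day
        "EvaluationPeriods": 1,
        "Threshold": threshold_usd,
        "ComparisonOperator": "GreaterThanThreshold",
    }

params = billing_alarm_params(100.0)
print(params["AlarmName"])
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**params)
```

The threshold and alarm name here are illustrative; in practice you would also attach an SNS action so the alarm actually notifies someone.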
Read on for what we see as the top three serverless mistakes that can similarly get you into trouble.
Serverless: The new buzzword
First, some background about why "serverless" is becoming a buzzword in the application world. The term refers to a cloud-native development model that allows organizations to build and run their applications without having to manage the underlying server infrastructure.
Serverless applications offer instant scalability, high availability, greater business agility and improved cost efficiency. This dynamic flexibility helps save time and money across the entire software development life cycle (SDLC). An August 2022 report on the global serverless apps market forecasts that from 2022 to 2031, the market will record a compound annual growth rate (CAGR) of roughly 23%.
Still, serverless application security (AppSec) remains a serious issue. As it stands, traditional application security testing (AST) tools cannot provide adequate coverage, speed or accuracy to keep pace with the demands of serverless applications. In fact, as of April 2021, concerns about the dangers of misconfigured or quickly spun-up cloud-native (serverless or container-based) workloads had increased nearly 10% year-over-year.
There's good reason for the growing concern: for one, malicious actors are already targeting AWS. Here are the biggest serverless mistakes as we see them:
Mistake No. 1: Not understanding security gaps
Organizations often assume that AWS manages security for them, but that is only partly true. Once you write your own code, that code, and how it runs on the AWS Lambda infrastructure, falls under your responsibility as a developer or organization.
As such, you have to consider both code and configuration, given that the code is always the customer's responsibility.
Put plainly, under the AWS shared-responsibility model, organizations cannot rely on perimeter security alone. Rather, they need to protect themselves.
AWS is responsible for securing the underlying infrastructure, but developers must secure their serverless workloads and functions themselves, given that in serverless there is no perimeter to secure. Rather, each Lambda function must secure itself by following the "zero-trust" model, which entails:
1. Thorough, continuous authentication and authorization based on all available data.
2. The use of least-privilege access.
3. The assumption that a breach already exists: an assumption that supports the visibility provided by end-to-end encryption and analytics, which in turn improves defenses and threat detection.
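To make the least-privilege principle concrete, compare a wildcard policy (what generic templates often grant) with one scoped to a single action on a single bucket prefix. The bucket name is hypothetical, and the policies are shown as Python dicts purely for readability; they serialize to standard IAM JSON.

```python
import json

# Hypothetical scenario: a Lambda function that only needs to read
# objects under one prefix of one bucket.

# Over-broad policy -- every S3 action on every resource. Avoid this.
too_broad = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}],
}

# Least-privilege policy -- one action, one bucket prefix.
least_privilege = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-reports-bucket/incoming/*",
    }],
}

print(json.dumps(least_privilege, indent=2))
```

If the function's keys leak, the first policy hands an attacker the whole S3 estate; the second limits the blast radius to one prefix.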
Mistake No. 2: Using traditional tools
While serverless applications are gaining traction due to their benefits, traditional AST tools cause workflow inefficiencies that ultimately bottleneck serverless release cycles.
Traditional security tools, namely static application security testing (SAST) and dynamic application security testing (DAST), just aren't made to scan modern applications.
For example, the complexity of modern application programming interface (API) code, the frameworks that support it and the intricate interconnections between components are simply too much for static tools. Such tools produce an onslaught of false positives, and they miss serious vulnerabilities.
As well, in serverless-based applications, where the architecture is event-driven rather than synchronous (as in a monolithic application), code can be executed via numerous types of events: files, logs, code commits, notifications and even voice commands. Traditional tools just aren't built for that and cannot see beyond a simple REST API.
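To see why "a simple REST API" is not the whole picture, consider a single Lambda handler wired to several triggers. A minimal sketch follows; the event dicts are abbreviated versions of the real S3 and SNS payload shapes (note that SNS records use "EventSource" with a capital E, while S3 records use "eventSource").

```python
# Sketch of one Lambda handler invoked by different event sources.

def handler(event, context=None):
    records = event.get("Records", [])
    if not records:
        return "unknown"
    # S3 uses "eventSource"; SNS uses "EventSource".
    source = records[0].get("eventSource") or records[0].get("EventSource", "")
    if source == "aws:s3":
        key = records[0]["s3"]["object"]["key"]
        return f"s3 object: {key}"
    if source == "aws:sns":
        return f"sns message: {records[0]['Sns']['Message']}"
    return "unknown"

s3_event = {"Records": [{"eventSource": "aws:s3",
                         "s3": {"object": {"key": "uploads/report.csv"}}}]}
sns_event = {"Records": [{"EventSource": "aws:sns",
                          "Sns": {"Message": "deploy finished"}}]}
print(handler(s3_event))   # s3 object: uploads/report.csv
print(handler(sns_event))  # sns message: deploy finished
```

A DAST tool probing an HTTP endpoint never exercises the SNS path at all, which is exactly the visibility gap described above.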
Given their lack of visibility and accuracy, legacy tools depend on expert staff to do manual security triage as they attempt to diagnose and interpret the results before handing recommendations (with limited context) back to developers to fix the problems. After weeding out the high number of false positives, security teams are left to figure out which vulnerabilities should be addressed first. This inefficiency inhibits SDLCs, increases costs and often fails to eliminate many vulnerabilities that can be exploited by cyberattacks.
Static and dynamic tools don't scale well, typically requiring experts to set up and run the tool as well as to interpret the results.
All these reasons are why organizations are opting instead for purpose-built, context-based solutions. Serverless applications are a mix of code and infrastructure, and it is therefore essential to understand both. Organizations need a serverless solution that understands both the functions' code and their configuration, such as entry points (i.e., triggers) and Identity and Access Management (IAM) policies, and that provides customers with context-based insight into serverless risks.
Mistake No. 3: The dangers of misconfigurations
As "huksley" found out, serverless presents the potential for large overages with incorrect parameters. Without setting a limit on the number of requests allowed by a serverless function, the code could accidentally rack up numerous requests and create a large AWS charge.
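One concrete guardrail against a runaway loop is Lambda's reserved concurrency, which caps how many copies of a function can run at once. A sketch assuming boto3 is available; as before, the parameters are built separately and the API call itself is left commented out. The function name is hypothetical.

```python
# Sketch: cap parallel executions of a function to bound runaway costs.

def concurrency_cap_params(function_name: str, max_concurrent: int) -> dict:
    """Build put_function_concurrency parameters for a concurrency cap."""
    if max_concurrent < 0:
        raise ValueError("concurrency cap must be non-negative")
    return {
        "FunctionName": function_name,
        # 0 would block all invocations entirely -- an emergency brake.
        "ReservedConcurrentExecutions": max_concurrent,
    }

params = concurrency_cap_params("image-resize", 10)  # hypothetical function
print(params)
# import boto3
# boto3.client("lambda").put_function_concurrency(**params)
```

A cap of 10 would not have stopped huksley's recursive loop from starting, but it would have bounded how fast the charges could accumulate.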
Configuring function permissions is another major issue in serverless. Usually, developers use generic permission levels, which give functions far more permissions than they need. Combined with vulnerable or stolen keys, this can have a big impact in cloud computing: malicious actors may be able to steal information from databases or buckets because the Lambda permissions were set at a very broad level. Developers must instead apply the least permissions needed.
That isn't an easy task. Or, to be more precise, this task can be easy if you write only one function with 1,000 lines of code. But with dependencies, it becomes a little crazy: the code needs to run in order to understand what function is being carried out, and it must have enough permissions to execute that function.
Conclusion
In April 2022, Cado Security discovered Denonia, the first malware to specifically target AWS Lambda. More threats are sure to follow. Avoiding these top mistakes can help to secure your organization when they do.
To fend off such attacks, keep an eye out for free, open-source tools; they can help to secure your serverless workloads without breaking the bank.