According to a recent SANS Institute Survey titled Secure DevOps - Fact or Fiction?, only 10 percent of organizations report repairing critical vulnerabilities satisfactorily and in a timely manner. In a world where application vulnerabilities are the leading source of security breaches, that is a scary statistic, and something has to change.
However, to understand how to address this problem, we first need to understand the current state of application security. Application security operates in the Development (dev) and Production (prod) phases of the Software Development Lifecycle (SDLC). In dev, the goal is to find and fix vulnerabilities before releasing insecure code. In prod, the goal is to protect the application from all of its vulnerabilities. Theoretically, software providers only need one or the other, but since neither is foolproof, most companies employ both approaches. According to Gartner, there are three available code analysis techniques:
1. SAST
SAST (Static Application Security Testing) analyzes the application from the inside-out and is considered highly thorough because it leverages fundamental knowledge of vulnerabilities to inspect the source code. It can be used for any code as long as the programming language is supported, and is performed closest to dev, making it the least expensive way to find and fix vulnerabilities. However, traditional SAST scan times are slow, requiring hours or even days to complete, which doesn't work well in increasingly automated CI/CD environments. False positives are also an inherent part of the SAST process. Moreover, traditional SAST does not analyze an entire application (such as open source software, frameworks, etc.).
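To make the inside-out idea concrete, here is a minimal SAST-style check, a sketch rather than any real product's analysis. It uses Python's standard `ast` module to flag calls to any `.execute(...)` method whose first argument is built by string concatenation or an f-string, a classic SQL-injection pattern; the sample code and function names are hypothetical.

```python
import ast

def find_sql_injection_risks(source: str) -> list[int]:
    """Return line numbers of .execute() calls whose SQL is built dynamically."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                # BinOp covers "..." + var; JoinedStr covers f-strings
                and isinstance(node.args[0], (ast.BinOp, ast.JoinedStr))):
            findings.append(node.lineno)
    return findings

SAMPLE = '''
def lookup(cur, user_id):
    cur.execute("SELECT * FROM users WHERE id = " + user_id)     # flagged
    cur.execute("SELECT * FROM users WHERE id = ?", (user_id,))  # safe
'''
print(find_sql_injection_risks(SAMPLE))
```

Because the analysis works on source code alone, it can run before the application is ever deployed, which is exactly why SAST sits closest to dev.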
2. DAST
DAST (Dynamic Application Security Testing) probes the application from the outside in, treating it as a black box and testing exposed interfaces for vulnerabilities. DAST generally produces few false positives and can be performed even when the source code of the application is not available (for instance, with 3rd party applications). Unfortunately, DAST relies on test scripts to cover every scenario, and therefore on experts to write those scripts, making it difficult to scale. More importantly, by definition it only analyzes exposed interfaces, which presumes an attacker only has external access – yet insider threats and complex attacks are some of the most dangerous. DAST also gives the developer insufficient information on why and where a vulnerability exists.
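The black-box idea can be sketched in a few lines: send canned attack payloads to an exposed endpoint and watch the responses for signs of reflection. This is an illustrative toy, not a real scanner; the vulnerable echo handler below (built with Python's standard `http.server`) stands in for any running application under test.

```python
import http.server
import threading
import urllib.parse
import urllib.request

class EchoHandler(http.server.BaseHTTPRequestHandler):
    """Deliberately vulnerable: reflects the `name` parameter unescaped."""
    def do_GET(self):
        query = urllib.parse.urlparse(self.path).query
        name = urllib.parse.parse_qs(query).get("name", [""])[0]
        body = f"<html>Hello {name}</html>".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep output quiet

def probe(base_url: str) -> list[str]:
    """DAST-style probe: fire payloads at the exposed interface, note reflections."""
    payloads = ["<script>alert(1)</script>", "' OR '1'='1"]
    hits = []
    for p in payloads:
        url = base_url + "/?name=" + urllib.parse.quote(p)
        with urllib.request.urlopen(url) as resp:
            if p in resp.read().decode():  # payload echoed back verbatim
                hits.append(p)
    return hits

server = http.server.HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
findings = probe(f"http://127.0.0.1:{server.server_address[1]}")
server.shutdown()
print(findings)
```

Note what the probe cannot tell you: it knows the payload came back, but not which line of code failed to escape it, which is exactly the feedback gap described above.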
3. IAST
IAST (Interactive Application Security Testing) aims to improve on DAST by instrumenting the application to allow deeper analysis (beyond just exposed interfaces) and can be considered a superset of DAST. Its advantages and disadvantages are similar to those of DAST, with the added drawback that instrumenting the application means the tool must support the application's programming language. In particular, it can only be performed on languages that run in a managed runtime environment, such as Java, C#, Python, and JavaScript (Node.js).
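A rough sketch of what "instrumenting the application" means in practice: because the agent runs inside the process, it can wrap internal functions, not just exposed interfaces, and inspect real arguments while tests exercise the app. The wrapper, heuristic, and function names below are all hypothetical.

```python
import functools
import re

ALERTS: list[str] = []  # findings collected by the in-process agent

def instrument(fn):
    """IAST-style wrapper: observe real arguments at an internal sink."""
    @functools.wraps(fn)
    def wrapper(sql, *args, **kwargs):
        # A quoted literal spliced into the statement suggests
        # unparameterized user input reached the query builder.
        if re.search(r"=\s*'[^']*'", sql):
            ALERTS.append(f"possible injection in {fn.__name__}: {sql!r}")
        return fn(sql, *args, **kwargs)
    return wrapper

@instrument
def run_query(sql, params=()):
    return ("executed", sql, params)  # stand-in for a real DB call

# Exercised by ordinary functional tests:
run_query("SELECT * FROM users WHERE name = 'alice'")        # triggers an alert
run_query("SELECT * FROM users WHERE name = ?", ("alice",))  # clean
print(len(ALERTS))
```

The dependency on wrapping functions at runtime is also why IAST is tied to languages with a managed runtime: the agent needs a hook point inside the executing program.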
A New Approach
Clearly each approach has advantages and disadvantages. If we were to develop a better application security testing (AST) solution from scratch, what would it look like? First, its analysis would mirror the more comprehensive inside-out paradigm of SAST, but be much, much faster. Like DAST, it should analyze the entire application, including dependencies, 3rd party APIs and frameworks. After all, a hacker only needs one vulnerability in an entire application to wreak havoc.
The analysis also shouldn't be generic. Developers should be able to leverage their application knowledge to write new custom queries or edit existing ones. For instance, if a team has written a custom API to escape inputs, the tool needs to take this API into account. A better approach also recognizes that DevOps and CI/CD are the future, that any AST solution should integrate into CI/CD seamlessly, and that results should appear within minutes of each new build.
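The custom-query idea can be sketched as a user-tunable rule: the analyzer still flags string concatenation into `.execute(...)`, but a team-supplied allowlist of sanitizers suppresses the finding when input passes through one of those APIs first. The sanitizer name `escape_sql` and the sample code are hypothetical.

```python
import ast

SANITIZERS = {"escape_sql"}  # team-maintained: custom escaping APIs

def is_sanitized(node: ast.AST) -> bool:
    """True if the expression is a call to a known sanitizer."""
    return (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in SANITIZERS)

def find_risks(source: str) -> list[int]:
    """Flag concatenated SQL in .execute() unless a sanitizer intervenes."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], ast.BinOp)):
            concat = node.args[0]
            if not (is_sanitized(concat.left) or is_sanitized(concat.right)):
                findings.append(node.lineno)
    return findings

SAMPLE = '''
cur.execute("SELECT 1 WHERE x = " + user_input)              # flagged
cur.execute("SELECT 1 WHERE x = " + escape_sql(user_input))  # suppressed
'''
print(find_risks(SAMPLE))
```

Without the allowlist, the second call would be a false positive, the kind of noise that erodes developer trust in any scanner.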
In addition to finding vulnerabilities, a better approach must understand the flow of an application, so that even when no clear vulnerability has been identified (a false negative), comparing runtime behavior against the application's inherent flow can reveal when the application has been successfully exploited.
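One way to picture flow-based monitoring, purely as a sketch with hypothetical function names: learn the application's normal call transitions during a training phase, then alert when production traffic drives the app through an edge never observed before.

```python
from collections import defaultdict

class FlowMonitor:
    """Learn normal caller->callee transitions, then flag deviations."""
    def __init__(self):
        self.edges = defaultdict(set)  # caller -> callees seen in training
        self.learning = True
        self.alerts = []

    def observe(self, caller: str, callee: str):
        if self.learning:
            self.edges[caller].add(callee)
        elif callee not in self.edges[caller]:
            self.alerts.append(f"unexpected flow: {caller} -> {callee}")

monitor = FlowMonitor()
# Training phase: normal traffic exercises the expected flow.
monitor.observe("handle_login", "check_password")
monitor.observe("check_password", "load_profile")
monitor.learning = False
# Production: an attacker bypasses authentication entirely.
monitor.observe("handle_login", "load_profile")
print(monitor.alerts)
```

Note that no signature or known vulnerability is involved: the alert fires because the observed behavior contradicts the application's learned flow.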
As noted earlier, SAST by itself is incomplete, so it should be combined with the ability to take data from the production environment to address otherwise inherent reachability challenges. This would require a microagent that deeply instruments the application (like IAST) and is designed around the stringent performance and stability requirements of a production environment. Because this microagent is designed for production, it should easily run in QA, where, driven by QA/security test scripts, it can function as an "enhanced" IAST. Yet unlike IAST, the microagent should learn from SAST where it needs to instrument the application. For example, if the application is not vulnerable to SQL injection, why instrument it and alert on SQL injection patterns?
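The SAST-guided idea can be sketched as follows, with all names hypothetical: the microagent only wraps functions that static analysis flagged, so code paths with no findings pay zero runtime cost.

```python
import functools

# Functions that static analysis flagged as reachable sinks (hypothetical).
SAST_FINDINGS = {"render_template"}

INSTRUMENTED = []  # record which functions the microagent actually wrapped

def microagent(fn):
    """Wrap a function only if SAST flagged it; otherwise leave it untouched."""
    if fn.__name__ not in SAST_FINDINGS:
        return fn  # no finding: no instrumentation, no overhead
    INSTRUMENTED.append(fn.__name__)
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        # ...runtime checks for the specific vulnerability class would go here...
        return fn(*args, **kwargs)
    return wrapper

@microagent
def render_template(tpl):
    return tpl

@microagent
def health_check():
    return "ok"

print(INSTRUMENTED)  # only the SAST-flagged function is wrapped
```

This is the inverse of the blanket-monitoring approach criticized below: the static analysis decides where runtime attention is actually warranted.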
Given that so few organizations report satisfactory and timely repair of critical vulnerabilities, there will be unfixed vulnerabilities deployed into the production environment regardless of how good an AST tool chain is. Yet today's typical security approach is to deploy a tool or appliance that continuously alerts on threats, regardless of whether the application itself is vulnerable to that particular threat.
In contrast, the better approach instruments the application based on SAST findings, ensuring the protection is both performant and accurate. It will also tell the developer about the vulnerability and the specific location in the code that needs to be fixed. That is continuous improvement: code analysis informs runtime protection, and runtime traffic informs code analysis, which is the key goal of every application security program.
Industry News
Copado announced the general availability of Test Copilot, the AI-powered test creation assistant.
SmartBear has added no-code test automation powered by GenAI to its Zephyr Scale, the solution that delivers scalable, performant test management inside Jira.
Opsera announced that two new patents have been issued for its Unified DevOps Platform, now totaling nine patents issued for the cloud-native DevOps Platform.
mabl announced the addition of mobile application testing to its platform.
Spectro Cloud announced the achievement of a new Amazon Web Services (AWS) Competency designation.
GitLab announced the general availability of GitLab Duo Chat.
SmartBear announced a new version of its API design and documentation tool, SwaggerHub, integrating Stoplight’s API open source tools.
Red Hat announced updates to Red Hat Trusted Software Supply Chain.
Tricentis announced the latest update to the company’s AI offerings with the launch of Tricentis Copilot, a suite of solutions leveraging generative AI to enhance productivity throughout the entire testing lifecycle.
CIQ launched fully supported, upstream stable kernels for Rocky Linux via the CIQ Enterprise Linux Platform, providing enhanced performance, hardware compatibility and security.
Redgate launched an enterprise version of its database monitoring tool, providing a range of new features to address the challenges of scale and complexity faced by larger organizations.
Snyk announced the expansion of its current partnership with Google Cloud to advance secure code generated by Google Cloud’s generative-AI-powered collaborator service, Gemini Code Assist.
Kong announced the commercial availability of Kong Konnect Dedicated Cloud Gateways on Amazon Web Services (AWS).
Pegasystems announced the general availability of Pega Infinity ’24.1™.