The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced the launch of the Cybersecurity Skills Framework, a global reference guide that helps organizations identify and address critical cybersecurity competencies across a broad range of IT job families, extending well beyond cybersecurity specialists.
DevOps leaders are engaged in an all-out effort to "shift left" so they can deliver better software faster and at lower cost. Much of this effort entails fairly dramatic re-engineering of the dev/test process. And, if we're honest, much of it also entails a management culture of extreme demands on the development and test team.
But while we're building our state-of-the-art DevOps toolchains and pumping our people full of Hint Kick, there may be another smart way to shift even further left:
Re-think build distribution infrastructure.
The Physics of Process
Process and management culture can't overcome the laws of physics. And if you have to share massive builds or artifacts across multiple dev and test teams worldwide (as financial institutions, game developers, and other organizations often do), physics definitely gets in your way.
That's because you have to keep shipping these massive files over the network to those multiple locations again and again as you cycle through your dev/test processes. On the typical enterprise network, that code distribution can take hours.
Consider a 10GB build distributed from your primary facility to five remote locations over a 10Mbps MPLS connection. You don't have to be a network expert to do the math: 10GB works out to roughly 80,000 megabits. If a couple of your remote locations are limited to 5Mbps connections, it takes them about 5 hours to pull down each build once protocol overhead is factored in. If three of those locations have 2Mbps connections (as is likely the case in Asia), code distribution takes around 12.5 hours.
So the physics of code distribution costs you a day. Repeatedly.
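If you want to sanity-check those figures, here's a minimal sketch in Python; the build size, link speeds, and the roughly 12% protocol overhead are illustrative assumptions, not measurements of any particular network:

```python
# Rough estimate of build-transfer time for the scenario above.
# All numbers are illustrative; real links add their own overhead and contention.

def transfer_hours(build_gb: float, link_mbps: float, overhead: float = 0.12) -> float:
    """Hours to move a build of `build_gb` gigabytes over a `link_mbps` link,
    treating `overhead` as the fraction of bandwidth lost to protocol overhead."""
    megabits = build_gb * 8_000                  # 1 GB is roughly 8,000 megabits
    effective_mbps = link_mbps * (1 - overhead)  # usable throughput
    return megabits / effective_mbps / 3_600     # seconds -> hours

for mbps in (10, 5, 2):
    print(f"{mbps} Mbps link: ~{transfer_hours(10, mbps):.1f} hours per 10GB build")
```

Run it and the roughly 5-hour and 12.5-hour figures above fall straight out of the link speeds. Nothing exotic is going wrong; it's just arithmetic.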
The irony is that the more agile and iterative you try to get in this scenario, the more you pay this distribution time-tax. Conventional code distribution is therefore a primary enemy of the Agile/DevOps "shift left" imperative.
The Shift-Enabling Alternative
The alternative to the conventional ship-it-over-the-network-and-wait-a-day approach to build distribution is a hub-and-spoke model that lets you maintain a "gold copy" of your current codebase(s) in the cloud, while providing all your remote locations with their own local copies that are continuously and automatically updated with any changes as they occur.
This model eliminates network-related bottlenecks while allowing your geographically dispersed teams to collaborate without tripping over each other's work.
The result: You can shift left much more aggressively, without the constant counter-productive impediment of a network that can't deliver your builds fast enough to your entire team.
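To put a number on that, here's a hypothetical comparison of shipping the full build every cycle versus syncing only what changed. The 5% change rate, link speed, and overhead figure are assumptions for illustration, not claims about any particular product:

```python
# Hypothetical comparison: shipping the whole build vs. syncing only the delta.
# Assumes ~5% of a 10GB build changes per dev/test cycle over a 5 Mbps link.

BUILD_GB = 10
CHANGE_RATE = 0.05   # fraction of the build that changes between cycles (assumption)
LINK_MBPS = 5
OVERHEAD = 0.12      # approximate protocol overhead (assumption)

def hours(gigabytes: float, mbps: float) -> float:
    """Hours to transfer `gigabytes` over an `mbps` link after overhead."""
    return (gigabytes * 8_000) / (mbps * (1 - OVERHEAD)) / 3_600

full_copy  = hours(BUILD_GB, LINK_MBPS)                # conventional: ship everything
delta_sync = hours(BUILD_GB * CHANGE_RATE, LINK_MBPS)  # hub-and-spoke: ship only changes

print(f"Full copy per cycle:  ~{full_copy:.1f} hours")
print(f"Delta sync per cycle: ~{delta_sync * 60:.0f} minutes")
```

Even with generous assumptions about how much of the build changes per cycle, remote sites go from waiting hours for every build to waiting minutes.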
Of course, if you're leading the shift-left efforts at your company, you probably don't own your company's IT infrastructure. So you'll have to make your case to whoever does.
But it's a worthwhile effort. Hub-and-spoke code distribution gives software-intensive businesses a competitive advantage by dramatically accelerating time-to-market for digital deliverables, while ensuring that test/QA rigor doesn't unnecessarily delay that delivery. It also saves infrastructure owners significant money on storage, bandwidth, and network acceleration hardware.
So if you want to shift left — but keep running into a chronic network bottleneck — have that conversation today. It'll be a win-win for your business and your budget!
Barry Phillips is CMO of Panzura.
Industry News
CodeRabbit is now available on the Visual Studio Code editor.
The integration brings CodeRabbit’s AI code reviews directly into Cursor, Windsurf, and VS Code at the earliest stages of software development—inside the code editor itself—at no cost to the developers.
Chainguard announced Chainguard Libraries for Python, an index of malware-resistant Python dependencies built securely from source on SLSA L2 infrastructure.
Sysdig announced the donation of Stratoshark, the company’s open source cloud forensics tool, to the Wireshark Foundation.
Pegasystems unveiled Pega Predictable AI™ Agents that give enterprises extraordinary control and visibility as they design and deploy AI-optimized processes.
Kong announced the introduction of the Kong Event Gateway as part of its unified API platform.
Azul and Moderne announced a technical partnership to help Java development teams identify, remove and refactor unused and dead code to improve productivity and dramatically accelerate modernization initiatives.
Parasoft has added Agentic AI capabilities to SOAtest, featuring API test planning and creation.
Zerve unveiled a multi-agent system engineered specifically for enterprise-grade data and AI development.
LambdaTest, a unified agentic AI and cloud engineering platform, announced a partnership with MacStadium, the industry-leading private Mac cloud provider enabling enterprise macOS workloads, to accelerate its AI-native software testing by leveraging Apple Silicon.
Tricentis announced a new capability that injects Tricentis’ AI-driven testing intelligence into SAP’s integrated toolchain, part of RISE with SAP methodology.
Zencoder announced the launch of Zen Agents, delivering two innovations that transform AI-assisted development: a platform enabling teams to create and share custom agents organization-wide, and an open-source marketplace for community-contributed agents.
AWS announced the preview of the Amazon Q Developer integration in GitHub.
The OpenSearch Software Foundation, the vendor-neutral home for the OpenSearch Project, announced the general availability of OpenSearch 3.0.