Backslash introduced a new, free resource for vibe coders, developers, and security teams: the Backslash MCP Server Security Hub.
Oracle has expanded its collaboration with NVIDIA to help customers streamline the development and deployment of production-ready AI, develop and run next-generation reasoning models and AI agents, and access the computing resources needed to further accelerate AI innovation.
As part of the initiative, NVIDIA AI Enterprise, an end-to-end, cloud-native software platform, is now available natively through the Oracle Cloud Infrastructure (OCI) Console. In addition, NVIDIA GB200 NVL72 systems on OCI Supercluster are now generally available with up to 131,072 NVIDIA Blackwell GPUs. Oracle has also become one of the first hyperscalers to integrate with NVIDIA DGX Cloud Lepton, an AI platform with a compute marketplace that connects developers with a global network of GPU compute.
“Oracle has become the platform of choice for AI training and inferencing, and our work with NVIDIA boosts our ability to support customers running some of the world’s most demanding AI workloads,” said Karan Batta, senior vice president, Oracle Cloud Infrastructure. “Combining NVIDIA’s full-stack AI computing platform with OCI’s performance, security, and deployment flexibility enables us to deliver AI capabilities at scale to help advance AI efforts globally.”
“Developers need the latest AI infrastructure and software to rapidly build and launch innovative solutions,” said Ian Buck, vice president of hyperscale and HPC, NVIDIA. “With OCI and NVIDIA, they get the performance and tools to bring ideas to life, wherever their work happens.”
Unlike NVIDIA AI Enterprise offerings that are available only through a marketplace, OCI makes the platform natively available through the OCI Console and lets customers purchase it with their existing Oracle Universal Credits. This shortens the time it takes to deploy the service and gives customers direct billing and support. In addition, with NVIDIA AI Enterprise on OCI, customers can quickly and easily access more than 160 AI tools for training and inference, including NVIDIA NIM microservices, a set of optimized, cloud-native inference microservices designed to simplify the deployment of generative AI models. With this end-to-end set of training and inference capabilities on OCI, customers can combine them with OCI services for building applications and managing data across a range of distributed cloud deployment options.
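To make the NIM piece concrete, here is a minimal sketch of how an application might call a deployed NIM microservice, which exposes an OpenAI-compatible API. The base URL, port, model identifier, and placeholder API key below are illustrative assumptions, not OCI-specific values.

```python
# Hypothetical example: querying a deployed NVIDIA NIM microservice through its
# OpenAI-compatible endpoint. Host, port, and model name are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed NIM endpoint; replace with your deployment URL
    api_key="not-used",                   # local/self-hosted NIM deployments typically ignore this
)

response = client.chat.completions.create(
    model="meta/llama3-8b-instruct",      # placeholder model identifier served by the NIM container
    messages=[{"role": "user", "content": "Summarize what NVIDIA NIM microservices do."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```

Because the interface is a standard OpenAI-style chat completions endpoint, the same client code works regardless of where the NIM container is hosted; on OCI it would simply point at the deployed service's URL.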
By making NVIDIA AI Enterprise available through the OCI Console, Oracle is helping customers to deploy it across OCI’s distributed cloud, which includes OCI’s public regions, Government Clouds, OCI sovereign cloud solutions, OCI Dedicated Region, Oracle Alloy, OCI Compute Cloud@Customer, and OCI Roving Edge Devices. This helps customers address security, regulatory, and compliance requirements when developing, deploying, and operating their enterprise AI stack.
To help meet the increasing need for AI training and inference, Oracle and NVIDIA continue to evolve AI infrastructure with new NVIDIA GPU types across Oracle’s distributed cloud. For example, OCI now offers liquid-cooled NVIDIA GB200 NVL72 systems on OCI Supercluster that can scale to up to 131,072 NVIDIA GPUs. In addition, customers can now use thousands of NVIDIA Blackwell GPUs on NVIDIA DGX Cloud and OCI to develop and run next-generation reasoning models and AI agents.
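For developers who want to see which of these GPU options are actually visible in their own tenancy, the sketch below uses the OCI Python SDK to list GPU-backed compute shapes. It assumes a standard ~/.oci/config file; specific shape names and Blackwell availability vary by region and subscription, so treat the output as environment-dependent.

```python
# Minimal sketch, assuming the OCI Python SDK and a configured ~/.oci/config file.
# It lists the compute shapes visible to the tenancy and prints the GPU-backed ones.
import oci

config = oci.config.from_file()              # reads the default OCI CLI/SDK configuration
compute = oci.core.ComputeClient(config)

shapes = oci.pagination.list_call_get_all_results(
    compute.list_shapes,
    compartment_id=config["tenancy"],
).data

for shape in shapes:
    if shape.gpus:                           # Shape.gpus is the GPU count; falsy for CPU-only shapes
        print(f"{shape.shape}: {shape.gpus} x {shape.gpu_description}")
```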
Oracle’s distributed cloud, AI infrastructure, and generative AI services, combined with NVIDIA accelerated computing and generative AI software, are enabling governments and enterprises to deploy AI factories. These new AI factories leverage the NVIDIA GB200 NVL72 platform, a rack-scale system that combines 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs, to help deliver exceptional performance and energy efficiency for agentic AI accelerated by advanced AI reasoning models.
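As a rough, back-of-the-envelope illustration of how the rack-scale figures relate to the Supercluster ceiling quoted above, the announced numbers imply on the order of 1,800 NVL72 racks at full scale:

```python
# Back-of-the-envelope arithmetic using only the figures quoted in this announcement:
# 36 Grace CPUs and 72 Blackwell GPUs per GB200 NVL72 rack, up to 131,072 GPUs per Supercluster.
GPUS_PER_RACK = 72
CPUS_PER_RACK = 36
MAX_GPUS = 131_072

racks = MAX_GPUS / GPUS_PER_RACK
print(f"~{racks:.0f} NVL72 racks")                  # roughly 1,820 racks at the quoted maximum
print(f"~{racks * CPUS_PER_RACK:.0f} Grace CPUs")   # roughly 65,536 Grace CPUs alongside them
```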
To help developers easily access the advanced GPU resources they need to further accelerate AI development and deployment, Oracle is one of the first hyperscalers to integrate with NVIDIA DGX Cloud Lepton. This integration enables developers to access OCI’s high-performance GPU clusters and the scalable compute needed for AI training and inference, digital twins, and massively parallel HPC applications. It also helps developers support strategic and sovereign AI goals by allowing them to tap into GPU compute capacity in specific regions for both on-demand and long-term computing.
Industry News
Google announced that Gemma 3n, the latest member of its family of open models, is now fully available to developers with its complete feature set, including support for image, audio, video, and text.
Google announced that Imagen 4, its latest text-to-image model, is now available in paid preview in Google AI Studio and the Gemini API.
Payara announced the launch of Payara Qube, a fully automated, zero-maintenance platform designed to revolutionize enterprise Java deployment.
Google released its new AI-first Colab to all users, following an early access period that drew a very positive response from the developer community.
Salesforce announced new MuleSoft AI capabilities that enable organizations to build a foundation for secure, scalable AI agent orchestration.
Harness announced the General Availability (GA) of Harness AI Test Automation, an AI-native, end-to-end test automation solution that is fully integrated across the entire CI/CD pipeline and built to meet the speed, scale, and resilience demanded by modern DevOps.
With AI Test Automation, Harness aims to eliminate the bottlenecks of manual and brittle testing and help teams deliver quality software faster.
Wunderkind announced the release of Build with Wunderkind — an API-first integration suite designed to meet brands and developers where they are.
Jitterbit announced the global expansion of its partner program and new Jitterbit University partner curricula.
Tricentis unveiled two innovations that aim to redefine the future of software testing for the enterprise.
Snyk announced the acquisition of Invariant Labs, an AI security research firm and early pioneer in developing safeguards against emerging AI threats.
ActiveState expanded support of secure open source to include free and customized low-to-no vulnerability containers that facilitate modern software development.
Pythagora launched an all-in-one AI development platform that enables users to build and deploy full-stack applications from a single prompt.
Cloudflare announced that Containers is in public beta.
The Linux Foundation, the nonprofit organization enabling mass innovation through open source, announced the launch of the Agent2Agent (A2A) project, an open protocol created by Google for secure agent-to-agent communication and collaboration.