AWS announced the preview of the Amazon Q Developer integration in GitHub.
Couchbase announced that its Capella AI Model Services have integrated NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, to streamline deployment of AI-powered applications, providing enterprises a powerful solution for privately running generative AI (GenAI) models.
Capella AI Model Services, which were recently introduced as part of a comprehensive Capella AI Services offering for streamlining the development of agentic applications, provide managed endpoints for LLMs and embedding models so enterprises can meet privacy, performance, scalability and latency requirements within their organizational boundary. Capella AI Model Services, powered by NVIDIA AI Enterprise, minimize latency by bringing AI closer to the data, combining GPU-accelerated performance and enterprise-grade security to empower organizations to seamlessly operate their AI workloads. The collaboration enhances Capella’s agentic AI and retrieval-augmented generation (RAG) capabilities, allowing customers to efficiently power high-throughput AI-powered applications while maintaining model flexibility.
“Enterprises require a unified and highly performant data platform to underpin their AI efforts and support the full application lifecycle – from development through deployment and optimization,” said Matt McDonough, SVP of product and partners at Couchbase. “By integrating NVIDIA NIM microservices into Capella AI Model Services, we’re giving customers the flexibility to run their preferred AI models in a secure and governed way, while providing better performance for AI workloads and seamless integration of AI with transactional and analytical data. Capella AI Services allow customers to accelerate their RAG and agentic applications with confidence, knowing they can scale and optimize their applications as business needs evolve.”
Capella AI Model Services streamline agent application development and operations by keeping models and data colocated in a unified platform, supporting agentic operations in real time. For example, agent conversation transcripts must be captured and compared in real time to improve model response accuracy. Capella also delivers built-in capabilities like semantic caching, guardrail creation and agent monitoring with RAG workflows.
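For readers unfamiliar with the term, semantic caching reuses a previously generated model response when a new prompt is semantically close to one already answered, cutting inference cost and latency. The following is a minimal, library-agnostic Python sketch of that general idea, not Couchbase's Capella API: the embed_fn and llm_fn callables and the similarity threshold are placeholder assumptions supplied by the caller.

    # Sketch of semantic caching: reuse a cached LLM response when a new
    # prompt's embedding is similar enough to a previously answered prompt.
    # embed_fn and llm_fn are hypothetical placeholders, not a vendor API.
    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    class SemanticCache:
        def __init__(self, embed_fn, llm_fn, threshold=0.9):
            self.embed_fn = embed_fn    # maps text -> list[float]
            self.llm_fn = llm_fn        # maps prompt -> response string
            self.threshold = threshold  # minimum similarity to reuse a response
            self.entries = []           # list of (embedding, response) pairs

        def query(self, prompt):
            vec = self.embed_fn(prompt)
            # Find the most similar previously answered prompt, if any.
            best = max(self.entries, key=lambda e: cosine(vec, e[0]), default=None)
            if best and cosine(vec, best[0]) >= self.threshold:
                return best[1]          # cache hit: skip the model call
            response = self.llm_fn(prompt)  # cache miss: call the model
            self.entries.append((vec, response))
            return response

The sketch only illustrates why colocating embeddings with operational data matters: every cache lookup is a similarity search, so keeping that search next to the data avoids an extra network round trip per request.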
Capella AI Model Services with NVIDIA NIM provide Couchbase customers a cost-effective solution that accelerates agent delivery by simplifying model deployment while maximizing resource utilization and performance. The solution leverages pre-tested LLMs and tools, including NVIDIA NeMo Guardrails, to help organizations accelerate AI development while enforcing policies and safeguards against AI hallucinations. NVIDIA's rigorously tested, production-ready NIM microservices are optimized for reliability and fine-tuned for specific business needs.
“Integrating NVIDIA AI software into Couchbase’s Capella AI Model Services enables developers to quickly deploy, scale and optimize applications,” said Anne Hecht, senior director of enterprise software at NVIDIA. “Access to NVIDIA NIM microservices further accelerates AI deployment with optimized models, delivering low-latency performance and security for real-time intelligent applications.”
Industry News
The OpenSearch Software Foundation, the vendor-neutral home for the OpenSearch Project, announced the general availability of OpenSearch 3.0.
Wix.com announced the launch of the Wix Model Context Protocol (MCP) Server.
Pulumi announced Pulumi IDP, a new internal developer platform that accelerates cloud infrastructure delivery for organizations at any scale.
Qt Group announced plans for significant expansion of the Qt platform and ecosystem.
Testsigma introduced autonomous testing capabilities to its automation suite — powered by AI coworkers that collaborate with QA teams to simplify testing, speed up releases, and elevate software quality.
Google is rolling out an updated Gemini 2.5 Pro model with significantly enhanced coding capabilities.
BrowserStack announced the acquisition of Requestly, the open-source HTTP interception and API mocking tool that eliminates critical bottlenecks in modern web development.
Jitterbit announced the evolution of its unified AI-infused low-code Harmony platform to deliver accountable, layered AI technology — including enterprise-ready AI agents — across its entire product portfolio.
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, and Synadia announced that the NATS project will continue as part of the CNCF's cloud native open source ecosystem, with Synadia's continued support and involvement.
RapDev announced the launch of Arlo, an AI Agent for ServiceNow designed to transform how enterprises manage operational workflows, risk, and service delivery.
Check Point® Software Technologies Ltd. announced that its Quantum Firewall Software R82 — the latest version of Check Point’s core network security software delivering advanced threat prevention and scalable policy management — has received Common Criteria EAL4+ certification, further reinforcing its position as a trusted security foundation for critical infrastructure, government, and defense organizations worldwide.
Postman announced full support for the Model Context Protocol (MCP), helping users build better AI Agents, faster.
Opsera announced new Advanced Security Dashboard capabilities available as an extension of Opsera's Unified Insights for GitHub Copilot.