IBM announced IBM Test Accelerator for Z, a solution designed to modernize testing on IBM Z by expediting the shift-left approach and fostering smooth collaboration between z/OS developers and testers.
One third (34%) of organizations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the accompanying risks of generative AI (GenAI), according to a new survey from Gartner, Inc.
Over half (56%) of respondents said they are also exploring such solutions.
A quarter (26%) of survey respondents said they are currently implementing or using privacy-enhancing technologies (PETs), ModelOps (25%) or model monitoring (24%).
"In addition to implementing security tools, IT and security and risk management leaders must consider supporting an enterprise-wide strategy for AI TRiSM (trust, risk and security management)," said Avivah Litan, Distinguished VP Analyst at Gartner. "AI TRiSM manages data and process flows between users and companies who host generative AI foundation models, and it must be a continuous effort, not a one-off exercise, to continuously protect an organization."
IT Is Ultimately Responsible for GenAI Security
While 93% of IT and security leaders surveyed said they are at least somewhat involved in their organization's GenAI security and risk management efforts, only 24% said they own this responsibility.
Among the respondents that do not own the responsibility for GenAI security and/or risk management, 44% reported that the ultimate responsibility for GenAI security rested with IT. For 20% of respondents, their organization's governance, risk, and compliance departments owned the responsibility.
Top-of-Mind Risks
The risks associated with GenAI are significant, continuous and will constantly evolve. Survey respondents indicated that undesirable outputs and insecure code are among their top-of-mind risks when using GenAI:
■ 57% of respondents are concerned about leaked secrets in AI-generated code.
■ 58% of respondents are concerned about incorrect or biased outputs.
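As an illustration of the leaked-secrets concern above, the sketch below scans a string of generated code for credential-like patterns. The patterns and the example snippet are hypothetical simplifications; production secret scanners use far larger, vendor-maintained rule sets.

```python
import re

# Hypothetical patterns for two common credential formats (illustrative only).
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def find_secrets(code: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs found in a code string."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(code):
            hits.append((name, match.group(0)))
    return hits

# Example: a snippet an assistant might emit with a hard-coded key.
snippet = 'api_key = "ZXhhbXBsZXNlY3JldGtleTEy"\nprint("hello")'
print(find_secrets(snippet))
```

Running a check like this on AI-generated code before it is committed is one small piece of the application-security tooling the survey respondents describe.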
"Organizations that don't manage AI risk will witness their models not performing as intended and, in the worst case, causing human or property damage," said Litan. "This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organizations to make poor business decisions."
Methodology: The Gartner Peer Community survey was conducted from April 1 to April 7 among 150 IT and information security leaders at organizations where GenAI or foundational models are in use, in plans for use, or being explored.
Industry News
StreamNative launched Ursa, a Kafka-compatible data streaming engine built on top of lakehouse storage.
GitKraken acquired code health innovator CodeSee.
ServiceNow introduced a new no‑code development studio and new automation capabilities to accelerate and scale digital transformation across the enterprise.
Security Innovation has added new skills assessments to its Base Camp training platform for software security training.
CAST introduced CAST Highlight Extensions Marketplace — an integrated marketplace for the software intelligence product where users can effortlessly browse and download a diverse range of extensions and plugins.
Red Hat and Elastic announced an expanded collaboration to deliver next-generation search experiences supporting retrieval augmented generation (RAG) patterns using Elasticsearch as a preferred vector database solution integrated on Red Hat OpenShift AI.
Traceable AI announced an Early Access Program for its new Generative AI API Security capabilities.
StackHawk announced a new integration with Microsoft Defender for Cloud to help organizations build software more securely.
MacStadium announced that it has obtained Cloud Security Alliance (CSA) Security, Trust & Assurance Registry (STAR) Level 1, meaning that MacStadium has publicly documented its compliance with CSA's Cloud Controls Matrix (CCM). MacStadium also joined the CSA, the world's leading organization dedicated to defining and raising awareness of best practices for a secure cloud computing environment.
The Cloud Native Computing Foundation® (CNCF®) released the two-day schedule for CloudNativeSecurityCon North America 2024 happening in Seattle, Washington from June 26-27, 2024.
Sumo Logic announced new AI and security analytics capabilities that allow security and development teams to align around a single source of truth and collect and act on data insights more quickly.
Red Hat announced an optional additional 12-month Extended Update Support (EUS) term for OpenShift 4.14 and subsequent even-numbered Red Hat OpenShift releases in the 4.x series.
HAProxy Technologies announced the launch of HAProxy Enterprise 2.9.
ArmorCode announced the general availability of AI Correlation in the ArmorCode ASPM Platform.