Can We Move Forward with the Open Source AI Definition?
December 04, 2024

Ben Cotton
Kusari

You can't observe what you can't see. It's why application developers and DevOps engineers collect performance metrics, and it's why developers and security engineers like open source software. The ability to inspect code increases trust in what the code does. With the increased prevalence of generative AI, there's a desire to have the same ability to inspect the AI models. Most generative AI models are black boxes, so some vendors are using the term "open source" to set their offerings apart.

But what does "open source AI" mean? There's no generally-accepted definition.

The Open Source Initiative (OSI) defines what "open source" means for software. Its Open Source Definition (OSD) is a broadly-accepted set of criteria for what makes a software license open source. New licenses are reviewed in public by a panel of experts to evaluate their compliance with the OSD. OSI focuses on software, leaving "open" in other domains to other bodies, but since AI models are, at the simplest level, software plus configuration and data, OSI is a natural home for creating a definition of open source AI.


What's Wrong with the OSAID?

The OSI attempted to do just that. The Open Source AI Definition (OSAID), released at the end of October, represents a collaborative attempt to craft a set of rules for what makes an AI model "open source." But while the OSD is generally accepted as an appropriate definition of "open source" for software, the OSAID has received mixed reviews.

The crux of the criticism is that the OSAID does not require that the training data be available, only "data information." Version 1.0 and its accompanying FAQ require that training data be made available if possible, but still permit providing only a description when the data is "unshareable." OSI's argument is that the laws that cover data are more complex, and have more jurisdictional variability, than the laws governing copyrightable works like software. There's merit to this, of course. The data used to train AI models includes copyrightable works like blog posts, paintings, and books, but it can also include sensitive and protected information like medical histories and other personal information. Model vendors that train on sensitive data couldn't legally share their training data, OSI argues, so a definition that requires it is pointless.

I appreciate the merits of that argument, but I — and others — don't find it compelling enough to craft the definition of "open source" around it, especially since model vendors can find plausible reasons to claim they can't share training data. The OSD is not defined based on what is convenient; it's defined based on what protects certain rights for the consumers of software. The same should be true for a definition of open source AI. The fact that some models cannot meet the definition should mean that those models are not open source; it should not mean that the definition is changed to be more convenient. If no models could possibly meet a definition of open, that would be one thing. But many existing models do, and more could if their developers chose.

Any Definition Is a Starting Point

Despite the criticisms of the OSI's definition, a flawed definition is better than no definition. Companies use AI in many ways: from screening job applicants to writing code to creating social media images to running customer service chatbots. Any of these uses poses the risk of reputational, financial, and legal harm. Companies that use AI need to know exactly what they're getting — and not getting. A definition for "open source AI" doesn't eliminate the need to carefully examine an AI model, but it does give a starting point.

The current OSD has evolved over the last two decades; it is currently on version 1.9. It stands to reason that the OSAID will evolve as people use it to evaluate real-world AI models. The criticisms of the initial version may inform future changes that result in a more broadly-accepted definition. In the meantime, other organizations have announced their own efforts to address deficiencies in the OSAID. The Digital Public Goods Alliance — a UN-endorsed initiative — will continue to require published training data in order to grant Digital Public Good status to AI systems.

It is also possible that we'll change how we speak about openness. Just like OSD-noncompliant movements like Ethical Source have introduced a new vocabulary, open source AI may force us to recognize that openness is a spectrum on several axes, not simply a binary attribute.

Ben Cotton is Head of Community at Kusari