What does ‘open source AI’ mean, anyway?
The struggle between open source and proprietary software is well understood. But the tensions permeating software circles for decades have shuffled into the burgeoning artificial intelligence space, with controversy in hot pursuit.
The New York Times recently published a gushing appraisal of Meta CEO Mark Zuckerberg, noting how his "open source AI" embrace had made him popular once more in Silicon Valley. The problem, though, is that Meta's Llama-branded large language models aren't really open source.
Or are they?
By most estimations, they aren't. But the question highlights how the notion of "open source AI" is only going to stir more debate in the years to come. This is something that the Open Source Initiative (OSI) is trying to address, led by executive director Stefano Maffulli (pictured above), who has been working on the problem for over two years through a global effort spanning conferences, workshops, panels, webinars, reports and more.
AI ain't software code
The OSI has been a steward of the Open Source Definition (OSD) for more than a quarter of a century, setting out how the term "open source" can, or should, be applied to software. A license that meets this definition can legitimately be deemed "open source," though the OSI recognizes a spectrum of licenses ranging from extremely permissive to not quite so permissive.
But transposing legacy licensing and naming conventions from software onto AI is problematic. Joseph Jacks, open source evangelist and founder of VC firm OSS Capital, goes as far as to say that there is "no such thing as open-source AI," noting that "open source was invented explicitly for software source code."
In contrast, "neural network weights" (NNWs), a term used in the world of artificial intelligence to describe the parameters or coefficients through which the network learns during the training process, aren't in any meaningful way comparable to software.
"Neural net weights are not software source code; they are unreadable by humans, nor are they debuggable," Jacks notes. "Furthermore, the fundamental rights of open source also don't translate over to NNWs in any congruent manner."
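Jacks' distinction is easy to demonstrate: a trained model's parameters are just large arrays of floating-point numbers. A minimal, hypothetical sketch, using random numbers as a stand-in for learned weights:

```python
import numpy as np

# Hypothetical stand-in for one layer's learned parameters.
# Real models hold millions or billions of such numbers.
rng = np.random.default_rng(42)
weights = rng.normal(size=(4, 3))

# Unlike source code, the values carry no readable logic:
# there is nothing here to step through, diff, or patch by hand.
print(weights)
```

Publishing such arrays under an open license tells you what the model computes, but not how those values were arrived at.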
This led Jacks and OSS Capital colleague Heather Meeker to come up with their own definition of sorts, around the concept of "open weights."
So before we've even arrived at a meaningful definition of "open source AI," we can already see some of the inherent tensions in trying to get there. How can we agree on a definition if we can't agree that the "thing" we're defining exists?
Maffulli, for what it's worth, agrees.
"The point is correct," he told TechCrunch. "One of the initial debates we had was whether to call it open source AI at all, but everyone was already using the term."
This mirrors some of the challenges in the broader AI sphere, where debates abound on whether the thing that we're calling "AI" today really is AI or just powerful systems taught to spot patterns among vast swathes of data. But naysayers are mostly resigned to the fact that the "AI" nomenclature is here, and there's no point fighting it.
Founded in 1998, the OSI is a not-for-profit public benefit corporation that works on a myriad of open source-related activities around advocacy, education and its core raison d'être: the Open Source Definition. Today, the organization relies on sponsorships for funding, with sponsors as esteemed as Amazon, Google, Microsoft, Cisco, Intel, Salesforce and Meta.
Meta's involvement with the OSI is particularly notable right now as it pertains to the notion of "open source AI." Despite Meta hanging its AI hat on the open-source peg, the company has notable restrictions in place regarding how its Llama models can be used: Sure, they can be used gratis for research and commercial use cases, but app developers with more than 700 million monthly users must request a special license from Meta, which it will grant purely at its own discretion.
Put simply, Meta's Big Tech brethren can whistle if they want in.
Meta's language around its LLMs is somewhat malleable. While the company did call its Llama 2 model open source, with the arrival of Llama 3 in April, it retreated somewhat from the terminology, using phrases such as "openly available" and "openly accessible" instead. But in some places, it still refers to the model as "open source."
"Everyone else that is involved in the conversation is perfectly agreeing that Llama itself cannot be considered open source," Maffulli said. "People I've spoken with who work at Meta, they know that it's a little bit of a stretch."
On top of that, some might argue that there's a conflict of interest here: Should a company that has shown a desire to piggyback off the open source branding also help fund the stewards of "the definition"?
This is one of the reasons why the OSI is trying to diversify its funding, recently securing a grant from the Sloan Foundation, which is helping to fund its multi-stakeholder global push to reach the Open Source AI Definition. TechCrunch can reveal this grant amounts to around $250,000, and Maffulli is hopeful that this can alter the optics around its reliance on corporate funding.
"That's one of the things that the Sloan grant makes even more clear: We could say goodbye to Meta's money anytime," Maffulli said. "We could do that even before this Sloan grant, because I know that we're going to be getting donations from others. And Meta knows that very well. They're not interfering with any of this [process], neither is Microsoft, or GitHub or Amazon or Google. They absolutely know that they cannot interfere, because the structure of the organization doesn't allow that."
Working definition of open source AI
The current Open Source AI Definition draft sits at version 0.0.8 and comprises three core parts: the "preamble," which lays out the document's remit; the Open Source AI Definition itself; and a checklist that runs through the components required for an open source-compliant AI system.
As per the current draft, an Open Source AI system should grant the freedoms to use the system for any purpose without seeking permission; to study how the system works and inspect its components; and to modify and share the system for any purpose.
But one of the biggest challenges has been around data: Can an AI system be classified as "open source" if the company hasn't made the training dataset available for others to poke at? According to Maffulli, it's more important to know where the data came from and how a developer labeled, de-duplicated and filtered it, as well as to have access to the code that was used to assemble the dataset from its various sources.
"It's much better to know that information than to have the plain dataset without the rest of it," Maffulli said.
While having access to the full dataset would be nice (the OSI makes this an "optional" component), Maffulli says that it's not possible or practical in many cases. This might be because the dataset contains confidential or copyrighted information that the developer doesn't have permission to redistribute. Moreover, there are ways to train machine learning models without the data itself ever being shared directly, using techniques such as federated learning, differential privacy and homomorphic encryption.
And this perfectly highlights the fundamental differences between "open source software" and "open source AI": The intentions might be similar, but they are not like-for-like comparable, and this disparity is what the OSI is trying to capture in its definition.
In software, source code and binary code are two views of the same artifact: They reflect the same program in different forms. But training datasets and the trained models that result from them are distinct things: Given the same dataset, you won't necessarily be able to re-create the same model consistently.
"There is a variety of statistical and random logic that happens during the training that means it cannot make it replicable in the same way as software," Maffulli added.
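A toy, hypothetical training loop illustrates the point: the code and the dataset are identical across two runs, yet a different random initialization leaves the final parameters slightly different, something that doesn't happen when you compile the same source code twice.

```python
import numpy as np

def train(X, y, seed, steps=100, lr=0.1):
    """Toy gradient-descent 'training' of a linear model."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])   # random starting point
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Identical data and identical code for both runs.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])

w_run1 = train(X, y, seed=1)
w_run2 = train(X, y, seed=2)

# Both runs land near the same solution, but the parameters
# do not match bit-for-bit.
print(w_run1, w_run2)
```

Real training pipelines add further sources of divergence (data shuffling, parallel floating-point reduction order, hardware differences), which is why the emphasis falls on clear instructions and component checklists rather than bit-exact reproducibility.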
So an open source AI system should be easy to replicate, with clear instructions. And this is where the checklist facet of the Open Source AI Definition comes into play; it is based on a recently published academic paper called "The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence."
This paper proposes the Model Openness Framework (MOF), a classification system that rates machine learning models "based on their completeness and openness." The MOF demands that specific components of the AI model development be "included and released under appropriate open licenses," including training methodologies and details around the model parameters.
Stable condition
The OSI is calling the official launch of the definition the "stable version," much like a company does with an application that has undergone extensive testing and debugging ahead of prime time. The OSI is purposefully not calling it the "final release," because parts of it will likely evolve.
"We can't really expect this definition to last for 26 years like the Open Source Definition," Maffulli said. "I don't expect the top part of the definition (such as 'what is an AI system?') to change much. But the parts that we refer to in the checklist, those lists of components, depend on technology. Tomorrow, who knows what the technology will look like?"
The stable Open Source AI Definition is expected to be rubber-stamped by the board at the All Things Open conference at the tail end of October, with the OSI embarking on a global roadshow in the intervening months spanning five continents, seeking more "diverse input" on how "open source AI" will be defined moving forward. But any final changes are likely to be little more than "small tweaks" here and there.
"This is the final stretch," Maffulli said. "We have reached a feature-complete version of the definition; we have all the elements that we need. Now we have a checklist, so we're checking that there are no surprises in there; there are no systems that should be included or excluded."