Is the term Open Source LLM now just a marketing slogan?

This week, with the announcement of Mistral Large, we had to acknowledge what many of us already knew:

Open Source LLMs are nothing but a marketing slogan

Let me explain.

The views expressed here are, as usual, mine alone.

The term Open Source LLM is nothing but a marketing slogan because open source has (wrongly) painted itself into the corner of always being 'good' - when at best, it is neutral.

As has been widely reported, the next version of Mistral will not be open source:

"Unlike some of Mistral’s previous models, it won’t be open source. “Mistral Large achieves strong results on commonly used benchmarks, making it the world’s second-ranked model generally available through an API (next to GPT-4),” says the Mistral AI team."

Let's take a step back to the beginning.

Copyleft licences are the purest form of open source. The best-known example is the GPL.

Then you have the permissive licences; the best example is the MIT licence.

The main difference lies in what you must do with your changes.

Copyleft licences essentially mandate that you give your changes back to the community under the same licence. I personally prefer the permissive licences: you can use an existing code base to build a community, and the recipient can still build unique IP on top of the code, which they are not obligated to relinquish.

Then came the cloud - and that changed the open source game.

As early as 2008, Richard Stallman warned that cloud computing was a trap (for open source).

I think he also (correctly) realised that cloud computing makes open source irrelevant, since the locus of power shifts elsewhere. Now the same story repeats with open source LLMs.

In essence, open source LLMs release their weights and architecture but not the training data.

That means open source LLMs face the same problems as closed source ones from the standpoint of transparency, bias, fairness, copyright, privacy, knowledge gaps, propensity to hallucinate, and so on.
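To make this concrete, here is a minimal sketch - an illustration only, assuming the Hugging Face transformers library is installed and using the publicly released mistralai/Mistral-7B-v0.1 weights - of what an 'open' model actually gives you: the architecture and the parameters, but nothing about the data behind them.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library is
# installed and the open weights "mistralai/Mistral-7B-v0.1" are available
# (downloading them needs substantial disk space and memory).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The architecture and tokenizer are fully inspectable...
print(model.config)
print(len(tokenizer))

# ...and so are the weights (roughly 7 billion parameters)...
print(sum(p.numel() for p in model.parameters()))

# ...but nothing in these artefacts tells you what data the model was
# trained on, so bias, copyright and privacy questions remain opaque.
```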

Gary Marcus, with whom I mostly disagree, sees open source LLMs as a threat.

But the specific threats apart, as I discuss above, sharing weights is only part of the solution. In that sense, open source and closed source are the same, and open source is tangential to LLMs (just as it was to the cloud).

Now, how does this apply to the EU AI Act? In my view, apart from the lobbying efforts by some companies claiming open source LLMs are 'good' - not at all.

I am actually cautiously optimistic about the AI Act.

As it stands, it does not regulate on model parameter size (nor do I believe it should), and it has a lot of good provisions on the protection of individuals. Companies also need the AI Act for procurement guidelines.

All of this, in my view, applies equally to closed and open source.

I thus believe the propensity to take the moral high ground has painted open source LLMs into a corner.

In another parallel, we saw the same with Google last week over the Gemini image controversy.

The problem here is not the specific issue.

That can happen; this is a new area for everyone. The problem is that for months Google has been saying that they are testing their AI, that it will be safer, better, more reliable, and so on. That is the real damage (the moral high ground). That position, in my view, is fundamentally untenable.

As is the claim that open source LLMs are 'good'.

It is the same idea as Google's old 'don't be evil' slogan. I see some of the same activism and altruism in the open source LLM movement as well.

It is not possible to sustain, as we saw this week with the Mistral U-turn.

When companies or initiatives take the moral high ground on what is, by definition, essentially a commercial venture, they paint themselves into a corner from which there is no easy way out.

To conclude, what I am saying is:

1) At best, open source LLMs are a tangential issue - not the central issue by any means.

2) Open source LLMs are neither good nor bad; it is best to simply see them as part of the ecosystem.

3) The AI Act has many benefits as it stands. The only reason open source got mixed into it was lobbying; at best it is, again, tangential.

4) I am referring here only to open source LLMs, not to open source itself.

5) This does not apply to tools which are open sourced, e.g. LangChain and LlamaIndex; it applies specifically to the open source LLM itself.

6) Developing LLMs is expensive, and it will be interesting to see whether this is a broader trend to recover the commercial investment (or else closed source LLMs will always be better than open source ones).

7) Over time, cloud platforms are forming partnerships with multiple LLMs. If this trend continues, we will see a move towards platform-agnostic open source tools such as LangChain and LlamaIndex (see the sketch below).
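On point 7, here is a minimal sketch of what platform-agnostic means in practice. It assumes the langchain-openai and langchain-mistralai packages are installed with API keys set in the environment; the package and model names are illustrative and vary by version.

```python
# A minimal sketch, assuming the "langchain-openai" and "langchain-mistralai"
# packages are installed and the relevant API keys are set in the environment.
from langchain_openai import ChatOpenAI
from langchain_mistralai import ChatMistralAI

# Application code targets a common interface, so the underlying LLM
# (open or closed) becomes a swappable component.
llm = ChatOpenAI(model="gpt-4")
# llm = ChatMistralAI(model="mistral-large-latest")  # a one-line swap

response = llm.invoke("Summarise the EU AI Act in one sentence.")
print(response.content)
```

The point is that the tool, not the model, carries the portability.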

The views expressed above are my own and not those of any organization I am associated with.

You can meet us at the Oxford AI Summit.

We also announced two of our well-known courses at the University of Oxford: the Low Code AI course (open to non-developers) and the Digital Twins for AI course.

Image source: OpenAI / DALL-E


Marek Kulbacki, PhD


I agree with you Ajit Jaokar. While intended to foster transparency and collaboration, open-source LLMs face ethical and legal hurdles from the outset due to complex IP rights and data sources. The vast datasets used for training, resulting in models with a "mixture of broken laws" subject to diverse regulations, challenge the alignment with open-source ideals. The absence of explainable AI also leaves the open-source promise of transparency and accountability unmet, indicating a need for careful, ethical, and legal consideration in LLM development and deployment. If the creation process of an LLM weakens or undermines existing regulations, calling such a creation 'open' is fraudulent. It attempts to justify actions that violate rights under the guise of a supposed higher necessity. For such a concept to make sense, new rules must be modified and established for all other activities, which would turn the world upside down.


Since the late '90s, one could argue, open source has been a marketing term. While it changes with the license and the decade, one common theme is the general ability of the public to contribute to the project, which is an ongoing and dynamic effort. To my understanding, this isn't happening in the LLM space yet.

Open sourcing LLMs may not be economically sustainable in the current ecosystem.

Paul Golding


I agree. "Open" has been used as a marketing grift for some time -- e.g. Android, React etc. Often there is no open contribution. There is no technical reason why a model could not be fully open source. I believe that Falcon is the closest: open source weights and open dataset (refinedweb), about which they wrote a separate paper describing how they prepared it to achieve certain performance levels -- i.e. they also openly published their performance insights. (Apache Lic. 2, but with restrictions on hosting use -- I guess so they can reserve the right to monetize API calls to the native model.) Regarding my previous post, the author's POV was about who gets to gate-keep AI generative outputs by deciding upon the imposition of opaque human-alignment rules to prevent "misuse". With some open models, licensees could modify the system -- i.e. remove alignment. The author thinks this should be banned lest modifiers do something nefarious.

Raymond Sun, this is my view on open source LLMs also. Paul Golding - my comment was overdue from a previous post.
