
Meta and IBM’s newly formed AI Alliance launches with big names already on board: Intel, Dell and Red Hat. However, the initiative has found less traction among the most influential AI developers. What can the AI Alliance accomplish without support from OpenAI, Google and Anthropic?

Arguing that it “benefits security,” the AI Alliance is trying to convince more AI developers of the value of open source. Improved cybersecurity is supposed to follow from a mechanism of self-regulation, one that can only work if a wide variety of profiles takes part. The more than 50 members include developers, researchers, academic institutions and companies.

IBM says it launched the initiative together with Meta. With a list of fifty members from different regions and with different interests, the initiative does not look like an empty shell. Moreover, several large companies with commercial interests are joining: AMD, Dell Technologies, Hugging Face, Intel, Oracle, Red Hat, ServiceNow, Sony and Stability AI. The non-profit Linux Foundation is also on the list.

‘Self-regulation’ and ‘open source’ scare some off

Still, important names are missing from the list. Academic and scientific knowledge, for instance, will come mainly from non-European academics. Universities from our region may be held back from participating by the upcoming European AI regulation. That act still seems to lean towards obligations and intends to attach consequences to non-compliance; whether those penalties are necessary is currently being debated among member states. Universities may fear that joining an alliance built on self-regulation would also signal a political stance.

Read also: Europe divided: what will remain of the AI Act?

In addition, no one can have failed to notice that some influential AI companies are absent from the list. OpenAI, its investment partner Microsoft and its competitors Anthropic and Google will have been put off by the open-source angle. These parties reason that once a model is open source, they risk it being copied. That, in turn, would hurt their commercial interests.

To avoid having to open up their models in the name of self-regulation, these parties have long advocated government regulation instead. The proposed rules would include a list of security and testing requirements, requirements the parties are only too happy to draw up themselves. That way, they can intervene if regulation threatens to dictate that the models or training data behind ChatGPT, for example, must be released.

Also read: Are OpenAI, Microsoft and Google lobbying their way out of the AI Act?

Double standards at Microsoft

Meta, at least, makes no secret of being happy with its own open-source policy. Just before the alliance was announced, Yann LeCun, an AI researcher at Meta, spoke with Bloomberg about the benefits of sharing AI technology. In the announcement, much the same message came from Nick Clegg, president of global affairs at Meta: “We believe it’s better when development of AI happens openly, more people enjoy the benefits, build innovative products and work to ensure its safety.”

Llama 2, a rival to OpenAI’s GPT models and Anthropic’s Claude, is perhaps Meta’s best-known open-source model. It is free for both research and commercial use.

Developing and running Llama 2 is not something Meta does entirely on its own. In fact, Microsoft has lent a hand in open-sourcing the technology ever since Llama 2 was announced. Through the partnership, Microsoft has been able to position itself as the preferred partner for Llama 2. As a result, the model runs best on the Windows operating system and in the Azure cloud. The tech giant earned this preferential treatment through a long-standing partnership that has already produced an open ecosystem for interchangeable AI frameworks as well as research papers on the state of AI.

“Now, with this expanded partnership, Microsoft and Meta are supporting an open approach to provide increased access to foundational AI technologies to the benefits of businesses globally,” the companies said at the launch of Llama 2. That same preferred partner has now not signed up to the AI Alliance. With statements like those made at the launch of Llama 2, Microsoft seems to have manoeuvred itself into a dual position. As OpenAI’s largest backer, it can never fully commit to the idea of making all AI technology open source.

Goals repeat ‘global’ and ‘responsible’

Judging by the goals of the AI Alliance, IBM and Meta believe they can influence the field of AI development even without these players. Two ambitions repeated several times in the goals stand out. First, it is clear the influence of the Alliance’s work should not be limited to a specific region. Such a limitation would also threaten the second objective: responsible and secure AI development.

In general, the alliance looks set to become the face of a group that counterbalances the views of OpenAI and Google. The AI Alliance reveals the group’s political ambitions at the end of its announcement, where it says it plans to ‘collaborate with key existing government initiatives’. It will have mixed success with that, as not all politicians are keen on self-regulation. Then again, a dissenting opinion on the importance of an open-source approach to AI development could spark interesting political debates.