The technology lets users of an AI model contribute hardware capacity, distributing operating costs across the network.

Natural language processing (NLP) has captured the world's attention over the past few weeks. This branch of artificial intelligence focuses on speech and text. NLP models have been available for years, but OpenAI’s ChatGPT is one of the most user-friendly options to date. Users visit the website, ask a question and receive credible text or code.

The cost of large NLP models is one of the biggest reasons for the multi-year gap between the technology’s initial availability and its recent breakthrough. Tom Goldstein, associate professor at the University of Maryland, recently estimated that OpenAI spends nearly €100,000 a day to keep ChatGPT running. That amounts to roughly €3 million a month.

The model depends on pricey GPUs. As OpenAI offers ChatGPT for free, the organization largely bears the cost. IT professionals in the open-source AI community are working on an alternative. Dubbed Petals, the project allows large NLP models to be hosted on a distributed network that lets users contribute hardware capacity.


The technology is already in use. Install an open-source library and follow the instructions on the project website to connect to the Petals network. Upon connecting, users gain access to Bloom, an open-source NLP model with functionality similar to ChatGPT’s. The model answers questions with credible text.
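Conceptually, the Petals client works like pipeline parallelism: the model's transformer blocks are partitioned across volunteer peers, and each request's intermediate activations hop from peer to peer until the full model has run. The sketch below illustrates that idea in plain Python with made-up names and no real networking; the actual library additionally handles peer discovery, fault tolerance and GPU execution.

```python
# Conceptual sketch of Petals-style distributed inference: the model's
# layers are split across volunteer peers, and a forward pass routes
# activations through the chain of peers. All names are illustrative.

class Peer:
    """A volunteer hosting a contiguous slice of the model's layers."""
    def __init__(self, layers):
        self.layers = layers  # simple callables standing in for transformer blocks

    def forward(self, activations):
        for layer in self.layers:
            activations = layer(activations)
        return activations


class DistributedModel:
    """Client-side view: chains the peers so they behave like one model."""
    def __init__(self, peers):
        self.peers = peers

    def forward(self, x):
        for peer in self.peers:  # in reality: RPC calls over the network
            x = peer.forward(x)
        return x


# Stand-in "layers": each just adds 1 to its input.
def make_layer():
    return lambda x: x + 1

# Two peers, each contributing two layers (four layers in total).
peers = [Peer([make_layer(), make_layer()]),
         Peer([make_layer(), make_layer()])]
model = DistributedModel(peers)

print(model.forward(0))  # input passes through all 4 layers -> 4
```

No single peer holds the whole model, which is what lets volunteers with modest GPUs collectively serve a model far larger than any of them could host alone.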

TechCrunch tested the model and reported that short prompts (e.g. translation requests) were processed within seconds. Longer tasks, such as an essay on the universe or the meaning of life, took up to three minutes.

ChatGPT is much faster, but Petals runs on a distributed network to which every connected user can contribute hardware capacity. With enough contributors, the model responds quickly without any single user noticing a drop in their system’s performance.
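The scaling argument is simple arithmetic: the swarm's total workload is fixed by demand, so the share each volunteer carries shrinks as contributors join. A back-of-the-envelope sketch, with purely illustrative numbers rather than measurements of the real network:

```python
# Back-of-the-envelope: per-volunteer load shrinks as contributors join.
# The demand figure is hypothetical, not a measurement of Petals itself.

TOTAL_GPU_SECONDS_PER_HOUR = 3600  # assumed aggregate demand on the swarm

def load_per_contributor(num_contributors: int) -> float:
    """GPU-seconds each volunteer supplies per hour, assuming an even spread."""
    return TOTAL_GPU_SECONDS_PER_HOUR / num_contributors

print(load_per_contributor(10))    # 360.0 -> a heavy burden per volunteer
print(load_per_contributor(1000))  # 3.6   -> barely noticeable per volunteer
```

In practice the spread is never perfectly even, but the trend holds: the more peers join, the less any one of them feels it.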

Petals is completely open-source. The implementation we discuss in this article focuses on Bloom, but in theory, the code can be adapted for any NLP model, including OpenAI’s ChatGPT, Meta’s OPT and MT-NLG from Microsoft and Nvidia. Petals’ developers want to make NLP technology more accessible by reducing model operating costs.


Though the system is promising, Petals has a long way to go. According to its developers, the technology is only suitable for research purposes at present. One reason is security. The public implementation of Petals sends data over a public network. You have no way of preventing other users from listening in on the questions you ask the model.

Organizations can circumvent the problem by modifying the code and implementing Petals in a private environment. Consider a group of ten companies looking to host an NLP model together. Users can still eavesdrop, but in this case, the network isn’t shared with strangers.
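One way such a consortium could close the network is an allowlist of known peer identities, admitting only the participating organizations. The sketch below uses hypothetical names to show the gatekeeping idea; a real private deployment would also need proper authentication and encrypted transport.

```python
# Sketch of restricting a Petals-style swarm to a consortium of known
# organizations. Peer identifiers are hypothetical; a real deployment
# would pair this check with authentication and transport encryption.

TRUSTED_PEERS = {"company-01", "company-02", "company-03"}  # the consortium

def accept_connection(peer_id: str) -> bool:
    """Admit only peers on the shared allowlist into the private swarm."""
    return peer_id in TRUSTED_PEERS

print(accept_connection("company-01"))  # True  -> consortium member joins
print(accept_connection("stranger"))    # False -> outsider is rejected
```

This does not stop consortium members from observing each other's traffic, as the article notes, but it keeps the swarm off the open internet.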