Hugging Face discovered a possible breach of its AI hub platform last week. More specifically, it involved misusing Spaces secrets, or login keys, to access personal accounts, tools, and development spaces. Extensive measures have since been taken.
According to Hugging Face, unauthorized access to the Spaces platform, its hosting platform for AI models, was recently discovered. Someone allegedly gained access to a subset of secrets; Hugging Face attributes this access to an unnamed “third party.”
The platform provider for creating, sharing and hosting AI models immediately intervened to prevent damage or further breaches. In the process, several so-called HF tokens for these specific secrets were revoked.
Additional measures
Additional measures have also been taken to further strengthen the security of the Spaces infrastructure. These include completely removing org tokens, implementing key management services (KMS) for Spaces secrets, strengthening and expanding Hugging Face’s ability to identify leaked tokens and proactively invalidate them, and improving overall security. Furthermore, the AI hub plans to phase out classic read-and-write tokens entirely once more fine-grained access tokens offer the same functionality.
Hugging Face has emailed customers whose Spaces tokens have been revoked. It urges all customers to refresh any key or token and recommends switching to more detailed tokens, which offer greater security.
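The two measures above that customers can act on themselves, finding leaked tokens and rotating them, can be put into practice with a small secret scanner. The following is an added example and is not code from Hugging Face or from the article; it assumes that Hugging Face user access tokens start with the prefix `hf_` followed by a run of alphanumeric characters (modelled here as 30 or more), and it scans constructed sample files and log lines for that shape, reporting only a masked form so the alert itself does not leak the secret.

```python
import re
from dataclasses import dataclass

# Assumed token shape (an assumption for this example, not a Hugging Face
# specification): user access tokens begin with "hf_" followed by a run of
# alphanumeric characters. Anything matching is treated as a candidate leak.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")


@dataclass(frozen=True)
class Finding:
    source: str      # file name or log stream the hit came from
    line_no: int     # 1-based line number within that source
    masked: str      # redacted form that is safe to put in an alert or ticket


def mask_token(token: str) -> str:
    """Keep the prefix and the last four characters so an operator can match the
    hit against their token list without the full secret leaving the scanner."""
    return token[:3] + "*" * (len(token) - 7) + token[-4:]


def scan_text(source: str, text: str) -> list[Finding]:
    """Scan one text blob line by line and return every candidate token found."""
    findings = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for match in HF_TOKEN_RE.finditer(line):
            findings.append(Finding(source, line_no, mask_token(match.group(0))))
    return findings


def scan_many(sources: dict[str, str]) -> list[Finding]:
    """Scan several named sources (files, logs, env dumps) in one pass."""
    out = []
    for name, text in sources.items():
        out.extend(scan_text(name, text))
    return out


# Constructed sample inputs (not real data): a Space's app file with a hard-coded
# token, a build log that echoed a token, and a README that is clean.
SAMPLE_SOURCES = {
    "app.py": 'HF_TOKEN = "hf_' + "A" * 34 + '"\nprint("hello")\n',
    "build.log": "Step 3/7: RUN pip install -r requirements.txt\n"
                 "token=hf_" + "b" * 34 + " passed via env\n",
    "README.md": "Use `huggingface-cli login` and keep your token out of the repo.\n",
}

if __name__ == "__main__":
    for f in scan_many(SAMPLE_SOURCES):
        print(f"{f.source}:{f.line_no}: possible Hugging Face token {f.masked} -> rotate it")
```

Every hit should be treated as already compromised and rotated, which matches the article's advice to refresh any key or token; the scanner only narrows down where to look. Pattern matching of this kind is a first line of defence for repositories and CI logs, not a replacement for keeping tokens in a Space's secrets store rather than in code.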
Previous security incidents
This is not the first time Hugging Face’s security has been compromised. In April of this year, researchers discovered a since-resolved vulnerability that allowed attackers to run arbitrary code at build time of applications hosted on the platform. This gave them insight into the network connections of the affected machines.
Even earlier in the year, security company JFrog discovered that code uploaded to the platform installed undetected backdoors and other malware on end users’ machines. Furthermore, security specialist HiddenLayer flagged ways in which the supposedly very secure Safetensors serialization format could still be exploited to create sabotaged AI models.