Hugging Face says it detected ‘unauthorized access’ to its AI model hosting platform
Late Friday afternoon, a time window companies usually reserve for unflattering disclosures, AI startup Hugging Face said that its security team earlier this week detected "unauthorized access" to Spaces, Hugging Face's platform for creating, sharing and hosting AI models.
In a blog post, Hugging Face said that the intrusion related to Spaces secrets, the private pieces of information that act as keys to unlock protected resources like accounts, tools and dev environments, and that it has "suspicions" some secrets could've been accessed by a third party without authorization.
As a precaution, Hugging Face has revoked a number of tokens in those secrets. (Tokens are used to verify identities.) Hugging Face says that users whose tokens have been revoked have already received an email notice, and it is recommending that all users "refresh any key or token" and consider switching to fine-grained access tokens, which Hugging Face claims are more secure.
It wasn't immediately clear how many users or apps were impacted by the potential breach. We've reached out to Hugging Face for more information and will update this post if we hear back.
"We are working with outside cyber security forensic specialists, to investigate the issue as well as review our security policies and procedures. We have also reported this incident to law enforcement agencies and Data [sic] protection authorities," Hugging Face wrote in the blog post. "We deeply regret the disruption this incident may have caused and understand the inconvenience it may have posed to you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure."
The possible hack of Spaces comes as Hugging Face, which is among the largest platforms for AI, machine learning and data science with over one million models, data sets and AI-powered apps, faces increasing scrutiny over its security practices.
In April, researchers at cloud security firm Wiz found an exploit (since fixed) that would allow attackers to execute arbitrary code during a Hugging Face-hosted app's build time, letting them examine network connections from their machines. Earlier in the year, security firm JFrog uncovered evidence that code uploaded to Hugging Face covertly installed backdoors and other types of malware on end-user machines. And security startup HiddenLayer identified ways Hugging Face's ostensibly safer serialization format, Safetensors, could be abused to create sabotaged AI models.
Hugging Face recently said that it would partner with Wiz to use the company's vulnerability scanning and cloud environment configuration tools "with the goal of improving security across our platform and the AI/ML ecosystem at large."