“A fake OpenAI repository on Hugging Face distributed infostealer malware, registering approximately 244,000 downloads before it was taken down. The campaign, uncovered by HiddenLayer researchers, highlights growing security risks in AI model repositories and the need for stronger verification mechanisms to prevent supply chain attacks in the AI ecosystem.”
Key Takeaways
- A malicious Hugging Face repository impersonating OpenAI delivered infostealer malware to Windows machines before removal.
- The repository received approximately 244,000 downloads, though numbers may have been artificially inflated by attackers.
- HiddenLayer's discovery underscores serious security gaps in popular AI model hosting platforms.
Malware posing as an OpenAI model was downloaded roughly 244,000 times from Hugging Face, targeting Windows machines.
Why It Matters
This incident reveals critical vulnerabilities in the trusted AI repositories that developers rely on daily. As AI adoption accelerates, a single compromised model repository could affect thousands of organizations, making verification of downloaded artifacts essential. It also demonstrates that even established platforms need stronger safeguards to prevent malware distribution disguised as legitimate AI releases.
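One practical form of the verification described above is digest pinning: refusing to load a downloaded model file unless its hash matches a value obtained from a trusted channel. The sketch below is illustrative only and is not tied to any specific Hugging Face tooling; the function names are hypothetical, and it uses only the Python standard library.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 8192) -> str:
    """Stream-hash a file so large model weights never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the file's SHA-256 matches the pinned digest.

    In practice, expected_digest should come from a source independent of
    the download itself (e.g. the publisher's signed release notes).
    """
    return sha256_of(path) == expected_digest.lower()
```

A loader that calls `verify_artifact` before deserializing weights would have refused a tampered file even though the repository name looked legitimate; the check costs one pass over the file.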



