A malicious Hugging Face repository that posed as an OpenAI release delivered infostealer malware to Windows machines and recorded about 244,000 downloads before removal, according to research from AI security firm HiddenLayer. The attackers may have artificially inflated the download count to make the model appear more popular, so the attack's true reach is unknown.
‘Open-OSS/privacy-filter’ imitated OpenAI’s Privacy Filter release. HiddenLayer said the original model card had been copied almost verbatim, with the attackers adding a malicious loader.py file that fetched and ran credential-stealing malware on Windows hosts.
The repo reached the top of Hugging Face’s ‘trending’ list, accruing 667 likes in under 18 hours – again, a figure the attackers may have manipulated.
Public AI model registries are becoming a software supply chain risk as developers and data scientists clone models directly into corporate environments – environments with access to source code, cloud credentials, and internal systems. That alone makes a compromised model repository more than a nuisance.
The README file for the fake model closely resembled that of the legitimate project, but departed from the original by instructing users to run start.bat on Windows or execute python loader.py on Linux and macOS – instructions central to the infection chain HiddenLayer described.
Researchers have previously warned that malicious code can be hidden inside AI model files or related setup scripts on Hugging Face and other public registries. Previous cases involved Pickle-serialised model files that bypassed platform scanners.
Malicious loader disguised as setup code
HiddenLayer said loader.py opened with decoy code resembling a normal AI model loader before moving to a concealed infection chain. The script disabled SSL verification, decoded a base64-encoded URL pointing at jsonkeeper.com, retrieved a remote payload instruction, and passed commands to PowerShell on Windows machines. HiddenLayer said using jsonkeeper.com as a command-and-control channel let the attacker rotate the payload without changing the repo’s contents.
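The behaviours described above – disabled certificate verification, a long base64 blob that decodes to a URL, and a hand-off to PowerShell – are all things defenders can grep for before running anything from a cloned repository. The sketch below is a minimal, illustrative scanner built on those three heuristics; the regex patterns and thresholds are the author's assumptions for demonstration, not HiddenLayer's actual detection logic.

```python
import base64
import re
from pathlib import Path

# Heuristic indicators drawn from the loader behaviours described in the
# article: disabled SSL verification, a base64-encoded C2 URL, and a
# PowerShell hand-off. Patterns are illustrative assumptions only.
INDICATORS = {
    "ssl_disabled": re.compile(
        r"verify\s*=\s*False|_create_unverified_context|CERT_NONE"
    ),
    "powershell_exec": re.compile(r"powershell(\.exe)?\s", re.IGNORECASE),
    "b64_blob": re.compile(r"[A-Za-z0-9+/]{40,}={0,2}"),
}

def decoded_urls(text: str) -> list[str]:
    """Decode long base64 blobs and keep any that turn out to be URLs."""
    urls = []
    for blob in INDICATORS["b64_blob"].findall(text):
        try:
            decoded = base64.b64decode(blob, validate=True).decode("utf-8")
        except Exception:
            continue  # not valid base64, or not text – ignore
        if decoded.startswith(("http://", "https://")):
            urls.append(decoded)
    return urls

def scan_file(path: Path) -> dict:
    """Report which indicators fire for one file in a cloned repo."""
    text = path.read_text(errors="ignore")
    return {
        "path": str(path),
        "ssl_disabled": bool(INDICATORS["ssl_disabled"].search(text)),
        "powershell_exec": bool(INDICATORS["powershell_exec"].search(text)),
        "hidden_urls": decoded_urls(text),
    }
```

A file that trips several indicators at once – especially a hidden URL combined with disabled SSL checks – warrants manual review before execution; any single indicator on its own can be a false positive.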
The PowerShell command then downloaded an additional batch file from an attacker-controlled domain, and the malware established persistence by creating a scheduled task designed to resemble a legitimate Microsoft Edge update process.
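Persistence of this kind can be hunted by reviewing scheduled tasks for Edge-flavoured names that do not belong to the genuine updater. The sketch below is a hypothetical heuristic that parses CSV text of the shape produced by `schtasks /query /fo csv`; the allowlist of legitimate Edge update task names is an assumption for illustration and should be checked against a known-clean machine, not treated as authoritative.

```python
import csv
import io

# Legitimate Edge updater task names (illustrative allowlist – verify
# against a known-clean reference machine before relying on it).
KNOWN_GOOD = {
    r"\MicrosoftEdgeUpdateTaskMachineCore",
    r"\MicrosoftEdgeUpdateTaskMachineUA",
}

def suspicious_edge_tasks(schtasks_csv: str) -> list[str]:
    """Flag task names that mention Edge but are not on the allowlist.

    Expects CSV text of the form `schtasks /query /fo csv` emits on
    Windows, whose first column is "TaskName".
    """
    flagged = []
    for row in csv.DictReader(io.StringIO(schtasks_csv)):
        name = row.get("TaskName", "")
        if "edge" in name.lower() and name not in KNOWN_GOOD:
            flagged.append(name)
    return flagged
```

Name-matching alone is a weak signal; a flagged task should be inspected for its actual action (the binary or script it launches) before drawing conclusions.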
The final payload was a Rust-based infostealer. According to HiddenLayer, it targeted Chromium- and Firefox-derived browsers, Discord local storage, cryptocurrency wallets, FileZilla configurations, and host system information. The malware also attempted to disable the Windows Antimalware Scan Interface (AMSI) and Event Tracing for Windows (ETW).
Wider campaigns
HiddenLayer also said it found six further Hugging Face repositories containing virtually identical loader logic that shared infrastructure with the cited attack.
The case follows other warnings about malicious AI models on Hugging Face, including poisoned AI SDKs and fake OpenClaw installers. The common thread is that attackers are treating AI development workflows as a route into normally secure environments. AI repositories often contain executable code, setup instructions, dependency files, notebooks, and scripts, and it is these peripheral elements, rather than the models themselves, that cause the problems.
Sakshi Grover, senior research manager for cybersecurity services at IDC, said traditional software composition analysis (SCA) was designed to inspect dependency manifests, libraries, and container images, making it less effective at identifying malicious loader logic in AI repositories. Grover also cited IDC’s November 2025 FutureScape report, which calls for 60% of agentic AI systems to carry a bill of materials by 2027. This would help companies track which AI artefacts they use, their source, which versions were approved, and whether they contain executable components.
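The bill-of-materials idea can be made concrete with one record per artefact. The sketch below uses hypothetical field names – there is no single standard AI-BOM schema implied by the report – but captures the attributes IDC highlights: which artefacts are in use, where they came from, which version was approved, and whether they ship executable components such as a loader.py.

```python
import hashlib
import json
from dataclasses import asdict, dataclass, field

@dataclass
class AIBomEntry:
    """One entry in a minimal AI bill of materials (illustrative schema)."""
    name: str                 # e.g. a Hugging Face repo id
    source: str               # registry the artefact was pulled from
    approved_version: str     # commit hash or tag signed off by review
    sha256: str               # digest of the downloaded artefact
    executable_files: list[str] = field(default_factory=list)

def entry_for(name: str, source: str, version: str, payload: bytes,
              executables: list[str]) -> AIBomEntry:
    """Build an entry, hashing the downloaded bytes for later verification."""
    return AIBomEntry(
        name=name,
        source=source,
        approved_version=version,
        sha256=hashlib.sha256(payload).hexdigest(),
        executable_files=executables,
    )

def to_json(entries: list[AIBomEntry]) -> str:
    """Serialise the BOM so it can be diffed and audited in version control."""
    return json.dumps([asdict(e) for e in entries], indent=2)
```

Recording the hash and the list of executable files at approval time means a later re-download that adds a new script, or changes an existing one, shows up as a diff rather than going unnoticed.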
Response and mitigation
HiddenLayer advised anyone who cloned Open-OSS/privacy-filter and ran start.bat, python loader.py, or any file from the repository on a Windows host to treat the system as compromised, and recommended re-imaging affected machines. Browser sessions should be considered compromised even if passwords are not held locally, as stolen session cookies let attackers bypass MFA in some circumstances.
Hugging Face has confirmed the repo has been removed.
(Image source: Pixabay, under licence.)
The post Hugging Face hosted malicious software masquerading as OpenAI release appeared first on AI News.
