• Book Dewayne Hart
  • Dewaynehart@dewaynehart.com
  • (470) 409 8316
  • Speaker Bio
  • Home
  • About
  • Speaker
  • Books
  • Podcast
  • Contact
  • Blog
Malicious ML Models on Hugging Face Leverage Broken Pickle Format to Evade Detection

Posted on February 8, 2025 by admin

Feb 08, 2025Ravie LakshmananArtificial Intelligence / Supply Chain Security

Cybersecurity researchers have uncovered two malicious machine learning (ML) models on Hugging Face that leveraged an unusual technique of “broken” pickle files to evade detection.

“The pickle files extracted from the mentioned PyTorch archives revealed the malicious Python content at the beginning of the file,” ReversingLabs researcher Karlo Zanki said in a report shared with The Hacker News. “In both cases, the malicious payload was a typical platform-aware reverse shell that connects to a hard-coded IP address.”

The approach has been dubbed nullifAI, as it involves clear-cut attempts to sidestep existing safeguards put in place to identify malicious models. The Hugging Face repositories in question are listed below –

  • glockr1/ballr7
  • who-r-u0000/0000000000000000000000000000000000000

It’s believed that the models are more of a proof-of-concept (PoC) than an active supply chain attack scenario.

The pickle serialization format, commonly used for distributing ML models, has repeatedly been found to be a security risk, as it offers ways to execute arbitrary code as soon as a file is loaded and deserialized.
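A minimal sketch of why pickle is unsafe by design: the `__reduce__` protocol lets a pickled object specify an arbitrary callable that runs at load time. A real payload would invoke `os.system` or open a reverse shell, as the discovered models did; the harmless `eval` here is a stand-in.

```python
import pickle

class Payload:
    """Illustrative malicious object; not from the actual models."""

    def __reduce__(self):
        # pickle.loads will call eval("6*7") during deserialization.
        # An attacker would return (os.system, ("<shell command>",)).
        return (eval, ("6*7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # arbitrary code runs before any object is returned
```

This is why simply loading an untrusted model file is equivalent to running untrusted code.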

The two models detected by the cybersecurity company are stored in the PyTorch format, which is essentially a compressed pickle file. While PyTorch uses the ZIP format for compression by default, the identified models were compressed using the 7z format.

This made it possible for the models to fly under the radar and avoid being flagged as malicious by Picklescan, a tool Hugging Face uses to detect suspicious pickle files.
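The archive layout that scanners expect can be sketched without a PyTorch dependency: a `torch.save()` file is a ZIP archive whose pickled payload sits in a `data.pkl` entry (the entry path here is illustrative). A scanner that opens the file with a ZIP reader to reach that entry never gets this far when the archive is 7z-compressed instead.

```python
import io
import pickle
import zipfile

# Hand-rolled stand-in for a torch.save() archive: a ZIP whose
# pickled payload lives in a data.pkl entry.
weights = {"layer.weight": [0.1, 0.2]}
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model/data.pkl", pickle.dumps(weights))

# A scanner assuming ZIP compression inspects the archive like this;
# for the 7z-packed malicious models, this step would simply fail.
names = zipfile.ZipFile(io.BytesIO(buf.getvalue())).namelist()
```

Swapping the container format is enough to break tooling that keys on the default layout, even though the pickle inside is unchanged.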

“An interesting thing about this Pickle file is that the object serialization — the purpose of the Pickle file — breaks shortly after the malicious payload is executed, resulting in the failure of the object’s decompilation,” Zanki said.

Further analysis revealed that such broken pickle files can still be partially deserialized, owing to a discrepancy between Picklescan's checks and how pickle deserialization actually works, causing the malicious code to execute even though the tool throws an error message. The open-source utility has since been updated to fix the bug.

“The explanation for this behavior is that the object deserialization is performed on Pickle files sequentially,” Zanki noted.

“Pickle opcodes are executed as they are encountered, and until all opcodes are executed or a broken instruction is encountered. In the case of the discovered model, since the malicious payload is inserted at the beginning of the Pickle stream, execution of the model wouldn’t be detected as unsafe by Hugging Face’s existing security scanning tools.”
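The sequential-execution behavior Zanki describes can be reproduced with a few lines: corrupt a pickle stream after the payload's opcodes, and `pickle.loads` raises an error, yet the payload has already run. The environment-variable side effect below is a hypothetical stand-in for the models' reverse shell (in the real samples the payload sat at the start of the stream; here it simply precedes the corrupted byte).

```python
import os
import pickle

class Payload:
    """Illustrative payload; records a side effect instead of a shell."""

    def __reduce__(self):
        return (eval, ("__import__('os').environ.setdefault('PWNED', '1')",))

blob = pickle.dumps(Payload())
broken = blob[:-1] + b"\xff"   # corrupt the trailing STOP opcode

failed = False
try:
    pickle.loads(broken)       # deserialization fails on the bad opcode...
except Exception:
    failed = True

# ...but opcodes ran sequentially up to that point, so the payload fired.
ran = os.environ.get("PWNED") == "1"
```

An error from the loader is therefore no guarantee that nothing executed, which is exactly the gap the nullifAI models exploited.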



© 2025 Dewayne Hart | Cybersecurity Leadership & Innovation