GGUF File Format Vulnerabilities: A Guide for Hackers

Introduction

As machine learning continues to rise in prominence, so does the need for secure file formats and libraries to store and load model weights. One such format, GGUF, has gained popularity as the format the GGML library uses for model weights. Recently, one of our hunters, retr0reg, discovered several critical vulnerabilities in this format, shedding light on how even commonly used file formats can be exploited.

This blog walks through retr0reg’s journey of finding vulnerabilities in GGUF, a file format essential for distributing models like Llama-2. We’ll break down those vulnerabilities, show you how they were found, and (here’s the fun part) explain how you can get in on the action and start hunting for model file vulnerabilities yourself.

What is GGUF, and Why Does It Matter?

GGUF is a binary file format designed for fast loading and saving of machine learning models, particularly those stored using the GGML library. It’s essential for distributing trained models like Llama-2, especially for running them locally with low-level tooling such as llama.cpp. But, as with any format that handles complex, attacker-suppliable data, improper validation can lead to exploitable bugs. In GGUF’s case, insufficient validation during file parsing opens the door to a range of potential attacks, including heap overflows and memory corruption.
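For orientation, here is the layout of the GGUF file header, where both of the counts discussed below originate. The field names follow the GGML source; packaging them as a single C struct is our own simplification, since the real parser reads each field individually:

```c
#include <stdint.h>

/* Simplified view of the GGUF file header. Every field is read directly
   from the (potentially attacker-controlled) file. */
struct gguf_header {
    uint32_t magic;      /* the bytes "GGUF" */
    uint32_t version;    /* format version */
    uint64_t n_tensors;  /* declared tensor count */
    uint64_t n_kv;       /* declared number of metadata key-value pairs */
};
```

Everything the parser allocates for tensors and metadata is sized from n_tensors and n_kv, so these two fields are where the trouble starts.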

These vulnerabilities are especially dangerous because they can allow an attacker to execute arbitrary code on the victim’s machine through a crafted GGUF file. Let’s dive into the details of some key vulnerabilities and why they’re significant.

Heap Overflow in Key-Value Parsing

One of the most critical vulnerabilities in GGUF stems from how the library processes key-value pairs. When a model is loaded, the gguf_init_from_file() function reads the file header, which contains the number of key-value pairs (n_kv). However, the library does not validate this value, so a malicious GGUF file can declare an arbitrarily large number of key-value pairs and set up a heap overflow.

Why is this a problem? When GGML allocates memory for the key-value pairs, it sizes the buffer by multiplying the unchecked n_kv by the size of each entry. If n_kv is large enough, that multiplication wraps around, producing an allocation far smaller than the parser expects. As the parsing loop then writes entries past the end of the undersized buffer, an attacker can overwrite adjacent memory and potentially gain control over the program’s execution.
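To make the pattern concrete, here is a minimal C sketch of the vulnerable flow. It is an illustration under assumptions, not the actual GGML source: kv_pair and load_kv_pairs are hypothetical stand-ins for gguf_kv and the parsing code inside gguf_init_from_file():

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct kv_pair { char * key; char * value; };  /* stand-in for gguf_kv */

struct kv_pair * load_kv_pairs(FILE * f) {
    uint64_t n_kv = 0;
    if (fread(&n_kv, sizeof(n_kv), 1, f) != 1) return NULL;  /* attacker-controlled */

    /* The multiplication can wrap around SIZE_MAX, so a huge n_kv
       produces a tiny allocation instead of a failed one. */
    struct kv_pair * kv = malloc(n_kv * sizeof(struct kv_pair));
    if (kv == NULL) return NULL;

    /* The loop still trusts n_kv, so writes run past the end of the
       undersized buffer: a heap overflow. */
    for (uint64_t i = 0; i < n_kv; i++) {
        /* read_kv(f, &kv[i]); */
    }
    return kv;
}
```

A hardened parser would cap n_kv at a sane maximum, or use an overflow-checked size computation (for example calloc(n_kv, sizeof(struct kv_pair)), which mainstream allocators reject on overflow) before touching the buffer.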

Unchecked user input like this is a major security risk. In the world of AI/ML, where model files are routinely shared and loaded across systems, vulnerabilities like this one can have serious consequences.

Unchecked String Lengths

Another critical vulnerability involves how GGUF handles strings. The function gguf_fread_str() reads string data from the file, but it fails to properly validate the length of the string. This can result in a wraparound error during memory allocation.

Here’s how it works: the function reads a length value from the file and allocates length + 1 bytes for the string. If the file supplies the maximum possible length (0xffffffffffffffff), the + 1 wraps the allocation size around to zero. Then, when the string data is written into this effectively empty buffer, a heap overflow occurs.
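Here is a hedged sketch of that flow; gguf_str matches the shape of the struct in GGML, but read_str is a simplified stand-in for gguf_fread_str():

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct gguf_str { uint64_t n; char * data; };

static bool read_str(FILE * f, struct gguf_str * s) {
    if (fread(&s->n, sizeof(s->n), 1, f) != 1) return false;

    /* With n = 0xffffffffffffffff, n + 1 wraps to 0, so this requests a
       zero-byte buffer, which many allocators return as a valid pointer. */
    s->data = calloc(s->n + 1, 1);
    if (s->data == NULL) return false;

    /* The read below still uses the original huge n, so the string bytes
       actually present in the file are written past the end of the
       zero-byte allocation: a heap overflow. */
    return fread(s->data, 1, s->n, f) == s->n;
}
```

The one-line fix is to reject lengths at or near the type’s maximum (or above a format-defined limit) before doing the n + 1 arithmetic.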

This vulnerability highlights the importance of validating every piece of input data, especially in file formats designed for quick loading and parsing. Improper string handling can have devastating consequences for the system parsing the file.

Tensor Count Overflow

Similar to the key-value overflow, GGUF also suffers from a tensor count overflow. The tensor count is stored in the file header, but again, the library does not validate this value before using it to size an allocation. By supplying a malicious tensor count, an attacker can make that size calculation wrap and force GGML to allocate an insufficient buffer, leading to another heap overflow.
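The remedy is the same in every case. The sketch below contrasts the vulnerable allocation with an overflow-checked one; tensor_info is an illustrative stand-in, not the real gguf_tensor_info layout:

```c
#include <stdint.h>
#include <stdlib.h>

struct tensor_info {   /* illustrative stand-in */
    char     name[64];
    uint64_t ne[4];
    uint32_t type;
    uint64_t offset;
};

void * alloc_tensor_infos(uint64_t n_tensors) {
    /* Vulnerable pattern: the multiplication can wrap, yielding a tiny
       buffer for a huge declared tensor count:
           return malloc(n_tensors * sizeof(struct tensor_info));       */

    /* Safer pattern: reject any count that would overflow the size
       calculation (or exceed a sanity limit) before allocating. */
    if (n_tensors > SIZE_MAX / sizeof(struct tensor_info)) {
        return NULL;
    }
    return calloc((size_t) n_tensors, sizeof(struct tensor_info));
}
```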

This vulnerability emphasizes the need for proper bounds checking. Whether it’s key-value pairs or tensor counts, every element of a file format must be thoroughly validated to prevent exploitation. A small oversight in handling a seemingly benign value like the tensor count can lead to significant vulnerabilities in memory allocation.

Earn Big with Huntr’s Top Bounties: Model Format Vulnerabilities

We’re stepping up our game—model format vulnerabilities are now the highest-paid bounties on huntr, with rewards up to double the previous rate.

After digging into vulnerabilities in GGUF files, it’s clear there’s a lot more to uncover. Whether it’s heap overflows, unchecked inputs, or parsing errors, these issues aren’t just isolated—they’re out there, waiting to be found across a range of model file formats.

Sound like your kind of challenge? Dive in and start hunting today.