Hunting Vulnerabilities in Keras Model Deserialization

The ability to save (serialize) and load (deserialize) trained models is fundamental to...

Spotlight on taiphung217: Five-Month Climb to Huntr Leaderboard Glory

Introduction: Some researchers dip their toes into AI/ML security. Phung Van Tai (aka...

Spotlight on Lyutoon: From Black Hat to Bug Bounties

Introduction: Some Ph.D. candidates stay up late fine-tuning models. Tong Liu (aka...

Pivoting Archive Slip Bugs into High-Value AI/ML Bounties

Many ML model files — .nemo, .keras, .gguf, even trusty .pth — are just zip/tar archives in...
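Because these formats are ordinary archives, the classic "zip slip" applies: an entry named `../../something` can escape the extraction directory when the loader extracts blindly. A minimal sketch of the defensive check (the `safe_extract` helper is illustrative, not from any of the libraries above):

```python
import os
import zipfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a zip archive, rejecting entries that escape dest_dir."""
    dest_dir = os.path.realpath(dest_dir)
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            target = os.path.realpath(os.path.join(dest_dir, name))
            # A "zip slip" entry like ../../etc/cron.d/x resolves outside dest_dir.
            if not target.startswith(dest_dir + os.sep):
                raise ValueError(f"blocked path traversal entry: {name}")
        zf.extractall(dest_dir)
```

The key detail is resolving each entry with `os.path.realpath` *before* writing anything, so `..` components and symlink tricks are normalized away prior to the containment check.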

Inside CVE-2025-1550: Remote Code Execution via Keras Models

Before Google even filed CVE-2025-1550, one of our Huntr researchers, Mevlüt Akçam (aka...

Spotlight on winters0x64: Leveraging CTF Skills for AI/ML Bug Bounty Success

Introduction: Some people skipped online classes during lockdown to binge Netflix. Arun...

Pkl Rick’d: How Loading a .pkl File Can Lead to RCE

Sometimes the simplest bugs are the most dangerous — especially when they’ve been hiding...
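The mechanism behind .pkl-to-RCE is pickle's `__reduce__` protocol: a pickled object can tell the loader "rebuild me by calling this function with these arguments," and that call happens during `pickle.loads`. A deliberately harmless sketch (using `print` where an attacker would use something like `os.system`):

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild the object;
    # returning (callable, args) means that callable runs at load time.
    def __reduce__(self):
        return (print, ("code execution on pickle.load",))

blob = pickle.dumps(Payload())
# Merely *loading* the bytes invokes the callable; no method call needed.
pickle.loads(blob)
```

This is why untrusted pickle data can never be made safe by inspecting the resulting object: the damage is done before `loads` even returns.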

Exposing Keras Lambda Exploits in TensorFlow Models

In this blog, we’re breaking down one of our example Model File Vulnerabilities (MFVs) to...
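The root of the Lambda-layer issue is that older Keras versions persisted a Lambda's wrapped function as raw marshaled bytecode (via helpers named `func_dump`/`func_load`), so deserializing a model means rebuilding and later executing attacker-supplied bytecode. A stdlib-only sketch of that mechanism, assuming the base64+marshal scheme; this mirrors the idea, not Keras's exact code:

```python
import base64
import marshal
import types

def func_dump(fn):
    # Serialize a function's raw bytecode, base64-encoded for embedding
    # in a model config (the shape of Keras's old Lambda serialization).
    return base64.b64encode(marshal.dumps(fn.__code__)).decode()

def func_load(payload):
    code = marshal.loads(base64.b64decode(payload))
    # Rebuilding a function from stored bytecode: if the payload came from
    # an attacker, whatever that bytecode does runs when the layer is called.
    return types.FunctionType(code, globals())

serialized = func_dump(lambda x: x * 2)
restored = func_load(serialized)
```

Nothing here validates what the bytecode does, which is exactly why loading a model file from an untrusted source is equivalent to running its author's code.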

Don’t Trust Your Model: How a Malicious Pickle Payload in PyTorch Can Execute Code

In this blog, we're breaking down one of our example Model File Vulnerabilities (MFVs) to...

Unlocking Bug Bounty Success: Expert Tips from Dan McInerney

What’s the secret sauce behind consistent bug bounty success? Well, the answer lies in a...