Spotlight on Lyutoon: From Black Hat to Bug Bounties
May 22, 2025
By Madison Vorbrich

Introduction
Some Ph.D. candidates stay up late fine-tuning models. Tong Liu (aka Lyutoon) stays up late trying to break them.
At huntr, we’ve got a thing for spotlighting hackers. This month, the beam lands on Lyutoon, a Ph.D. student at the Institute of Information Engineering, Chinese Academy of Sciences. So grab your beverage of choice and let’s dive into the mindset, methods, and mayhem behind this fast-rising hunter.
Tell us a bit about yourself—what’s your background or story?
Hi everyone! My name is Tong Liu, also known as Lyutoon. I am a second-year Ph.D. student at the Institute of Information Engineering, Chinese Academy of Sciences. My primary research interests lie in AI security and software security.

I have discovered hundreds of bugs in both open-source and proprietary software, and have been acknowledged by many well-known companies such as Apple, Google, Huawei, Microsoft, and Baidu. My research has been published at top academic and industry conferences, including USENIX Security, CCS, TOSEM, and Black Hat.

In my spare time, I am also a core CTF player with the teams Nu1L and Straw Hat. With my teammates, I have won numerous national and international CTF competitions, and we have consistently qualified for the DEF CON CTF finals from 2022 to 2025.
How did you first get into AI/ML bug bounty hunting?
I was first introduced to AI/ML bug hunting during my senior year of undergraduate study, when I was accepted into my current Ph.D. program and began interning under my advisor ahead of schedule. Before that, my main focus had been AI security topics such as adversarial examples. During the internship, both my research topic and my undergraduate thesis shifted toward testing deep learning libraries, marking my first deep dive into the intersection of AI/ML and software security.

I found this to be a novel and exciting perspective. These libraries naturally combine AI and software security, and testing them requires a solid understanding of both fields. This work eventually led to a publication at USENIX Security 2023. In the process, we used fuzzing and differential testing techniques to uncover hundreds of bugs in major deep learning libraries such as TensorFlow, PyTorch, Paddle, and MindSpore.

Since then, my academic focus has largely shifted toward systems that integrate AI and software, specifically analyzing their security and potential vulnerabilities. What I enjoy most is discovering new attack surfaces. For example, in 2023 many LLM integration frameworks emerged, such as LangChain and LlamaIndex, and they introduced a range of new vulnerabilities. One such vulnerability class and attack surface I identified and coined is "LLM4Shell."

If you're interested, I discussed it in detail in my Black Hat Asia 2024 talk, LLM4Shell: Discovering and Exploiting RCE Vulnerabilities in Real-World LLM-Integrated Frameworks and Apps, where I explained how I found LLM4Shell vulnerabilities in many well-known LLM-integrated frameworks and how I exploited them in real-world apps.

This experience made me realize that in the LLM era, integrating large language models introduces an unpredictable third-party element that brings with it a wide array of new security risks.
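The core of the LLM4Shell pattern can be illustrated with a deliberately minimal sketch. The code below is hypothetical; `solve_with_llm` and the lambda "model" are illustrative stand-ins, not any real framework's API. The point is that an LLM-integrated app which evaluates model output as code is one prompt injection away from remote code execution:

```python
# Hypothetical sketch of the LLM4Shell pattern: an LLM-integrated "math
# assistant" that runs model output as Python. Not any framework's real API.
def solve_with_llm(question, llm):
    code = llm(question)   # the LLM returns a Python expression as a string
    return eval(code)      # DANGER: executes attacker-influenced code

# The benign-looking use case...
print(solve_with_llm("What is 6 * 7?", lambda q: "6 * 7"))  # 42

# ...but a prompt-injected model response turns it into a code-execution
# primitive (here it merely reads the process ID; it could run os.system):
injected = "__import__('os').getpid()"
print(solve_with_llm("ignored", lambda q: injected))
```

Real-world variants reach an `eval`/`exec` sink several layers deep inside the framework, which is what makes auditing these integration paths worthwhile.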
What’s your general approach when hunting for vulnerabilities?
When it comes to vulnerability discovery, I primarily rely on two approaches: fuzzing and code auditing. Of course, the choice depends on the context and the specific software I'm analyzing.

For most targets, my first instinct is to apply fuzzing. It allows us to automate the process, freeing up time and quickly surfacing crashes or vulnerabilities. For example, the recent ollama vulnerabilities I published on Huntr were discovered through fuzzing, by crafting malformed GGUF model files to trigger crashes or unexpected behavior.

However, if writing an effective fuzzer isn't feasible or practical, I turn to manual code auditing. This is particularly useful for uncovering logic flaws that fuzzing is unlikely to catch. Although code auditing can be more exhausting, the vulnerabilities it uncovers often have greater impact or value.

To me, bug hunting is a process that requires both luck and moments of insight. There are times when I go one or two weeks without finding anything, and other times when I discover several valuable bugs within just a couple of days. So the most important thing is to stay patient.
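As a rough illustration of that file-format fuzzing workflow, here is a toy mutation-based loop. Everything in it is hypothetical: `parse_model` stands in for a real GGUF-style loader and contains a deliberately planted bug; an actual campaign would mutate real model files and run them against the target's parser under a crash monitor.

```python
import random
import struct

def parse_model(data: bytes) -> int:
    """Stand-in for a GGUF-style loader: magic bytes, then a u32 count.
    The missing upper bound on `count` is a planted bug for this demo."""
    if data[:4] != b"GGUF":
        raise ValueError("bad magic")              # cleanly rejected input
    (count,) = struct.unpack_from("<I", data, 4)
    if count > 1_000:
        raise MemoryError("oversized allocation")  # the planted "crash"
    return count

def mutate(seed: bytes) -> bytes:
    """Flip a few random bits in a valid seed file."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] ^= 1 << random.randrange(8)
    return bytes(data)

random.seed(0)                             # deterministic demo run
seed = b"GGUF" + struct.pack("<I", 3)      # minimal well-formed "model file"
crashes = []
for _ in range(1_000):
    sample = mutate(seed)
    try:
        parse_model(sample)
    except ValueError:
        pass                               # parser rejected it, uninteresting
    except MemoryError:
        crashes.append(sample)             # a "crash" worth triaging
print(f"{len(crashes)} crashing inputs found")
```

The structure mirrors real mutation fuzzing (valid seed, bit flips, crash bucketing), just collapsed into a few lines; tools like AFL++ or libFuzzer add coverage feedback on top of this loop.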
How did you come across huntr, and what’s your experience been like so far?
I first learned about the Huntr platform around 2022 from one of my senior labmates. At the time, I thought it was an awesome platform; to my knowledge, Huntr was the first platform specifically focused on AI/ML-related vulnerabilities.

So far, my experience with Huntr has been excellent. The vulnerabilities I discover can be submitted through Huntr, and in return I've received CVEs and even bounties. What's more, Huntr continuously evolves with the times by updating its bounty programs, for example the recent MFV initiative. These updates not only align with current trends but also provide valuable insight and direction for security researchers like me, helping us identify new and meaningful research targets.
Join the hunt
Pretty inspiring story, right? Kick off your own huntr journey by reading our MFV Beginner's Guide for a quick crash course, then skim the participation guidelines for scope, tips on what we're looking for, and report rules. Once you're set, ship us your first PoC, and we'll see your handle in the queue. Happy hunting!