Spotlight on mnqazi: Critical Findings in ChuanhuChatGPT and AI/ML Security

Introduction

At huntr, we love to celebrate the incredible talent working with us to build a safer AI-powered world. Our community of over 15,000 hackers and threat researchers is constantly uncovering and fixing AI/ML vulnerabilities. Today, we're putting the spotlight on Mo Nadeem, also known as mnqazi on huntr. From his beginnings in cybersecurity R&D to his current focus on AI/ML bug bounty hunting, mnqazi is making significant strides in the field. In this blog, we’ll explore his journey and delve into his recent discovery of a critical vulnerability in the ChuanhuChatGPT project.

Meet mnqazi 

"Hi! My name is Mo Nadeem, also known as M Nadeem Qazi. I'm currently working in Cyber Security - R&D at `Confidential`. During this time, I have developed a robust foundation in cyber security, which is supported by my educational background in Master of Computer Applications from Maulana Azad National Urdu University Hyderabad. Over the years, I have achieved significant recognition in the field, including winning the Award for Young Innovator from `REDACTED` and being featured in multiple Hall of Fames for my contributions to various security programs. I have also been acknowledged by the NCIIPC, Government of India, for my work. Furthermore, I have contributed to the academic community with my publication titled Remote Malware Detection Using Pattern Based Analysis For Android Devices."

 

How did you get into AI/ML bug bounty hunting? What parts of it do you enjoy?

"It started when some of my developer friends were working on an AI/ML project that got compromised. They asked for my help to identify the root cause and eliminate the backdoor. Successfully resolving their issue sparked my interest in the security challenges unique to AI/ML systems.
 
Coincidentally, around that time, huntr shifted its focus to AI/ML, which immediately caught my attention. I decided to dive deeper into this field, dedicating myself entirely to understanding the intricacies and potential vulnerabilities of AI/ML technologies. As a newcomer to this niche, I invested considerable time in researching and comprehending the deep functionalities of AI/ML systems. This dedication paid off as I began to discover multiple vulnerabilities in various open-source projects.
 
The continuous learning process, the thrill of uncovering hidden flaws, and the satisfaction of making AI/ML systems more secure are what I enjoy most about this field."

 

Discovering huntr: mnqazi’s Journey in AI/ML Bug Bounty Hunting 


"I found huntr while looking for a platform to publish my first CVE. I was impressed by its easy-to-use interface and focus on security research—it was unlike other platforms I'd tried. After starting with CVE hunting, huntr shifted to AI/ML bug bounty, which matched my interests perfectly. Since then, I've used huntr exclusively for bug hunting. The supportive community and huntr's focus on AI/ML vulnerabilities have been crucial in my growth and learning in this field.
 
I also want to give a shout-out to Dan McInerney, Protect AI's Threat Researcher. He's been incredibly helpful, always understanding my questions and providing valuable insights. His support has been invaluable to me."

 

Uncovering mnqazi's Improper Access Control Vulnerability

Recently, mnqazi discovered a critical vulnerability in the ChuanhuChatGPT project. ChuanhuChatGPT, a project with over 15k stars on GitHub, serves as a GUI for the ChatGPT API and numerous large language models (LLMs). The issue mnqazi identified was an improper access control vulnerability that allowed any user on the server to access the chat history of any other user, with no interaction required from the victim. This serious security flaw meant that User A could easily view the chat history of User B. You can find the project on GitHub.

 

Now, let's have mnqazi take the wheel and walk you through his step-by-step process for discovering and exploiting this vulnerability.


Step-by-Step Process

Discovery Phase: Identifying the Vulnerability 

While examining the project’s codebase, I noticed that the chat history was not properly secured. Specifically, I found the vulnerability in the `utils.js` file, in the following function:
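Below is a paraphrased reconstruction of that function, based on the behavior described in this writeup; the exact code lives in the project's `utils.js`, and the `downloadFile` helper name is assumed here.

```javascript
// Paraphrased sketch of the vulnerable history-download helper in utils.js.
// Argument names and the history path format follow the description in this post.
function downloadHistory(gradioUsername, historyname, format = ".json") {
    let fileUrl;
    if (gradioUsername === null || gradioUsername.trim() === "") {
        // No username supplied: fall back to the shared history directory
        fileUrl = `/file=./history/${historyname}`;
    } else {
        // Username supplied: used as-is, with no check that it matches the logged-in user
        fileUrl = `/file=./history/${gradioUsername}/${historyname}`;
    }
    // Hand the URL to the project's file-download helper (name assumed)
    downloadFile(fileUrl, historyname, format);
}
```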

 

In this function, the `fileUrl` is constructed based on whether the `gradioUsername` is provided. If it is not provided or is an empty string, the URL defaults to a path where the chat history is stored without any user-specific directory:
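That default corresponds to the first branch of the reconstruction above:

```javascript
// No per-user directory when gradioUsername is missing or empty
fileUrl = `/file=./history/${historyname}`;
```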

 

This allows any user to access any chat history file simply by not providing a `gradioUsername`, thereby bypassing any authentication or authorization checks. Additionally, if a user changes the `gradioUsername` to another username, such as changing `user1` to `user2`, they can access the chat history of the other user (`user2`).
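As an illustration of the first case, using the reconstructed helper above and an illustrative file name:

```javascript
// Calling the helper with an empty username skips the per-user directory entirely
downloadHistory("", "chatHistoryFileName.json");
// fileUrl resolves to /file=./history/chatHistoryFileName.json, with no ownership check applied
```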

Example: Demonstrating the Security Flaw 

For instance, consider two users, User A (`user1`) and User B (`user2`). If User A wants to access User B's chat history, they could simply modify the `gradioUsername` parameter in the `downloadHistory` function as follows:
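A hypothetical call of that form, reusing the illustrative file name and the reconstructed helper above, might look like this:

```javascript
// User A requests User B's history simply by passing user2 as the username
downloadHistory("user2", "chatHistoryFileName.json");
```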
 
This would set the `fileUrl` to `/file=./history/user2/chatHistoryFileName.json`, allowing User A to download and view User B's chat history.

Proof of Concept: Validating the Vulnerability 

I developed a PoC to demonstrate the vulnerability, which showed how an unauthorized user could access another user's chat history. You can view the proof of concept in my video below.
 
 

Impact Analysis: Understanding the Consequences

 
The potential impacts of this vulnerability were significant:
  1. Data Breaches: Unauthorized access to chat histories could lead to widespread data breaches, exposing sensitive information such as personal details, financial data, or confidential conversations.
  2. Identity Theft: Malicious actors could use the information from chat histories to impersonate users or commit identity theft, causing financial loss and damage to reputations.
  3. Manipulation and Fraud: Access to chat histories could provide insights into users' behaviors, preferences, and relationships. Malicious actors could exploit this information for social engineering attacks or phishing scams.
The vulnerability report has since been published, highlighting the importance of securing user data and ensuring proper access controls are in place to prevent unauthorized access.
 

Conclusion

Feeling inspired by mnqazi's journey? Become a part of huntr’s dynamic community of hackers, researchers, and tech enthusiasts who are committed to securing the future of AI. Whether you're an experienced professional or a newcomer to the field, we have a place for you.

Explore our resources, like the Beginner's Guide to AI/ML Bug Hunting, and start your adventure with huntr today!