April 1, 2025

Is the hacker’s claim that “20 million ChatGPT logins” were breached true?

OpenAI has shifted into investigation mode after a hacker using the alias “EmirKing” claimed to be selling the login information of 20 million ChatGPT users on Breach Forums, a hackers’ forum. Cybercriminals often use such forums to sell malware, recruit operators, and exchange resources. Despite being taken down and seized by the FBI, Breach Forums has continued to operate.

Cybersecurity experts believe the claim of stolen OpenAI credentials may be inaccurate because EmirKing’s sample data could not be validated. OpenAI has opened an investigation and says it is taking the situation seriously. Techopedia examines the data associated with the incident and consults experts on the risks businesses face when sending data back and forth to cloud-based AI models.

How a Post on a Breach Forum Caused Serious AI Concerns

A hacker named “EmirKing” claimed to be selling the personal information of 20 million OpenAI users on Breach Forums, charging only a few dollars for the data. Reporter Mikael Thalan found two invalid email addresses while verifying the sample data. Cybersecurity experts, including Thalan, advised against taking the claimed leak at face value.

According to OpenAI, it takes claims of leaks “very seriously” and has launched an investigation.

OpenAI is investigating the claim that a hacker compromised its systems; the company says it takes the claim seriously but has so far found no evidence to support it. If proven true, the breach would be a significant problem, and cost, for OpenAI.

The company is locked in global AI competition with Chinese rivals, particularly DeepSeek, whose app has become popular worldwide, and it is already dealing with several controversies, including privacy, cost, and copyright issues.

Getting to the Bottom of the Big Question: Was My OpenAI Password Stolen?

Techopedia cannot confirm whether EmirKing’s claim is accurate. Given the volume of data claimed, the low asking price, and typical criminal activity on Breach Forums, we think the purported leak is probably untrue. The post might have been made for a variety of reasons, including market manipulation, creating a honeypot, or setting up a fraud.

As cybercriminals shift their focus to AI companies, we advise consumers and businesses to take this incident as an early warning to stay ahead of the curve.

Techopedia also urges AI developers and AI firms to act now and implement fundamental login security measures, such as switching to multi-factor authentication, enabling biometrics and passkeys, and protecting user data with tried-and-true technology.
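The most common form of multi-factor authentication is the time-based one-time password (TOTP) generated by authenticator apps. As a minimal illustration of how such a factor is verified server-side, here is a sketch of RFC 6238 TOTP built on RFC 4226 HOTP using only the Python standard library; the function names and the ±1-step drift window are our own choices, not any particular vendor’s implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a big-endian 8-byte counter, truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset from the last nibble
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, digits: int = 6, period: int = 30, at=None) -> str:
    """TOTP (RFC 6238): HOTP keyed on the current 30-second time step."""
    t = int((time.time() if at is None else at) // period)
    return hotp(secret, t, digits)

def verify(secret: bytes, code: str, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate small clock drift."""
    t = int(time.time() // 30)
    return any(hmac.compare_digest(hotp(secret, t + off), code)
               for off in range(-window, window + 1))
```

The RFC 6238 test vectors (secret `12345678901234567890`, time 59 seconds, 8 digits) produce `94287082`, which this sketch reproduces. Note the use of `hmac.compare_digest` for constant-time comparison, which avoids leaking code digits through timing.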

We Post a Lot of Data Online: Expert Opinion

Speaking to Techopedia, Mayur Upadhyaya, CEO of APIContext, highlighted the inadequate access controls in many machine-to-machine communication systems, a consequence of the rapid adoption of AI.

OpenAI’s cybersecurity has been a concern since a hacker breached its internal messaging systems in 2023 and extracted data about its technologies. Although the incident did not cause significant damage, it raised national security concerns about the possibility of foreign threat actors also targeting OpenAI.

APIContext warns of significant security risks in AI models, which rely on programmatic messages to exchange sensitive information. Without strong authentication and standardized security measures, breaches such as the alleged OpenAI incident could become more frequent.

For basic cyber hygiene, users should strengthen their OpenAI password, enable MFA, and review their chat history options; companies should consider the same steps. Enterprises using AI APIs must also scan their digital attack surface and hire ethical hackers to run vulnerability scans and produce remediation reports.

Vulnerability and remediation reports help companies identify weaknesses before attackers exploit them. Endpoint risk management, traffic monitoring, and anti-bot security tools can harden the digital attack surface and improve AI operations. Configuration checks are also crucial: non-conformance with security standards is a bigger risk than a lack of tools.
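A configuration check in its simplest form is a diff between a deployed configuration and a security baseline. The sketch below illustrates the idea; the baseline keys and thresholds (MFA enforcement, minimum TLS version, key rotation interval) are hypothetical examples, not a standard:

```python
# Hypothetical security baseline -- adapt keys and thresholds to your own policy.
REQUIRED_MIN_TLS = "1.2"
REQUIRED_KEY_ROTATION_DAYS = 90

def check_config(config: dict) -> list:
    """Return a list of findings where the config drifts from the baseline."""
    findings = []
    if not config.get("mfa_enforced", False):
        findings.append("MFA is not enforced")
    # Lexicographic comparison works for single-digit TLS versions ("1.0".."1.3").
    if config.get("min_tls_version", "1.0") < REQUIRED_MIN_TLS:
        findings.append("TLS version below " + REQUIRED_MIN_TLS)
    if config.get("api_keys_rotated_days", 10**9) > REQUIRED_KEY_ROTATION_DAYS:
        findings.append("API keys rotated less often than every "
                        + str(REQUIRED_KEY_ROTATION_DAYS) + " days")
    return findings
```

Running such a check in CI or on a schedule turns "conformance to security standards" from a one-off audit into a continuous control, which is the point the expert is making about configuration drift.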

AI phishing is a growing concern, with cybercriminals targeting both users and companies. Tools for proactive testing are available, but users should also learn the practices of companies like OpenAI, which manages payments through official links only; any request arriving outside those channels should be treated as a red flag.

The Bottom Line

Cybercriminals are both leveraging AI to develop cyberattacks and targeting AI companies themselves, since millions of users’ data is stored on their services. AI is expected to be a heavily targeted industry in 2025, with more posts promoting alleged AI user data leaks likely to follow. Users and businesses alike should act accordingly.
