
AI may unmask anonymous internet users
NewsBytes | March 10, 2026 9:42 PM CST



A recent study has raised concerns over the ability of generative artificial intelligence (AI) to unmask anonymous social media users.

The research, conducted by AI experts Simon Lermen and Daniel Paleka, shows that large language models (LLMs) can effectively match anonymous online profiles with real-world identities by analyzing seemingly innocuous information shared across platforms.

This development poses a major threat to digital privacy and security.


AI can match anonymous users with real-world identities
Attack strategy


The study highlights how the technology behind platforms like ChatGPT has made advanced privacy attacks much cheaper.

In their experiment, the researchers fed anonymous accounts into an AI model and asked it to scrape all available online information.

In one fictional example, the model successfully matched an anonymous user who had shared details about their school struggles and dog walks with a real-world identity, demonstrating its potential for de-anonymization.


The skill level needed to carry out attacks is lower
Accessibility


The study also points out that the skill level needed to carry out these sophisticated attacks is now much lower. All a hacker needs is an internet connection and access to publicly available language models.

However, Lermen and Paleka caution that while LLMs can de-anonymize records in many cases, there are instances where there isn't enough information for the model to make a match or where potential matches could be too numerous.


Potential misuse scenarios
Misuse scenarios


The researchers also highlight several potential misuse scenarios for these new AI capabilities.

These include governments using AI to monitor anonymous dissidents and activists, or hackers launching highly personalized scams.

Lermen warns that publicly available information can be easily misused for scams like spear-phishing, where a hacker impersonates a trusted friend to trick victims into clicking malicious links.


What can be done to mitigate risks?
Mitigation strategies


To tackle this growing threat, Lermen suggests social media platforms should take proactive measures by limiting data access.

This could involve implementing rate limits on user data downloads, detecting automated scraping bots, and restricting bulk data exports.
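The first of these measures, rate-limiting data access, is a standard technique. As a minimal illustrative sketch (not drawn from the study; the class and method names here are hypothetical), a sliding-window limiter caps how many profile requests a single client can make within a given time span:

```python
import time
from typing import Optional

class SlidingWindowRateLimiter:
    """Allow at most `max_requests` per client within any
    `window_seconds` span; excess requests are rejected."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._history: dict[str, list[float]] = {}

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        window_start = now - self.window_seconds
        # Keep only timestamps still inside the current window.
        recent = [t for t in self._history.get(client_id, []) if t > window_start]
        if len(recent) >= self.max_requests:
            self._history[client_id] = recent
            return False  # over the limit: reject or throttle this request
        recent.append(now)
        self._history[client_id] = recent
        return True
```

A scraper hammering the API would quickly exhaust its allowance, while ordinary browsing stays under the cap; the same bookkeeping also yields a signal (clients that are repeatedly rejected) for the bot-detection measure mentioned above.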

He also stresses that individual users should be more cautious about the personal information they share online, as it is the raw material these attacks rely on.

