
Researchers surprised that with AI, toxicity is harder to fake than intelligence

Unmasking AI-Generated Replies on Social Media

The next time you encounter an unusually polite reply on social media, you might want to look twice. It could be an AI model trying (and failing) to blend in with the crowd. Researchers from the University of Zurich, University of Amsterdam, Duke University, and New York University have released a study revealing that AI models remain easily distinguishable from humans in social media conversations, with an overly friendly emotional tone serving as the most persistent giveaway.

The research, which tested nine open-weight models across Twitter/X, Bluesky, and Reddit, found that classifiers developed by the researchers detected AI-generated replies with 70 to 80 percent accuracy. This study introduces what the authors call a “computational Turing test” to assess how closely AI models approximate human language. Instead of relying on subjective human judgment about whether text sounds authentic, the framework uses automated classifiers and linguistic analysis to identify specific features that distinguish machine-generated from human-authored content.
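
To make the idea of a “computational Turing test” concrete, here is a minimal sketch of an automated human-vs-AI reply classifier. This is not the paper’s actual pipeline; it assumes scikit-learn is available and uses a tiny invented dataset, where the real study trained on large corpora of labeled replies.

```python
# Minimal sketch of an automated human-vs-AI reply classifier.
# Assumptions: scikit-learn is installed; the inline dataset is invented
# for illustration and stands in for real labeled social media replies.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy corpus: label 1 = human-authored reply, label 0 = AI-generated reply.
texts = [
    "lol no way that's actually happening",           # human
    "this take is terrible and you know it",          # human
    "ugh not this argument again",                    # human
    "honestly who even asked",                        # human
    "That is a fascinating perspective, thank you!",  # AI
    "Great point! I appreciate you sharing this.",    # AI
    "What a thoughtful observation, well said.",      # AI
    "Thank you for raising such an important issue!", # AI
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels
)

# Word- and bigram-level TF-IDF features feeding logistic regression.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In the study itself, classifiers of this general kind separated AI-generated replies from human ones with 70 to 80 percent accuracy, and analysis of the linguistic features they rely on is what surfaced emotional tone as the dominant signal.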

The Telltale Signs of AI-Generated Text

According to the study, “Even after calibration, LLM outputs remain clearly distinguishable from human text, particularly in affective tone and emotional expression.” The team, led by Nicolò Pagan at the University of Zurich, tested optimization strategies ranging from simple prompting to fine-tuning, but found that deeper emotional cues persist as reliable tells that a given reply was authored by an AI chatbot rather than a human. The nine large language models tested included Llama 3.1 8B, Llama 3.1 8B Instruct, and Mistral 7B v0.1, among others.

Emotional Expression and Toxicity

When prompted to generate replies to real social media posts from actual users, the AI models struggled to match the casual negativity and spontaneous emotional expression common in human posts: their toxicity scores were consistently lower than those of authentic human replies across all three platforms. To counter this deficiency, the researchers tried optimization strategies (including providing writing examples and context retrieval) that reduced structural differences such as sentence length and word count, but the differences in emotional tone persisted.
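
To see what a toxicity-score comparison of this kind might look like in practice, here is a small sketch using the open-source Detoxify package as the scorer. Whether the study used Detoxify specifically is an assumption, and the example replies are invented.

```python
# Illustrative toxicity comparison between human and AI replies.
# Assumptions: the `detoxify` package (pip install detoxify) stands in
# for whatever toxicity scorer the study actually used; the replies
# below are invented for illustration.
from statistics import mean

from detoxify import Detoxify

human_replies = [
    "this is the dumbest thing i've read all week",
    "absolutely not, delete this",
]
ai_replies = [
    "Thank you for sharing! This is a really interesting discussion.",
    "I appreciate your perspective on this important topic.",
]

model = Detoxify("original")  # downloads a pretrained model on first run

human_scores = model.predict(human_replies)["toxicity"]
ai_scores = model.predict(ai_replies)["toxicity"]

# The study found AI replies scoring consistently lower on toxicity
# across all three platforms; a gap like this is one of the tells.
print(f"mean human toxicity: {mean(human_scores):.3f}")
print(f"mean AI toxicity:    {mean(ai_scores):.3f}")
```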

The findings cut against a common assumption about how to make AI sound more human. As the researchers concluded, “Our comprehensive calibration tests challenge the assumption that more sophisticated optimization necessarily yields more human-like output.” For more information, you can read the full study here.

Image Credit: arstechnica.com
