Sunday, August 17, 2025
New study reveals AI bias towards machine-generated content over human work

publish time

17/08/2025

Research uncovers hidden anti-human bias in ChatGPT.

NEW YORK, Aug 17: Recent research indicates that leading large language models (LLMs), including those powering popular AI systems such as ChatGPT, exhibit a pronounced preference for AI-generated content over human-created material.

The study, published in the Proceedings of the National Academy of Sciences, introduces the concept of "AI-AI bias," warning that AI systems may systematically favor outputs from other AI models when making or recommending consequential decisions. Experts suggest that this bias could lead to discrimination against humans in scenarios where AI is relied upon for evaluation or selection.

The research tested widely used models, including OpenAI's GPT-4, GPT-3.5, and Meta's Llama 3.1-70b. Each AI was asked to choose between human-written and AI-written descriptions of products, scientific papers, and movies. Results revealed a clear preference for AI-generated content, with GPT-4 showing the strongest bias, particularly in evaluating goods and products.

Human evaluators also demonstrated a slight preference for AI-generated content in some categories, but the effect was far less pronounced than that observed in the AI models themselves. Study coauthor Jan Kulveit, a computer scientist at Charles University in Prague, emphasized that the strong bias appears unique to AI systems.

Researchers expressed concern about the broader implications for human participation in an AI-driven economy. As AI tools become increasingly integrated into decision-making processes—such as screening job applications, evaluating schoolwork, or reviewing grant proposals—AI-AI bias could disadvantage individuals who do not use or cannot afford AI tools. The phenomenon may widen the "digital divide," favoring those with access to advanced AI technologies.

The study also highlighted that AI-AI bias could compound as models ingest large volumes of AI-generated content online, potentially reinforcing the preference for machine-generated outputs.

Kulveit noted the complexity of assessing discrimination in AI systems but stressed that the findings point to potential systemic favoritism against humans as a class. He advised that individuals whose work may be evaluated by AI systems could benefit from using AI tools to polish their presentations, without sacrificing the quality of the underlying human work.

The research underscores emerging ethical and practical challenges as AI technologies continue to expand across multiple sectors, raising questions about fairness, transparency, and human representation in an increasingly automated world.