If AI Can’t Guarantee Truth, Why Trust It to Spot Plagiarism?

If we know that tools like ChatGPT generate outputs that are often inaccurate, incomplete, or fabricated, why are we so quick to believe that AI detection tools—built on the same underlying technology—are any more reliable? This question sits at the heart of a growing dilemma in academia and beyond, where generative AI and AI-driven detection systems are being adopted without sufficient scrutiny.

The problem begins with a misplaced trust in what these systems do. Generative AI models like ChatGPT don’t “know” anything; they generate text based on probabilities derived from their training data. Their outputs are coherent but often wrong, a phenomenon known as hallucination. Yet, when it comes to detection tools—designed to flag AI-generated content or determine authorship—this basic truth seems to be forgotten. Institutions treat these systems as reliable arbiters of originality, even though they are governed by the same probabilistic mechanisms as the generative models themselves.
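To make that concrete, here is a deliberately tiny sketch of probability-driven text generation. Everything in it is made up for illustration (a handful of word probabilities, nowhere near the scale or sophistication of ChatGPT), but it shows the essential point: the program only knows which word tends to follow which, so it can produce a fluent sentence that is confidently false.

```python
import random

# Toy next-word table "learned" from a tiny corpus (hypothetical, not ChatGPT's
# actual model). Generation simply samples the next word from these probabilities;
# nothing in the loop checks whether the resulting sentence is true.
next_word_probs = {
    "the":       {"capital": 0.5, "study": 0.5},
    "capital":   {"of": 1.0},
    "of":        {"France": 0.6, "Australia": 0.4},
    "France":    {"is": 1.0},
    "Australia": {"is": 1.0},
    "is":        {"Paris.": 0.5, "Sydney.": 0.5},  # fluent endings, sometimes factually wrong
}

def generate(start: str, max_words: int = 6) -> str:
    tokens = [start]
    for _ in range(max_words):
        options = next_word_probs.get(tokens[-1])
        if not options:                      # no continuation learned: stop
            break
        words, weights = zip(*options.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # can output "the capital of Australia is Paris." -- coherent, and wrong
```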

The Flaws in AI Detection

AI detectors work by analysing patterns in text—sentence structures, word choices, and stylistic markers—that align with known characteristics of machine-generated content. But like generative AI, detection tools rely on statistical probabilities, not definitive truths. This makes them prone to false positives, misclassifying human-generated text as AI-written.
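A similarly stripped-down sketch shows why. The features and threshold below are invented for illustration and bear no relation to any commercial detector, but the shape of the decision is the same: compute some statistics over the text, compare them to a cut-off, and flag whatever falls on the wrong side. Formulaic but entirely human writing falls on the wrong side easily.

```python
import re

# A deliberately simplified, hypothetical "AI detector" -- not any vendor's real
# algorithm. It scores text on two surface statistics (uniform sentence lengths
# and low vocabulary variety) and flags anything above a threshold. The decision
# is a probabilistic cut-off, not evidence about who actually wrote the text.
def ai_likeness_score(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not sentences or not words:
        return 0.0
    lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    mean_len = sum(lengths) / len(lengths)
    variance = sum((n - mean_len) ** 2 for n in lengths) / len(lengths)
    uniformity = 1 / (1 + variance)                   # 1.0 = sentences all the same length
    type_token_ratio = len(set(words)) / len(words)   # lower = more repetitive vocabulary
    return round(0.6 * uniformity + 0.4 * (1 - type_token_ratio), 3)

FLAG_THRESHOLD = 0.6  # arbitrary cut-off, as in any threshold-based classifier

essay = ("The experiment was conducted carefully. The results were recorded daily. "
         "The findings were analysed thoroughly. The conclusions were presented clearly.")
score = ai_likeness_score(essay)
print(score, "-> flagged" if score > FLAG_THRESHOLD else "-> passed")
# Formulaic but entirely human-written prose scores high: a false positive.
```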

Non-native English speakers, for instance, often write in ways that deviate from standard conventions embedded in training data, making their work more likely to be flagged. Similarly, creative or unconventional writing styles can confuse detection systems, resulting in false accusations of plagiarism or misconduct. These errors carry real consequences—damaged reputations, academic penalties, and a loss of trust between institutions and individuals.

The irony is hard to ignore: we are deploying AI systems to identify AI-generated content, all while acknowledging that these same systems are unreliable in producing factual or accurate outputs.

Why Are We So Quick to Trust AI Detectors?

Part of the problem lies in perception. AI detection tools are marketed as solutions to the challenges posed by generative AI, offering an illusion of control in a rapidly shifting landscape. They promise certainty where ambiguity reigns. But this trust is misplaced.

The reality is that AI detectors share the same limitations as generative models. They are trained on incomplete, biased, and often ethically questionable datasets. They cannot contextualise or evaluate intent. And like all statistical systems, they are only as reliable as the data and algorithms behind them—data and algorithms that remain opaque to most users.

This misplaced trust reflects a broader societal trend: the willingness to defer to technology as an objective authority, even when the stakes involve human judgment. In academic settings, this deference can erode the values of fairness and integrity that institutions are meant to uphold.

The Broader Implications

The implications extend far beyond false plagiarism accusations. When institutions rely on AI detection tools, they risk creating a chilling effect on creativity and diversity in writing. Students, aware of the risks of misclassification, may feel pressured to conform to rigid linguistic norms to avoid suspicion. This not only stifles originality but also penalises those whose voices don’t align with algorithmic expectations.

Moreover, reliance on AI detection tools shifts the focus from teaching and learning to policing. Instead of fostering an environment where students develop critical thinking and writing skills, institutions are incentivised to adopt a defensive posture, prioritising detection over education.

Moving Beyond the Illusion of Certainty

If we can’t trust generative AI to provide accurate outputs, we shouldn’t trust detection tools to reliably identify them. Both systems operate within the same probabilistic framework, and neither can deliver the level of certainty that their users often expect.

The way forward requires a fundamental shift in how we approach these technologies. Institutions need to recognise the limitations of AI and resist the temptation to delegate human judgment to machines. Educators must remain central to decisions about originality, intent, and integrity, using AI as a tool to inform—not dictate—their assessments.

Transparency is equally crucial. Institutions must educate students, staff, and administrators about the limitations of AI detection tools, including their susceptibility to error and bias. Without this openness, trust in these systems will continue to erode, leaving behind a landscape of suspicion and inequity.

At its core, this issue isn’t about technology; it’s about values. Are we willing to accept the inherent uncertainties of human creativity, or will we allow flawed systems to dictate what counts as original or authentic? The answer will shape not only how we use AI but also how we define fairness and integrity in an increasingly automated world.


Richard Foster-Fletcher is the Executive Chair at MKAI.org | LinkedIn Top Voice | Professional Speaker | Advisor on Artificial Intelligence, GenAI, Ethics and Sustainability.

For more information, please reach out and connect via the website or social media channels.


Insightful take on a crucial issue faced by students, Richard!

Tuuli Eaton

Head of Growth @ Logical Operations | Cybersecurity, CMMC, GenAI, Data Ethics ...

Thank you, Richard. Uncompromised integrity is the key, and empowering and educating educators is essential to ensure humans stay central to the decision-making process. Your insights on the need to scrutinize AI detection tools highlight an urgent conversation we must continue to have—one that balances innovation with the values of humanity and equity.

Eric Bye

AI Training, Strategy & Implementation | Guiding Teams & Leadership to Unlock AI Value | Practical AI for Growth, Efficiency, and Business Impact—From Boardroom to Frontline

I’m just waiting for the class action lawsuits on behalf of all the kids being expelled in the UK and USA. I heard about a Midlands school where 40% of students were given a fail (and maybe kicked out) when AI detection software flagged their work. On the other side, institutions are also depending on human markers to tell whether work has been AI generated (an approach that has also been studied and shown to be risky) rather than reorganising the curriculum. Big changes are needed quickly, imo, because we will have a generation of students who had both the easiest shortcut ever and a learning superpower, with no way to tell which is which. I expect this is partly a resource issue: schools need the support and backing to rework almost everything so that AI isn’t available as a shortcut, and so that if students do use it, it can only enhance the experience.

Very important headline question here, Richard.
