Unmasking AI-journalism: "The Crucial Role of AI Detection Classifiers in Upholding Quality Journalism"


News organizations have a responsibility to distinguish AI-generated news articles from texts written entirely by journalists.

The public should be able to rely on trusted news sources to form its opinion. Journalists are accountable for their reporting and follow strict ethical guidelines. They can be questioned, and they work together in a newsroom. News managers and editors-in-chief control the quality of the news production process to avoid mistakes and bias.

Our readers should know exactly where their information comes from; we therefore have an obligation of complete transparency. To be well informed, our audience can expect research, a skeptical approach, serious reflection, quality and depth from seasoned journalists. AI, by contrast, generates news based on patterns and structures learned during training, and it lacks the ability to understand the subject it is writing about.


The more transparent we are about our editorial process and the ethical rules we apply, and the more clearly we distinguish AI-created content from our own journalistic work, the more trust we can expect from our audience.

 

AI can help us establish this trust by detecting the difference between human-written and computer-generated texts. It does so using sophisticated models called AI detection classifiers.

 

AI detection classifiers examine various features like word usage, grammar, style, and tone to differentiate between AI-generated and human-written text.
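To make this concrete, here is a minimal sketch, illustrative only and not a real detector, of the kind of simple stylistic features such a classifier might extract from a text:

```python
import re

def extract_style_features(text):
    """Toy stylistic features a detector might look at (illustrative only)."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # longer words can hint at vocabulary choice
        "avg_word_length": sum(len(w) for w in words) / len(words),
        # vocabulary richness: unique words divided by total words
        "type_token_ratio": len(set(words)) / len(words),
        # average number of words per sentence
        "avg_sentence_length": len(words) / len(sentences),
    }

features = extract_style_features("The cat sat. The cat sat again.")
```

A real classifier would feed many such features, plus the deeper signals described below, into a trained model rather than inspecting them one by one.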

 

Behind the scenes, every word is represented as a list of numbers called a "vector". AI works with vectors because it does not understand words themselves. These word vectors are called "embeddings".
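As a toy illustration (the vectors below are invented; real embeddings are learned during training and have hundreds of dimensions), words become vectors, and words with similar meanings end up with similar vectors:

```python
# Toy illustration: each word maps to a numeric vector (its "embedding").
# These 3-dimensional vectors are hand-made for the example.
embeddings = {
    "reporter":   [0.8, 0.1, 0.3],
    "journalist": [0.7, 0.2, 0.3],  # close to "reporter": similar meaning
    "banana":     [0.1, 0.9, 0.6],  # far from both: unrelated meaning
}

def cosine_similarity(a, b):
    """Similarity between two vectors: values near 1.0 mean very similar."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

sim_related = cosine_similarity(embeddings["reporter"], embeddings["journalist"])
sim_unrelated = cosine_similarity(embeddings["reporter"], embeddings["banana"])
```

Here `sim_related` comes out much higher than `sim_unrelated`, which is exactly how a model "knows" that a reporter is more like a journalist than like a banana.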

 

Since AI models are essentially trained to predict the next word, we can measure how predictable a given text is to such a model. Humans do not write by predicting the next word, so human texts tend to surprise the model; this surprise is what we call "perplexity".

 

We can ask an AI model to measure the perplexity of a text. The lower the perplexity, the more likely the text is AI-generated. It is like a police officer using a fingerprint match to identify a suspect.
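The standard way to compute perplexity is from the probabilities the model assigned to each word: the exponential of the average negative log-probability. A minimal sketch, with invented probabilities standing in for a real language model's output:

```python
import math

def perplexity(token_probabilities):
    """Perplexity from the probability a language model assigned to each
    token: exp of the average negative log-probability. Confident
    predictions (high probabilities) give low perplexity."""
    avg_neg_log = -sum(math.log(p) for p in token_probabilities) / len(token_probabilities)
    return math.exp(avg_neg_log)

# The model is very confident about predictable, machine-like text...
ai_like = perplexity([0.9, 0.8, 0.95, 0.85])
# ...and repeatedly surprised by quirky human phrasing.
human_like = perplexity([0.3, 0.05, 0.6, 0.1])
```

With these numbers, `ai_like` lands close to 1 (very predictable) while `human_like` is several times higher, which is the pattern a detector looks for.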

 

In addition to this, AI models usually produce output that is relatively uniform in length and complexity, while human writing typically shows far more variability. This is what we call the level of "burstiness" in a text. An AI detector can flag a text as potentially AI-created if it finds a low level of burstiness, that is, little variation in sentence length, structure and tempo.
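One simple way to quantify burstiness (a sketch; real detectors use more refined measures) is the variation in sentence length, here the standard deviation of sentence lengths divided by their mean:

```python
import re
import statistics

def burstiness(text):
    """Sentence-length variation: standard deviation divided by the mean.
    Low values = uniform sentences (one AI-text signal); high = 'bursty'."""
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    return statistics.pstdev(lengths) / statistics.mean(lengths)

uniform = "The report was filed. The editor read it. The piece was published."
varied = ("Stop. The editor, weary after a long night shift, "
          "read every single word twice. Then silence.")
```

The uniform text scores zero (every sentence is the same length), while the varied one scores much higher.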

 

An AI detector that analyzes a text will give it a score. The higher the score, the more likely the text is AI-generated rather than human-written.
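How such signals combine into one score varies by detector; as a purely hypothetical example (the thresholds below are invented), low perplexity and low burstiness could each push the score toward 1:

```python
def ai_score(perplexity, burstiness, ppl_threshold=20.0, burst_threshold=0.5):
    """Toy scoring rule (hypothetical thresholds): the lower the perplexity
    and the lower the burstiness, the higher the 'likely AI' score in [0, 1]."""
    ppl_signal = max(0.0, 1.0 - perplexity / ppl_threshold)
    burst_signal = max(0.0, 1.0 - burstiness / burst_threshold)
    return (ppl_signal + burst_signal) / 2

score_ai_like = ai_score(perplexity=5.0, burstiness=0.1)      # low on both signals
score_human_like = ai_score(perplexity=40.0, burstiness=0.9)  # high on both
```

Commercial detectors train these weights and thresholds on large labeled datasets instead of hand-picking them, but the principle is the same.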

 

Accurately detecting AI-created information is vital for maintaining news credibility and for revealing AI-generated misinformation and copyright infringements.

 

AI text detection can thus be very helpful in keeping our independent journalistic standards high and in fulfilling our mission of objectively informing public opinion in our democratic society.


More articles by Patrick Lacroix

Insights from the community

Others also viewed

Explore topics