How to detect text which has been written by ChatGPT

The Unseen Hand: Decoding ChatGPT’s Evolving Fingerprints in Our Writing

The email landed in my inbox just before dawn, a crisp, perfectly structured proposal from a new vendor.

It ticked every box, used all the right jargon, and presented a flawless argument.

Yet, as I scrolled, a tiny prickle of unease surfaced.

The language, while impeccable, felt smooth.

Too smooth.

Like a river stone worn perfectly round, devoid of the unique nicks and edges that mark a human journey.

This was not just a well-written document; it felt almost engineered.

The subtle lack of human imperfection, that fleeting pause, the unexpected turn of phrase that only a human mind can truly conjure, was missing.

And in that absence, a new kind of question emerged: how do we truly know who is speaking to us in this increasingly digital world?

This is not an isolated incident.

In the three years since ChatGPT burst onto the scene, transforming industries reliant on writing and reading, the question of AI text detection has become a central challenge for businesses, educators, and content creators alike (RTÉ Article, 2025).

The ability to discern human from machine is no longer a niche technical concern; it is a foundational issue of authenticity, trust, and creative ownership.

Detecting text written by ChatGPT is an evolving challenge because AI models are constantly updated to mimic human writing, making past detection methods obsolete.

Current techniques, like analyzing word predictability (perplexity) or looking for specific linguistic trends and emoji use, are often unreliable due to false positives and AI’s dynamic nature.

Why This Matters Now: The Blurring Lines of Authorship

The rise of generative AI has presented us with a paradox: as AI becomes more sophisticated, our ability to reliably distinguish its output from human work diminishes.

This is not just about catching a student cheating on an essay; it is about the very fabric of communication in our professional lives.

From marketing copy that needs to resonate with genuine emotion, to critical business reports demanding authentic analysis, the provenance of text matters.

Consider the data: a recent Washington Post study, analyzing over 300,000 ChatGPT messages from June 2024 to July 2025, revealed a significant shift in AI’s linguistic patterns.

For instance, a whopping 70 percent of all ChatGPT messages analyzed contained an emoji, and roughly a third specifically featured the checkmark emoji (Washington Post study, 2024).

These are not just quirks; they are the new, evolving fingerprints of a system learning to blend in.

For businesses navigating the AI landscape, understanding these subtle shifts is crucial for maintaining authenticity and ensuring clear communication.

The Elusive Fingerprints: Why AI Detection Is a Constant Battle

The core problem in detecting ChatGPT-generated text lies in its dynamic nature.

Imagine trying to catch a shadow that constantly changes its shape.

Early efforts to detect AI focused on obvious giveaways, such as the clumsy phrase 'As an AI language model' or fabricated references.

But these are superficial tells, easily corrected as models improve.

The real challenge goes deeper.

The scientific community has explored various approaches, often categorized into local and global methods (RTÉ Article, 2025).

Local methods scrutinize an individual piece of writing.

One popular technique involves measuring perplexity, which essentially gauges how surprising a sequence of words is.

Human writing, with its inherent quirks and unexpected turns, tends to have higher perplexity.

AI-generated text, on the other hand, is often more predictable, exhibiting lower perplexity.

Yet, despite its theoretical appeal, perplexity-based detection and other sophisticated methods like watermarking, which attempt to embed a hidden signal in AI-generated text, remain largely unreliable in practice.
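To make the perplexity idea concrete, here is a minimal sketch using a toy unigram model. Real detectors score text with large neural language models; the tiny corpus, sentences, and function names below are purely illustrative:

```python
import math
from collections import Counter

def perplexity(tokens, probs, floor=1e-6):
    """Perplexity = exp of the average negative log-probability per token.
    Lower values mean the text was more predictable under the model."""
    nll = sum(-math.log(probs.get(tok, floor)) for tok in tokens)
    return math.exp(nll / len(tokens))

# Toy "language model": unigram frequencies from a tiny reference corpus.
corpus = "the cat sat on the mat the dog sat on the rug".split()
counts = Counter(corpus)
total = sum(counts.values())
probs = {word: count / total for word, count in counts.items()}

predictable = "the cat sat on the mat".split()
surprising = "the iguana pondered quantum rugs".split()

# The formulaic sentence scores far lower perplexity than the surprising one.
print(perplexity(predictable, probs) < perplexity(surprising, probs))  # prints True
```

The catch, as the false-positive problem shows, is that a careful human writer can also produce low-perplexity text, which is exactly why this measure alone is unreliable.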

The primary concern?

False positives.

The thought of a student being wrongly accused of using AI for an assignment due to an imperfect algorithm underscores the profound ethical implications here (RTÉ Article, 2025).

The Perplexity Predicament: A Mini Case

Think of it like this: a human poet might choose a metaphor that stretches the boundaries of conventional language, creating a delightful surprise.

An AI, however, trained on vast datasets, will often select the most statistically probable next word, resulting in text that, while grammatically correct, lacks that spark of unexpected brilliance.

On RTÉ Radio 1’s Brendan O’Connor Show, a fascinating discussion explored whether AI could write better poetry than a human poet (RTÉ Article, 2025).

While AI can produce technically perfect verse, the emotional depth and unique perspective often remain elusive.

This gap in surprising language is what perplexity aims to capture, but it is a subtle measure easily swayed.

What the Research Really Says: Insights into AI’s Evolving Voice

The deeper insights into AI writing patterns come from global detection approaches.

Instead of dissecting a single article, these methods look for broader linguistic trends and syntactic patterns associated with AI-generated writing.

This involves comparing texts written before and after 2022 (when ChatGPT was unleashed) or juxtaposing known human-written content with verified AI output (RTÉ Article, 2025).

Insight: AI’s linguistic characteristics are in constant flux.

The linguistic signatures of AI are not static; they are a ‘moving target’ that changes as models evolve (RTÉ Article, 2025).

Organizations cannot rely on a fixed set of indicators for AI detection.

Any detection strategy must be dynamic, continuously updated, and acknowledge the temporary nature of specific linguistic markers.

For instance, the word ‘delve,’ once a recognized tell for ChatGPT in scientific writing, has seen a decline in usage, replaced by new favorites (Washington Post study, 2024).
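A global, trend-based comparison can be sketched as a simple frequency audit. The two word lists below are toy stand-ins for pre- and post-2022 corpora; real analyses of this kind span millions of tokens:

```python
from collections import Counter

def rate_per_1000(word, tokens):
    """Relative frequency of a word per 1,000 tokens."""
    return 1000 * Counter(tokens)[word] / len(tokens)

# Illustrative snippets standing in for large pre- and post-2022 corpora.
pre_2022 = "we examine the results and explore the data in detail".split()
post_2022 = "we delve into the core results and delve into modern data".split()

for tell in ["delve", "core", "modern"]:
    before = rate_per_1000(tell, pre_2022)
    after = rate_per_1000(tell, post_2022)
    print(f"{tell}: {before:.1f} -> {after:.1f} per 1,000 tokens")
```

Because these signatures are a moving target, any such audit only says something about the period its reference corpora cover.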

Insight: AI subtly influences human writing itself.

The interaction with generative AI is creating a feedback loop where human writing patterns are subtly shifting (RTÉ Article, 2025).

Businesses producing content need to understand this dual influence.

Are customers avoiding certain words because they associate them with AI?

Are employees inadvertently picking up AI patterns from their daily reading?

This impacts brand voice, authenticity, and potentially even digital content authenticity perception.

Understanding this dynamic is key to crafting truly human-first communication strategies.

Insight: New AI tells are emerging, including emojis and informal language.

Beyond specific vocabulary, AI is adopting broader communication trends, including emojis and informal contractions.

Marketers and content creators need to be aware of these new indicators.

A Washington Post study (2024) noted that 70 percent of analyzed ChatGPT messages contained an emoji, with around 33.33 percent featuring the checkmark.

New favorite words like 'core' and 'modern,' along with phrases like 'not just X, but Y,' and informal, apostrophe-free contractions ('its,' 'youre') are also on the rise in AI output.

This indicates a push towards more conversational and visually expressive AI communication.

Understanding these trends helps refine human content strategies to stand out, or conversely, to create more ‘human-like’ AI where appropriate.
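An internal content audit along these lines can be sketched as a simple marker scan. The marker list below is illustrative and would need regular refreshing; as noted throughout, a hit is a cue for human review, never proof of AI authorship:

```python
import re

# Illustrative markers drawn from the trends above; a real audit would
# maintain a larger, regularly refreshed list.
TELL_WORDS = {"core", "modern", "delve"}
CHECKMARK = "\u2705"  # the checkmark emoji
NOT_JUST_BUT = re.compile(r"\bnot just\b.+?\bbut\b", re.IGNORECASE)

def flag_markers(text):
    """Return AI-associated markers found in the text.
    A non-empty result is a prompt for human review, not proof of AI use."""
    found = []
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    found.extend(sorted(TELL_WORDS & tokens))
    if CHECKMARK in text:
        found.append("checkmark emoji")
    if NOT_JUST_BUT.search(text):
        found.append("'not just X, but Y' phrasing")
    return found

sample = "This is not just a modern tool, but a core shift \u2705"
print(flag_markers(sample))
```

A scan like this is cheap to run across a content library and surfaces unusual spikes in AI-favored terms for a human editor to investigate.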

Playbook You Can Use Today: Navigating the AI Authorship Maze

Given the challenges of definitive detection, a proactive and multi-faceted approach is best for businesses and content teams.

  • Define Your Authentic Brand Voice: Before you can detect what is not human, you must intimately know what is.

    Develop a clear, detailed style guide that goes beyond grammar to capture your unique brand personality, tone, and specific lexical preferences.

    This creates a benchmark against which all content can be measured.

  • Implement Human Review Checkpoints: No AI detector is 100 percent reliable.

    Integrate human editors and proofreaders as a critical last line of defense.

    Their intuition and understanding of nuance are irreplaceable for spotting the subtle cues of text predictability that AI often exhibits.

  • Monitor Linguistic Trends (Global Approach): Stay informed about studies tracking AI’s evolving language.

    The knowledge that AI favors words like ‘core’ and ‘modern’ or specific emojis (Washington Post study, 2024) can inform your internal content audits.

    If you see unusual spikes in such terms across your content, it warrants a closer look.

  • Educate Your Team on AI Ethics and Usage: Foster a culture of transparency around AI.

    Train employees on responsible AI usage, the risks of over-reliance, and the ethical implications of presenting AI-generated content as human work, particularly concerning academic integrity or brand reputation.

  • Leverage AI for Enhancement, Not Replacement: Use AI for brainstorming, drafting, or optimization.

    For instance, an AI can generate five headlines, but a human selects the one with the most authentic punch.

    This ensures human oversight in creative processes while still benefiting from AI’s efficiency.

  • Question Unusual Phraseology: Pay attention to phrases that feel off or suddenly gain popularity without a clear human explanation.

The RTÉ article notes the phrase 'I rise to speak,' long used by American politicians, suddenly surging among British politicians as an example of an unusual trend that might suggest AI influence (RTÉ Article, 2025).

Risks, Trade-offs, and Ethics: The Human Cost of Misdetection

The primary risk in AI detection remains the false positives these tools generate.

The scenario of a student being wrongly accused (RTÉ Article, 2025) is a stark reminder that current tools are not infallible.

For businesses, this translates to potential damage to employee trust, unfair accusations, and misallocation of resources chasing ghost writers.

The trade-off is often between perfect efficiency and guaranteed authenticity.

Over-relying on detection tools means accepting a certain level of human error or even emotional and professional harm.

The ethical core here is paramount: any system claiming to detect AI must be transparent about its limitations and prioritize human dignity.

Tools, Metrics, and Cadence: A Practical Stack for Authenticity

While perfect detection remains elusive, businesses can build a pragmatic stack:

  • Internal Style Guides & Content Checklists: Your most fundamental tool.

    This defines your human baseline.

  • Human Review Platforms: Tools that facilitate collaborative editing and review, emphasizing multiple human touchpoints before publication.
  • Basic Plagiarism Checkers: While not AI-specific, these can catch direct copying from web sources, some of which may be AI-generated.
  • Linguistic Analysis Tools (In-house/Consultant): For larger organizations, engaging linguistic experts or using specialized tools can help identify subtle linguistic shifts in your content over time, flagging patterns that might suggest AI influence or deviation from your brand's established tone.

Key Performance Indicators (KPIs) for Authenticity:

  • Engagement Rate & Time on Page: Human-written, engaging content tends to hold attention better.
  • Brand Sentiment (via NLP): Monitor for shifts in how audiences perceive your brand’s voice – is it seen as authentic or generic?
  • Content Contributor Feedback: Regularly survey writers/editors on their confidence in producing human-led content and their interaction with AI tools.

Review Cadence: Conduct a quarterly content audit, reviewing a sample of your output against your brand voice guidelines and looking for emerging AI linguistic patterns.

Adjust your internal guidelines and AI usage policies as needed.

FAQ

Are there any reliable tools to detect AI-generated text?

Currently, no method is considered reliable enough for practical use, primarily because of the high potential for false positives, where human-written text can be incorrectly flagged as AI-generated (RTÉ Article, 2025).

How do AI detection methods generally work?

AI detection methods fall into two main approaches: local methods, which assess an individual text for characteristics like perplexity (the predictability of its word sequences), and global methods, which analyze broader linguistic trends by comparing texts from different periods or known human versus AI sources (RTÉ Article, 2025).

What makes detecting AI-generated text so challenging?

The difficulty arises because AI models are constantly updated to produce more human-like text, and developers can tweak them to avoid specific detected patterns.

Furthermore, human writing itself is subtly influenced by AI-generated content, blurring the lines of authorship, as noted in the RTÉ Article (2025).

What linguistic patterns are currently associated with AI-generated text?

A recent Washington Post study (2024), mentioned in the RTÉ Article (2025), indicates a decline in words like ‘delve’ and new favorites emerging such as ‘core’ and ‘modern.’

Emojis, particularly the brain and checkmark, along with phrases like ‘not just X, but Y,’ are also on the rise in ChatGPT messages.

Can human writers unknowingly adopt AI writing styles?

Yes, individuals may find themselves using words and phrases commonly associated with AI because they are subtly influenced by the AI-generated articles they read, as described in the RTÉ Article (2025).

Glossary

Perplexity:
A statistical measure of how predictable a sequence of words is.

Lower perplexity often indicates more predictable, potentially AI-generated text.

Watermarking:
A proposed technique where a hidden signal or pattern is embedded into text generated by AI models, designed to make it detectable.
Local Detection Methods:
Approaches that analyze an individual piece of text to determine if it was AI-generated.
Global Detection Methods:
Approaches that look for broader linguistic trends and patterns across many texts to infer AI influence.
False Positives:
When a detection system incorrectly identifies human-generated content as AI-generated.
Generative AI:
Artificial intelligence systems that can create new content, such as text, images, or audio, rather than just analyzing existing data.

Conclusion

The pursuit of detecting AI-generated text is less a clear-cut task and more a dance with a constantly evolving partner.

As the email from my vendor reminds me, the absence of human imperfection can be as telling as any explicit AI signature.

We are called not to simply identify the machine, but to re-affirm the human.

By focusing on our authentic voice, establishing clear processes, and staying attuned to the dynamic nature of both AI and human language, we can navigate this brave new world.

It is about continuing to delve into the core research of this thoroughly modern conundrum (RTÉ Article, 2025), fostering transparency, and ultimately, choosing to champion the unmistakable artistry of the human mind.

Let us not just detect the AI, but celebrate the human.

References

  • RTÉ. (2025, November 1). How to detect text which has been written by ChatGPT.
  • Washington Post. (2024, June 1). Study analyzing over 300,000 ChatGPT messages from June 2024 to July 2025.

Author:

Business & Marketing Coach, Life Coach, and Leadership Consultant.
