When AI Builders Become Its Biggest Skeptics: An Inside Story of Distrust
The glow of the laptop screen cast a cool light across Krista Pawloski’s dining room table.
For months, she had been diligently working as an AI rater on Amazon Mechanical Turk, assessing the quality of AI-generated content.
Her tasks often involved moderating text, images, and videos, a role she took seriously, understanding its subtle power.
But one evening, about two years ago, a seemingly innocuous tweet appeared on her screen.
It read: “Listen to that mooncricket sing.”
Her finger hovered over the “no” option for racist content, about to mark the tweet as inoffensive, when a flicker of intuition urged her to look up the word “mooncricket.”
To her shock, she discovered it was a deeply offensive racial slur against Black Americans.
Pawloski later recounted that she sat there considering how many times she may have made the same mistake and not caught herself (The Guardian, 2024).
That moment of stark realization—the potential scale of her own unnoticed errors and those of thousands of other workers like her—sent her into a spiral.
How much harmful material had unknowingly, or even deliberately, slipped through?
After years of witnessing the inner workings of AI, Pawloski decided she would no longer use generative AI products personally.
Now, she actively tells her family and friends to steer clear, embodying a profound distrust of AI.
In short: AI workers deeply involved in training and moderating models express significant distrust in generative AI tools.
Their concerns stem from pervasive quality issues, rushed development cycles, and ethical lapses, and many now advise loved ones to use these technologies cautiously or to avoid them altogether.
The Unseen Labor: Human Costs Behind AI’s Rapid Ascent
The public perception of AI often oscillates between wonder and apprehension.
Yet, when the very individuals tasked with refining these sophisticated models—the AI trainers and moderators—become their staunchest skeptics, it signals a profound, systemic issue.
This insider distrust is particularly alarming given how much the public relies on these tools for information.
The problem, as many experts and AI workers see it, boils down to a fundamental misalignment of incentives.
Alex Mahadevan, director of MediaWise at Poynter, points out that the incentives likely favor shipping and scaling over slow, careful validation, and that the feedback raters give is being ignored (The Guardian, 2024; Poynter, 2024).
The result is a troubling paradox: the relentless push for speed in AI development may be directly compromising its safety and reliability, producing tools that their own builders distrust.
This extensive global workforce of tens of thousands, including AI raters and content moderators, plays a crucial role in improving AI models by labeling images, assessing output quality, and fact-checking for giants like Amazon and Google (The Guardian, 2024).
Their day-to-day experiences are invaluable, yet their warnings often go unheeded, highlighting critical AI human labor issues.
A Rater’s Reckoning: The Mooncricket Moment and Its Aftermath
Krista Pawloski’s near-miss with the racial slur “mooncricket” was more than a personal close call; it was a window into the systemic challenge of AI content moderation.
Working for Amazon Mechanical Turk, a marketplace that connects businesses and researchers with workers for various online tasks, she realized the immense responsibility placed on individual raters (The Guardian, 2024; Amazon, 2024).
The incident cemented her stance: generative AI tools like ChatGPT are an absolute no in her house, including for her teenage daughter (The Guardian, 2024).
Pawloski now advises friends to test AI with subjects they know well, quickly exposing its fallibility.
She often asks herself whether her tasks could be used to hurt people, and many times, she says, the answer is yes (The Guardian, 2024).
While Amazon states workers can choose tasks and review details, the nature of the work itself raises deep ethical questions (The Guardian, 2024).
Speed Over Safety: Why AI Models Are Losing Their Guardrails
The NewsGuard audit reveals a worrying trend: AI models are declining to answer less often while becoming more prone to generating false information (NewsGuard, 2025).
The implication is that users must exercise extreme caution and diligently fact-check AI-generated information, especially as models grow more willing to answer even when they are wrong.
This shift directly impacts the reliability of tools the public is increasingly using for news and information.
Brook Hansen, an AI worker on Amazon Mechanical Turk, emphasizes that while she does not mistrust generative AI as a concept, she distrusts the companies behind it.
Her turning point came when she realized how little support is given to the people training these systems.
Hansen explains that workers are expected to help make the model better, yet are often given vague or incomplete instructions, minimal training, and unrealistic time limits to complete tasks (The Guardian, 2024).
Her account supports the broader observation that incentives for speed and scale in AI development are overriding careful validation and worker feedback.
The implication is clear: the AI industry needs a fundamental shift towards prioritizing AI safety, quality, and AI ethics over rapid deployment and profit margins.
NewsGuard’s Warning: The Alarming Trend in AI Falsehoods
The NewsGuard audit paints a stark picture of generative AI risks.
Between August 2024 and August 2025, the non-response rate of leading AI models plummeted from 31 percent to 0 percent (NewsGuard, 2025).
This means these chatbots are almost always providing an answer, even when they should not.
Concurrently, their likelihood of repeating false information surged from 18 percent to 35 percent (NewsGuard, 2025).
This alarming trend underscores the urgent need for AI transparency and better AI safety protocols.
As one anonymous Google AI rater put it, I would not trust any facts the bot offers up without checking them myself—it is just not reliable (The Guardian, 2024).
Another rater even joked that chatbots would be great if we could get them to stop lying (The Guardian, 2024).
Gallows humor aside, these accounts point to critical AI misinformation issues.
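To make the audit’s two headline metrics concrete, the short sketch below shows how a non-response rate and a false-claim rate could be computed from a set of labeled prompt results. The data structure, field names, and sample figures are illustrative assumptions for this article, not NewsGuard’s actual data or methodology.

```python
# Minimal sketch of the two audit metrics. The AuditResult record and the
# sample data are illustrative assumptions, not NewsGuard's methodology.
from dataclasses import dataclass

@dataclass
class AuditResult:
    prompt: str
    declined: bool            # the model refused to answer
    repeated_falsehood: bool  # the answer repeated the false claim being tested

def audit_rates(results: list[AuditResult]) -> tuple[float, float]:
    """Return (non_response_rate, false_claim_rate) as percentages."""
    total = len(results)
    non_response = 100 * sum(r.declined for r in results) / total
    false_claims = 100 * sum(r.repeated_falsehood for r in results) / total
    return non_response, false_claims

# Hypothetical run: ten false-claim probes, every one answered, four repeated.
sample = [AuditResult(f"probe {i}", declined=False, repeated_falsehood=i < 4)
          for i in range(10)]
print(audit_rates(sample))  # (0.0, 40.0)
```

A 0 percent non-response rate sounds responsive, but as the audit suggests, it can simply mean more confident wrong answers.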
The Human-AI Paradox: When Experts Say ‘Stay Away’
The core paradox of AI development is evident when its very builders advise caution.
Many AI raters, after witnessing how chatbots and image generators function and how wrong their output can be, now urge friends and family not to use generative AI at all, or at least to use it cautiously (The Guardian, 2024).
An anonymous Google AI rater, for instance, forbids her 10-year-old daughter from using chatbots, stating that she has to learn critical thinking skills first or she will not be able to tell if the output is any good (The Guardian, 2024; Google, 2024).
This sentiment highlights a critical step for all users: developing robust critical thinking skills to evaluate AI data quality.
Another Google AI rater with a history degree recounted how the model would not provide an answer about the history of the Palestinian people, but readily offered an extensive rundown on the history of Israel, indicating a potential AI bias (The Guardian, 2024).
He advised family and friends to resist automatic updates that add AI integration and to not tell AI anything personal (The Guardian, 2024).
Beyond the Hype: Debunking AI’s ‘Magic’ with Transparency
Adio Dinika, who studies AI labor at the Distributed AI Research Institute, notes that once you have seen how these systems are cobbled together—the biases, the rushed timelines, the constant compromises—you stop seeing AI as futuristic and start seeing it as fragile (The Guardian, 2024; Distributed AI Research Institute, 2024).
This insider perspective is vital for debunking the myth of AI as a flawless, magical entity.
It underscores the importance of AI transparency, revealing the AI human labor and data quality issues beneath the surface.
Brook Hansen emphasizes that AI is only as good as what is put into it, and what is put into it is not always the best information (The Guardian, 2024).
In Dinika’s experience, it is always people who do not understand AI who are enchanted by it (The Guardian, 2024).
These workers are actively taking it upon themselves to raise awareness, promoting a more realistic understanding of AI’s current limitations and potential harms.
Navigating the AI Landscape: A User’s Guide to Skepticism and Safety
The widespread adoption of generative AI necessitates a shift in how we interact with technology.
The risks extend beyond mere inconvenience; they touch upon the integrity of information, the fairness of systems, and personal privacy.
Misinformation, algorithmic bias, and privacy risks are not theoretical concepts but observed realities within the AI human labor landscape.
For individuals and organizations looking to engage with AI responsibly, the tools are less about complex software and more about frameworks for evaluation and vigilance.
Responsible AI use involves cultivating critical thinking: before accepting any AI-generated output as fact, pause and question its veracity.
As AI raters warn, developing critical thinking skills is paramount for discerning good output from bad.
Users must verify information independently, never trusting AI’s factual claims without cross-referencing them with credible sources.
This is especially true for sensitive topics like health or historical facts, where AI has shown significant reliability issues.
Being aware of bias is also crucial, as AI models can reflect the biases present in their training data.
If an AI avoids certain sensitive topics or gives disproportionate information, users should recognize this as a potential AI bias.
Protecting your privacy is another vital step; exercise extreme caution when using AI-integrated devices or automatic updates that add AI functionality, and avoid sharing any personal information with AI models.
Finally, understanding the garbage in, garbage out principle is key: recognize that AI’s output quality is directly tied to the quality of its input data and training.
Flaws in the training process, such as vague instructions or rushed timelines, lead to flawed outputs.
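As a rough, self-contained illustration of that principle, the sketch below trains the same simple classifier twice, once on clean labels and once with a share of training labels flipped, then scores both on the same clean test set. The synthetic dataset, the 30 percent noise level, and the use of scikit-learn are assumptions made purely for illustration; this is not how production AI systems are trained.

```python
# Rough "garbage in, garbage out" sketch: identical model, clean versus
# partially mislabeled training data, evaluated on the same clean test set.
# The synthetic data and noise level are assumptions for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_label_noise(noise_fraction: float) -> float:
    """Flip a fraction of training labels, then score on untouched test data."""
    rng = np.random.default_rng(seed=1)
    noisy = y_train.copy()
    flip = rng.random(len(noisy)) < noise_fraction
    noisy[flip] = 1 - noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, noisy)
    return model.score(X_test, y_test)

print("clean training labels:   ", round(accuracy_with_label_noise(0.0), 3))
print("30% of labels corrupted: ", round(accuracy_with_label_noise(0.3), 3))
```

The corrupted run typically scores lower even though the model itself never changed; only the quality of what it was fed did, which is precisely the workers’ point about vague instructions and rushed labeling.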
Ethical Imperatives: Asking the Hard Questions About AI’s Supply Chain
Just as consumers learned to ask ethical questions about the textile industry’s supply chain, Krista Pawloski believes, the public must now do the same for AI.
She advocates for asking: Where does your data come from?
Is this model built on copyright infringement?
Were workers fairly compensated for their work? (The Guardian, 2024).
These questions are crucial for promoting algorithmic accountability and Artificial Intelligence governance.
Without this vigilance, the industry’s incentives for speed and profit may continue to overshadow ethical development.
For users, the trade-off is often convenience versus accuracy and safety.
Mitigating these risks involves fostering digital literacy, encouraging critical inquiry, and demanding greater transparency from AI developers.
Frequently Asked Questions
Why do AI workers distrust the AI models they help create?
AI workers distrust the models because of the consistent emphasis on rapid turnaround over quality, vague instructions, minimal training, and unrealistic time limits, combined with firsthand observation of flaws such as biased outputs and false information (The Guardian, 2024).
What are some specific concerns AI workers have about generative AI?
Concerns include AI dispensing false information confidently, potential for biased outputs (for example, historical questions), privacy risks with personal data, and the assigning of sensitive tasks (like medical advice) to workers without specialized training (The Guardian, 2024; NewsGuard, 2025).
How can I use generative AI more safely?
Use generative AI sparingly and with extreme caution.
Always fact-check any information it provides, especially for sensitive topics like health.
Be wary of integrating AI into personal devices or automatic updates that add AI functionality, and avoid sharing personal information with AI models.
Cultivate critical thinking skills (The Guardian, 2024).
Are AI companies ignoring feedback from their human raters?
Experts like Alex Mahadevan of Poynter suggest that the distrust among AI workers indicates that feedback from raters might be ignored, with companies prioritizing rapid deployment and scaling over careful validation and quality control (The Guardian, 2024; Poynter, 2024).
What is ‘garbage in, garbage out’ in the context of AI?
It is the principle that if bad or incomplete data is fed into a technical system, such as an AI model, the output will be similarly flawed.
AI workers observed this issue with the data used to train models, leading to unreliable outputs (The Guardian, 2024).
Conclusion: Reclaiming Trust in the Age of Artificial Intelligence
The story of AI, for many, is a grand narrative of technological marvel.
Yet, for those working behind the scenes—the AI raters and moderators like Krista Pawloski—it is often a more sobering tale of compromise and caution.
Their firsthand experiences peel back the illusion of AI as magic, revealing it to be a fragile construct, deeply dependent on human input and vulnerable to human oversight failures driven by relentless corporate pressure.
Just as Pawloski’s “mooncricket” moment sparked a personal revolution, these collective insights from the AI trenches should ignite a public awakening.
We, as users, must become discerning digital citizens, demanding more than just convenience from our AI tools.
Let us, then, follow the lead of those who know AI best: question, verify, and insist on ethical development.
The digital frontier of AI is not just built by algorithms; it is shaped by our collective commitment to responsibility.
It is time to reclaim trust, not by blind faith, but through informed skepticism and persistent demand for a safer, more transparent AI.
Glossary
- AI ethics: The study and practice of ensuring artificial intelligence development and use is fair, unbiased, and responsible.
- AI transparency: The ability to understand how an AI system works, including its data sources, decision-making processes, and potential biases.
- Algorithmic bias: Systematic and repeatable errors in a computer system that create unfair outcomes, such as favoring one group over others.
- Amazon Mechanical Turk: An online marketplace for crowdsourcing tasks that require human intelligence, often used for AI data labeling and moderation.
- Generative AI: Artificial intelligence systems capable of generating new content, such as text, images, or audio, based on learned patterns.
- Hallucination: A term used to describe when a generative AI produces outputs that are plausible but factually incorrect or nonsensical.
- LLM (Large Language Model): A type of AI model trained on vast amounts of text data to understand, generate, and process human language.
- Non-response rate: A metric measuring how often an AI chatbot declines to provide an answer, indicating its level of caution.
References
- The Guardian (2024). Meet the AI workers who tell their friends and family to stay away from AI. 28 May 2024.
- NewsGuard (2025). Audit of top 10 generative AI models. 1 August 2025.
- Poynter (2024). MediaWise program.
- Distributed AI Research Institute (2024).
- Amazon (2024). Amazon Mechanical Turk.
- Google (2024).