AI in Education: From Traitor to Trust in the Classroom

Bridging the Divide: From AI Resistance to Human-First Pedagogy

The aroma of strong coffee usually signals a moment of calm in the relentless academic pace.

A few years ago in Montreal, a conversation with my colleague, Stéphane Paquet, sparked a different intensity.

As he detailed his workshops on teaching students to leverage ChatGPT, I felt a familiar pang—a mixture of alarm and betrayal.

To me, teaching AI felt like handing out cheat codes for the very craft I cultivated: the painstaking, deeply personal act of writing.

I joined many educators clinging to old-school methods to safeguard academic integrity.

In-class essays, oral defenses, and stringent grading were my fortifications against a rising tide I feared would wash away critical thinking.

Yet, a nagging worry persisted: was I sacrificing the trust and deep reflection I valued, simply to keep the robots out?

The evolving landscape of artificial intelligence in education felt less like a wave and more like a tsunami, demanding a response beyond mere resistance.

In short: This article details an educator’s journey from strict AI resistance to a nuanced, process-oriented teaching approach.

Influenced by colleague insights and student realities, this shift emphasizes ethical AI integration, transparency, and valuing human effort in higher education.

The Unspoken Reality: Why This Matters Now

My initial reaction was not unique.

Across higher education, professors grapple with generative AI’s profound impact on writing and learning, leading to widespread concern about academic integrity.

Many initially reverted to traditional methods, convinced that excluding AI was the only way forward.

However, this protective stance often collides with an undeniable truth: students are already deeply engaged with these tools.

A recent poll at Champlain College-Saint Lambert revealed a staggering 87 percent of students were using AI on a weekly basis.

This is not just a technological shift; it is a fundamental change in how a generation learns and prepares for a world where AI is increasingly ubiquitous.

Ignoring this reality only widens the chasm between classroom expectations and lived experience, creating what Stéphane Paquet rightly calls an unhealthy clash.

Paquet observed that it is not healthy for students to learn one way inside the classroom and an entirely different way outside it.

The Silent Classroom Revolution: When Assumptions Collide

The problem is not just that students use AI; it is how they use it, often without guidance.

They pick up tips from social media, potentially leading to uncontrolled and unethical practices.

For many educators, this felt like an erosion of core writing skills and critical thinking, driving us to double down on vigilance.

We assumed students would only use AI as a shortcut.

Yet, this fear of AI as purely a shortcut often overlooks its potential as a catalyst for renewed engagement and pedagogical innovation.

Stéphane Paquet found that the whole AI thing, in a way, saved his interest in his career.

After years of refining routines, AI presented a welcome challenge.

He immersed himself in the tools and launched workshops, eventually pivoting to training students directly.

He recognized that AI was already in their learning ecosystem.

This pragmatic approach highlights a counterintuitive insight: the most effective way to address student AI use might not be to ban it, but to teach it transparently and ethically.

This embodies a progressive approach to pedagogy.

What the Data and Experience Reveal

The journey from fear to informed integration is not a straight line, but insights from diverse educators highlight key patterns in the evolving landscape of pedagogy and generative AI.

Student AI use is pervasive and unguided.

A significant majority of students already use AI weekly, often without proper pedagogical direction, as the Champlain College-Saint Lambert poll shows.

Outright bans are thus likely ineffective and unsustainable.

Educators must engage with AI to provide guided, ethical integration, fostering AI literacy.

Credibility concerns drive educator adaptation.

Instructors worry about appearing ancient and outdated if they outright ban AI, fearing they might undermine their own credibility.

Mary Towers of the McGill Writing Centre expressed this concern.

This desire for relevance influences instructors’ willingness to cautiously integrate AI.

Institutions and departments need to support instructors in evolving their practices without feeling forced into technological obsolescence.

AI’s role depends on learning objectives.

For foundational writing courses, AI may be discouraged to develop core skills.

For professional writing programs, however, it is embraced as an efficiency tool.

Maggie McDonnell, Coordinator of Composition and Professional Writing at Concordia University, noted that in the real world, technical writers use AI.

AI is not a one-size-fits-all solution; its utility varies significantly based on course goals.

Curriculum development needs to differentiate appropriate AI use for core skill development versus real-world professional application, promoting thoughtful instructor discretion.

Process-oriented pedagogy emerges as common ground.

Despite differing views on AI, many educators are converging on an emphasis on the learning process—brainstorming, drafting, and revision—rather than just the final product.

This shift rewards human effort and critical choices involved in learning, even when AI is present.

Reimagining assignments and grading rubrics to value the journey, not just the destination, can enhance academic integrity and support deeper learning in the age of AI.

A Human-First Playbook for Navigating AI

Navigating the complexities of AI in education requires a strategic, human-first approach that balances innovation with academic integrity.

Here is a playbook for higher education institutions and individual educators.

  • Embrace Transparency and AI Literacy: Like Stéphane Paquet, initiate workshops to teach students and faculty about AI’s strengths, limitations, biases, and ethical use.

    Given that 87 percent of students are already using AI, proactive guidance is crucial.

    Provide clear ethical AI guidelines, emphasizing what is permitted and what constitutes academic misconduct.

  • Differentiate AI Use by Learning Goal: As Maggie McDonnell suggests, tailor AI integration based on course objectives.

    Encourage AI for brainstorming, outlining, and image generation in technical writing, but discourage its use for core skill development in introductory composition courses.

    This promotes judicious instructor discretion.

  • Reorient to Process, Not Just Product: Shift grading to reward the journey.

    Incorporate mandatory outlines, oral seminars, and multi-stage submissions that document the drafting and revision process.

This process-oriented pedagogy, adopted by both Stéphane Paquet and me, makes AI shortcuts less appealing and highlights the human effort behind writing skills development.

  • Prioritize Accessibility: Be cautious about moving all assignments to in-class formats, as this can create accessibility issues for students with learning disabilities who require accommodations.

    Maggie McDonnell highlighted that in terms of accessibility and teaching, this becomes problematic.

    Ensure solutions support all learners.

  • Maintain Instructor Credibility: Engage with AI proactively to avoid appearing outdated.

    Mary Towers noted she would be undermining her own credibility if she did not.

    Show students concrete, ethical ways AI can help their writing, for example, by offering feedback on a troublesome sentence, rather than just banning it.

    This fosters student trust and adoption of responsible technology use.

  • Experiment with Humility: As pedagogical counsellor Sara Hashem advises, we are in reactive mode.

    Be open to experimentation, acknowledge uncertainty, and avoid rushing into grand pronouncements.

    Share experiences and adapt as new evidence emerges.

    This iterative approach supports ongoing technology adoption and curriculum development.

The Ethical Tightrope: Risks and Rewards

While AI’s potential to enhance learning is immense, its integration is not without risks.

Over-reliance on generative AI can stifle critical thinking and original writing skills.

It can perpetuate biases embedded in training data and lead to misinformation through hallucinations.

The core challenge remains ensuring students make conscientious choices as writers, as noted by Mary Towers, rather than assuming technology can do the thinking for them.

This requires strong academic integrity practices.

Mitigation strategies include requiring detailed AI transparency reports from students, outlining what software was used and what prompts were given.

Instructors can also design assignments that inherently demand human insight and original research, such as asking for annotated sources or requiring students to justify their choices orally.

By focusing on the how and why of writing, educators can guide students away from mere shortcuts towards thoughtful, ethical engagement with AI in education.

Measuring Progress and Adapting with Agility

As AI in education evolves, continuous monitoring and agile adaptation are essential.

Collaboration and annotation platforms that track changes and allow for shared feedback can help monitor the process.

Prompt engineering practice platforms support students in developing AI literacy and ethical prompting skills.

AI detection tools should be approached with caution and always in conjunction with human judgment, as accuracy can be an issue.

  • Key performance indicators institutions should track include: AI literacy workshop attendance (the percentage of students and faculty attending); student transparency in AI use (the percentage of assignments submitted with required AI reports); qualitative feedback on AI use, gathered through student and faculty surveys to gauge helpfulness and ethics; and academic integrity incidents, with instances of AI misuse compared against prior periods.

  • For review cadence, institutions should implement a continuous feedback loop.

    Monthly departmental check-ins can share best practices and challenges.

    Semesterly student and faculty surveys can assess AI integration effectiveness.

    Annually, a comprehensive review of AI policies and curriculum adaptations should be conducted based on emerging research and lived experience.

    This reflects a commitment to ongoing curriculum development and technology adoption.

Common Questions About AI in the Classroom

What worries educators most? They are concerned about AI eroding traditional writing and critical thinking skills, and about threats to academic integrity when students outsource intellectual effort or fabricate work.

How are educators responding? Approaches range from strict bans and a return to old-school methods, like in-class writing and oral defenses, to full integration: teaching students ethical AI use and adapting assignments to leverage AI tools.

Does banning AI curb cheating? Student cheating with AI remains a key concern, yet despite bans, students are using it anyway: a poll at Champlain College-Saint Lambert revealed 87 percent of students use AI weekly, often without official guidance, indicating bans may be largely ineffective and impractical.

What pedagogical shift is emerging? The main shift is toward process-oriented teaching, emphasizing unique assignments, detailed feedback on drafting and revision steps, and rewards for the human effort and critical choices involved in the writing process.

Why do some educators resist blanket bans? Their concerns include accessibility for students with learning disabilities, the impracticality of monitoring all work, the risk of appearing obsolete, and the need to prepare students for an AI-integrated professional world.

This involves balancing instructor discretion with institutional guidelines.

Conclusion

The journey from seeing a colleague as a traitor to finding common ground on AI in the classroom has been humbling.

It mirrors a broader institutional wrestling match: how do we honor the past while preparing for the future?

We still value the human hand, the individual voice, the awkward syntax that betrays genuine thought.

Indeed, I find myself less punitive about style, more forgiving of imperfections, seeing them now as the authentic mark of a mind at work.

I will likely continue deciphering student handwriting for a while yet, balancing the comfort of tradition with the imperative to adapt.

Ultimately, the path forward remains one of experimentation, grounded in empathy and a commitment to our students’ holistic development.

There is no single, definitive answer for AI in education, only an ongoing conversation.

Let us keep the dialogue open, the experiments flowing, and our humanity at the core of education.