Real AI Stories
🔧 Intermediate

AI in Education: The Professor Who Stopped Banning ChatGPT and Started Teaching With It

A composition professor redesigned her course from banning AI to requiring it with transparency. Plagiarism incidents dropped 80% while student writing improved.

TL;DR

  • A professor shifted from banning ChatGPT to requiring it with full transparency in under two years
  • Her new approach: require AI, require disclosure, assess the process, not just the product
  • Plagiarism/misconduct incidents dropped 80% while in-class writing quality improved
  • Best for: Educators wrestling with AI policy, institutions developing AI guidelines
  • Key insight: Fight mindless AI use, not AI use itself - make transparency mandatory

Redesigning courses to embrace AI with mandatory transparency produces better learning outcomes and fewer integrity issues than attempting to ban tools students will inevitably use.

Dr. Sarah Mitchell banned ChatGPT in January 2023.

By September 2024, her syllabus required students to use it.

This is the story of how she changed her mind.

The Panic Phase

When ChatGPT launched, Dr. Mitchell’s first reaction was horror.

She taught composition at a mid-sized state university. Writing was her life’s work. And now there was a machine that could produce B-grade essays in seconds?

“I felt like everything I’d built was about to collapse,” she recalls. “Why would anyone learn to write if AI could write for them?”

Her first policy was simple: No AI. Period. Zero tolerance.

She added Turnitin’s AI detection. She designed essay prompts about hyper-specific personal experiences. She considered going back to handwritten blue book exams.

It didn’t work.

The Arms Race

The problem with banning something ubiquitous: you can’t.

Students used ChatGPT anyway. Some got caught by detection software. Others were smart enough to rewrite AI drafts in their own voice.

Dr. Mitchell spent hours investigating suspicious essays. She held uncomfortable meetings with students who may or may not have cheated. She watched detection tools flag essays that looked perfectly human-written, and miss essays that felt obviously AI-generated.

“I was turning into a cop instead of a teacher,” she says. “And I was losing.”

Detection software accuracy was shaky at best. False positives created nightmares. False negatives let cheaters through. The stress of constant suspicion was exhausting - for her and her students.

One night, after a particularly frustrating grading session, she asked herself a question she’d been avoiding:

What if I’m fighting the wrong battle?

The Reframe

Dr. Mitchell started reading about how other educators were handling AI. She found a spectrum:

  • The Banners: “AI is cheating. Period.”
  • The Ignorers: “We’ll pretend it doesn’t exist.”
  • The Embracers: “Let’s teach students how to use it.”

She was firmly in camp one. But she kept encountering an argument that nagged at her:

Students will use AI in their careers. Shouldn’t we teach them to use it well?

Her students weren’t going to become novelists. Most were business majors, education majors, and nursing students who needed to write clearly for professional purposes. In their jobs, they’d have access to AI tools. They’d probably be expected to use them.

By banning AI entirely, was she preparing them for a world that no longer existed?

The Experiment

In fall 2024, Dr. Mitchell tried something new. She redesigned her composition course with three phases:

Phase 1 (Weeks 1-4): Writing Without AI

Students wrote essays the old-fashioned way. No ChatGPT, no Grammarly AI, nothing. She wanted them to have a baseline: What can YOU produce without assistance?

This phase established:

  • Each student’s natural voice
  • Each student’s baseline skills
  • A reference point for later comparison

Phase 2 (Weeks 5-10): Writing With AI

Now students were required to use AI. Every assignment had an AI component.

The twist: they couldn’t just generate and submit. They had to:

  • Show their AI prompts
  • Submit the AI’s output alongside their final draft
  • Write a reflection: “What did AI do well? What did you change and why?”

This forced students to critically evaluate AI output rather than blindly accept it.

Phase 3 (Weeks 11-15): Choosing When to Use AI

Students made their own decisions about AI use, with full transparency required. They had to justify their choices: “I used AI for outlining but not drafting because…” or “I wrote this without AI because…”

The goal: develop judgment about when AI helps versus hurts.

What She Discovered

The results surprised her.

Finding 1: AI exposed skill gaps more clearly

When students submitted AI drafts alongside their own, Dr. Mitchell could see exactly where they struggled. If the AI produced a clear thesis and the student couldn’t, that was a teaching moment - not a cheating incident.

“Before, I’d see a weak thesis and not know if the student didn’t understand or just rushed. Now I could say: ‘Look, the AI got this right. Let’s talk about why you couldn’t.’”

Finding 2: Reflection assignments revealed understanding

The “why did you change the AI’s output?” reflections were goldmines. Students who understood the material made sophisticated edits. Students who didn’t, couldn’t explain what they changed or why.

“You can’t reflect on what you don’t understand. The reflections became my new assessment of learning.”

Finding 3: Some students improved faster

Students who used AI iteratively - generating, critiquing, regenerating, refining - often improved their own writing more quickly than expected.

“They were essentially getting infinite feedback loops. Generate something, see what’s wrong, try to fix it, generate again. It’s like having a tireless writing tutor.”

Finding 4: The best writing was still human

This one was comforting. The essays that moved her, that had genuine insight and voice, were human-written. AI could produce competent B-grade prose. It couldn’t produce A-grade thinking.

“The floor rose. The ceiling didn’t. AI makes mediocre writing better but can’t touch excellent writing.”

The New Syllabus

Dr. Mitchell’s 2025 syllabus includes:

Required tools:

  • ChatGPT or Claude for brainstorming and feedback
  • Grammarly for grammar checking
  • NotebookLM for research organization

Required disclosure: Every assignment includes: “Describe your AI use: prompts, tools, and the percentage of text generated versus written.”

Assessment shift:

  • 20% traditional writing (no AI, in class)
  • 40% AI-assisted writing (with transparency requirements)
  • 20% AI critique assignments (identify and correct AI errors)
  • 20% reflection and process documentation
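
To make the weighting concrete, here is a minimal sketch of how a four-part split like this could be combined into a final grade. The weights mirror the breakdown above; the component names and example scores are hypothetical, for illustration only, and are not from Dr. Mitchell's actual gradebook.

```python
# Minimal sketch: combining the four assessment components described above.
# Weights come from the syllabus breakdown; all scores are hypothetical.

WEIGHTS = {
    "in_class_writing": 0.20,     # traditional writing, no AI, in class
    "ai_assisted_writing": 0.40,  # AI-assisted writing with transparency
    "ai_critique": 0.20,          # identifying and correcting AI errors
    "reflection_process": 0.20,   # reflection and process documentation
}

def final_grade(scores: dict[str, float]) -> float:
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

# Hypothetical student scores (not from the article):
example = {
    "in_class_writing": 78,
    "ai_assisted_writing": 88,
    "ai_critique": 92,
    "reflection_process": 85,
}

print(round(final_grade(example), 1))  # 86.2
```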

Sample assignment:

Ask ChatGPT to write a 500-word essay on [topic]. Then write a 500-word critique identifying: (1) factual errors, (2) logical weaknesses, (3) missing nuance, (4) improvements you would make. Finally, write your own 500-word version incorporating what you learned.

This assignment teaches critical thinking about AI output - a skill students will need for life.

The Backlash (and Response)

Not everyone agreed with her pivot. Some colleagues accused her of “giving up.” Some parents complained their kids were being “taught to cheat.”

Her response:

“My job isn’t to prepare students for 1990. It’s to prepare them for 2030. They will work with AI whether we like it or not. Teaching them to use it critically, ethically, and effectively is more valuable than pretending it doesn’t exist.”

She also points out: “Using a calculator doesn’t mean you don’t understand math. Using spell check doesn’t mean you can’t spell. Using AI for writing doesn’t mean you can’t write - unless you use it mindlessly. I’m teaching them not to use it mindlessly.”

The Results

After one year of the new curriculum:

  • Student essays showed clearer thesis statements overall
  • In-class writing (no AI) improved - students transferred skills
  • Plagiarism/AI misconduct incidents dropped 80%
  • Student evaluations mentioned “feeling trusted” and “learning to think”

One student wrote: “This is the first class where I feel like I’m actually learning to write, not just being punished for not knowing already.”

What Other Educators Can Learn

Dr. Mitchell’s advice for teachers wrestling with AI:

1. Fight the right battle

“Don’t fight AI use. Fight mindless AI use. That’s the actual enemy.”

2. Require transparency, not abstinence

“Hiding AI use is the problem. Transparent use is fine. Make disclosure mandatory and judgment-free.”

3. Assess process, not just product

“If you only grade the final essay, AI can game it. If you grade the process - drafts, prompts, reflections - you see real learning.”

4. Use AI yourself first

“Before judging students, use the tools. Understand what they can and can’t do. It changed my perspective completely.”

5. Remember the purpose

“The goal isn’t ‘students produce essays.’ The goal is ‘students develop thinking and communication skills.’ AI can be part of that if wielded right.”

The Bigger Shift

Dr. Mitchell now believes the AI moment is similar to the calculator moment in math education.

At first, calculators were banned because they’d “make students lazy.” Over time, education evolved. Now calculators are standard tools, and math education focuses on when to use them, when not to, and understanding what they’re doing.

“We’re in the ‘panic and ban’ phase with AI. In five years, we’ll be in the ‘thoughtful integration’ phase. I decided to skip ahead.”

She still assigns in-class writing without AI. She still values human voice and original thinking. She hasn’t abandoned standards - she’s adapted them.

“I’m still teaching writing,” she says. “I’m just doing it in a world where AI exists.”

FAQ

Should professors ban AI or allow it in coursework?

Neither extreme works well. Banning creates an unwinnable detection arms race and ignores workplace realities. Complete permission without structure leads to mindless use. The effective middle ground: require transparency about AI use and assess the learning process, not just the final product.

How do you prevent students from just submitting AI-generated work?

Require disclosure of prompts and AI outputs alongside final submissions. Add reflection assignments asking students to explain what they changed and why. Assess the process, not just the product. Students who don't understand the material can't meaningfully critique or reflect on AI output.

Do AI detection tools actually work?

Current AI detection tools have significant accuracy issues. False positives create nightmares for innocent students. False negatives let cheaters through. Many educators find the stress of constant suspicion damages the learning environment more than the cheating itself.

Does using AI for writing actually help students improve?

When used iteratively with reflection, yes. Students who generate, critique, and regenerate are essentially getting infinite feedback loops. The key is requiring them to articulate what's wrong with AI output and how to fix it - that reflection builds genuine understanding.

How should AI use be graded?

Consider splitting assessment: some weight on traditional writing without AI (establishing baseline skills), some on AI-assisted work with transparency requirements, some on AI critique assignments where students identify and correct AI errors, and some on process documentation and reflection.

Last updated: January 2026