We Know How to Fix AI Policies. Nobody Has Time to Do It.
AI policies in higher education aren’t failing because instructors don’t understand the problem. They’re failing because meaningful solutions demand a level of instructional design no one has the bandwidth for.
Two weeks ago I gave a workshop on creating assignment-specific AI policies for engineering courses. (Engineering provides especially clear examples, but the underlying issues show up across disciplines.) You can read a summary of that workshop here, and watch the recording here.
The workshop focused on how to move from blanket rules to assignment-specific decisions. In short, I mapped AI use at the assignment level based on three dimensions (sketched in code just after this list):
What kind of thinking the task targets (conceptual, procedural, design, or analysis)
Where students are in their skill development (novice → proficient)
What form of AI use aligns with the learning goal (prohibited, comparison, verification, or collaboration)
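To make that mapping concrete, here is a minimal sketch of the decision logic in Python. The enum names, the skill tiers, and the rules themselves are my illustrative assumptions for this post, not the workshop’s exact rubric:

```python
from dataclasses import dataclass
from enum import Enum

class TaskType(Enum):
    CONCEPTUAL = "conceptual"
    PROCEDURAL = "procedural"
    DESIGN = "design"
    ANALYSIS = "analysis"

class SkillLevel(Enum):
    NOVICE = 1
    DEVELOPING = 2
    PROFICIENT = 3

class AIUse(Enum):
    PROHIBITED = "prohibited"
    COMPARISON = "comparison only"
    VERIFICATION = "verification"
    COLLABORATION = "collaboration"

@dataclass
class Assignment:
    name: str
    task_type: TaskType
    skill_level: SkillLevel

def recommended_ai_use(a: Assignment) -> AIUse:
    """Illustrative decision table: novices practice without AI;
    as fluency grows, AI shifts from checking work to collaborating on it."""
    if a.skill_level is SkillLevel.NOVICE:
        return AIUse.PROHIBITED
    if a.skill_level is SkillLevel.DEVELOPING:
        # Procedural skills still need encoding, so AI is limited to
        # comparing a finished attempt against its output.
        if a.task_type is TaskType.PROCEDURAL:
            return AIUse.COMPARISON
        return AIUse.VERIFICATION
    # Proficient students can treat AI as a collaborator on open-ended work.
    if a.task_type in (TaskType.DESIGN, TaskType.ANALYSIS):
        return AIUse.COLLABORATION
    return AIUse.VERIFICATION

hw = Assignment("HW3, Problem 2: nodal analysis", TaskType.PROCEDURAL, SkillLevel.NOVICE)
print(recommended_ai_use(hw))  # -> AIUse.PROHIBITED
```

The code itself is trivial. The point is that every assignment needs a row in a table like this, and someone has to decide what goes in each one.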

Most faculty I talk to already know that blanket AI bans don’t work. Students use AI anyway—just without guidance. But moving toward adaptive policies that calibrate AI use based on skill level, task type, and learning objectives? That’s where things stall.
The Time Tax of Granular Thinking
Integrity-focused policies are cognitively cheap. You write one rule—“no AI on homework”—and you’re done. It’s simple, it’s clear, and it requires no additional course redesign.
Learning-focused policies are the opposite. They require you to map every single assignment:
What skill is this actually building?
Is this conceptual understanding, procedural execution, design synthesis, or debugging?
Where are students developmentally when they hit this task?
Should AI be prohibited, allowed only for comparison, used for verification, or embraced as a collaborative tool?
And even those decisions undersell the real workload. A single homework set might require labeling each problem by skill type, identifying which components are procedural versus conceptual, and deciding for each whether AI should be banned, comparison-only, or allowed for verification. Call it three judgment calls per problem: a ten-problem set means thirty decisions, every week, for every course you teach. None of it is complicated work, but it is time-consuming, and it piles onto prep, grading, research, and service.
The framework is rational. The activation energy is not.
And this isn’t just an engineering issue. Any discipline that depends on sequenced skill development—writing, coding, statistics, language learning, even studio arts—runs into the same problem.
The framework scales. Faculty bandwidth doesn’t.
The Procedural Content Paradox
The second problem runs deeper.
In procedural and computational content (e.g., circuit analysis, beam deflection, thermodynamics), students build understanding through repetition. The manual execution is what encodes patterns. Solving twenty beam-deflection problems develops the automaticity and error recognition that separate competent engineers from students who’ve only memorized formulas.
AI disrupts this in a very specific way: it lets students turn in perfect homework and still fail the exam because they never encoded the procedures.
But here’s the bind: industry expects graduates to use AI for routine calculations. We can’t prepare students for AI-saturated workplaces by banning the tools they’ll be expected to use professionally.
And this is pervasive across disciplines. Swap in debugging for a programming course, running analyses in a statistics class, or drafting in a writing course—the structure of the problem is the same. Whenever procedural fluency matters, AI creates the same tension:
Ban AI and students don’t learn to use the tools they’ll need.
Allow AI and students can bypass the very practice that builds competence.
Now layer on the realities of large course loads, office hours that don’t scale, and grading under time pressure. How do you thread that needle in a way that fits within a normal semester?
What Might Actually Work
I don’t think we need a brand-new idea. If AI erodes skill development outside class, then the only reliable place left to build those skills is inside class. And we already know what works there: active learning.
If students build procedural confidence in class—working through problems with immediate feedback and without AI—they develop the self-efficacy to attempt problems independently outside class. Active learning doesn’t stop students from using AI; it changes how they use it, from outsourcing the thinking to checking, comparing, and extending their own work.
In a sophomore circuits course, that might look like this (with a schematic sketch after the list):
In class: Students solve nodal-analysis problems by hand with worksheets or clicker questions; AI use is explicitly off-limits.
Early homework: Students attempt similar problems independently, then can use AI for comparison only—checking steps and identifying where their work diverged.
Later homework: Once they’ve shown procedural fluency, students can use AI for verification or computation, with the expectation that they still choose the method and check physical reasonableness.
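As a schematic, that staging can be written down as a simple policy table. The phase names and the “gate” for advancing are my illustrative assumptions about what evidence unlocks the next stage, not a tested policy:

```python
# Illustrative staging for one skill (say, nodal analysis).
# "gate" is the evidence a student shows before the next stage applies.
STAGES = [
    {"phase": "in class",       "ai_use": "prohibited",
     "gate": "solves guided problems by hand"},
    {"phase": "early homework", "ai_use": "comparison only",
     "gate": "attempts independently, then checks against AI"},
    {"phase": "later homework", "ai_use": "verification and computation",
     "gate": "chooses the method, checks physical reasonableness"},
]

def allowed_ai_use(phase: str) -> str:
    """Look up the AI use permitted at a given phase of the sequence."""
    for stage in STAGES:
        if stage["phase"] == phase:
            return stage["ai_use"]
    raise ValueError(f"unknown phase: {phase}")

print(allowed_ai_use("early homework"))  # -> comparison only
```

Writing the policy down this explicitly also makes it easy to hand to students, so the rules change with their fluency instead of surprising them.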
The same structure works across disciplines: protected practice first, then structured AI support, then more open AI use once fluency is established.
There’s a decade of evidence behind this. Freeman et al.’s 2014 meta-analysis in PNAS showed that students in traditional STEM lectures were 1.5 times more likely to fail than those in active learning environments. Faculty already know this. The problem is the hours it takes to redesign a course around it.
Which brings us back to the time tax.
Where This Goes Next
I don’t think there’s a single clean solution. But in an AI-saturated world, “assessment design” has to expand to include:
where students must work without AI to encode core skills,
where AI can support comparison and feedback,
and where AI becomes a legitimate collaborator—and how we assess students’ judgment in using it.
And while my examples here come from engineering, every discipline is now wrestling with versions of these same questions. Whatever frameworks we propose have to fit within the realities of faculty workload, not the ideal conditions we wish we had.
These are the barriers I’m seeing as I work with instructors on assignment-level AI policies. But your context may look different.
Where did you get stuck when trying to adapt your AI policies?
What patterns have actually worked for you—even small ones?
I’ll be exploring these issues in future posts, but I’d like those posts to be shaped by the challenges people are actually facing. If you’d like to follow along as I keep exploring this space, please subscribe below!

