Walk through any faculty corridor this autumn and you’ll hear the same quiet choreography: a lecturer lowering their voice to ask a colleague whether their department “has something official yet,” a program director thumbing a PDF labelled “draft—do not circulate,” a dean rehearsing the new line on plagiarism for an alumni breakfast. Artificial intelligence used to be the undergrad’s contraband flashlight—smuggled into seminar rooms, flicked on under the desk, deniability at the ready. Now the custodians of the syllabus are screwing in brighter bulbs. A new UNESCO snapshot of higher education says the quiet part out loud: roughly two-thirds of institutions either already have guidance on AI use or are developing it. Policy is finally catching up to behavior.
Like most academic transitions, the change arrived slowly and then all at once. If 2023 was the year of institutional prohibition, and 2024 the year of murmured tolerance, 2025 looks like the year of choreography—rules, toolkits, and a faint swagger that suggests universities may yet domesticate this unruly technology. The data help explain the urgency. In the UK, the share of students who say they’re using generative AI surged from 66% to 92% in a year. This isn’t a subculture; it’s the culture. University leaders can’t pretend they’re regulating an exception anymore; they’re designing around a default.
The story underneath the percentages is less about machines and more about legitimacy. For eighteen months, students have learned to be bilingual—sounding one way on paper and another in private message threads; practicing a new craft under the old surveillance regimes. Now administrators are publishing what amounts to a peace treaty: you can use these models here, you must cite them there; log what you did with them; don’t paste sensitive data; don’t let them think on your behalf in ways we still expect you to think. As someone who has watched dozens of universities do their best to retrofit nineteenth-century routines to twenty-first-century tools, I can attest: these short documents—three pages, sometimes five—carry the weight of a new settlement.
There’s also a legal shove behind the cultural nudge. The EU AI Act—that sprawling piece of Brussels engineering—tucks a deceptively simple mandate into Article 4: every provider and deployer of AI systems must ensure AI literacy among the people using them. Read that again with a registrar’s eyes. If your university allows staff to use AI to triage admissions queries, or academics to co-pilot literature reviews, or students to practice a viva with a tutor-bot, you aren’t just free to offer training; you’re obliged to. In policy terms, AI literacy has become the new lab safety. Gloves on, goggles down, logs kept.
From Catching to Designing
For months, institutions tried to catch AI use the way one screens for a fever—thermometers everywhere, warning signs on every surface, the fantasy of early detection. But detection at scale is brittle—as any department that chased “AI probability” scores down blind alleys will tell you. A better instinct is finally prevailing: design for transparent use and assess the process, not just the artifact. That means staged assignments whose intermediate drafts are part of the grade; oral defenses that test a student’s grasp of their own sources; annotated prompts and model logs attached as appendices; and viva-style spot checks for high-stakes submissions. In the UK, as the student adoption numbers spooked university leaders, the messaging shifted from “ban or detect” to “stress-test your assessments before they break in public.”
The bright spot of this season is the appearance of pragmatic, public exemplars. Erasmus University Rotterdam publishes no-nonsense usage guidelines—ask your faculty what’s allowed; map your choices to program-level rules; check the university’s running index of GenAI practices. It’s not grand theory, which is its virtue. VU Amsterdam is likewise writing in crisp ink: if the course doesn’t explicitly allow AI, assume original work; if AI is allowed, document how you used it; and by the way, students cannot be required to use tools the university hasn’t licensed—a small line with large implications for privacy, procurement, and equity. Policies like these don’t end the ambiguity, but they move it into daylight.
The Institutional Divide
Every university, if we’re honest, contains multiple universities. Engineering faculties will happily wire a model into their pipelines by lunchtime; law schools will hold a colloquium to define “originality” and break for the summer. EDUCAUSE’s 2025 AI Landscape Study names this for what it is: an institutional digital divide inside higher education itself. Strategy and leadership are maturing; policies and guidelines are proliferating; and yet the distribution of capability remains uneven—by discipline, by budget, by appetite for reputational risk. The institutions that prosper in this phase won’t be the ones with the thickest policies, but the ones that adapt their assessment and staff development at the pace of the tools.
There’s a broader European arc here too. Before anyone in Brussels drafted chapter numbers, universities were already drifting toward flexible learning, micro-credentials, and an expanded brief for “digital transformation.” EUA’s Trends 2024 reads like a prequel to the current moment: institutions grappling with pandemic legacies, war, demographic jolts, and technology moving faster than committee cycles. AI doesn’t cancel those forces; it amplifies them. When assessment changes, so do progression rules; when staff practice changes, so does quality assurance; when quality assurance changes, so do accreditors. The machinery is heavy but turning.
Literacy as Governance
Because AI tools are general-purpose, “AI literacy” has the same hand-wavy feel as “computer literacy” did in the 1990s. Universities have to make it tangible. The emerging pattern is sensible: baseline modules for students and staff (critical questioning, data hygiene, model limits); role-specific training (what a lab manager needs to know is different from what a library acquisitions lead needs); and auditable logs that demonstrate not just that training existed, but that it was completed and applied. The Commission’s own Q&A on the AI Act practically begs institutions to operationalize this—“providers and deployers… shall take measures to ensure… a sufficient level of AI literacy”—but it stops politely short of telling them how. That’s for ministries, rectors, and accreditation bodies to flesh out, and they’re starting to.
If this sounds bureaucratic, that’s because some of it is. But governance does real work here. Consider the simple rule VU articulated: students cannot be required to use unlicensed tools. It avoids a silent sorting effect where wealthier students can pay for premium copilots and everyone else gets a scuffed free tier. It forces institutions to take procurement seriously—privacy, data location, software transparency—rather than outsourcing those questions to the whims of startup pricing pages. It also protects staff; when the university licenses a tool, it shares responsibility for its behavior. That’s literacy’s quiet cousin: liability.
The Integrity Question Won’t Disappear—It Just Moves
Let’s dispense with a comforting illusion: no policy will restore 2019’s exam room. Students aren’t using AI because they’re faithless to learning; they’re using it because the tools are useful—to draft, to summarize, to outline what a week of reading might look like if their shift ended on time for once. The HEPI–Kortext numbers show the texture: students use AI to save time and improve quality, and a non-trivial share paste model text in whole or in part. The task for universities is to accept the usefulness without delegating the underlying cognition. That’s the point of process evidence—drafts, notes, citations, prompt appendices, oral defenses. You’re not trying to exile the machine; you’re asking the human to show their work.
The risk isn’t simply cheating. It’s over-fitting the self to the tool—what a Dutch sector report from SURF frames as the ethical hazards of hyper-personalized learning. Imagine assignments so perfectly tuned by a student’s algorithmic twin that the scrapes and surprises of learning—the friction that makes knowledge stick—get sanded away. Echo chambers become syllabi; performance curves smooth out; curiosity narrows. As segmentation deepens, the shared public square of the seminar risks thinning. If you care about universities as democratic spaces, not just credentialing factories, that’s a problem. Governance, again, is not a mood; it’s a floor.
What Good Guidance Looks Like
The best university AI policies I’ve read have the rhythm of an anti-panic manual. They are short; they describe allowed uses by assignment type; they require citation or a usage note (“I used a language model to outline sections two and three; I confirmed all references manually.”); they set data-handling rules (“don’t paste personal data; use the institution’s instance when available”); and they push assessment upstream into the design of learning rather than downstream into police work. They also accept a truth that took too long to say: blanket bans and “AI detection” crusades are high-friction dead ends. The point is not to catch; the point is to shape. The Dutch and the Dutch-adjacent are, as is their habit, out ahead: publish clear program-level stances; build staff playbooks; prototype, iterate, publish again. Erasmus’s living pages for staff are a model for how to keep the translation running as the tools change.
A University Is a Memory Device
When you strip away the compliance and the anxiety, you arrive at a quieter insight. Universities are memory devices—repositories of methods for making knowledge together. Every technology cycle tests whether those methods are elastic or brittle. In the early days of Wikipedia, faculty mistook a new public memory for a threat to private knowledge; then they learned to have students write the entries, sourcing them as meticulously as any essay, and the institution survived just fine. Generative AI is another memory challenge. It can be a stimulant for better drafts and a solvent for careful reading; it can be a rehearsal partner for arguments or a factory for plausible nonsense. The lesson of the last year is that institutions can hold both truths simultaneously, if they retool the rituals: teach students to interrogate outputs, compare sources, keep judgment where judgment belongs—with a human—while embracing the speed and reach of the tools. That is literacy, and more importantly, it’s culture.
The Uneven Future
The next year will bring new rifts. Some universities will move fast to license model access, spin up private instances, and integrate records of AI use into their learning-management systems; others will wait for sector bodies to hand down template paragraphs. Some will harden assessment with oral work and live coding; others will double down on proctoring software and wish the genie back into the lamp. Some will treat AI literacy like a box-check; the better ones will embed it in degree outcomes and staff promotion criteria. The EDUCAUSE study names the terrain, but each institution will draw its own map. If I were wagering reputation points, I’d bet on the places that do three things quickly: (1) publish program-level rules; (2) fund staff development with time, not just webinars; (3) redesign assessment to surface thinking, not just prose.
Beneath it all runs a simple, unfashionable idea: the university is at its best when it changes the conditions of attention. If AI lets you sprint to a first draft, spend that dividend on slow reading in seminar; if AI helps you outline, spend the saved hours in office hours; if AI pulls you toward efficiency, insist on one stubbornly inefficient intellectual pleasure per course—a long argument, a bad hypothesis rescued, a detour through a counterexample that the model didn’t predict. Guidance documents won’t say it this way, but that’s the heart of the thing. Make room for the parts of learning a model cannot automate: taste, judgment, courage, doubt.
The Bottom Line
- Policy is the floor, not the ceiling. Publish the rules; then get to the real work, which is capability and culture. UNESCO’s numbers show the sector moving—finally—from panic to choreography.
- Assess the process. If the artifact is part-machine, grade the human trail that produced it—drafts, notes, citations, and a few minutes of spoken explanation under mild pressure. The UK’s adoption surge makes this non-negotiable.
- Treat AI literacy like lab safety. Not inspirational posters—measurable training attached to roles, logged and auditable. That’s not just good practice; in Europe, it’s the law.
- Mind the divide. Don’t let policy ossify where capability is thin. Budget for staff practice, not just policy PDFs. EDUCAUSE has the receipts; don’t be the last to read them.
- Design for the human parts. SURF’s warning about hyper-personalization should haunt curriculum committees. Keep friction in the system; protect the shared square.
If this sounds like an argument for modesty, it is. The sector tried hubris first. It chased mirages of “AI detection,” whispered bans it couldn’t enforce, and treated students like adversaries. The better story is starting to appear in the footnotes of institutional websites: a policy page that speaks plain; a staff workshop that demonstrates how to cite a prompt; a course outline that requires two drafts and a viva; a procurement rule that protects students from predatory terms; a dean who says, with a smile and a deadline, “Show me how you learned.”
That’s how universities change: not with manifestos, but with habits. Watch the habits this year. They’re the new rulebook.
