Most mornings I pass a café in Alfama where students revise with laptops open, custard tarts within reach, and a chatbot blinking politely on the screen. This is 2025: AI isn’t a visitor in education anymore; it pays rent. The question I keep getting—over coffee, in faculty rooms, on trams—is simple: Is it helping us think, or quietly making thinking optional?
Short answer: both. Longer answer below—with links, dates, and receipts.
When AI teaches, it works. When it shortcuts, it backfires.
A carefully designed Socratic AI tutor—one that makes you show steps, nudges your reasoning, and withholds the final answer until you’ve tried—can outdo very respectable human-led active learning on first-pass tests. In a Harvard randomized trial published in Scientific Reports (Nature portfolio, June 2025), students learned more in less time and felt more engaged and motivated with a purpose-built AI tutor than in class.
Flip the design and the results flip with it. A large high-school field experiment (≈1,000 students) published in PNAS (e-pub June 25, 2025; issue July 2025) found that unfettered access to GPT-4 during practice gave a sugar-rush boost while the bot was available, then reduced later test performance once access was removed. A guard-railed “GPT Tutor” that prompted reasoning blunted the harm. Same model family, opposite outcome; interaction design is the pivot.
Zooming out, the adoption curve makes these effects consequential, not marginal: the HEPI–Kortext Student Generative AI Survey 2025 (UK) reports 92% of undergrads now use GenAI (up from 66% in 2024), and 88% have used it in assessed work—mostly to explain concepts or summarize readings. (Feb 26, 2025.)
Rule of thumb you can take to class: Tutor-mode good; answer-machine bad.
Universities are past bans; assessments are mutating.
On campus, AI is ordinary behavior across Europe; the real action is in assessment redesign. Sector coverage that accompanied the HEPI report urged universities to “stress-test” assessments—assume AI is in the room and reward process as well as product. (Feb 2025.)
The policy backdrop matters too. In the EU, obligations for general-purpose AI (GPAI) providers under the AI Act entered into application on Aug 2, 2025, with the Commission publishing Guidelines (July 18/31, 2025) and a voluntary GPAI Code of Practice (July 2025) to help vendors comply on transparency, copyright, and safety. Universities buying tools should now expect model documentation and update logs as standard.
Workplaces: enthusiasm, anxiety, and the trust tax of “bossware.”
Across the euro area, workers are more curious than fatalistic: an ECB survey analysis (blog, Mar 21, 2025) shows many already use AI, and most don’t expect imminent job loss, especially those who actually use the tools. The pattern is simple: familiarity reduces fear.
What does sour the mood: surveillance. A Chartered Management Institute survey reported by The Guardian (2025) found that about a third of UK employers now deploy “bossware” to monitor activity. The UK data regulator (ICO) reminds employers that monitoring must be necessary, proportionate, and transparent; see its Monitoring at Work guidance (Oct 2023, still the reference point).
Meanwhile, the Microsoft Work Trend Index 2025 (April 23, 2025) shows leaders racing to adopt while capability building and workflow redesign lag—translation: don’t just hand out licenses; teach people how the tool fits Tuesday’s job.
What the brain science actually says (so far)
Neuroscience is early, but we do have some signals:
- Cognitive offloading is real. A mixed-methods study in Societies (Jan 2025) linked heavier AI tool use to lower critical-thinking scores, mediated by cognitive offloading. Correlational, yes—but a useful warning sign about “easy-mode” habits.
- Attention shifts with design. Lab work pairing EEG with AI-supported tasks suggests that well-designed support can reduce unnecessary load and sharpen attentional focus, while poorly designed support can make users spectators. See an EEG study of AI-generated content use in design education (Frontiers in Psychology, July 2025).
- The next wave looks more causal. New frameworks for measuring how LLM interactions affect attention and cognitive load (EEG + behavioral) and preregistered trials on cognitive effort in students are now in motion (2025 methods/protocols papers).
Bottom line: AI can lighten mental load and speed progress if you spend the saved effort on thinking. If you don’t, the “assist” becomes dependency—and your later recall and transfer suffer. That matches the PNAS field result above.
A simple field guide (for Monday morning)
If you teach (or design learning tools):
- Ask before you tell. Default to Socratic prompts and delayed reveals; make students show a step before the model shows its own. (Why? See the harm from unguarded access vs. the mitigations from guard-rails; a minimal sketch of this gate follows the list.)
- Make retrieval unavoidable. Space short quizzes; let AI generate variations, but require an answer before explanation. (Protects long-term learning—the Harvard RCT’s design notes echo these principles.)
- Grade process, not just product. Reasoning traces, version histories, and brief vivas surface understanding in an AI-rich world. (Sector guidance after HEPI–Kortext.)
- Differentiate scaffolds. More guidance for novices; more autonomy (and harder checks) for advanced learners—avoids over-assistance and offloading. (Consistent with the RCT and EEG findings above.)
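If you’re building the tool yourself, the gate can be almost embarrassingly small. Here is a minimal sketch in Python, assuming a generic `ask_model(prompt)` wrapper around whatever LLM API you actually use; the names and prompts are hypothetical, not a specific library or the design used in the studies above.

```python
# Minimal "tutor-mode" gate: the model may critique and hint, but it does not
# reveal a worked solution until the student has logged an attempt of their own.
# `ask_model` is a stand-in for your LLM provider call.

from dataclasses import dataclass, field


def ask_model(prompt: str) -> str:
    # Placeholder: route to whichever model/API you use.
    raise NotImplementedError


@dataclass
class TutorSession:
    problem: str
    attempts: list[str] = field(default_factory=list)

    def submit_attempt(self, student_work: str) -> str:
        """Store the student's step, then ask the model to critique it without answering."""
        self.attempts.append(student_work)
        return ask_model(
            f"Problem: {self.problem}\n"
            f"Student's attempt: {student_work}\n"
            "Point out one error or suggest one next step. Do NOT give the final answer."
        )

    def request_solution(self) -> str:
        """Delayed reveal: refuse a full solution until at least one attempt exists."""
        if not self.attempts:
            return "Show me a first step before I show you mine."
        return ask_model(
            f"Problem: {self.problem}\n"
            f"Attempts so far: {self.attempts}\n"
            "Now walk through a full worked solution, referencing the attempts."
        )
```

The point isn’t this particular code; it’s that “answer machine” vs. “tutor” often comes down to one conditional, decided by whoever wires up the tool.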
If you run a team:
- Train for oversight. The job shifts from drafting to critiquing and integrating model output; teach verification and decision checkpoints. (Work Trend Index 2025.)
- Explain your monitoring—or don’t do it. If you must track usage, follow ICO principles (necessity, proportionality, transparency). Trust is a learning technology too.
- Pilot with real tasks. Pick two workflows, run a four-week pilot, collect “AI helped here / hurt here,” then turn that into a playbook—before scaling. (ECB’s “users are more positive” finding hints why piloting matters.)
The policy compass (two quick anchors)
- EU AI Act, GPAI scope (since Aug 2, 2025): Universities should expect vendors to show model documentation, update logs, and copyright policies aligned with the GPAI Code of Practice and Guidelines. (July–Aug 2025.)
- Global ethics baseline: UNESCO’s Recommendation on the Ethics of AI (adopted 2021; ongoing updates) and the OECD’s AI literacy work (2025) converge on human oversight, fairness, privacy, and AI literacy as core competencies in school.
Lisbon epilogue: using the exoskeleton without forgetting our legs
On the riverfront near Cais do Sodré, I sometimes watch skateboarders practicing the same trick for an hour. You can see learning: attempt, wobble, micro-adjust, try again. If a robot rolled up and lifted them over the rail each time, they’d land something camera-ready—and learn almost nothing.
AI can be that robot. It can also be the coach who says, “show me your feet,” then offers one tip that saves a week. The difference, as ever, is how we use it.
Treat AI like a cognitive exoskeleton: support that lets you lift more while you still do the lifting. Build tutors, not answer machines. Reward process, not just product. And keep a little space in your day—between the tiles, the coffee, and the deadlines—to practice thinking with no one lifting you at all.
