You Cannot Open a Laptop in 2026 Without Touching AI. So Why Are We Still Pretending?
A student’s perspective on AI, academic integrity, and what learning actually needs to look like now.
The Contradiction Nobody Wants to Name
You cannot open a laptop in 2026 without touching AI.
Adobe Reader prompts you to summarise the document before you’ve read a word. Google serves you an AI answer before the first blue link. Gmail wants to respond to your emails for you. LinkedIn wants to rewrite your posts. Grammarly is rewriting your sentences as you type them. GitHub Copilot is completing your code before you’ve finished thinking it through.
Google Gemini is embedded in Docs, Slides, and Gmail. Microsoft Copilot lives inside Word, PowerPoint, and Excel: summarising, generating, suggesting, at every turn. These are the tools your university pays for, licenses, and recommends.
These aren’t tools students are sneaking in through the back door. These are the defaults. The licensed, institutionally approved, school-recommended defaults.
And yet, we sign declarations stating we haven’t used AI.
I want to be careful here, because this isn’t an argument for academic dishonesty. It’s an argument for institutional honesty. The dishonesty in higher education right now isn’t students using AI. It’s the pretence, increasingly impossible to maintain, that opting out is even an option.
The Maths Teacher Was Right. But the Problem Is Subtler Than You Think
My high school maths teacher used to say: you learn maths by doing it. A hundred problems. Struggle, failure, correction, pattern recognition, until the concept stops being something you look up and becomes something you know. That principle hasn’t changed.
What’s changed is that AI makes it very easy to simulate having done the work without actually doing it. A student can produce a polished answer without ever building the mental scaffold that makes the answer theirs. The output looks identical. The learning is absent.
That’s the real danger. And it’s subtle enough that even well-intentioned students can fall into it without noticing. You’re not cheating in the old sense. You’re not copying someone else’s essay. You’re doing something arguably more insidious: producing output that looks like understanding while quietly bypassing the process that would have created it.
Which Subjects Are Most Exposed?
Not all disciplines are equally vulnerable, and this distinction matters enormously for how institutions should respond.
The subjects most at risk are the ones where the output is the product: essays, reports, literature reviews, code submissions, reflection journals, policy briefs. In these disciplines, a polished final document has historically been sufficient evidence of learning. AI breaks that assumption completely. There is now no reliable inference from a well-written essay to the mind that supposedly wrote it.
Less exposed are the subjects where the process is observable: lab practicals, oral defences, field placements, mathematical derivations shown step by step under exam conditions. In these settings, AI cannot silently substitute for you: the work leaves traces of the person doing it.
This is exactly why a supervisor reviewing your models can tell whether you understand them. Every modelling decision encodes a reasoning process, and that process either holds up under questioning or it doesn’t.
When I fit Bayesian spatial models for my thesis, I insist on being able to derive the posterior distribution by hand. I want to be able to explain why a single child’s malaria test result is a Bernoulli trial, why aggregating across children within a cluster gives you a Binomial likelihood, and why a Poisson approximation fails here: Poisson only approximates the Binomial when the event is rare, and a positive test in these clusters isn’t rare. That argument has to live in my head, not in a chatbot response. When my supervisor pushes back, I need to have it.
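For concreteness, here is the skeleton of that argument in symbols. It’s a minimal sketch, and the notation is mine, introduced for this post rather than lifted from the thesis: Y_ij is child j’s test result in cluster i, n_i is the number of children tested there, and p_i is that cluster’s prevalence.

Y_{ij} \sim \mathrm{Bernoulli}(p_i), \quad j = 1, \dots, n_i

S_i = \sum_{j=1}^{n_i} Y_{ij} \sim \mathrm{Binomial}(n_i,\, p_i)

\mathrm{Binomial}(n_i,\, p_i) \approx \mathrm{Poisson}(n_i\, p_i) \quad \text{only as } p_i \to 0 \text{ with } n_i\, p_i \text{ held moderate}

The last line is where the argument lives. The Poisson approximation needs the per-child probability to be small, and in clusters with meaningful prevalence it isn’t, so the Binomial likelihood is the one you have to be able to defend.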
A submitted PDF essay encodes none of that. You cannot tell, from the document alone, whether the person who wrote it could defend a single sentence.
Do universities still build that kind of intuition? That’s the question I’m not sure anyone is asking loudly enough.
The implication is important: the subjects that have always assessed through final written outputs are the ones that need to rethink their assessment design most urgently. The lab sciences and oral traditions have, perhaps accidentally, built in a degree of robustness. The humanities and social sciences, which have long relied on the essay as the primary vehicle of assessment, face the sharpest disruption.
So What Might Actually Work, and Who Gets to Decide?
I want to be careful here, because I don’t think I have the answer. What I have is a reflection, and a strong feeling that the answer cannot come from one side of the table alone.
Some things feel worth exploring. Mini vivas, short oral defences where you’re asked to explain your own work, could tell a lecturer more in ten minutes than a submitted document ever will. Though I’ll be honest: if we’re not careful, students will just cram their own essays the night before, and we’ll have shifted the performance without shifting the learning.
Process-documented submissions. Staged drafts. Live problem-solving with unseen material. These aren’t new ideas. What’s new is the urgency.
But here’s what I keep coming back to, from a different part of my life entirely. I spent years working in HIV and SRHR advocacy and programme design, having gone from being that same affected young person to being one of the people designing, implementing, and evaluating such programmes. The consistent failure in that space, not from bad intentions but from bad process, was designing for young people without designing with them. Programmes built on assumptions. Policies shaped by what adults thought young people needed, or feared, or would do. They often missed. Not because the people designing them were wrong to care, but because the people most affected weren’t genuinely at the drawing board.
I see the same dynamic in higher education’s response to AI. The conversation is happening: in faculty meetings, in policy committees, in academic integrity offices. Students are largely not in that room. And yet students are the ones living inside this contradiction every day, navigating which tools are permitted, which are invisible, which are unavoidable, and what any of it actually means for whether they’re learning.
What I’d actually advocate for is simple, and it’s not a policy recommendation: honest, structured conversations between institutions and students about what AI is actually doing to learning, not just to integrity. Not enforcement consultations. Not feedback forms. Real co-design, where students’ reflections about their own formation are treated as evidence, not noise.
Because right now the institutional response is built mostly on assumption and the instinct to protect academic integrity as a procedural value. I understand that instinct. I just don’t think it’s working. And I don’t think it will work until the people most affected are genuinely part of figuring out what comes next.
There is a distinction, and I think it’s the most important one in this conversation, between using AI as an agent to produce assignments, and using AI as a scaffold to genuinely understand better. One hollows out your formation. The other accelerates it. But students can only make that distinction meaningfully if someone has actually talked them through it, honestly, not in a two-hour compliance module.
AI is here to stay. Learning will never look the same. The question institutions need to stop avoiding is not how to detect AI use. It’s what kind of learners they actually want to produce, and whether they’re willing to ask students that question directly.
In Part 2, I’ll show what the second kind of use actually looks like: a Claude skill I built for my own MSc studies that changed how I prepare for exams and work through hard modelling problems. Not AI doing the work. AI making the struggle more productive.
Godwill | MSc Health Data Science, University of Galway | statsbeneath.com