My Students Are Better at This Than the AI Is

What the OECD's 2026 education report confirmed about teaching English in Gwangju — and what I'm doing differently in the second half of this semester because of it.

A few months ago, a colleague sent me a link to the OECD's Digital Education Outlook 2026. I read it the way I read most big policy documents: with one eye on the research and one eye on my actual classroom. Sometimes they feel like they're describing different planets. This time, they weren't.

I teach two ESP courses at Chosun University in Gwangju, one for freshmen and one for sophomores, both focused on English for public administration and social welfare. My students are Korean undergraduates who are going to work in welfare offices, local government, and community support roles. They're not studying English for its own sake. They're studying it because their professional futures will require it.

The OECD report is about AI in education. But what it's really about — buried under the data — is a question every language teacher is already living with: when a student produces something that looks like learning, how do you know if it is?

The Finding That Stopped Me

There's a data point in the OECD report that I keep coming back to. Students using AI tools were 48% more successful at completing tasks than students working without them. But when AI access was removed (in an exam, say, or a live spoken exchange), their performance dropped by 17% compared to students who hadn't used AI at all.

The report calls this a "mirage of false mastery." I'd been calling it something less elegant in my head, but the concept was familiar. I've watched students hand in beautifully composed homework and then freeze in discussion. I've seen essays that clearly weren't theirs — not because I could prove it, but because the person sitting in front of me wasn't capable of defending a single sentence in them.

"Successfully completing tasks with GenAI does not necessarily translate into learning. Benefits depend on how AI is embedded in pedagogical design — not simply on whether it is used." — OECD Digital Education Outlook 2026

What the OECD is describing is a design problem, not a technology problem. The question isn't whether to allow AI. It's whether the way you're using it in your course produces real acquisition or just better-looking output.

What I Was Already Doing (Without Knowing It Had a Name)

Both my syllabi allow AI for homework — with mandatory disclosure. Students have to write a sentence at the end of any AI-assisted work: "I used AI to check grammar," or "I used AI to look up vocabulary." Using AI to write the homework itself is an automatic fail, and using it to speak in class isn't possible anyway, because I ask questions and they have to answer, in English, with no device.

I didn't design this because I'd read the OECD report. I designed it because I've been teaching EFL long enough to know that the moment you can't hear someone think in the target language, you've lost the plot. The speaking requirement isn't punitive. It's diagnostic. It tells me — and the students — what's actually there.

Reading the OECD report, I discovered this approach already has a framework around it: the distinction between "performance" and "learning gains." They recommend what they call process-oriented assessment, evaluating not just what a student produces but how they engaged with the material, what they changed, and what they questioned. Thirty percent of the course grade is class participation. Another twenty is reflective writing that AI is explicitly banned from touching. I was doing process-oriented assessment before I knew the term.

That was reassuring. But the report also pushed me to think harder about what I'm doing in the second half of this semester — Weeks 10 through 15 — which is where things get more interesting.

The Part About Language That Hit Differently

Here's the finding from the OECD report that I haven't seen many people talking about: AI produces small positive learning effects in English and Chinese, but considerable negative effects in other languages. That finding comes from a meta-analysis in the report's own data. Korean-speaking students using AI tools designed and trained primarily on English are in a genuinely complicated position.

Because the AI is fluent in English. Impressively fluent. Which means a Korean student can bypass the effortful cognitive work of thinking in English and still produce a text that looks like English thinking. The mirage of false mastery, again — but with a linguistic dimension that's specific to EFL contexts.

"Language is not just a technical parameter. It is culture, identity, and power." — OECD Digital Education Outlook 2026 Conference

My courses are anchored in topics where my students are the experts and the AI isn't: Gwangju's human rights history, Korean demographics, local welfare policy, the lived reality of public service in this city. I didn't set up my syllabus this way as an anti-AI strategy. I set it up this way because it's good ESP pedagogy — language is always more meaningful when it's doing real work about real things students know and care about.

But reading the OECD report made me see it differently. When a student asks an AI about the May 18th Democratic Movement and then has to correct what it gets wrong, they are doing something genuinely sophisticated: they're using their own cultural knowledge to evaluate an English-language source. That's a professional skill. That's exactly what a welfare administrator navigating international research or policy documents will need to do.

Their knowledge is the asset. The AI's limitation is the pedagogical lever.

What I'm Changing for Weeks 10–15

I'm not overhauling anything. The course structure stays the same. What I'm doing is making the AI dimension of the work more deliberate, more visible, and more explicitly tied to what it means to be a professional who uses AI thoughtfully rather than one who is used by it.

For my freshmen, that looks like building a sequence across the second half of the semester. We go from using AI as an information tool (with critical verification), to fact-checking AI claims against Korean sources, to a week where AI is almost entirely absent because the topic — Gwangju human rights — is one where I tell them plainly: you are the expert here, not the machine. Then we move into evaluating AI-generated policy proposals, and finally to each student writing a personal AI Policy Statement: how they will use AI as a welfare professional, and what they will never hand over to it.

For my sophomores, who are working with welfare theory at a more analytical level, the design is slightly different. I'm asking them to submit their AI-generated outlines alongside their final presentations — not to catch cheating, but because the gap between what the AI gave them and what they kept, changed, or rejected is itself evidence of critical thinking. I want to see the editing. The editing is the learning.

The OECD distinguishes between "fast" AI use — generating output quickly — and "slow" AI use: iterative, questioning, reflective. Slow use, they find, is where genuine learning and creativity emerge.

In Week 13, my sophomores will have a discussion I'm genuinely looking forward to: what is AI's role in welfare administration in Korea over the next decade, and what should it never do? That's not a language exercise. That's a professional ethics conversation in English. The language is the medium, but the thinking is the point.

The Question I Keep Asking Myself

The OECD report is careful to say that teachers shouldn't be managers who simply permit or prohibit AI use. They should be professional decision-makers who guide students in using AI as a tool to support thinking. I like that framing, but it puts real weight on us. It requires us to know what we're doing and why — not just to set a policy and hope students follow it.

The question I keep asking myself as I plan these final weeks is: what does this task teach them to do with AI that they couldn't do before? Not just what does it produce. What does it teach.

If a student finishes this semester knowing that AI is fast and fluent and confidently wrong about Gwangju, and that their job is to know the difference — that seems like a genuinely useful thing to know. More useful, maybe, than any particular piece of welfare administration vocabulary.

They're going to spend their careers alongside AI tools. I want them to be the ones holding the judgment, not outsourcing it.

Key Takeaways

  • The OECD 2026 report's central finding — that AI-assisted task performance doesn't automatically produce learning — is a design challenge, not a reason to ban AI.
  • Mandatory AI disclosure, in-class speaking requirements, and reflective assessment are already aligned with the OECD's process-oriented framework.
  • For Korean EFL learners specifically, AI's English fluency can mask the absence of real acquisition. Locally grounded content is a deliberate pedagogical response.
  • The gap between what AI produces and what a student chooses to keep, change, or reject is itself evidence of critical thinking — and worth assessing.
  • The goal isn't AI literacy as a separate skill. It's professional judgment about when to trust a tool and when to override it.

OECD (2026). OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education. OECD Publishing, Paris. https://doi.org/10.1787/062a7394-en

Maria Lisak teaches ESP at Chosun University, Gwangju, Korea. Her courses focus on English for public administration and social welfare.
