Contextual Interpretive Framework: AI-Mediated Labor and Liminal Pedagogy

for: Dialoguing with the Algorithm: An Autoethnographic Study of Midlife Voice, Uncertainty, and Teacher Identity in a ChatGPT Exchange

by: Maria Lisak, EdD

Rationale: As an aging, foreign educator working with generative AI, I use a framework that brings together identity, voice, and labor. These lenses help me ask what it means to teach—and to age—in a field shaped by algorithms and speed. I draw on aging and teacher identity, dialogic theories of voice, and critical perspectives on digital labor to make sense of the affective, epistemic, and ethical tensions in my exchange with ChatGPT. Here I share my literature review on labor and liminal pedagogy.

AI-Mediated Labor and Liminal Pedagogy

Building on the identity and labor concerns of the first framework and the dialogic analytic lenses of the second, this contextual interpretive framework situates my inquiry within the structural realities of aging, contingent academic work in South Korea. It draws on both my earlier autoethnographic analyses of precarious EFL teaching and selective scholarship that clarifies how generative AI enters and reshapes this terrain.

Generative AI arrives not into a vacuum, but into the layered realities of teaching work that are already shaped by shifting borders—between paid and unpaid hours, between formal institutional recognition and informal care, between personal ethics and institutional mandates. In these liminal spaces, where professional identity is continually negotiated, tools like ChatGPT become both seductive and suspect. As Oliveira et al. (2023) note, the rhetoric of “efficiency” often masks deeper structures of precarity. As Breshears (2019) documents in the Canadian TESL context, this precarity is intensified for language educators whose work is already undervalued and insecure. When AI is introduced as a solution, it risks further obscuring the invisible labor of pedagogical care, ethical judgment, and design discernment that sustains meaningful teaching.

Labor, Reputation & Precarity

If the previous section situates AI within the liminal, overlapping zones of pedagogical labor, this section addresses the structural frame: higher education’s entrenched dependence on casualized and contingent faculty. Generative AI in education does not simply appear as a neutral tool—it intersects with neoliberal university management practices that have already normalized labor intensification, reputational precarity, and the expectation of constant adaptability. The increased reliance on casualized academic labor has become a defining feature of the sector, creating particular vulnerabilities for educators who do not fit dominant narratives of technological fluency and adaptability (Gill, 2016).

The Seduction and Suspicion of Efficiency

AI tools promise “efficiency” at a moment when academic workers face unprecedented demands for productivity optimization. However, as Oliveira et al. (2023) argue, such promises often mask deeper structures of labor intensification rather than offering genuine relief. Introduced as a remedy for overwork, AI tends to shift rather than reduce the invisible work of pedagogical care, ethical judgment, and contextual design that characterizes thoughtful teaching.

The rhetoric of efficiency becomes particularly problematic for aging educators who have developed pedagogical approaches rooted in reflection, relationship-building, and contextual responsiveness—approaches that resist the logic of acceleration that AI tools often embody. The pressure to “keep up” with technological innovation reflects what Standing (2014) describes as a digitized world with “no respect for contemplation or reflection,” one fundamentally at odds with the contemplative dimensions of experienced pedagogical practice. As Butler (2012) reminds us, these pressures are not just technical or economic, but ethical—calling into question whose vulnerability is recognized and whose is erased in the drive toward innovation.

Design Work as Vulnerable Labor

For educators like myself—foreign, contingent, operating within rigid institutional hierarchies—pedagogical design work represents both a source of professional agency and a site of particular vulnerability. This design labor is typically unpaid, often unrecognized by institutions, yet increasingly expected to meet standards of technological sophistication and visual polish despite the realities of academic exhaustion and resource constraints.

Precarious female academics face particular challenges, existing almost as non-citizens of the academy whose work is systematically undervalued (Breshears, 2019). In the Korean context, where hierarchy and face (chemyon or 체면) shape institutional relationships, foreign women educators navigate additional layers of marginalization that make the promise of AI assistance both appealing and threatening.

The introduction of AI tools into this landscape does not simply automate design work—it reframes and redistributes it. While AI can generate content and structure, the work of cultural translation, ethical judgment, and contextual adaptation remains intensely human. Yet this redistribution is rarely acknowledged or compensated, creating new forms of invisible labor disguised as technological enhancement (Song, 2018).

The Temporal Politics of AI Adoption

The speed at which AI tools operate creates temporal pressures that conflict with the embodied rhythms of reflective pedagogical practice. Experienced educators often work through iterative processes of design, reflection, and revision that unfold over extended periods. AI's capacity for immediate response can feel both liberating and alienating—offering quick solutions while disrupting the slower temporalities of careful pedagogical thought.

For aging educators, this temporal disruption intersects with broader ageist assumptions about technological adaptation. The expectation that all educators will quickly integrate AI tools into their practice ignores the legitimate pedagogical reasons why some might prefer slower, more reflective approaches to curriculum development. The rush to AI adoption can thus become another mechanism through which experienced educators are marginalized or pressured to prove their continued relevance (Han & Kim, 2023).

Agency, Automation, and Professional Identity

The relationship between AI assistance and professional agency is more complex than simple enhancement or replacement. In my own experience with ChatGPT, the tool neither automated my work nor left it unchanged—it created new categories of decision-making and judgment that required articulating values and commitments I had previously held tacitly.

This dynamic reveals what might be called the "labor of collaboration"—the additional work required to engage AI tools ethically and effectively. Rather than reducing cognitive load, AI collaboration often increases it by making visible the assumptions and biases embedded in both algorithmic and human approaches to pedagogical design (Al-Ghazi & Thompson, 2023). The result is not efficiency but a different kind of intellectual and emotional work that requires new forms of professional skill.

Synthesis: Reframing AI as Labor Condition

Seen through this contextual interpretive lens, generative AI is not simply a pedagogical tool but a labor condition, operating within and amplifying existing structures of institutional neglect and professional marginalization. For aging educators in contingent positions, AI adoption interacts with precarity, gendered hierarchies, and temporal politics in ways that complicate narratives of simple enhancement or threat.

This reframing underscores that sustainable, ethical AI integration requires more than technical training. It calls for institutional commitments to valuing the full spectrum of pedagogical expertise—including slow, reflective, relationship-centered practices—and to addressing the labor inequities that make efficiency promises so seductive. By situating AI within the intertwined realities of structural precarity, temporal disruption, and redistributed labor, this framework provides the lived and institutional backdrop against which the five analytical moments in my findings unfold.


