Designing with AI: A Reflective Inquiry into Syllabus Revision, Agency, and Precarity
By Maria Lisak, EdD
Bio: With over 30 years of EFL experience, Maria Lisak, EdD, works at Chosun University, where she teaches social entrepreneurship in English using experiential learning and sociocultural approaches. Her work integrates constructivist and emancipatory frameworks, with research focusing on funds of knowledge, Gwangju as Method, and social justice education. She also designs educational technologies and materials for diverse ESP contexts, linking classroom practice with community needs. Her current interests include literacy, culture, and language education, and participatory frameworks for teacher wellbeing. Her interdisciplinary work invites reflection on multimodal pedagogies, material making, and context-driven innovation in borderland spaces.
Abstract: This reflective action research study examines how generative artificial intelligence—specifically ChatGPT—shaped one EFL educator’s syllabus revision process at a South Korean university. Situated within a context of institutional precarity and cultural marginality, the inquiry explores how AI can both support and complicate teacher agency, instructional design, and pedagogical care. Drawing on chat transcripts, syllabus drafts, design memos, and discourse analysis, the study highlights five critical decision points where AI influenced—but did not replace—teacher judgment. Rather than positioning AI as a neutral assistant, the analysis treats it as a discursive partner whose outputs required interpretation, resistance, and cultural contextualization. The findings suggest that while generative AI can scaffold design and reduce cognitive load, it also introduces new tensions—around voice, authority, efficiency, and labor. Ultimately, the study argues for a situated, ethics-driven approach to educational AI use—one that foregrounds care, transparency, and reflective practice in the design of learning experiences.
Keywords: Syllabus design; Teacher agency; ChatGPT in education
Introduction
In the spring of 2025, I sat down to revise my university English language syllabus—something I do each semester as part of my pedagogical rhythm. This time, however, I had a new design partner: ChatGPT. What began as an experiment in using generative artificial intelligence to streamline planning quickly became a deeper inquiry into what it means to co-construct curriculum under conditions of institutional constraint, professional precarity, and pedagogical care. I wanted to know: Could this tool support—not supplant—my decision-making as a teacher-designer? What would it reveal about my practice, and about the role of technology in shaping the ethics of instructional design?
As a white, U.S.-born EFL educator who has lived and worked in South Korea for three decades, I routinely navigate the paradoxes of curricular freedom and institutional standardization—while also negotiating the complexities of being culturally embedded but not ethnically or nationally Korean. Teachers in my context are rarely given time, pay, or recognition for syllabus creation, despite its central role in shaping student experience. Against this backdrop, ChatGPT promised speed, responsiveness, and a seemingly inexhaustible capacity for iteration. But it also raised important questions: Would this tool amplify my agency—or dilute it? Could it honor the lived experiences of my students—Korean learners with complex language education backgrounds shaped by years of formal instruction in English and other regional languages, yet with limited active fluency—or would it merely default to generic templates drawn from Western teaching norms?
This paper explores the answers through a cycle of reflective action research. Using transcripts from ChatGPT conversations, working syllabus drafts, design notes, and analytic memos, I examine how generative AI participated in, challenged, and sometimes expanded my pedagogical reasoning. Rather than treating the AI as a neutral assistant, I approached it as a discursive partner—one that required my contextual judgment, cultural knowledge, and care-based stance to generate meaningful instructional design.
The significance of this inquiry lies not in proving the usefulness of ChatGPT, but in surfacing how technologies like it interact with teacher agency, instructional ethics, and the hidden labor of curriculum work. While instructional design frameworks such as ADDIE (analysis, design, development, implementation, evaluation) have long guided syllabus revision, the increasing presence of generative AI adds a new layer of complexity. This study foregrounds the affective, relational, and sociocultural dimensions of designing with AI—especially from the positionality of a marginalized educator negotiating agency within a marketized system. In doing so, it contributes a teacher-designer’s perspective to the emerging discourse on AI in education—one grounded in reflection, resistance, and relational pedagogical care.
Background & Literature
This section reviews key areas of scholarship that inform this study: action research, instructional design, pedagogical care, teacher agency and precarity, and the integration of artificial intelligence in English language teaching. These areas shape how I understand my syllabus revision as not only a technical task but also a reflective, ethical, and contextual process—one that engages AI as a partner in pedagogical inquiry.
Action Research and Reflective Practice
This study is grounded in traditions of action research and reflective practice, particularly as conceptualized by Cochran-Smith and Lytle (2009), who emphasize inquiry as a stance teachers take toward their own classrooms. For me, syllabus revision has long served as a site of inquiry and reflection—a way of navigating both institutional expectations and personal pedagogical values. In this project, I treated ChatGPT not simply as a tool but as a design interlocutor, using it to prompt reflection, test ideas, and document decision-making. Hines and Zachocki’s (2015) work on practitioner inquiry during high-stakes educational reform helped clarify how even well-meaning methodologies can be co-opted when teachers are positioned as passive recipients of external expertise. Similarly, Kim and Kim (2017) describe how action research in South Korea has been appropriated as a metric for teacher evaluation, rather than a process of empowerment. My engagement with AI represents a reclaiming of reflective design work as bottom-up and teacher-driven, resisting both the instrumentalization of action research and the automation of planning.
Instructional Design and ADDIE
While I do not explicitly follow the ADDIE model (Analysis, Design, Development, Implementation, Evaluation), its rhythm informs my work (Molenda, 2003). In this study, I focused on the first three stages. During Analysis, ChatGPT surfaced both content and epistemic gaps, prompting me to reconsider whose knowledge was centered. In Design, it offered sample sequences and task types, which I adapted based on my students’ lived realities. During Development, I used it to sketch and refine activities, but always through a critical lens informed by my experience. Although Implementation and Evaluation are not part of this study, their absence is deliberate: I aimed to document a single design cycle before classroom delivery. Rather than relying on AI as a shortcut, I used it as a provocateur—mirroring, challenging, and sometimes amplifying my thinking.
Pedagogy of Care
My work is shaped by a pedagogy of care grounded in decolonial, humanizing practices. Opportunities for silence, multimodal learning, and counterstorying all contribute to a pedagogy of inclusion and care—supporting learners' rights to communicate across modes (Stein, 2007), honoring lived experiences through participatory literacies (Campano, Ghiso, & Welch, 2016), and offering space for counter-narratives that challenge dominant educational discourses (Solórzano & Yosso, 2002). Teaching as a non-Korean woman in South Korea, I inhabit a marginal, liminal role that influences how I build classrooms centered on safety, responsiveness, and inclusion. Care, for me, is not sentimental—it is strategic and structural. It means designing with silence, multimodality, and trauma-informed flexibility. I attend to how learners express knowledge across modes and how cultural identities shape classroom risk-taking. These values influenced how I judged ChatGPT’s suggestions. While the tool offered useful ideas, its default to Western-centered examples reminded me to filter all outputs through a lens of cultural humility. My use of AI was shaped by care: care for students’ lived experiences, for my own labor, and for the ethical complexity of pedagogical design.
Agency and Teacher Precarity
In marketized higher education contexts like South Korea, many EFL educators work under non-tenured contracts and institutional hierarchies that restrict professional autonomy. Agency in such contexts is not about full freedom but about small, intentional choices made under constraint (Priestley et al., 2015). For foreign teachers, especially, agency must be negotiated relationally and reflexively. My syllabus revision is one such act of agency. It reflects how I claim pedagogical space, despite institutional pressures, through iterative, reflective work. Following Cochran-Smith and Lytle (2009), I treat syllabus-making as professional knowledge-in-action. AI adds another layer: it offers affordances but also demands judgment. The danger is in being seduced by its fluency. The opportunity lies in using it to sharpen—not dull—our attention to learners and to our own critical capacities.
Artificial Intelligence in ELT
AI tools are rapidly entering ELT in many pedagogical forms (Crompton et al., 2024). How teachers might use AI for planning and design is, at its core, a matter of establishing ethical practices. This study contributes to that emerging conversation by documenting one educator’s use of ChatGPT to revise an undergraduate EFL syllabus. As Siu and Fok’s (2025) findings show, experts using AI prefer to retain control over complex synthesis and interpretation activities that require nuanced domain understanding. AI is not a substitute for pedagogical reasoning. It can scaffold, but not replace, the situated knowledge of teachers. I used ChatGPT to externalize decisions, clarify sequences, and refine pacing—but the ethical and contextual decisions remained mine. This study also adds to practitioner-led explorations of AI in classroom settings. It highlights how generative tools like ChatGPT can prompt reflection and negotiation, especially when used by teachers with a care ethic and a critical lens.
Taken together, these strands of literature underscore that teaching—and syllabus design in particular—is a site of professional judgment, emotional labor, and sociocultural negotiation. They also suggest that tools like generative AI must be situated within reflective, critical practice rather than adopted uncritically. With these frameworks in mind, I turned to action research as a methodological approach to document and examine how ChatGPT shaped, supported, and at times challenged my syllabus revision process. The next section outlines how I gathered and interpreted data from this design experience.
Methodology: Reflective Action Research with AI Collaboration
This study used a reflective action research (AR) approach to explore how generative AI—specifically the GPT-4-turbo model—could support the syllabus design process in a South Korean university English language course. The syllabi were created for two levels of English for Specific Purposes (ESP) courses offered to first- and second-year students majoring in welfare administration. These students are Korean nationals who have studied English formally through secondary education but often lack confidence in speaking and writing. The ESP courses aim to build academic communication skills while introducing students to field-relevant content and professional literacies. Action research was selected for its emphasis on practitioner inquiry, iterative reflection, and classroom-rooted change (Cochran-Smith & Lytle, 2009). Rather than evaluating a finalized course, this inquiry centers on the design phase, examining how AI-mediated planning shaped my thinking, agency, and care as an EFL educator.
The study unfolded during the syllabus development period between May and June of 2025 as I was finishing the Spring Semester, and drew on a combination of digital and analog artifacts:
ChatGPT transcripts: Two detailed chat sessions with ChatGPT served as the primary data sources. The first transcript (see Appendix I) prompted recollection of previous activities that I wanted to include in the following semester. The second transcript (Appendix III) documents the work of writing and revising syllabi for the Fall Semester. These transcripts captured iterative planning requests, design critiques, cultural contextualizations, and my own prompts of varying complexity. Each exchange reflects decision points in syllabus creation.
Syllabus drafts: Versions of the working syllabus—ranging from rough notes to formatted outlines—functioned as living documents that mapped the development of pedagogical logic, topic sequencing, and multimodal activities.
Personal design notes: These included comments embedded in Google Docs (Appendix II), voice memos, handwritten margin notes, and embedded reflections in planning spreadsheets. These informal records capture moments of doubt, revision, and improvisation.
Meta-reflections: Emerging through analytic memos (Appendix IV) and post-session journaling, these reflections allowed me to interpret both the design process and my emotional/intellectual reactions to using AI as a pedagogical tool.
I approached this data reflexively and interpretively, recognizing that my teaching identity—as a non-Korean woman working in a South Korean university system—shaped the way I engaged with both AI tools and instructional decisions. Rather than seeking generalizable truths, I viewed this inquiry as an exploratory effort to document the tensions and potentials of AI use in a specific pedagogical context. The chat transcripts were not treated as objective outputs but as co-constructed design conversations that reflect my values, anxieties, and desires for my students.
Throughout the process, I engaged ChatGPT in multiple capacities: as a brainstorming partner, a logistical organizer, a voice-leveling editor, and a cultural outsider requiring guidance. In some cases, I also used ChatGPT to reflect back and summarize the very exchanges we had, allowing me to step outside the immediacy of the conversation and notice patterns.
This methodology embraces the liminality of teacher design work—especially for educators who operate under conditions of precarity and cultural marginality. As a foreign woman teaching English in South Korea, I occupy a position that is both embedded in and peripheral to institutional norms. This positionality shaped not only my teaching but also how I approached AI as a collaborator. The AI-enhanced dialogue became a site where I could externalize, test, and revise pedagogical decisions that are often made tacitly. Through iterative prompting, reflection, and adaptation, I treated the syllabus not only as an instructional product but also as a record of my negotiations with power, identity, and care. In this sense, the evolving collaboration between human agency and machine assistance became a lens for examining the hidden labor of teaching in contemporary ELT.
This study does not attempt to compare the AI-revised syllabus to past syllabi. Instead, it focuses on a single design cycle, using ChatGPT transcripts and accompanying materials as catalysts for reflection. In this way, the project offers a situated, critical account of what it means to co-construct pedagogical design with a generative system, from the perspective of a teacher committed to care, adaptability, and ethical responsiveness.
Meta-Reflection on Findings: What My Prompts Reveal
Beyond the specific syllabus decisions, my exchanges with ChatGPT revealed something important about voice and authority. Early prompts were casual, emotionally candid, and marked by hesitation (e.g., “dang... my students will run away”). Readability analysis (Appendix VI) confirms this: my first prompt rated a 4.8 grade level, while ChatGPT’s response came in at a college-level 11.4. This difference highlights a discourse mismatch—one I later addressed when revising the earliest drafts of this paper toward greater clarity and complexity (Grade 15+). These shifts show how AI not only supports design decisions but also participates in a dialogue where voice, tone, and interpretive stance are continually adjusted.
Beyond the content of the syllabus revisions, the interaction between myself and ChatGPT reveals an additional layer of meaning: the discourse of the prompts themselves. A discourse analysis (see Appendix V) of the full transcript shows that my prompts did not function merely as questions—they operated as reflective utterances, marked by pedagogical stance, emotional tone, and professional judgment. Drawing from discourse analytic frameworks, the prompts were examined through four lenses:
Metadiscourse and stance markers (Hyland, 2017) revealed how I framed the task, marked uncertainty or conviction, and negotiated authority in relation to ChatGPT.
Speech act analysis (Searle, 1979) categorized the prompts not just as requests, but as instructing, reporting, questioning, and often reflecting. Many prompts became spaces for thinking aloud.
Interdiscursivity (Fairclough, 1992) highlighted how multiple discourses—care, professionalism, cultural awareness, efficiency—were layered within a single prompt. These layers illustrate how values and pedagogical identity infuse even small requests.
Relational alignment (Du Bois, 2008) was key to observing how I shifted between collaborative tones and directive ones—sometimes treating ChatGPT like a co-designer, other times like a tool needing correction or redirection.
Taken together, this analysis shows that prompts serve as windows into the teacher’s evolving agency and instructional ethics. My shifting tone, formality, and emotional inflection were not stylistic quirks but meaningful reflections of how I negotiated power, care, and intention in this co-constructive space.
As Scollon and Scollon (2003) remind us, discourse does not exist in a vacuum—it is always “in place,” shaped by physical, institutional, and social contexts. My prompts emerged not as abstract inquiries, but as situated teacher talk—rooted in South Korean university realities, my own marginality as a foreign EFL educator, and the rhythms of classroom life. In this sense, even my casual phrasings or embedded critiques indexed a richly contextualized practice, where design decisions were always already entangled with my lived professional world.
Findings: Five Moments in AI-Supported Design
This section presents five critical design moments from my collaboration with ChatGPT in revising an EFL syllabus. These vignettes were selected from a longer transcript of prompts and responses exchanged during the design phase and represent distinct moments of pedagogical decision-making—whether through challenge, surprise, affirmation, or adjustment. Each captures a unique dynamic in the interaction with AI: from judgment and revision to co-creation and ironic reflection. Rather than summarizing the full exchange, these moments were chosen for how clearly they surfaced my values, discomforts, and professional reasoning. Together, they offer insight into how teacher agency is enacted not through grand gestures, but through everyday negotiations with tools, traditions, and emerging technologies.
Vignette 1: Prompting for Cultural Nuance
When I initially asked ChatGPT for a Week 1 lesson plan, it returned polished but culturally inappropriate activities that did not align with South Korean classroom dynamics—particularly in terms of face-saving and low-stakes engagement. I pushed back, offering context-specific feedback about EFL learner anxiety, performance norms, and drop/add realities. In response, ChatGPT revised its plan to include visual tools, emoji-based interactions, and no-pressure speaking.
Reflection: This moment revealed how essential teacher context is to appropriate lesson design—and how AI, while responsive, cannot anticipate cultural nuance without explicit input. My care practice emerged through critique, not compliance.
Vignette 2: Scaffolding Without Asking
After supplying ChatGPT with three weeks of lesson plans, I requested ideas for Weeks 4–7. Without prompting, the AI continued the scaffolding logic I had embedded in earlier weeks, providing leveled suggestions for both freshmen and sophomore students. This demonstrated pattern recognition and retention of implicit design logic.
Reflection: Though I didn’t explicitly name “scaffolding,” the AI picked up on my rhythms. This was a moment of co-creation—not because the tool predicted my goals perfectly, but because it mirrored back my own pedagogy in ways I could refine.
Vignette 3: Surprised by Lateral Extension
When I intended to offer ideas for Weeks 9–12 but hit “Enter” too early, ChatGPT responded independently with a progression that extended earlier themes in surprising, thoughtful directions. Its ideas were not redundant repeats—they were adjacent in focus and connected to broader themes I’d explored in previous chats.
Reflection: This was the first moment I felt the AI moving laterally—not just following a prompt chain, but improvising within the pedagogical groove we had established. The output surprised me in a generative way, echoing the kind of idea I might receive during a hallway chat with a colleague.
Vignette 4: Collating Ideas to Reduce Cognitive Load
When my prompts reached the planning stage for Weeks 13–15, I didn’t ask ChatGPT to generate new lesson plans. Instead, I offered background context about my instructional goals for those final weeks of the semester—namely, helping students articulate their learning, reflect on growth, and set up final interviews. To support this, I uploaded a Google Doc containing an earlier set of ChatGPT-generated ideas about the role of community colleges and educational equity for students outside elite university tracks. That document had since been heavily annotated with my own comments, linking its ideas to prior activities I had successfully used in class. I asked ChatGPT to analyze the document and organize those brainstormed activities into a coherent, week-by-week sequence for both of my course levels. The AI returned a structured plan that aligned well with my teaching goals and saved me substantial time I would have otherwise spent manually sorting and mapping out those ideas.
Reflection: This saved me significant time and effort. Ordinarily, this kind of collation would take up to 90 minutes by hand and likely result in the loss of good ideas simply due to overwhelm. AI supported my executive function here, not by replacing judgment, but by scaffolding it.
Vignette 5: Naming the Hidden Curriculum—Or Not
At the end of our interaction, ChatGPT summarized its contributions and offered a final comment: that I need not explicitly name certain concepts (e.g., “soft credentialing” or “strategic stacking”) in the syllabus. This advice struck me as ironic. One of its earlier suggestions had involved exploring the hidden curriculum, yet it now advised leaving things implicit. My own pedagogy, by contrast, centers learner empowerment through explicit naming—so students can understand, articulate, and transfer their learning.
Reflection: This moment highlighted a core mismatch between AI and my pedagogical ethics. ChatGPT's suggestion was practical given some of the parameters of my teaching context. Yet in advising implicitness, it not only ran against my value of transparency but also broke from its own earlier suggestion to teach explicitly about the hidden curriculum. My commitment to transparency, learner voice, and naming power called me to reject it. This was not just about what to include—it was about who teaching is for. ChatGPT did not carry its own logic through; instead, it offered a leveled, practical approach over teaching high-level but content-relevant vocabulary to students.
Across these five moments, what emerges is a picture of AI not as a substitute for teaching but as a provocation to reflect. It served many roles—co-planner, summarizer, heuristic—but ultimately relied on my cultural knowledge, ethical stance, and emotional labor to make its outputs pedagogically sound. These design vignettes offer insight into how reflective educators might use AI tools not simply to save time, but to surface values and refine craft.
Discussion
This study began as an experiment with ChatGPT to assist in revising my EFL course syllabus. What emerged was not just a streamlined planning process, but a complex emotional and professional encounter with a technology that both supported and unsettled me. The experience illuminated how artificial intelligence can extend teacher agency while also intensifying the hidden labor of pedagogical care.
In many institutional contexts—particularly those shaped by precarity—teachers rarely have full control over their syllabi. And even when they do, designing from scratch is not always desirable. Tools like ChatGPT enable a lone educator to occupy multiple roles: materials developer, subject-matter expert, instructional designer, linguist, and pedagogist. On the surface, this appears empowering. But as both Kim and Kim (2017) and Hines and Zachocki (2015) suggest, even tools or models framed as “empowering” may serve institutional logics that subtly reinscribe neoliberal labor expectations. The same reflective tools meant to center teacher voice can become requisites for institutional performance.
AI, in this context, democratizes access to curriculum design, allowing for "just-in-time" content creation that can respond to learner needs more flexibly. This is particularly valuable in under-resourced classrooms, where learners may lack access to high-quality textbooks, stable technology, or institutional support. But the same technology that expands the teacher’s reach also raises ethical concerns. The economic logics behind AI tools often prey not on the student’s wallet, but on the teacher’s bandwidth and care. While I didn’t have to revise my syllabus in this way, I chose to—because I see it as part of my ethical obligation to continually adapt, reflect, and meet my students with intention. As Cochran-Smith and Lytle (2009) remind us, such practitioner inquiry is a form of stance-taking—a political and ethical position teachers assume when they study their own work.
The extended “power” AI gave me—its ability to synthesize my ideas, re-sequence my curriculum, and maintain continuity across levels—was genuinely helpful. It allowed me to provide more tailored learning experiences. But I also noticed a new kind of exhaustion. When I plan by hand, the process unfolds organically. I gather notes, reflect, test ideas over time. There’s a rhythm to the work. With AI, the acceleration of decision-making and design compressed all of that. I found myself disoriented by how much I could accomplish, and how fast.
This created a subtle but deep sense of vertigo. I could still design excellent courses without AI. But with it, I could refine more, do more, reach further. The analogy that came to mind was the evolution of spreadsheets: once I started using Excel instead of calculating grades by hand, I never wanted to go back. But that doesn’t mean the labor disappeared—it just became less visible, and perhaps more totalizing.
In real-world practice, especially in under-resourced environments, AI offers real benefits. It allows me to scaffold activities, diversify content, and design for learners who deserve more than generic materials. It enables students to break out of the limitations of suboptimal resources and engage in higher-order thinking, reflection, and language use. But there are risks. If AI-powered design becomes the new standard, without accompanying pay, time, or institutional recognition, then educators, especially precarious ones, will be expected to do more with less, again. And this time, faster. As Oliveira et al. (2023) argue in their critique of marketized higher education, these patterns of "efficiency" often obscure the intensification of labor.
Throughout this process, my agency remained intact. I accepted, rejected, and reworded AI outputs, guiding the system with the expertise that comes only from years in the classroom. Care, too, was present—not only in how I planned for my students’ needs but in how I responded to ChatGPT’s overly Western, extroverted suggestions. My ability to laugh, reframe, and center my learners' realities was a kind of resistance and a reminder of who I teach for.
And yet, the project also underscored my vulnerability. By feeding my expertise into the machine, I was contributing to something that might one day replace not just my tasks, but my pedagogy, my care. That contradiction—that the tool I use to extend care might be complicit in devaluing it—is what haunts me.
AI did not replace my role as a teacher-designer. But it reframed it. It changed how and when decisions get made, and how quickly my expertise must operate. It illuminated how little time teachers often have to think. And it confirmed that the emotional and ethical labor of teaching remains squarely on my shoulders—even when the lesson plan writes itself.
Conclusion
This study began as an exploration of how generative AI might support syllabus revision, but it quickly evolved into a deeper inquiry into what it means to design ethically, reflectively, and contextually in an increasingly automated educational landscape. My iterative exchanges with ChatGPT were not just about lesson planning—they became sites of negotiation, resistance, and meaning-making. The AI tool offered suggestions, patterns, and structures, but the real work lay in interpreting, adapting, and sometimes rejecting those offerings based on my knowledge of students, my values as a teacher, and the constraints of my teaching context.
Rather than replacing my pedagogical judgment, AI amplified the need for it. My prompts to ChatGPT were themselves acts of reflective practice—embedded with stance, emotion, and intention. This project affirms that human-AI interaction in education is not merely technical but deeply discursive and ethical. It requires teachers to constantly navigate the boundary between efficiency and care, innovation and exhaustion, automation and embodiment.
For teachers working under conditions of precarity, as I do, AI tools present both promise and risk. They can lighten the cognitive load of curriculum design, support creativity, and offer scaffolding that might otherwise be unavailable. But they also introduce new pressures—to do more, faster, with fewer resources—and to feed our expertise into systems that do not recognize or reward the labor involved. As Gee and Hayes (2011) argue, digital technologies are not neutral—they reflect the social systems and cultural ideologies in which they are embedded. In this way, AI often mirrors the very systems it claims to optimize, subtly reinforcing norms around productivity, standardization, and expertise unless critically engaged.
The syllabus I revised through this study is not the final product of AI intervention; it is the result of a deeply human process of sifting, questioning, imagining, and revising. It reflects the complex interplay between care, context, and criticality that defines good teaching. As generative AI becomes more present in our classrooms and workflows, we must not only ask what it can do, but what kind of pedagogical relationships we want to cultivate through it—and what remains uniquely ours to hold.
References
Campano, G., Ghiso, M. P., & Welch, B. J. (2016). Partnering with immigrant communities: Action through literacy. Teachers College Press.
Cochran-Smith, M., & Lytle, S. L. (2009). Inquiry as stance: Practitioner research for the next generation. Teachers College Press.
Crompton, H., Edmett, A., Ichaporia, N., & Burke, D. (2024). AI and English language teaching: Affordances and challenges. British Journal of Educational Technology, 55(6), 2503–2529.
Du Bois, J. W. (2008). The stance triangle. In R. Englebretson (Ed.), Stancetaking in discourse: Subjectivity, evaluation, interaction (pp. 139–182). John Benjamins.
Fairclough, N. (1992). Discourse and text: Linguistic and intertextual analysis within discourse analysis. Discourse & Society, 3(2), 193–217.
Gee, J. P., & Hayes, E. R. (2011). Language and learning in the digital age. Routledge.
Hines, M. T., & Zachocki, J. (2015). Using practitioner inquiry within and against large-scale educational reform. Teacher Development, 19(1), 62–77.
Hyland, K. (2017). Metadiscourse: What is it and where is it going? Journal of Pragmatics, 113, 16–29.
Kim, J., & Kim, M. (2017). Educational action research in South Korea: Finding new meanings in practitioner-based research. In O. Zuber-Skerritt (Ed.), The Palgrave international handbook of action research (pp. 165–182). Palgrave.
Molenda, M. (2003). In search of the elusive ADDIE model. Performance Improvement, 42(5), 34–36.
Oliveira, D. A., Lee, M., & Chen, Y. (2023). Navigating an academic career in marketized universities. Higher Education Research & Development, 42(1), 120–135.
Priestley, M., Biesta, G. J., Philippou, S., & Robinson, S. (2015). The teacher and the curriculum: Exploring teacher agency. In The SAGE handbook of curriculum, pedagogy and assessment (pp. 187–201). SAGE.
Scollon, R., & Scollon, S. W. (2003). Discourses in place: Language in the material world. Routledge.
Searle, J. R. (1979). Expression and meaning: Studies in the theory of speech acts. Cambridge University Press.
Siu, A., & Fok, R. (2025). Augmenting expert cognition in the age of generative AI: Insights from document-centric knowledge work. arXiv preprint arXiv:2503.24334.
Solórzano, D. G., & Yosso, T. J. (2002). Critical race methodology: Counter-storytelling as an analytical framework for education research. Qualitative Inquiry, 8(1), 23–44.
Stein, P. (2007). Multimodal pedagogies in diverse classrooms: Representation, rights and resources. Routledge.
Appendices
Appendix I: Reflection Chat about community colleges
This summary is based on an AI-facilitated professional dialogue conducted via ChatGPT (OpenAI, 2025), used as a tool for reflective inquiry and comparative analysis between South Korean and U.S. higher education advising practices.
Summary of Professional Dialogue on University Prestige, ROI, and Student Advising in South Korea and the U.S.
This appendix summarizes a reflective dialogue exploring the implications of university prestige, socioeconomic return on investment (ROI), and advising practices in higher education—particularly within the South Korean context, with comparative insights from the United States. The conversation was prompted by a social media thread in which a poster argued that attending an elite university (e.g., Yale) would have been preferable to attending a community college, regardless of graduation status, due to the enduring benefits of elite institutional networks.
Building from that premise, the discussion turned toward the South Korean higher education system, especially the dominant role of the “SKY” universities (Seoul National, Korea, and Yonsei). It highlighted how students at a third-tier provincial university often use their first year to "level up" or "level down" based on GPA and career strategy—sometimes transferring in order to optimize either their degree signal or their transcript scores. Despite comparatively lower tuition than in the U.S., Korean students often incur significant out-of-pocket costs through extensive enrollment in private cram schools (hagwons), especially for English exams, computing certifications, and public service entrance exams.
The dialogue emphasized several critical points for advising:
The degree no longer guarantees employment in either country, and educational ROI is often delayed or unrealized (as evidenced by OECD and KEDI reports).
Korean students must approach university as one component of a broader career portfolio, which includes private study, soft skills development, internships, and credential stacking.
Strategic advising should move beyond linear prestige-focused trajectories and help students recognize horizontal opportunities for professional development and societal contribution, even outside elite institutions.
The importance of social capital, soft skills, and alternative credentialing was underscored for both Korean and American contexts.
The advisor’s role now includes helping students think critically about how to navigate and combine institutional resources, private learning, and network-building in an increasingly competitive and saturated labor market.
In practical terms, the researcher considers pivoting advising strategies to include bilingual resources, visual aids, and workshops to support students and families in understanding how to “use” their degree more effectively—shifting the narrative from admission to long-term employability and purpose. This aligns with the researcher's broader focus on public administration, EFL education, and sociocultural transitions in higher education.
Appendix II: Learnings from Appendix I Reflection Chat with comments
This appendix includes a screenshot from my personal Google Document used during the early stages of conceptual planning for this action research project. The document contains embedded comments reflecting critical questions, pedagogical intentions, and implementation strategies related to advising students on university prestige, return on investment (ROI), and future-oriented skill development in South Korea. These annotations served as a space for iterative reflection and practical application of research-informed insights. The screenshot captures a snapshot of my thought process and working dialogue with myself as an educator-researcher.
Appendix III: Syllabus Revision Chat
This appendix presents a summary of a reflective dialogue combining an AI-assisted exchange (via ChatGPT) and my personal course planning notes. The discussion focused on revising a syllabus to better align with evolving student needs, institutional priorities, and insights from prior teaching experiences. While the full ChatGPT transcript and original planning documents are not included due to proprietary content and privacy considerations, this summary captures key themes, questions, and decisions that emerged from the integrated review process. The process helped bridge past pedagogical practices with current instructional design goals in a confidential and exploratory format.
ChatGPT-Supported Curriculum Planning Log
This appendix documents a collaborative curriculum development exchange between the instructor and ChatGPT (GPT-4-turbo) during the planning stages for two EFL university courses in South Korea: one for first-year (freshman) students and one for second-year (sophomore) students. The conversation took place across multiple entries, with the instructor guiding the pedagogical intent and contextual constraints.
The purpose of the exchange was to co-construct a 15-week semester plan that:
Reflected low-pressure, student-centered practices for mixed-level EFL learners
Integrated multimodal, place-based, and reflective learning activities
Incorporated ideas from a previously annotated professional planning document focused on social capital, soft credentialing, and non-linear success strategies in the Korean education context
Key outcomes of the chat included:
A scaffolded week-by-week content arc for both courses, balancing reflection, skill development, and issue-based content (e.g., bullying, urban aging, green space access, invisible labor)
Thoughtful accommodation for institutional realities, such as large class sizes (45–60 students), students’ language anxiety, device access, and Korea-specific academic pressures (e.g., test culture, face-saving behaviors)
Infusion of action-research-relevant strategies, such as:
Diagnostic and portfolio-based assessment models
Use of low-stakes tech tools (Padlet, Google Forms, short video voiceovers)
Weekly scaffolding of identity-building, civic awareness, and reflective practice
Differentiation between the general citizenship focus for freshmen and city-based field inquiry for sophomores
Integration of elements from an internal planning document (e.g., concepts such as “strategic stacking,” “hidden curriculum,” “soft skills,” and “public service orientation”) into weekly lesson themes without requiring explicit theoretical framing for students
The interaction exemplifies how AI can be used to support teacher agency in creative, context-sensitive curriculum design while also aiding in the operationalization of broader educational research themes into weekly practice. The resulting structure emphasizes coherence across diagnostic assessment, experiential learning, and reflective synthesis phases over the 15-week semester.
Appendix IV: First analysis (My notes or memos)
This appendix contains a direct copy of my initial analytic notes, including early memos, thematic outlines, and color-coded observations. These informal reflections represent my first attempt to make sense of emerging patterns, tensions, and priorities in the action research process. The content is presented in its original form to preserve the exploratory and iterative nature of practitioner inquiry. These notes served as a foundation for deeper synthesis and revision in later stages of the project.
My Notes
ChatGPT took my notes & made a narrative
First try super scholarly
2nd try more like my discourse
I choose the vignettes i want to highlight/unpack
Decent job, but still there are some disconnects. Makes it sound like ChatGPT is asking me to culturally nuance things; still using weird/illogical reasoning for the hidden curriculum critique
I copy/paste the ChatGPT syllabus-revision chat to a gdoc & upload to ask for prompt/answer sequencing.
Gave me prompt numbers with categories: user input; function; ChatGPT response; analytical note
Asked ChatGPT to do a discourse analysis of the studied chat.
Meta-Descriptive Memos
My Prompt 1: Organizational - doc/comments into possible activity list. Manages scope.
My prompt 2: Request for week 1 lesson plans based on possible activities as well as my class parameters
My Prompt 3: Feedback that is culturally nuanced, with rationale
My prompt 4: Tentative acceptance of week 1; week 2 request for lesson plans
My prompt 5: Tentative acceptance of week 2; extensive instructions about activities for week 3; request to help organize these
My Prompt 6: Accepted week 3; pitched ideas for weeks 4-7 for both course levels
ChatGPT answer to prompt 6: understood that i wanted scaffolds without me stating this, as this was previously established in w1-3
My prompt 7: Accepts, and I mention this will help me in w8 (reflection midterms). Ask for topics for both courses for w9-12. I was going to mention some ideas but i hit enter instead of return to go to a new line and give ideas.
ChatGPT answer to prompt 7: made a decision to go deeper into and naturally extend w4-7. However, their response was more distinctive than my expectations. I thought they would cover the same topics in a new way, but instead they pitched other content topics that i have previous chats on (not exactly, but some similarity)
My prompt 8: I then explain my goals for w13-15 but dont ask for help. I offer this info as background so ai wont feed me something i dont want. I then ask it to go back to the original gdoc attachment and massage the doc/comments into both courses.
Chatgpt answer to prompt 8: offers grids of my activity ideas from the comment sections and how they could fit the different courses week by week.
My memo for prompt 8: This saves me enormous time. This would have taken my handwriting with crayons/recycled paper at least 30-90 min to collate and accept. Even then i might be apt to go over it several times before actually making choices of what would actually go on the final syllabus version. Or a lot of my ideas would be left behind because i couldnt see a simple way to include them without overwhelming my students.
ChatGPT answer in a summary to this convo: It also offers me a final thought that i dont need to be explicit about some of the original concepts pitched by ai in my initial response to the thread's post, "strategic stacking" and "soft credentialing".
My memo to this final chatgpt summary: However i find this final advice ironic. Another concept that was pitched was the hidden curriculum. And my regular teaching style is to make things explicit for the learner as well as give them the power to name things so they can then talk about their experience, work and skills and convince others of their usefulness/importance/relevance.
Annotate how ChatGPT influenced specific syllabus revisions
Adding more of my experiential knowledge about Korean culture and student responses demonstrated in the past.
Most of my prompts were to elicit details for lesson plans, which is helpful for me because these are the things i love most, and i would laterally extend these beyond my students' capabilities/capacity and then have to edit down and cut. Instead the ai responses, which were short and clear, helped me reduce one of my own problematic (fun but problematic) practices in my pedagogy creation. So i was reading the responses for logical flow as well as for where they were really helpful for my students. In the early weeks i mention that i am not fully on board but that we would move along. This mimics my past practice of making several iterations and then ignoring some ideas, or my talk-alouds with friends about some particular problem i might be facing in my class.
Appendix V: Discourse Analysis
This appendix presents a discourse analysis generated by ChatGPT, focusing on an extended interaction between the teacher-researcher and the AI during the syllabus revision process. The analysis draws on content from the file titled "ChatGPT Syllabus Revision", examining patterns of inquiry, feedback loops, and the co-construction of pedagogical meaning. This AI-assisted analysis was used as a reflective tool to surface underlying assumptions, shifts in stance, and evolving priorities in curriculum design.
Discourse Analysis of ChatGPT-Supported Syllabus Design: User Prompting as Reflective Practice
1. Methodology and Criteria This discourse analysis focuses on a single extended interaction between a teacher-researcher and ChatGPT, as documented in the file "ChatGPT Syllabus Revision." The purpose is to examine how the user's prompts function not merely as queries but as forms of reflective discourse, embedded with pedagogical stance, emotional tone, and professional judgment. While both sides of the interaction are analyzed, particular attention is paid to the user's language to understand how teacher agency, positionality, and values are articulated and negotiated.
The analysis uses the following discourse analytic lenses:
Metadiscourse and stance markers (Hyland, 2005): To examine how the user positions themselves, frames the interaction, and negotiates authority.
Speech act analysis (Searle, 1979): To classify the user prompts functionally (e.g., instructing, requesting, reporting, reflecting).
Interdiscursivity (Fairclough, 1992): To examine how multiple discourses (e.g., care, professionalism, efficiency, cultural knowledge) are woven together.
Relational alignment: To observe how the user constructs a collaborative or directive relationship with the AI tool.
2. Data Observations and Findings
Finding 1: The prompts function as iterative reflections, not one-off questions Across the document, the user doesn’t merely request information from ChatGPT—they scaffold their own thinking aloud, often using the prompt space as a place to consolidate, rehearse, or test curricular logic. This is especially visible in prompts that begin with phrases like:
“Here’s what I’m thinking...”
“I want to try something...”
“I know I want to include...”
These are metacognitive cues. The user uses prompts to surface and shape their design logic. This positions ChatGPT less as a source of expertise and more as a reflective surface.
Finding 2: The user frequently switches between discourse registers There is a notable shifting between:
Professional instructional language (“formative assessment”, “low affective filter”, “multimodal scaffolding”)
Casual/personal register (“this might be a mess”, “I know that sounds chaotic”)
Technical delegation language (“can you collate this into a grid”, “massage this content into both courses”)
This blending of registers is a key feature of authentic teacher talk during curriculum building—it marks the user as both an expert and someone in process. It also underscores the user's use of the chat as a liminal, in-between space for experimenting without performative stakes.
Finding 3: Politeness and hesitation signals show emergent negotiation of trust Early prompts include hedges like “maybe”, “I’m not sure”, “I wonder if”, which diminish by mid-conversation. This pattern suggests increasing confidence in the AI tool as a design partner. The interaction shifts from tentative to strategic, reflecting the user’s emerging trust in the iterative value of AI participation.
Finding 4: Explicit concerns about cultural context reveal a care-based pedagogy At multiple points, the user corrects or redirects ChatGPT by invoking South Korean university context, EFL classroom dynamics, or face-saving needs of students. These interventions are rarely framed as corrections; instead, the user recontextualizes:
“In my context...”
“That wouldn’t work here because...”
These speech acts are not only about pedagogical clarity—they surface the user’s ethical commitments. This marks a sustained discourse of care running parallel to the instructional logic of the planning conversation.
Finding 5: The AI adapts its register and logic in response to the user’s modeling ChatGPT’s early responses tend toward neutral, list-based suggestions. As the conversation continues, the AI mirrors the user’s more complex sentence structures, hedged tone, and context-specific vocabulary (“cultural nuance,” “low-stakes reflection,” “iterative scaffolds”). This shows how the AI becomes an increasingly attuned interlocutor when guided by sustained modeling. It also illustrates the role of prompt style in shaping AI output.
3. Interpretive Summary The user does not use ChatGPT in a purely instrumental way. Instead, they engage in a dialogic form of design where the prompt space becomes a site of metacognition, emotional processing, ethical positioning, and reflective inquiry. The AI’s responses are shaped not just by data but by the tone, scaffolding, and stance embedded in the prompts themselves. This interaction enacts the kind of critical, situated pedagogy the user aims to foster in their own teaching.
The discourse in this document reflects a teacher not seeking to automate their labor, but to remain agentive within it—to use AI as a tool for clarity, care, and curricular creativity while negotiating the complex positionality of EFL instruction in a precaritized context.
Appendix VI: Readability Comparison
This appendix includes a ChatGPT-generated readability analysis comparing four types of texts: the original prompts I created, ChatGPT’s responses, my reflective memos, and my first revised version of Prompt 3. The comparison offers insights into tone, complexity, and accessibility across human- and AI-generated content, as well as iterative refinements made during the syllabus revision process. This analysis supported my efforts to investigate the discursive meaning-making that unfolded over the collaboration process.
I said: dang. i cant do this in week 1! my students will run away. this is too high stakes in face saving korea. My method over the years has been low key week 1 - intro syllabus, myself as a teacher, and let them do an ice breaker. I have three class times, each with about 40-50m of possible activity time.
ChatGPT summarized: Thanks for clarifying—that's super helpful. You're right: in Week 1, especially in large, mixed-level EFL classes in Korea, low-key, low-expectation, low-exposure activities are essential. Face-saving and ambiguity tolerance need to be respected.
I memoed: My Prompt 3: Feedback that is culturally nuanced, with rationale
Revision 1 Prompt 3: I was very disappointed in the proposed activities and gave explicit, culturally nuanced feedback along with the rationale behind my lesson structure. This was meant to push the ChatGPT response beyond surface-level planning and to test how well it could respond to my specific teaching context. Result: ChatGPT adjusted its tone and reasoning, offering more appropriate suggestions, though still not quite the best fit for my South Korean university context. While the change wasn't dramatic, it showed responsiveness to my cues and affirmed that AI could be a reflective partner when nudged properly.
Interpretation:
Your prompt is conversational, emotionally candid, and low-stakes in tone—this aligns with the relatively low grade level and high readability.
ChatGPT’s response is more formal and syntactically dense, which significantly increases its complexity. It's approaching the readability of academic writing.
Your memo falls in between—still more accessible than ChatGPT’s, but more abstract and conceptual than your spoken-style prompt.
Your Revision 1 Prompt 3 is the most syntactically dense and abstract of the four samples. It uses academic constructions like “culturally nuanced feedback,” “surface-level planning,” and “contextual responsiveness”—which increase sentence length and syllable complexity, triggering a graduate-level grade score and a low readability rating.
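For readers unfamiliar with how such grade-level scores are derived, the sketch below is a minimal, standard-library-only illustration of the Flesch-Kincaid grade formula (the actual analysis was generated by ChatGPT, and the two sample sentences here are hypothetical stand-ins, not the study's texts; the syllable counter is a naive heuristic):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

# Hypothetical samples: a conversational prompt vs. a dense academic revision.
casual = "I can't do this in week one. My students will run away."
dense = ("Culturally nuanced feedback necessitated contextual "
         "responsiveness beyond surface-level instructional planning.")

print(round(fk_grade(casual), 1))  # low grade level: short sentences, short words
print(round(fk_grade(dense), 1))   # much higher: long words, one long sentence
```

Longer sentences and more syllables per word both push the score up, which is why the academically phrased Revision 1 Prompt 3 registers at a graduate level while the conversational prompt scores as highly readable.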