Designing with AI in Precarious Times: A Teacher's Critical Memoir on Syllabus Revision and Agency

By Maria Lisak, EdD

Bio: With over 30 years of EFL experience, Maria Lisak, EdD, works at Chosun University, where she teaches social entrepreneurship in English using experiential learning and sociocultural approaches. Her work integrates constructivist and emancipatory frameworks, with research focusing on funds of knowledge, Gwangju as Method, and social justice education. She also designs educational technologies and materials for diverse ESP contexts, linking classroom practice with community needs. Her current interests include literacy, culture, and language education, and participatory frameworks for teacher wellbeing. Her interdisciplinary work invites reflection on multimodal pedagogies, material making, and context-driven innovation in borderland spaces.

Abstract:

This reflective essay examines what it means to design curriculum with generative AI under conditions of institutional and professional precarity. Drawing from the author’s experience revising English for Specific Purposes (ESP) syllabi in Administrative Welfare at a South Korean university, the piece explores how ChatGPT became a site of pedagogical negotiation rather than a simple planning tool. Grounded in a pedagogy of care, teacher agency, and culturally responsive practice, the essay shares three critical vignettes that highlight tensions around cultural fit, instructional ethics, and the hidden curriculum. Rather than presenting AI as a co-designer, the author positions it as a reflective surface that can help externalize decisions but cannot replace human judgment or contextual understanding. For educators navigating constrained systems, especially in cross-cultural or contract-based environments, this account offers a teacher-centered perspective on syllabus design as emotional, political, and intellectual work. The essay encourages readers to think not only about what AI can do in education, but about how its use affects the relationships, values, and decisions at the heart of teaching.


Key words:

teacher agency, generative AI, syllabus design, pedagogy of care, culturally responsive teaching

Introduction

Each term, I enter a reflective period to revise my ESP (English for Specific Purposes) syllabus in Administrative Welfare—a long-standing course I teach at a South Korean university. This iterative act of syllabus revision has become both habit and survival strategy—one of the few zones of curricular agency afforded to contract-based faculty in a rigid, top-down educational system. In the spring of 2025, however, I invited a new design collaborator into that space: ChatGPT.

Initially, I approached this generative AI tool with practical hopes. Like many educators teaching large, mixed-level EFL classes under time constraints, I wondered if ChatGPT could ease some of the cognitive load of planning, especially given the invisible labor involved in balancing institutional requirements, student needs, and personal teaching values. But what began as a tentative exploration quickly unfolded into a deeper ethical and pedagogical inquiry: What does it mean to design with AI under conditions of professional and institutional precarity?

As a white, U.S.-born woman teaching English in South Korea, my position is layered with complexity. I am embedded in the university system, but not fully of it, linguistically fluent in classroom English but still marked as a cultural outsider. My students are Korean nationals navigating the pressures of globalized credentialing, English-language proficiency, and social expectations around academic success. Many arrive with extensive experience in test-prep English but little confidence in active use, particularly in speaking and writing. My courses aim to support both communicative competence and critical engagement through place-based, multimodal learning activities.

Designing for these learners requires more than templated activities or imported standards. It demands contextual sensitivity, cultural humility, and a pedagogy of care, especially in a system where instructional design work is undervalued, unpaid, and often invisible. In positions like mine, non-tenured foreign teachers frequently shoulder the burden of innovation without the institutional power to sustain or reward it. This is the landscape in which ChatGPT appeared: not as a magic fix, but as a provocative new interlocutor.

In this reflective essay, I trace how AI became part of my syllabus design process, not to showcase polished outputs, but to examine the dialogue that unfolded between my pedagogical values and the tool’s generative suggestions. Drawing on annotated prompts, draft revisions, and reflective notes, I explore how designing with AI surfaced tensions around authorship, authority, care, and labor. Rather than treating ChatGPT as a neutral assistant, I engaged it as a discursive partner, one that required interpretation, critique, and cultural reframing.

Although situated in a South Korean university, the challenges discussed here, such as teacher precarity, culturally responsive syllabus design, and emerging AI tools, resonate across many East Asian EFL contexts, including Japan, where instructors face similar pressures and innovations. This piece does not present a study with measured outcomes. Instead, it offers a teacher-designer’s perspective on the evolving role of AI in curriculum work. In doing so, it considers how technology enters the everyday practices of language educators—not in abstract futures, but in the gritty realities of overloaded syllabi, limited prep time, and students who deserve more than off-the-shelf solutions. Designing with AI under precarity means navigating a space of possibility and pressure, support and surveillance. It is in this liminal zone that my reflection begins.

Theoretical Grounding

My syllabus design process is informed by four overlapping areas of pedagogical theory: pedagogy of care, teacher agency under precarity, instructional design thinking, and culturally responsive pedagogy, especially as these relate to adult learning and content for administrative welfare. These frameworks help me engage critically with both the practice of designing curriculum and the ethical implications of incorporating generative AI into that process.

Pedagogy of Care

I draw from a pedagogy of care that is not merely interpersonal but structural and situated. In contexts like mine, teaching English for Specific Purposes to welfare administration majors in a Korean university, care must be enacted through course design choices that recognize and respond to students' lived realities, disciplinary needs, and emotional burdens. As Stein (2007) suggests, pedagogical care includes validating multimodal expression and resisting the marginalization of students’ cultural literacies. Similarly, Campano, Ghiso, and Welch (2016) emphasize participatory literacies as a route to humanizing education, particularly when students come from underrepresented or institutionally underserved groups.

For my students, English is not just a communication tool—it is a gatekeeper, a bureaucratic necessity, and often a source of academic anxiety. Many arrive with extensive exposure to testing regimes but limited experience in disciplinary discourse or dialogic learning in English. Designing with care means scaffolding complex thinking, making room for silence and emotional processing, and resisting the pressure to “cover” content at the expense of connection.

When incorporating ChatGPT into my design work, I applied this ethic of care to the tool itself: How did its outputs align with my students’ emotional and cognitive needs? Did its tone reinforce deficit models or invite inquiry? Were its suggestions flexible enough to be reworked for trauma-informed, inclusive, and critical engagement? The care I practiced was both toward students and toward my own labor, resisting the drive toward frictionless automation in favor of intentional, situated design.

Teacher Agency and Precarity

While I draw on Priestley, Biesta, and Robinson’s (2015) ecological framing of agency, I also situate this work within Asian EFL contexts and critical perspectives on teacher identity and positionality (Barkhuizen, 2017; Varghese et al., 2005). My teaching position as non-tenure-track, foreign faculty sits within layers of precarity shaped by both institutional hierarchy and cultural location. In South Korean higher education, foreign educators often face unclear expectations, limited professional advancement, and invisibility within curricular decisions. Yet, as Priestley, Biesta, and Robinson (2015) argue, teacher agency does not require full autonomy—it emerges through thoughtful navigation of constraint.

Syllabus design is one of the few zones where I can assert professional judgment. The decisions I make—what texts to include, how to frame assignments, how to pace the course—are acts of pedagogical authorship. This revision process became a way to assert that agency in a new context, mediated by AI. ChatGPT offered options, templates, and phrasing. But each decision to accept, reject, or revise its suggestions highlighted my embodied knowledge and contextual awareness.

Cochran-Smith and Lytle (2009) describe this kind of reflection as “inquiry as stance,” a way of seeing teaching as a continuous, principled investigation into one's own practice. My work with ChatGPT was not merely a matter of technical adoption; it was an act of reclaiming design labor as professional, intellectual work.

Instructional Design and Curriculum Thinking

While I do not formally follow the ADDIE model, its rhythm, especially the phases of Analysis, Design, and Development, mirrors my iterative approach to syllabus creation (Molenda, 2003). During the analysis phase, I considered the evolving needs of my students, many of whom face pressures related to licensing exams, civic employment, and intergenerational caregiving. ChatGPT was used to brainstorm, surface assumptions, and re-sequence content, often prompting me to pause and reflect on whose knowledge and discourse styles were being privileged.

In the design phase, AI-generated outlines and sample questions became a kind of collaborative sketchpad. I remained critical of generic language or Western rhetorical framing, and I rephrased or localized where necessary. In development, I refined assignments and pacing through repeated interaction with the AI, but always with the understanding that I, not the tool, was responsible for instructional coherence and cultural fit.

Syllabus design in this context is not a neutral act. It is a form of invisible labor that is both intellectually rigorous and emotionally taxing. Engaging with generative AI allowed me to externalize some of my thinking, but it did not eliminate the judgment, adaptation, and care required to make design ethical and effective.

Culturally Responsive Pedagogy in Adult Learning

Much of the literature on culturally responsive teaching (Gay, 2000; Ladson-Billings, 1995) emphasizes validation, multidimensionality, and emancipation. These principles resonate with my work in adult learning, particularly in supporting students preparing for careers in social welfare. Many of my students are themselves future caregivers, civil servants, or public advocates. Designing culturally responsive English instruction means attending not just to surface-level culture (names, holidays, idioms), but to deep cultural narratives around service, authority, and social responsibility.

Drawing on frameworks from Ginsberg (2015), I aim to foster participation, transfer, and critical consciousness. This means contextualizing English instruction within Korean administrative discourse, public service case studies, and students’ own lived experiences with welfare systems, both personal and observed.

In this light, ChatGPT became a useful but limited partner. While it could generate disciplinary content, it lacked embedded cultural understanding. I had to reshape its output to reflect Korean policy environments, elder care structures, and social norms around public service. This reworking process was not merely technical—it was culturally responsive labor.

The Design Encounter

This section shares three key vignettes from my collaboration with ChatGPT in revising two English for Specific Purposes (ESP) syllabi for welfare administration majors at a South Korean university. Each vignette includes a brief narrative of the AI-human exchange, followed by a reflective interpretation of how it surfaced tensions, values, and opportunities. Together, these design moments illustrate how AI-assisted syllabus development is not a technical shortcut, but a discursive process that reveals the embedded pedagogical reasoning of the teacher-designer.

Vignette 1: Prompting for Cultural Nuance

When I initially asked ChatGPT to help plan the first week of the semester, it returned a structured and coherent outline. However, it lacked the cultural nuance required for my South Korean teaching context. The lesson plan leaned toward generalized EFL themes and overestimated students’ willingness to participate in speaking tasks on the first day. The activities assumed ease with peer interaction and comfort with classroom English norms more common in North American settings.

I intervened with clarifying prompts. I explained that many students in Korea are still deciding whether to drop the course in the first week and are not comfortable jumping into unfamiliar or performative tasks. I also emphasized the emotional risks that students associate with early speaking activities, especially when they are still gauging the class and instructor expectations. ChatGPT responded by generating a revised plan with more visual scaffolding, emoji-based introductions, and no-pressure speaking tasks, offering gentler entry points for students to begin using English without requiring vulnerable public performances.

Reflection

This moment underscored the cultural limitations of AI-generated content and reaffirmed the necessity of the teacher's local knowledge. ChatGPT’s initial suggestions mirrored a generic, likely Western-centric model of first-week instruction. While the tool was responsive to my correction, it couldn’t have anticipated the importance of drop/add week dynamics, nor the subtle social behaviors tied to classroom participation in Korean university culture.

My role as a teacher-designer became clearer in this exchange. The AI could offer drafts, but it couldn’t anticipate tone, classroom tempo, or the layered risks my students navigate. My insistence on low-stakes, relational entry points was not about simplifying content; it was an act of structural care. By pushing back on the AI’s assumptions, I surfaced my own priorities: protect learner agency, establish trust, and build a classroom climate that honors affective and cultural realities.

Vignette 2: Scaffolding Without Asking

As I moved further into planning, I asked ChatGPT to generate ideas for the middle weeks of the course. I had already shared three weeks’ worth of my own planning, and when prompted to continue the outline from Weeks 4 to 7, ChatGPT adopted a design logic that resembled my earlier decisions. It preserved differentiated content for freshman and sophomore levels, integrated multimodal assignments, and proposed tasks that mirrored the cognitive sequencing I had initiated, without me explicitly prompting it to do so.

In my notes, I reflected that the AI had mirrored my internal logic well: “It gave differentiated activities even though I didn’t ask it to, it tracked with what I was doing. I didn’t say ‘scaffold for different levels,’ but it did it.” This behavior suggested that the model had absorbed not just content format but a deeper pacing rhythm. It extended patterns without overstepping them.

Reflection

This moment offered a glimpse into how AI can function not just as a source of content but as a tool for pattern recognition and workload easing. I experienced a moment of surprise and minor relief: the tool had “read the room” of the previous chat. Yet the usefulness of this moment wasn’t just in what it produced—it was in how it allowed me to see my own design rhythm reflected back to me. I realized how consistent my internal scaffolding decisions had become and how much tacit knowledge I carry when sequencing themes, genres, and outputs.

Still, the tool’s usefulness was contingent on my oversight. Its suggested content needed revision to align more precisely with Korean policy contexts and professional discourse practices relevant to welfare administration. It drew on general ideas about social equity and public service but often missed national and institutional specificity. The prompt had been understood structurally but not culturally. This was a moment where co-creation was possible, but only through continued interpretation and editing.

Vignette 3: Naming the Hidden Curriculum—Or Not

At the end of my collaboration with ChatGPT, I asked it to summarize the values embedded in the syllabus design. I prompted it to describe what students might reflect on in a final unit, offering previous excerpts about learner growth, curriculum transparency, and field-specific literacy goals. One of ChatGPT’s suggestions was to include a reflection on how the course addressed implicit skills and “the hidden curriculum.” This struck me as unexpectedly aligned with my intentions. I had built in units on equity, educational access, and policy language to help students engage with real-world structures beyond the surface of classroom tasks.

However, in a later summary, ChatGPT contradicted its earlier point. It recommended not naming these concepts directly in the syllabus or in reflective activities. It advised allowing students to “discover” such ideas organically, arguing that soft skills and institutional logics are best “integrated quietly.” This contradicted not only its earlier advice but my own commitment to naming systems of power clearly and accessibly for students.

My pedagogy centers learner empowerment through explicit language. I want students to leave my course with terminology they can use to describe their experiences, academic paths, and social positions. In my design notes, I wrote: “The irony is that ChatGPT itself suggested the hidden curriculum, and now it tells me not to name it? It’s playing it safe when my students need explicit vocabulary.”

Reflection

This moment revealed the ethical fault lines between AI-driven efficiency and human-centered pedagogy. The model’s suggestion to keep institutional critique implicit reflected a risk-averse logic. But I teach from a position where naming power is essential. My students often lack language for their experiences with education as bureaucracy, test scores as class filters, and English as credential. Hiding those concepts under the guise of “professional neutrality” serves no one.

ChatGPT’s revision made pedagogical sense in a narrow way: it avoided potential controversy and kept the syllabus “clean.” But that was precisely the problem. My design work isn’t about polished neutrality; it’s about making room for friction, honesty, and critical language. In the end, I discarded the AI’s advice and revised the final unit to explicitly name soft credentialing, institutional discourse, and the idea of teaching as policy participation.

This design decision wasn’t a technical fix. It was a reaffirmation of my role: not as content curator, but as a facilitator of professional literacy in a system where what we say (and don’t say) shapes student access and empowerment.

Themes

Across these three encounters, what emerged most clearly was not the technical output of AI, but the reflective space it opened up for me as a designer. Each moment became an occasion to reexamine my own assumptions, clarify my stance, and articulate my pedagogical intentions more precisely.

  • In Vignette 1, I saw how prompting itself could be a practice of resistance, interrupting generic content to insist on culturally situated care.

  • In Vignette 2, I experienced the AI as a kind of mirror, offering back my own logic in ways that helped me see how scaffolded my design work already was.

  • In Vignette 3, I confronted the limits of AI alignment, its tendency toward depoliticization, and chose to reassert a pedagogy of transparency and naming.

What these vignettes reveal is that designing with AI is not passive or automatic. It is iterative, dialogic, and shaped by the teacher’s sense of care, justice, and institutional context. The tool may offer drafts, but the final design reflects deeply human values—ones that are cultivated, not coded.

Reflections on Technology and Labor

Working with ChatGPT across a full design cycle surfaced a set of tensions I had not anticipated at the outset. What began as an attempt to reduce cognitive load turned into a far more layered interaction, one that revealed how technological collaboration is entangled with teacher labor, ethical risk, institutional constraint, and emotional weight. In this section, I reflect on what this design experience suggests about the shifting nature of curriculum work when AI enters the frame, not as a passive assistant, but as a responsive interlocutor that mirrors and sometimes misaligns with the values of the teacher.

Technology as Support: Time, Friction, and Relief

The most obvious benefit of working with ChatGPT was temporal. Faced with the invisible labor of creating two differentiated ESP syllabi for welfare administration majors, I had reached a cognitive saturation point by early summer. Using ChatGPT helped me generate and organize content more quickly than I could have alone. I could ask it to suggest assessment options, reorganize pacing, or sequence content based on goals I described. This lowered the cognitive and emotional overhead typically involved in syllabus construction.

Even more useful was its ability to extend patterns I had already initiated. As seen in the middle-week planning process (Vignette 2), the AI picked up my design rhythm without prompting. That moment felt like genuine support, not because the tool was brilliant, but because it reflected back a structure I could recognize and refine. It relieved me from some of the heavy lifting, while still requiring discernment.

Yet even here, “support” was not simple. The tool offered pattern-based alignment, but without context awareness. Its scaffolding logic resembled mine, but its examples leaned generic and at times culturally mismatched. It lacked “epistemic empathy” (Jaber, 2021), the ability to understand not just what I was teaching, but why I was teaching it, to whom, and within what constraints. The labor it saved me in generation was reintroduced through necessary revision, critique, and translation into local realities.

Technology as Risk: Neutrality, Depoliticization, and Erasure

At several points, ChatGPT surfaced tensions that demanded deeper scrutiny not of the content, but of the values embedded in its logic. Its recommendations often prioritized clarity, efficiency, and formal balance, which may be ideal for instructional design in corporate or neutralized settings. But my pedagogy depends on friction, politicized naming, and student access to critical language.

The clearest example of this emerged in the “hidden curriculum” exchange (Vignette 3). Initially, ChatGPT recognized that institutional literacy and explicit skill development were central to my syllabus. But later, it backtracked, suggesting that I not name these ideas directly. This quiet erasure, delivered through soft, professional phrasing, felt deeply at odds with my commitments. I had designed the course around transparency for a reason. I want my students to leave the course able to name the forces shaping their academic trajectories, their English credentialing, and their place in the public sector. I want them to see bureaucracy not just as policy, but as power. The AI’s suggestion to “soften” these themes exposed a subtle but significant rift: the difference between helping students meet expectations and helping students interrogate them.

In this sense, AI did not just reflect content; it exposed pedagogical fault lines. The risk was not that it would “replace” me. The risk was that it would normalize a logic of depoliticized professionalism, one that my students, already navigating precarity and high-stakes futures, cannot afford to absorb without critique.

Design Pressure and Institutional Precarity

The push to adopt tools like ChatGPT is not occurring in a vacuum. It coincides with growing institutional pressures to produce more with less, faster, cheaper, and with reduced affective investment. For educators like me, who are foreign, contingent, and operating within a rigid university system, design work is both a source of agency and a site of vulnerability. It is unpaid, often unrecognized, and increasingly expected to look “polished” despite the realities of academic exhaustion.

In this landscape, ChatGPT’s promise of “efficiency” maps uncomfortably onto neoliberal demands. As Oliveira et al. (2023) argue in their critique of marketized higher education, these patterns of “efficiency” often obscure the intensification of labor. The pressure to streamline, to optimize, to appear constantly productive: these are not just tech trends; they are labor conditions. And it’s easy to mistake relief for liberation. When the AI helps organize content or mirrors back our design patterns, it can feel like freedom. But if that freedom comes without space to interrogate the ethics of delegation (whose voice is being centered, whose authority is being deferred, whose labor is being erased), then we are simply digitizing the same pressures that already compromise our work.

My collaboration with ChatGPT made these dynamics more visible. It didn’t automate my design labor; it reframed it. I still had to decide what to keep, what to cut, and what to revise. But I also had to articulate, often in the moment, what values guided those decisions. The presence of the tool didn’t lighten the load so much as redistribute it. It surfaced judgment points more frequently and with greater nuance than I anticipated.

AI as Tool—or Co-Designer?

By the end of the process, I realized I was no longer using ChatGPT simply as a tool. I had started to treat it as a design interlocutor, a presence that could offer generative friction, unexpected patterns, and alternate phrasings. Yet I was also its editor, its corrector, its cultural translator. Our exchanges became a kind of dialogue, one that revealed as much about my pedagogy as it did about the tool’s capacities.

Still, I resist calling it a “co-designer” in full. That term suggests parity, mutuality, shared intent. ChatGPT has no stake in my students’ lives. It does not carry the ethical burden of misalignment, nor does it sit with the consequences of depoliticized outputs. I shaped the prompts, framed the revisions, and bore the responsibility for final decisions. The tool participated, but it did not care. And care, in my teaching, is not optional. It is the method.

If anything, AI’s value was in how it sharpened my own voice. It made my assumptions legible. It pushed me to articulate not just what I was doing, but why and to whom it mattered. In that way, ChatGPT was useful not as a co-designer, but as a reflective surface: a flawed but illuminating mirror through which my syllabus emerged more clearly aligned with my pedagogical values.

Conclusion

Designing with AI under precarity is not a matter of simply using a tool; it is an act of judgment, resistance, and care. While I began this project to explore whether ChatGPT could support syllabus revision, what emerged was something deeper: a portrait of design as reflective dialogue, where each AI output became a provocation rather than a solution. The AI offered speed, structure, and iteration, but the responsibility to interpret, adapt, and ethically revise remained mine.

For other educators working in contexts like mine (contingent, cross-cultural, and responsible for curriculum without institutional power), this reflection offers both caution and encouragement. AI tools like ChatGPT can indeed lighten the cognitive demands of planning, particularly when instructors are expected to generate new syllabi without additional time, training, or support. They can offer phrasing, pacing, and thematic connections that spark new insight or recover forgotten ideas. But they do not (and cannot) replace the deep cultural knowledge, ethical attunement, and care-based reasoning that real teaching requires.

AI does not understand our students. It does not grasp the labor of building trust across differences, of designing for silence, of revising around constraints. But when critically engaged, it can reflect back our own pedagogical thinking in ways that sharpen it. For instructors who rarely get the time or space to reflect aloud, these AI dialogues may serve as informal sites of professional articulation: not just “content generation,” but a discursive sketchpad where values become visible.

For teachers working across borders, under limited contracts, or within systems that undervalue curriculum labor, AI is neither salvation nor threat. It is a tool—powerful, flawed, and shaped by the same systems that shape our classrooms. How we use it and what we choose to prioritize through it remain human decisions. As Gee and Hayes (2011) argue, digital technologies are not neutral; they reflect the social systems and cultural ideologies in which they are embedded. In this way, AI often mirrors the very systems it claims to optimize, subtly reinforcing norms around productivity, standardization, and expertise unless critically engaged. The real work is not in mastering the prompt, but in remaining faithful to the people and purposes we teach for.

As generative AI becomes more common in educational design, our task is not simply to adopt it, but to ask: What does it mean to design with care? And how can we preserve pedagogical agency, especially when so much of our labor remains unseen?

References 

Barkhuizen, G. (Ed.). (2017). Reflections on language teacher identity research. Routledge.

Campano, G., Ghiso, M. P., & Welch, B. J. (2016). Partnering with immigrant communities: Action through literacy. Teachers College Press.

Cochran-Smith, M., & Lytle, S. L. (2009). Inquiry as stance: Practitioner research for the next generation. Teachers College Press.

Gay, G. (2000). Culturally responsive teaching: Theory, research, and practice. Teachers College Press.

Gee, J. P., & Hayes, E. R. (2011). Language and learning in the digital age. Routledge.

Ginsberg, M. B. (2015). Excited to learn: Motivation and culturally responsive teaching. Corwin Press.

Jaber, L. Z. (2021). “He got a glimpse of the joys of understanding”: The role of epistemic empathy in teacher learning. Journal of the Learning Sciences, 30(3), 433–465.

Ladson-Billings, G. (1995). But that's just good teaching! The case for culturally relevant pedagogy. Theory into Practice, 34(3), 159–165.

Molenda, M. (2003). In search of the elusive ADDIE model. Performance Improvement, 42(5), 34–36. 

Oliveira, D. A., Azevedo, R. S., & Sá, L. (2023). The intensified work of teachers in neoliberal times: Labor, technology, and exhaustion. Policy Futures in Education, 21(2), 124–140. 

Priestley, M., Biesta, G., & Robinson, S. (2015). Teacher agency: An ecological approach. Bloomsbury Academic.

Stein, P. (2007). Multimodal pedagogies in diverse classrooms: Representation, rights and resources. Routledge.

Varghese, M., Morgan, B., Johnston, B., & Johnson, K. A. (2005). Theorizing language teacher identity: Three perspectives and beyond. Journal of Language, Identity, and Education, 4(1), 21–44.
