Methods for Dialoguing with the Algorithm: Data Collection & Analysis

This is the third section, Data Collection & Analysis, of the Methods Chapter for: Dialoguing with the Algorithm: An Autoethnographic Study of Midlife Voice, Uncertainty, and Teacher Identity in a ChatGPT Exchange.

by: Maria Lisak, EdD

Data Collection & Analysis

There were several points in the various documents surrounding my ChatGPT exchange on syllabus design, as well as in my inner dialogue with myself, that could be mined for analysis. In the Findings chapter, each incident, or moment, is explained, analyzed, and interpreted in three stages: first, the data context and selection rationale, together with the excerpt itself and an initial interpretation; second, an analysis of AI response patterns as developed during iterative journaling; and finally, an interpretation through a theoretical anchor as outlined in Chapter 2’s Voice, Reflexivity, and Dialogic Becoming. Here, I map out the data sources and the different analytical procedures in motion.

Data Sources

The focal dataset is a single, extended ChatGPT exchange initiated during ESP syllabus redesign and subsequent reflection documentation. Data include:

  • ChatGPT transcripts: Two detailed chat sessions with ChatGPT served as the primary data sources. The first transcript (see Lisak 2025a, Appendix I) prompted recollection of previous activities that I wanted to carry into the following semester. The second transcript (Lisak 2025a, Appendix III) documents the work of writing and revising syllabi for the Fall semester. These transcripts recorded iterative planning requests, design critiques, cultural contextualizations, and my own prompts of varying complexity. Each exchange reflects decision points in syllabus creation.
  • Syllabus drafts: Versions of the working syllabus—ranging from rough notes to formatted outlines—functioned as living documents that mapped the development of pedagogical logic, topic sequencing, and multimodal activities.
  • Personal design notes: These included comments embedded in Google Docs (Lisak 2025a, Appendix II), voice memos, handwritten margin notes, and embedded reflections in planning spreadsheets. These informal records capture moments of doubt, revision, and improvisation.
  • Meta-reflections: Emerging through analytic memos (Lisak 2025a, Appendix IV) and post-session journaling, these reflections allowed me to interpret both the design process and my emotional and intellectual reactions to using AI as a pedagogical tool.

The exchange began as a practical design conversation but quickly moved into reflective and critical terrain, surfacing tensions between efficiency and embodiment, neutrality and nuance, and automation and authorship.

Analytic Procedures

The analysis of my dataset proceeded in three interrelated stages. Each stage corresponded to one layer of meaning-making, moving from descriptive framing to dialogic interpretation.

1. Critical Incident Analysis

To organize the raw material of my extended ChatGPT exchange, I adopted Tripp’s (1993) approach to critical incident analysis. Following his emphasis on seemingly ordinary but retrospectively significant moments, I identified five episodes that crystallized tensions around teacher identity, uncertainty, and AI mediation. Each incident was documented and analyzed in two dimensions:

  • Data context and selection rationale: I recorded contextual details (when, where, task-in-progress, and constraints), the sequence of conversational turns, my immediate reactions (affective and embodied), and my surface framing (initial, pre-analytic interpretation). I then articulated the rationale for selecting the episode by noting the trigger for reflection (why the moment stood out) and the decision point (what choice or shift was at stake).
  • Excerpt and interpretation: I presented a verbatim excerpt of the exchange, highlighting the incident span. Interpretation at this stage remained limited to Tripp’s analytic lens—identifying the trigger, underlying assumptions, and decision point—without theorizing or generalizing beyond the single moment.

This procedure allowed me to treat each incident as both a bounded textual event and a reflective hinge in my professional identity work.

2. Analysis of AI Response Patterns

While Tripp’s (1993) critical incident analysis provided a starting point for identifying pivotal moments within the AI dialogues, it did not, on its own, offer the nuanced tools needed to explore how identity, authority, and epistemic stance were negotiated in this context. To address this gap, I developed a complementary analytic framework that I call Analysis of AI Response Patterns (AARP). This emergent frame arose inductively during the journaling I did for my previous studies of this data, in which I treated ChatGPT as a discursive partner and pedagogical tool. Building from the critical incident documentation, I applied it to the present analysis.

That earlier project illuminated how AI could scaffold design decisions and reduce cognitive load, but it also surfaced tensions around authority, labor, and voice. In this new study, I revisit the same ChatGPT exchange not to evaluate content outcomes, but to trace moments where my professional identity was shaken, stretched, or reasserted. Drawing from the critical incident method, I use four analytical moves—mapping shifts in voice, identifying epistemic dissonance, testing categories for fit, and tracking movement from expertise to uncertainty—to examine how I performed, negotiated, and recalibrated my scholarly self within the interaction.

In several exchanges, my voice shifted from tentative and polite to clipped or ironic, revealing emerging skepticism or a reassertion of control. At other points, I moved from disappointment to unexpected enthusiasm, signaling a pedagogical realignment. I test ChatGPT’s categorical offerings not only for relevance, but for ideological fit—especially when its suggestions reflect flattened or Western-centric assumptions. These moments of epistemic dissonance—when the AI’s fluency obscures a mismatch in values or cultural context—prompt recursive reflection on what I know, how I know it, and what I am willing to unlearn. Ultimately, this analysis focuses not on ChatGPT as an instructional tool, but on the ways in which human-AI dialogue becomes a site for identity work, where expertise, humility, and care are continuously rehearsed and redefined.

This framework operates alongside Tripp’s method but serves a different purpose: rather than locating incidents, it attends to how those incidents unfold rhetorically and epistemically. Specifically, AARP involves four analytic moves: (1) mapping shifts in voice, (2) identifying epistemic dissonance, (3) testing categories for fit, and (4) tracing movements between expertise and uncertainty.

Mapping Shifts in Voice

  • From polite to curt
  • From overly familiar to disbelief/scorn/patience
  • From disappointment to enthusiasm

Across the interaction, my voice did not remain static—it shifted in tone, affect, and rhetorical stance depending on the degree of alignment (or misalignment) between ChatGPT’s suggestions and my pedagogical commitments. In my earlier study, I noted how prompts evolved from emotionally candid, informal utterances to more academically structured reflections. Here, I look more closely at those tonal changes not just as evidence of growing confidence, but as indicators of identity negotiation. For example, my initial prompts were polite and tentative, laced with hedging language like “maybe” or “I wonder if.” But as the AI began to offer context-flattened or culturally inappropriate suggestions, my voice became more directive, even curt—reflecting a reassertion of authority and care. At one point, an overly Westernized Week 1 plan led me to respond with brief, critical language bordering on scorn, followed quickly by a redirection grounded in cultural nuance. These shifts were not random—they marked moments of epistemic boundary-setting, when my professional voice was activated as a protective and clarifying force.

Epistemic Dissonance

Several exchanges in the chat provoked epistemic dissonance—moments when ChatGPT’s outputs, though syntactically fluent and pedagogically structured, clashed with my contextual knowledge or ethical stance. In the prior study, I framed these instances as pedagogical tensions; here, I treat them as identity disturbances. When ChatGPT advised me not to explicitly name certain curricular concepts like “strategic stacking” or “soft credentialing,” I felt a jarring contradiction—not only because these concepts had been introduced earlier by the AI itself, but because my pedagogical ethics hinge on naming power structures and equipping students with the language to understand their learning. That dissonance prompted a deeper reflection: if AI-generated advice feels “reasonable” but contradicts my values, what does that say about the logics encoded into its outputs—and my own moments of acquiescence or resistance? These tensions were not just conceptual; they exposed the recursive nature of identity work when one’s authority is destabilized by machine-generated fluency.

Testing Categories for Fit

ChatGPT frequently offered neatly packaged categories for lesson types, learner outcomes, and content themes. In my syllabus revision study, I treated these as helpful design scaffolds; in this inquiry, I interrogate how those categories align—or fail to align—with the epistemologies I teach from. For instance, when ChatGPT suggested generic intercultural communication topics or Western leadership models, I was reminded of its tendency to universalize pedagogical content. My prompts became sites of friction, not simply requesting alternatives but reworking or rejecting entire category frameworks. I began using the prompt space as a form of conceptual refusal, filtering out assumptions about what my students “need” or “should” learn. These acts of testing, revising, and reframing reflected not only curricular discernment but a defense of my own embodied knowledge—rooted in years of context-specific, care-based teaching. In this way, testing categories became an act of epistemic sovereignty.

From Expertise to Uncertainty

Perhaps the most surprising dynamic in the exchange was the recurring movement from expertise to uncertainty. Despite my extensive experience in EFL instruction and curriculum design, I found myself second-guessing decisions or hesitating to assert my authority in the face of ChatGPT’s polished, immediate responses. The earlier study noted how AI accelerated planning but also introduced a “new kind of exhaustion.” Here, I unpack that exhaustion as a form of identity dislocation. The speed and structure of AI-generated output compressed my reflective process, leaving less time to inhabit uncertainty as a generative space. In some instances, I accepted ChatGPT’s framing too quickly, only to revise it later after sensing something was “off”—a realization that often came from the embodied knowledge I had momentarily bypassed. These moments of reflexive discomfort point to the need for slowness, even in AI-enhanced environments. They also signal that teacher identity is not defined solely by what one knows, but by one’s willingness to sit with ambiguity and return to one’s core commitments after being momentarily swayed.

This layer captured the patterned qualities of algorithmic responses while linking them to my evolving identity positions. Together, these moves provided a way of rendering visible the subtle negotiations that unfolded in interaction with the AI. Rather than treating responses as static outputs, AARP attends to the dialogic rhythm of influence, resistance, and recalibration. It is less a coding schema than a mapping practice—one that foregrounds how patterns of tone, stance, and epistemic tension accumulate across exchanges.

3. Theoretical Anchoring: The Analytical Framework of Voice, Reflexivity, and Dialogic Becoming

Following the initial critical incident analysis and the analysis of AI response patterns, selected moments were subjected to deeper theoretical-analytical examination using the framework articulated in Chapter 2, which conceptualizes teacher identity as relationally constituted through ongoing dialogic encounters. This stage-three analysis applied specific theoretical lenses to illuminate findings that emerged during the critical incident interpretation phase.

Dialogic Voice and Heteroglossia

Drawing on Bakhtin's (2010) dialogism, I analyzed selected incidents as sites of heteroglossic encounter where institutional discourse, professional identity performance, and personal vulnerability became woven together in my prompts and responses. Bakhtin's (2010) principles of addressivity and answerability revealed how I simultaneously addressed ChatGPT as tool, collaborator, and threat while being called to account for the positions I took and authority I claimed or ceded.

Performative Identity Constitution

Butler's (2009) theory of performativity guided my examination of how teacher identity emerged through repeated discursive acts rather than being expressed from a pre-existing core. Butler's (2009) work on precarity proved particularly relevant for analyzing my position as an aging, foreign educator, where the AI encounter embodied the paradox of interdependence that is simultaneously enabling and threatening.

Liminal Spaces and Threshold Experiences

Turner's (1969) concept of liminality informed my analysis of the AI exchange as a threshold space where conventional teacher-student, expert-novice, and human-machine distinctions became blurred. Within this liminal space, I attended to moments of what Turner (1969) calls "structural invisibility"—existing between established categories as neither digital native nor technophobe, neither fully autonomous nor entirely dependent on algorithmic assistance.

Intra-active Entanglement and Material-Discursive Practices

Barad's (2007) concept of intra-activity guided my analysis toward understanding how teacher identity and algorithmic agency emerged through mutual entanglement rather than discrete interaction. Barad's (2007) material-discursive framework helped me attend to how apparently immaterial digital exchanges had material effects through the entanglement of interface design, response speed, institutional contexts, and agential realism's distributed networks of human and non-human actors.

Collaborative Authorship and Distributed Voice

Insights from Johnson-Eilola and Selber's (2007) work on assemblage rhetoric informed my analysis of voice ownership and collaborative meaning-making in teacher-AI exchanges. The framework of distributed cognition (Hutchins, 1995) illuminated moments of "revoicing"—where I actively recontextualized AI suggestions within my pedagogical framework—and how pedagogical knowledge emerged through dynamic interaction between contextual understanding, algorithmic pattern recognition, and material constraints.

Critical AI Studies and Power Relations

Perspectives from critical AI studies provided frameworks for situating individual AI encounters within broader sociotechnical power structures. Noble's (2018) work on algorithmic oppression, Benjamin's (2022) analysis of technological inequality, and Crawford's (2021) work on AI as planetary extraction guided my attention to embedded biases, patterns of stratification, and ethical implications of seemingly individual pedagogical choices.

Marginality, Resistance, and Cultural Integrity

Anzaldúa's (1987) concept of nepantla and the Korean concept of teum (틈) provided culturally grounded frameworks for analyzing how marginalized subjects navigate technological encounters. Cho's (2007) work on teum illuminated how moments of epistemic dissonance and cultural mismatch became spaces of creative possibility, while decolonial AI scholarship (Mohamed, Png, & Isaac, 2020) guided my analysis of resistance practices, particularly my refusal to accept AI-generated content that erased cultural specificity.

Emergent Selfhood and Ethical Response

Freire's (1970) concept of dialogic pedagogy extended into AI-mediated contexts guided my examination of how human-AI dialogue became a site of critical reflection. Shotter's (2006) concept of "withness" thinking informed my attention to moments of responsive, relational understanding that emerged in the encounter itself rather than being pre-planned or predetermined.

Synthesis: Identity as Relational Achievement

This integrated framework positioned teacher identity as an ongoing relational achievement emerging through dialogic encounter. The theoretical foundation enabled nuanced analysis of how professional identity crystallized through recursive positioning, response, resistance, and answerability—what I term dialogic becoming—within AI-mediated pedagogical spaces.

Synthesis

In sum, the analysis combined Tripp’s micro-level structuring of incidents, an emergent framework for mapping AI response tendencies, and dialogic theoretical anchors to explore connections to aging, uncertainty, and identity. This layered approach provided both descriptive clarity and conceptual depth, enabling me to trace how teacher identity was negotiated in and through AI-mediated exchange.

These moves were applied iteratively, with analytic memos capturing emerging insights, questions, and positional shifts throughout the analytical procedure. Throughout, I treated ChatGPT not as a neutral tool but as a reflective surface—a discursive medium that both reveals and shapes the contours of professional identity.

I frame my internal movements not as access to an authentic inner self but as another site of performance, a backstage dialogic register that co-constitutes identity alongside the transcribed exchange.
