
Virtual Conference Schedule 📅 Date: March 20th, 2026. Times presented in Mountain Daylight Time (MDT/GMT-6)
8:30am Check-In and Networking
Session Link: [Link to be added]
An informal, optional space to drop in, say hello, ask questions, or chat about recent developments in AI. Come for a few minutes or stay longer; listening is always welcome.
9:00am Keynote Session
Co-Designing Agentic Futures: Context Engineering as Technical Communication Expertise
Speaker: Abram Anders
Session Link: [Link to be added]
As AI systems become more agentic, a central design challenge has emerged: context engineering, the work of assembling the right information, structures, and situational constraints to shape AI behavior effectively. Designing context well determines not only whether AI applications function effectively but, equally importantly, whether they augment or diminish human agency. This presentation argues that context engineering is fundamentally communication design work, and that technical communication offers disciplinary expertise uniquely suited to it.
Three dimensions of our field’s knowledge map onto the core demands of context engineering. Rhetorical and communication analysis provides methods for determining who needs what information, when, and in what form, which comprise the essential analytical work behind any well-designed AI workflow. Genre knowledge captures how communication should be structured for specific purposes, audiences, and institutional contexts, offering transferable patterns for organizing the reference documents, process guidelines, and evaluative criteria that agentic systems require. Human-centered design and user experience design traditions prioritize user agency and participatory approaches, providing frameworks for ensuring that AI-assisted workflows serve the people they are meant to support rather than optimizing around them.
Drawing on recent research mapping worker preferences against AI capabilities, evidence on creativity and productivity impacts in professional contexts, and emerging parallels between AI-assisted coding and writing practices, this talk demonstrates how these foundations translate into practical design competencies for collaborative AI workflows. The presentation illustrates these connections through situated examples—from scaffolded composing workflows that integrate disciplinary methods and frameworks, to innovation processes structured through design thinking—showing how communication expertise can be operationalized at scales ranging from individual tasks to organizational projects. The future-proof path leads not through proficiency with today’s tools but through the design expertise that positions our field as upstream contributors to AI innovation.

Managing Innovation
Dr. Abram Anders
Dr. Abram Anders is the Jonathan Wickert Professor of Innovation and Associate Director of the Student Innovation Center at Iowa State University, where he leads the AI Innovation Studio. He created a pioneering Artificial Intelligence and Writing course and conducts research on AI literacies in education. His recent Computers & Education: Artificial Intelligence article demonstrates how integrating comprehensive AI literacies—functional, critical, ethical, and creative—with self-regulated learning can promote student agency and human-in-the-loop practices that augment rather than replace disciplinary expertise. His work on technological innovation, communication, and education appears in leading journals including College English, Computers and Composition, and the International Journal of Business Communication. The Association for Business Communication has recognized his scholarship with multiple awards including the 2022 Outstanding Article in IJBC Award. Learn more at abramanders.com.
10:00-11:00am Teaching and Learning Responsible AI Panel
Creating space for AI literacy and AI ethics in technical communication curriculum
Session Link: [Link to be added]
Huiling Ding, Jinzhe Quiao, Wanjun He
We argue that technical communication is uniquely positioned to teach responsible AI to a campus-wide audience, given its focus on communicating about technologies to lay audiences and its offering of upper-level professional writing classes to students from other disciplines. To show how this can be done, this panel will examine how responsible AI can be incorporated into technical communication classrooms to help students develop future-proof skills while building urgently needed pedagogical modules in the technical communication curriculum.
Panelist 1 now teaches a graduate seminar on Responsible AI and Society after two iterations of the class as a 400/500-level course with undergraduate and graduate students. Panelist 2 is teaching an undergraduate version of Responsible AI for the first time this spring. Panelist 3 is taking the graduate seminar and plans to teach the undergraduate class in a few semesters.
11:00am-11:15am Break/Transition
A brief break to stretch, grab a refreshment, and prepare for the next session on Durable Course Frameworks.
11:15am-12:15pm Durable Course Frameworks
Session Link: [Link to be added]
This session focuses on instructional frameworks designed to endure rapid changes in AI tools and writing technologies. Presentations highlight course designs and pedagogical structures that foreground decision-making, transparency, and rhetorical awareness over tool-specific skills.
Show Your Work: A Tool-Agnostic Framework for Sustainable AI Integration in Technical Communication
Cynthia Pengilly
This presentation introduces the “Show Your Work” framework—a tool-agnostic approach to ethical AI integration that prioritizes process over product, transparency over efficiency, and human judgment over algorithmic convenience.
Drawing from twenty years as a practitioner-scholar spanning industry and academia, I demonstrate how making AI collaboration radically transparent creates sustainable practices that transcend specific tools. The framework’s three phases of ownership, collaboration, and synthesis establish durable patterns that work whether students use Claude, ChatGPT, or whatever emerges next quarter.
Embedding a Decision-Making Framework into a Technical Writing Course
Percival V. Guevarra and Patrick Hong
Our university requires School of Engineering students to complete a technical writing course that prepares them to communicate in the professional world. The course's communication-skills modules risked becoming obsolete if students defaulted to AI queries instead of engaging in a process of critical thinking. Because both the student learning outcomes and the course objectives are tied to ABET accreditation, we chose to update the course using a similarly top-down approach. Rather than constantly modifying the instructional material in a game of cat and mouse to stay ahead of generative AI capabilities, we zoomed out to focus on providing context and framing the instructional material, which could then undergo fewer drastic changes.
Given the course’s instructor and student diversity (e.g., biomedical engineering, computer engineering, environmental engineering), an engineering lecturer and writing specialist collaborated on a decision-making framework that informs the course’s structure and exercises. This framework iterates a process of gathering information, analyzing options, making a decision, and drafting writing. This session will introduce participants to the decision-making framework through a tour of the course materials in Canvas, a learning management system. This will include module introduction pages, which provide context and frame the instructional material. The session will conclude with examples of how students have used the framework explicitly in writing exercises, as well as their feedback from using the framework more broadly throughout the course.
12:15-12:45pm Lunch/Extended Break
An extended break for lunch. Please drop in for the Networking & Connections session at 12:45pm.
12:45-1:15pm Networking & Connections
Session Link: [Link to be added]
A mid-day conversational pause designed to help participants connect around shared interests in recent developments, research, teaching, and practice. Attendees can join themed breakout rooms, listen in, or step away as needed. This session is a chance to share ideas, ask questions, think together, or simply hear some stories, take a break, and connect with friends.
1:15-2:00pm Industry Panel 1: AI as a Daily Work Partner
Session Link: [Link to be added]
Moderator: Guiseppe Getto
Panelists: Alan Houser, Senior Solutions Architect, AmeXio
Susan Kelley, Senior Manager of Communications and Engagement, Metidata
Amanda Stockwell, President and Principal Researcher, Stockwell Strategy; Adjunct Associate Professor, Duke University
This panel focuses on how technical communicators are using AI in their everyday work right now. Panelists will share concrete, production-level practices including AI-supported drafting, editing for clarity and tone, research synthesis, note taking, XML and stylesheet development, RAG-based content reuse, and generating sample or test content. Rather than speculative futures, the discussion centers on workflows that reduce friction, address blank page anxiety, and accelerate routine tasks while still relying on professional judgment.
Panelists will also reflect on what AI does not solve, where human expertise remains essential, and how individual practitioners are adapting their skills in response to lowered barriers for surface-level writing quality. Attendees will leave with a grounded understanding of which AI uses are proving durable and transferable across tools and which require careful refinement to remain effective over time.
2:00-2:45pm Industry Panel 2: Governance, Risk, and the Limits of AI in Practice
Session Link: [Link to be added]
Moderator: Guiseppe Getto
Panelists: Christina Mayr, Documentation Team Manager and Senior Information Architect, Epic Games
Jackie Damrau, Product Owner, Independent
Jeffry Handal, Principal SE, Cisco
This panel examines the risks, constraints, and structural consequences of AI adoption in technical communication work. Panelists will address hallucinations, validation practices, operational and security risks, shadow AI, data governance, and the increasing gap between organizational hype and measurable value. The discussion moves beyond individual productivity to consider how AI reshapes labor, accountability, and trust in professional communication environments.
Panelists bring perspectives from enterprise systems, product security, and documentation leadership to challenge common narratives about human-in-the-loop workflows and AI management roles. By foregrounding failure cases, risk exposure, and ethical tensions, this panel aligns closely with the symposium’s focus on future-proofing practices that prioritize resilience, transparency, and long-term responsibility over novelty or speed.
2:45-3:00pm Break/Transition
A short break for transition and preparation before the Durable Judgment session.
3:00-4:15pm Durable Judgment: Rhetoric, Knowledge, and AI
Session Link: [Link to be added]
This session examines how working with generative AI reshapes rhetorical judgment, knowledge production, and classroom practice. Presentations move from pragmatic teaching demonstrations to conceptual frameworks and epistemological critique, offering durable approaches for cultivating critical awareness and judgment in AI-mediated communication.
Teaching as We Learn: Developing Student and Instructor AI Literacy
Jordan Dagenais
One way that educators can adapt to our continuously shifting educational landscape is to lean into interactions with Generative AI in the form of new assignments and in-depth reflection activities. In this presentation, I demonstrate ways that I have integrated Generative AI tools into my first-year technical communication classes at my institution. I showcase how Generative AI tools can be used to assist in lesson planning and assignment building to alleviate the burden on overworked and underpaid educators, while simultaneously enriching our understanding of these tools.
By experimenting with these tools in our own work, we can discover what it means to use Generative AI responsibly and irresponsibly in our classrooms, which will in turn help us articulate these same responsible and irresponsible uses to our students. Additionally, direct use of these tools can help us more candidly contend with the educational, data and copyright infringement, and environmental concerns posed by an overreliance on them. It is imperative that we instructors, rather than serving as AI police, help our students develop a critical literacy about Generative AI tools and their place in the technical communication classroom while also developing our own critical literacy about these same tools.
Teaching with an Algorithmic Mirror: Designing Future-Proof AI Pedagogy in Technical Communication
Philip B. Gallagher
As generative AI tools become embedded in technical communication classrooms, educators often respond by adapting assignments to new tools or debating policies of adoption and restriction. While these responses are necessary, they can unintentionally prioritize short-term tool management over sustainable pedagogical design. This presentation argues for a future-proof approach to AI pedagogy by reframing generative AI not as a collaborator or replacement for student labor, but as an algorithmic mirror that reflects—and intensifies—our existing pedagogical assumptions about context, agency, and rhetorical judgment.
Drawing on sustained, reflective engagement with a large language model across teaching, research, and writing tasks, this presentation synthesizes insights from recent technical communication scholarship with lived pedagogical practice. Rather than focusing on tool-specific instruction, it examines three durable pedagogical lessons that emerge when GenAI is treated as a socio-technical rhetorical system rather than a neutral assistant.
Rhetoric in Ruins: Teaching Students to Excavate AI’s Epistemological Artifacts
Shiva Hari Mainaly
What happens to rhetorical knowledge when AI transforms the material conditions of its production? Rather than future-proofing our practices against AI disruption, I argue we must teach students to excavate the epistemological ruins AI leaves behind—training them to recognize what knowledge systems erode, mutate, or vanish entirely when algorithmic mediation becomes infrastructural. This presentation introduces durability pedagogy: a framework that treats AI-generated texts not as endpoints requiring evaluation but as archaeological sites demanding investigation, drawing on Sano-Franchini's work on cultural rhetorics methodology and Noble's critique of algorithmic oppression. Students learn to ask: What rhetorical traditions does this output presume extinct? What situated expertise does it flatten or erase? What cultural logics does it encode as universal? By repositioning AI literacy as epistemological archaeology rather than technological accommodation, we cultivate rhetorical practitioners who can recognize—and resist—the quiet violence of epistemic erasure that Crawford identifies in AI's extractive logic.
The pedagogical power lies not in teaching students to produce "better" AI outputs but in cultivating what I call epistemic vigilance: the capacity to recognize when algorithmic mediation threatens to naturalize particular ways of knowing while rendering others archaeologically recoverable at best, irrecoverable at worst. Students developed heuristics for identifying these erasures and designed intervention strategies—documentation protocols that foreground knowledge provenance, workflow checkpoints that resist epistemic flattening, governance frameworks that protect vulnerable knowledge systems. This approach extends Rose and Tenenberg's notion of integrative learning by insisting students trace not just what AI makes possible but what it makes impossible. It offers technical communication a durable foundation precisely because it does not depend on any specific AI tool or capability.
4:15-4:30pm Break/Transition
A brief break before the final session on Applied Cases.
4:30-5:45pm Applied Cases
Session Link: [Link to be added]
Focusing on practice in context, this session presents applied case studies of AI use in technical communication, education, and professional work. Speakers highlight approaches to documentation, access, language, and authorship that prioritize usability, inclusion, and long-term sustainability over novelty.
Helper-Doc Based Prompting: A Durable Framework for Teaching AI-Assisted Writing
Jacob Craig
This presentation introduces helper-doc based prompting, a methodology grounding AI instruction in transferable rhetorical skills rather than platform-specific proficiency. Drawing on reflective video essays from 18 students in a Fall 2024 Writing with AI course at the College of Charleston, I demonstrate how adapting the style guide, a professional documentation genre, for use as part of an AI prompt-writing process creates a sustainable framework for human-AI collaboration.
The use of style guides to mediate writers’ purposes and AI outputs helps transform AI from what one student initially called “a magic word box, a mysterious tool that produced results without clarity” into what another described as “creating a detailed brief for a collaborator.” These shifts parallel findings from Cummings, Monroe, and Watkins (2024), who report that structured AI engagement develops students’ attention to sentence structure, word choice, and audience awareness.
The helper-doc prompt writing methodology described in this presentation responds to emerging scholarship mapping where AI can appropriately intervene in writing processes by foregrounding what McKee and Porter (2022) call “rhetorical intelligence”—the capacity to identify exigency, analyze audiences, and create communications that produce meaning and value. The helper-doc approach makes contextual understanding the student’s explicit responsibility and available to AI tools through the automated process of algorithmically powered textual production.
Finetuning Generative AI Tools to Serve ESL Students’ Needs
Geoffrey Sauer
English departments often top American universities’ demographic statistics for diversity. After the Social Justice Turn, we are theoretically prepared to foster counterhegemonic practice. But our programs’ documentation must also change, to more effectively invite and welcome diverse students.
This paper will describe a research study currently gathering data about a locally hosted, open-source generative AI system being finetuned to support ESL graduate students as they seek to understand the policies and procedures of R1 TPC graduate degree programs. It will argue that we should not debate whether materials should be polyglossic, but instead must discuss how TPC can bridge borders less incompletely.
5:45-6:00pm Final Break/Transition
A final short pause before the Town Hall and Closing Remarks.
6:00pm Forging Ahead Town Hall & Closing Remarks
Session Link: [Link to be added]
Closing Remarks and Opportunities
[Host(s)]
🗨️The conference concludes with a brief synthesis of key themes and takeaways from the day. An optional open networking space follows for participants who wish to continue conversations or explore potential connections and collaborations.
