In these three presentations, the speakers consider autonomy, authenticity, and how to make sense of AI outputs.
A Model for Levels of Autonomy in Technical Communication
Michael J. Klein and Philip L. Frana
Department of Interdisciplinary Liberal Studies
James Madison University
The authors propose a pathway for understanding levels of autonomy in technical communication, presenting a four-quadrant contextual model for AI in technical communication: (1) human beings sharing technical information with other human beings; (2) human beings sharing technical information with artificial intelligences; (3) artificial intelligences sharing technical information with human beings; and (4) artificial intelligences sharing technical information with other artificial intelligences. The authors will share examples of humans and machines operating in each quadrant, as well as analyze the benefits and challenges that surface in these various relationships.
AI Ethics and (In)Authenticity: Preliminary Investigations of GPTs’ Affordances for Routine Production and Their Shortcomings for Symbolic Analytic Labor
Paul Hunter and A. Deptula
This presentation builds on findings from our forthcoming article (Deptula et al., 2024) on AI and authenticity. In that article, we detail how generative pre-trained transformer (GPT) large language models handle commonplace technical and professional communication (TPC) concerns: genres, plain language, and grammatical/mechanical correctness. Our initial analyses reveal that ChatGPT 3.5, as of August 2023, can produce reasonable outlines for standard TPC genres (e.g., scientific articles, business proposals, and feasibility reports), transform sentences according to plain language conventions (as evidenced by Flesch-Kincaid grade-level scoring), and help writers ensure mechanical and grammatical correctness.