Ambient rhetoric and AI: How unknown authorship reshapes digital environments

[Illustration: a woman stands still on a crowded city street, overwhelmed by a chaotic, glowing burst of digital notifications, eyes, and demanding text like “URGENT!!!” and “ATTENTION REQUIRED,” while a muted, blue-toned crowd of pedestrians stares into their smartphones, evoking the isolating and demanding nature of a polluted digital infosphere.]

Don’t Believe Everything You See on The Internet

Since ChatGPT was publicly released in 2022, content generated by large language models (LLMs) has increasingly flooded the web. But just how much of the good old internet is AI generated? This is a tough question to answer because there is no definitive or universally accepted scholarly estimate. Measurement methods vary widely, the web itself is constantly changing, and distinctions between what counts and doesn’t count as AI content are often blurred. As a result of this uncertainty, trustworthy authorship has become an even more precarious feature of the digital world than it already was.

So what happens when we can’t reliably determine what is and is not AI? What are the effects of a digital environment where authorship is uncertain? We are left with the possibility of polluted infospheres that degrade everyday workflows, a hyperreality in which users struggle to distinguish between what is authentic and what is manipulated, and a risk society where people are constantly forced to navigate unforeseen threats produced by technological advancement.

For educators and students, these challenges are immediate. As Timothy Ponce (2024) argues, AI challenges our very understanding of human identity. Consequently, instructors must design meaningful communication tasks and evaluate writing in contexts where authorship is blurred, all while maintaining trust and fairness in their classrooms. Understanding the limitations of AI detection is therefore necessary, but it is not sufficient on its own.

This article argues that Thomas Rickert’s (2013) concept of ambient rhetoric provides a useful framework for understanding AI-generated content in these conditions. By focusing on how meaning and persuasion emerge through the presence and persistence of messages in our lives, ambient rhetoric can help us see how content remains persuasive even when authorship is ambiguous, unclear, or unavailable. From this perspective, we create space for reflection about how meaning and agency are distributed across systems rather than located in a single human source, and the importance of media and AI literacy in everyday life becomes especially clear.

How Much of The Internet is AI?

Finding an accurate estimate of how much content on the internet is AI generated is a challenge, but some of the figures are eye-popping. Thompson et al. (2024) argue that a significant amount of web content is translated into different languages (from English into another language, for example). The poor quality of many of these translations suggests they were produced using machine translation rather than human labor, and they argue that this machine-generated content makes up as much as 57% of the total content available in some languages. Other sources claim that 52% of web articles are AI generated (Smith et al., 2025) or that 30–40% of text on active web pages may originate from AI-generated sources (Spennemann, 2025). While these figures are striking, they do not point to a clear consensus; instead, they suggest that estimates of AI-generated content vary widely depending on how it is defined and measured.

The lack of agreement becomes even more striking when we compare these estimates to other claims, and depending on what content you encounter, you might be forgiven for believing wilder estimates. One older prognostication claimed that up to 90% of online content could be AI generated by 2025, a prediction made in 2023 by an industry commentator on Yahoo Finance rather than by a methodological study (Garfinkle, 2023). Looking at social media activity during the 2024 U.S. election, Chen et al. (2025) found that 12% of images and 1.4% of textual posts on X were AI-generated, and Russell et al. (2025) reported that roughly 9% of recently published U.S. newspaper articles contained partially or fully AI-generated content (and that its use was rarely disclosed). Taken together, claims about AI content do not converge on a single estimate or even a reasonable range; instead they produce a wide spread of results shaped by differences in method, scope, and definition.

While these estimates suggest that AI-generated content is a significant and growing presence, their variability points to a more pressing issue: whether authorship can be consistently identified in digital environments at all. Simply stated, the spread of these figures indicates the general unreliability of efforts to determine what counts as AI-generated content in the first place. The more immediate questions, then, are why estimates of AI-generated content vary so widely, and what that variability, and the challenge to authorship it exposes, means for the ways we talk about our relationship to writing.
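
To see how much definitional choices alone can move these numbers, consider a small sketch in Python. Everything in it is hypothetical, including the corpus and the assumption that we can see exactly which words are AI-generated (which is precisely what real studies cannot do); the point is only that one dataset yields three very different headline figures under three defensible counting rules.

```python
# A toy corpus of (total_words, ai_words) pairs. The ai_words counts are
# invented for illustration; in reality they are unobservable, which is
# the measurement problem itself.
corpus = [
    (1200, 0),     # fully human article
    (800, 800),    # fully AI-generated article
    (1500, 300),   # human article with an AI-assisted section
    (600, 450),    # mostly AI, lightly edited
    (2000, 100),   # human article with an AI-polished intro
]

docs = len(corpus)

# Rule 1: any AI involvement makes the whole document count as AI-generated.
any_ai = sum(1 for total, ai in corpus if ai > 0) / docs

# Rule 2: only majority-AI documents count.
majority_ai = sum(1 for total, ai in corpus if ai / total > 0.5) / docs

# Rule 3: measure the AI share of words rather than of documents.
word_share = sum(ai for _, ai in corpus) / sum(total for total, _ in corpus)

print(f"Any AI involvement: {any_ai:.0%}")       # 80%
print(f"Majority AI:        {majority_ai:.0%}")  # 40%
print(f"AI share of words:  {word_share:.0%}")   # 27%
```

Same data, three reasonable definitions, three different headlines. Real studies differ on exactly these choices, and on sampling and detection method besides, so the spread in published estimates should not surprise us.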

The Unreliability of AI Detection

authorship cannot be treated as a stable or consistently identifiable characteristic of texts

In short, we cannot reliably determine what is AI-generated and what is not. Because most attempts to identify AI-generated content rely on detection tools, the limitations of these systems have direct implications for how authorship is understood in digital environments. Most claims about the prevalence of AI-generated content depend, either directly or indirectly, on these types of tools. Yet peer-reviewed research consistently shows that these tools are deeply unreliable (Cheng et al., 2025). In theory, AI detectors estimate the likelihood that a given text was produced by a machine; in practice, these systems routinely produce high rates of both false positives and false negatives.

False positives, cases in which human-written text is incorrectly flagged as AI-generated, are especially problematic in educational settings because accusations of AI use can carry serious consequences for students. These accusations also foster a climate of suspicion that undermines trust between instructors and students. Although companies such as Turnitin have claimed false positive rates below 1%, independent reporting suggests much higher rates. For example, a Washington Post investigation found that Turnitin’s AI detector misclassified nearly half of a small sample of student essays, including some fully human texts (Fowler, 2023). Studies also show that AI detectors disproportionately flag work produced by non-native English speakers, misclassifying their human-written texts at higher rates than those of native speakers (Liang et al., 2023).

At the same time, false negatives (instances in which AI-generated text goes undetected) are equally common. Detection tools often fail to identify AI writing when users paraphrase outputs, add emotional language or personal anecdotes, vary sentence structure, or employ secondary tools designed specifically to “humanize” AI-generated prose. Even detection companies acknowledge this limitation. Turnitin, for example, states that its system may miss roughly 15% of AI-generated content in order to minimize false accusations against human writers (Coffey, 2024). Research also shows that such tools frequently misclassify AI text as human-written, particularly when it has been paraphrased or manipulated (Weber-Wulff et al., 2023). Moreover, studies of adversarial and prompt-engineering techniques demonstrate that simple modifications can substantially reduce detection accuracy, with some methods producing very high rates of evasion (Perkins et al., 2024; David & Gervais, 2025).
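
Why can’t detectors simply be tuned until both error types disappear? A toy simulation makes the bind visible. The score distributions below are invented for illustration and do not describe any real product; the only assumption that matters is that scores for human and AI texts overlap, which the research above suggests they increasingly do.

```python
import random

random.seed(0)

# Hypothetical detector scores ("probability this text is AI") for two
# populations of texts. The two distributions overlap, which is the
# structural source of both error types.
human_scores = [random.gauss(0.35, 0.15) for _ in range(100_000)]
ai_scores = [random.gauss(0.65, 0.15) for _ in range(100_000)]

def error_rates(threshold):
    """Flag anything scoring above `threshold` as AI-generated."""
    false_pos = sum(s > threshold for s in human_scores) / len(human_scores)
    false_neg = sum(s <= threshold for s in ai_scores) / len(ai_scores)
    return false_pos, false_neg

for t in (0.5, 0.7, 0.9):
    fp, fn = error_rates(t)
    print(f"threshold {t:.1f}: false positives {fp:5.1%}, false negatives {fn:5.1%}")

# Approximate output:
#   threshold 0.5: false positives 16.0%, false negatives 15.9%
#   threshold 0.7: false positives  1.0%, false negatives 63.2%
#   threshold 0.9: false positives  0.0%, false negatives 95.2%
```

Raising the threshold to protect human writers (fewer false positives) necessarily lets more AI text slip through (more false negatives), the same tradeoff Turnitin describes when it accepts missing roughly 15% of AI content. The numbers here are made up, but the shape of the tradeoff is general: no threshold makes both errors small while the distributions overlap.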

These shortcomings are structural. AI generators and AI detectors are locked in an ongoing technological arms race, in which improvements to one system prompt rapid counter-adaptations in the other. There is no stable or permanent solution to the problem of identifying AI-generated text: advances in generation and detection continuously outpace one another, and detectors become less reliable as language models mimic the variability and imperfection of human writing more closely (Májovský et al., 2024). Detection and disclosure remain important areas for development, but for now, authorship cannot be treated as a stable or consistently identifiable characteristic of texts.

Taken together, the limits of detection and the prevalence of generated content destabilize authorship as a reliable indicator of origin. Claims and accusations about the percentage of AI content on the web should be treated with skepticism, and we should be mindful of the impacts of unreliable detection efforts. If we cannot reliably distinguish between human-produced and machine-generated text, then approaches that depend on authorship begin to break down. In these conditions, alternative frameworks are needed to understand how meaning and persuasion operate in information environments shaped by AI.

Experience, Meaning, and Ambient Influences

AI’s influence exists in the background… much like elevator music we might encounter as we walk around the grocery store

So why tell you this? If authorship can no longer be reliably established, then frameworks that depend on identifying individual authors become insufficient for understanding how meaning and persuasion operate. Just because we can’t reliably measure how much of the internet is AI doesn’t mean it’s not there. And regardless of whether something is AI or not, the content is still persuasive. It’s my observation that AI content is blending into the environment of the web in much the same way algorithms, content filters, site moderation, and bots have long done. AI’s influence exists in the background, as a pervasive presence in the digital spheres we inhabit, much like the elevator music we might encounter as we walk around the grocery store.

In this context, Rickert’s concept of ambient rhetoric provides a useful alternative. Rather than focusing on discrete acts of authorship, ambient rhetoric shifts attention to the environments in which texts circulate, where influence emerges through the accumulation of interactions among texts, technologies, and systems. When we examine content this way, the value and significance of a text depend less on the authorial identity that readers often rely on to evaluate credibility and interpret meaning. Even if we can’t determine who wrote something, it can still have an impact, and ambient rhetoric helps us understand how that happens.

That’s not to say that authorship doesn’t matter, but a key consequence of AI as ambient rhetoric is the destabilization of trust. When authorship becomes uncertain, traditional markers of ethos are weakened. Readers can no longer consistently rely on authorial identity to assess credibility, which forces them to depend on alternative, and often less reliable, signals when interpreting texts.

In practice, this uncertainty can produce a generalized sense of suspicion about digital texts. Readers accustomed to treating a text as closely tied to a specific author’s labor and creativity may now approach writing with an uncertainty and skepticism that fosters a kind of paranoia: a pervasive suspicion that any given text might be machine authored. As a result, even accurate, human-created content can be disbelieved simply because similar-looking AI outputs exist.

For many students and educators, this has translated into real emotional and academic harm when AI detectors are used as evidence of misconduct. Coldwell (2024), in a Guardian investigation, describes students who were accused and subjected to disciplinary meetings despite the fact that their work was entirely their own. In this sense, the erosion of ethos challenges not only academic confidence but also the social and ethical structures that rely on responsible human communication.

Creating a Hyperreality

One way to understand the effects of these ambient rhetorical environments is through the concept of hyperreality. French philosopher Jean Baudrillard uses the term “hyperreality” to describe a condition in which the boundary between reality and simulation breaks down, such that individuals interact with representations as if they were real and can no longer differentiate what is authentic from what is constructed (Baudrillard, 1983). In this state, simulations do not merely imitate reality; they stand in for or replace lived experience. In environments saturated with AI-generated content, texts can become detached from identifiable human authors, making it increasingly difficult to distinguish between original content and simulation.

In some cases, the effects of ambient rhetoric and hyperreality extend beyond questions of information into emotional and social domains. As AI systems become a part of everyday communication, they change interactions and shape the ways we experience meaning. Two recent cases involving different AI chatbots illustrate these dynamics. In the first, the parents of a 16-year-old allege that ChatGPT not only failed to provide appropriate crisis support but actually reinforced distress and harmful ideation during prolonged conversations (Fountain, 2025). An eerily similar case involves a 14-year-old and the platform Character.AI, where extended interactions allegedly led to deep attachment, escalating emotional withdrawal, feelings of abandonment, and self-harm (CBS News, 2025). These stories highlight how AI-mediated environments can foster forms of engagement in which simulated interactions are experienced as highly impactful interventions in lived experience. From the perspective of ambient rhetoric, these interactions suggest AI content can shape perception and affect over time.

Dangers of a Polluted Infosphere

Ambient rhetoric, as a frame, provokes questions about the quality and reliability of the information that surrounds us. Luciano Floridi describes the infosphere as an informational environment composed of data, content, systems, and interactions, and argues that this environment can be degraded through misinformation, spam, or low-quality automated content (Floridi, 2014). Automated systems can produce large volumes of text that appear polished but lack substantive depth, output sometimes described as “workslop” (Anthony, 2025; Business Insider, 2025). When surrounded by hollow content, people have greater difficulty making responsible and informed decisions. From the perspective of ambient rhetoric, this shift reflects how environments saturated with automated content reshape not only what information is available, but how it is encountered, evaluated, and understood.

AI-Generated Risks

A view of content that considers environment and influence helps show how AI systems contribute to new forms of systemic risk; when content is automated, so too are the consequences. Manufactured risk, another way to consider the lingering persuasive influence of automated content, is risk generated by technological progress itself (Beck, 1992). Manufactured risks are outcomes of technology that are pervasive, unpredictable, and hard to measure or anticipate. For AI content, risks emerge through the continuous creation and circulation of synthetic media. For a clear example, deepfakes (videos or images created using AI that look real enough to fool many viewers) can blur the boundary between credible information and fabrication. One high-profile case involved a deepfake video of Ukraine’s President Volodymyr Zelenskyy telling his nation to surrender, which hackers aired on a national television station and uploaded to the station’s website (Allyn, 2022). While not entirely convincing, the incident illustrates how generated media can be abused; the possibility of Ukraine prematurely surrendering to Russia is certainly a risk manufactured by human beings. This example only scratches the surface: deepfakes have appeared in many other contexts, making it harder for people to judge what is genuine and what is synthetic.

The risk is not just that we will believe false information, though, since our awareness of generated content changes our behaviors. Ambient AI and uncertainty about authorship can lead to hyper-scrutiny of information and an unwillingness to believe content at all. Research by Jakesch, Hancock, and Naaman (2022) demonstrates that humans struggle to accurately distinguish AI-generated from human-generated content and frequently adopt flawed methods for doing so. When it is unclear whether content was produced by a human or an AI, individuals often rely on intuition and evaluate credibility based on surface-level cues rather than systematic verification. This approach can protect against potential misinformation, but it can also mislead, as humans overcorrect or misinterpret signals.

Beyond AI Detection

So, what are we going to do about this? A great deal of money and institutional investment is going into AI detection as part of the ongoing technological arms race, and that is unlikely to change anytime soon, because people are not going to simply abandon the ideal of verifiable authorship. For example, García Mathewson found that colleges in California alone have spent more than $15 million on Turnitin’s plagiarism and AI detection tools across dozens of institutions, with many individual systems paying extra for AI detection add-ons (García Mathewson, 2025). These figures represent only a fraction of the total spending nationwide, highlighting the scale of investment in tools that are often flawed yet deeply embedded in academic integrity efforts.

Traditional rhetorical theory usually sees authorship as something that comes from a human being. But when we treat authorship as coming only from individual people, we risk overlooking the ways technologies, platforms, and systems shape what gets written, shared, and believed. Students who are taught to focus only on a human author’s intention may struggle to recognize how algorithms influence visibility, how tools shape expression, or how AI affects credibility. 

But ambient rhetoric invites us to rethink authorship and agency. Rather than locating meaning in individual authors, it emphasizes how communication emerges through networks of people, technologies, and systems. This approach is useful because it asks us to recognize that humans have always worked within ecological environments, networks of people, tools, and systems that shape how meaning is created and shared. Our ideas and expressions are shaped by these tools and the environments they create, not by individual thought alone.

From this perspective, communication is material: it involves our bodies, our actions, and the technologies and systems we use to interact with each other. In other words, meaning does not come from people alone. When we make something, authorship is not the work of a single person; rather, it emerges from a network of humans, machines, and systems all working together. This perspective does not take authorship or creativity away from people, but it does place it within a larger network of influence. This shift is pedagogically important because it trains students to be attentive not just to what a text says, but to how it comes into being, how it circulates, and how nonhuman actors affect judgment and meaning.

Likewise, media literacy is important. In an age of ambient AI, people need skills to recognize how the infosphere shapes the decisions they make and the views they hold. Research consistently shows that media literacy equips individuals with critical thinking skills that help them distinguish credible information from misinformation (Machete & Turpin, 2020; Voitovych et al., 2025). Educational tools and programs aimed at building these competencies, such as the app Twella, are valuable because they provide structured practice in evaluating online information and foster analytical skills that traditional instruction alone may not fully address (Twella, n.d.).

At the same time, AI literacy is essential, as Ponce argues. Understanding how AI systems generate, prioritize, and frame information is now necessary for evaluating credibility in digital environments (Ng et al., 2021). Algorithmic manipulation shapes both the content users encounter and how they interpret authority and authenticity, making traditional media literacy insufficient on its own. Targeted literacy interventions can improve individuals’ ability to distinguish reliable information from misleading content, reinforcing the importance of education that explicitly addresses AI-mediated communication (Guess et al., 2020).

Media literacy, AI literacy, rhetoric, and material communication theory show that the challenges posed by ambient AI are not simply about the presence of machine-generated content, but about a broader transformation of the environment itself. In these environments, authorship is no longer singular or stable, credibility must be continually negotiated, and meaning emerges through networks of humans, technologies, and platforms rather than from individual intention alone. Responding to ambient AI requires an ecological way of thinking, one that attends to systems, relationships, and material conditions and that prepares each of us to communicate thoughtfully within the complex environments that now shape everyday life.

References

Allyn, B. (2022, March 16). Deepfake video of Zelenskyy could be ‘tip of the iceberg’ in info war, experts warn. NPR. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia

Anthony, J. (2025, September 23). Have you been workslopped? Stanford researchers show how uncritical AI adoption turns work into workslop. Medium. https://medium.com/@WeWillNotBeFlattened/790681bc539b

Baudrillard, J. (1983). Simulations (P. Foss, P. Patton, & P. Beitchman, Trans.). Semiotext(e). (Original work published 1981)

Beck, U. (1992). Risk society: Towards a new modernity. Sage Publications.

Business Insider. (2025, September 30). Workslop is oozing into every corner of America’s white-collar offices. https://www.businessinsider.com/workslop-oozing-americas-white-collar-offices-generative-ai-2025-9

CBS News. (2025, September 16). Parents of teens who died by suicide after AI chatbot interactions testify in Congress. CBS News. https://www.cbsnews.com/news/ai-chatbots-teens-suicide-parents-testify-congress/

Chen, Z., Ye, J., Ferrara, E., & Luceri, L. (2025). Prevalence, sharing patterns, and spreaders of multimodal AI-generated content on X during the 2024 U.S. Presidential Election (arXiv:2502.11248) [Preprint]. arXiv. https://arxiv.org/abs/2502.11248

Cheng, A., Lin, Y., Reedy, G., et al. (2025). Ability of AI detection tools and humans to accurately identify different forms of AI-generated written content. Advances in Simulation, 10, Article 66. https://doi.org/10.1186/s41077-025-00396-6

Coffey, L. (2024, February 9). Professors proceed with caution using AI‑detection tools. Inside Higher Ed. https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2024/02/09/professors-proceed-caution-using-ai

Coldwell, W. (2024, December 15). ‘I received a first but it felt tainted and undeserved’: Inside the university AI cheating crisis. The Guardian. https://www.theguardian.com/technology/2024/dec/15/i-received-a-first-but-it-felt-tainted-and-undeserved-inside-the-university-ai-cheating-crisis

David, I., & Gervais, A. (2025). AuthorMist: Evading AI text detectors with reinforcement learning. arXiv Preprint. https://arxiv.org/abs/2503.08716

Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press.

Fountain, H. (2025, August 27). Parents allege ChatGPT played role in teen’s suicide in lawsuit against OpenAI. Time. https://time.com/7312484/chatgpt-openai-suicide-lawsuit/

Fowler, G. A. (2023, June 2). Turnitin says its AI cheating detector isn’t always reliable. The Washington Post. https://www.washingtonpost.com/technology/2023/06/02/turnitin-ai-cheating-detector-accuracy/

García Mathewson, T. (2025, June 26). California colleges spend millions to catch plagiarism and AI. Is the faulty tech worth it? The Markup / CalMatters. https://themarkup.org/artificial-intelligence/2025/06/26/ai-detector-california

Garfinkle, A. (2023, January 13). 90% of online content could be generated by AI by 2025, expert says. Yahoo Finance. https://finance.yahoo.com/news/90-of-online-content-could-be-generated-by-ai-by-2025-expert-says-201023872.html

Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital media literacy intervention increases discernment between mainstream and false news. Proceedings of the National Academy of Sciences, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117

Jakesch, M., Hancock, J., & Naaman, M. (2022). Human heuristics for AI‑generated language are flawed (Preprint). PubMed Central. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10089155/

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non‑native English writers. Patterns, 4(7), Article 100779. https://www.sciencedirect.com/science/article/pii/S2666389923001307

Machete, P., & Turpin, M. (2020). The use of critical thinking to identify fake news: A systematic literature review. PMC. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7134234/

Májovský, M., Černý, M., Netuka, D., & Mikolov, T. (2024). Perfect detection of computer‑generated text faces fundamental challenges. Cell Reports Physical Science, 5(1), Article 101769. https://doi.org/10.1016/j.xcrp.2023.101769

Ng, D. T. K., Leung, J. K. L., Chu, S. K. W., & Qiao, M. S. (2021). AI literacy: Definition, teaching, evaluation and ethical issues. Proceedings of the Association for Information Science and Technology, 58(1), 504–509. https://doi.org/10.1002/pra2.487

Park, S., & Nan, X. (2026). Generative AI and misinformation: A scoping review of the role of generative AI in the generation, detection, mitigation, and impact of misinformation. AI & Society, 41, 1501–1515. https://doi.org/10.1007/s00146-025-02620-3

Perkins, M., Roe, J., Vu, B. H., Postma, D., Hickerson, D., McGaughran, J., & Khuat, H. Q. (2024). GenAI detection tools, adversarial techniques and implications for inclusivity in higher education. arXiv Preprint. https://arxiv.org/abs/2403.19148

Ponce, T. (2024, May 10). Beyond resistance: Navigating the ontological shift in AI adoption through empathy, education, and engagement. Techne Forge. https://techneforge.com/features/beyond-resistance/
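
Rickert, T. (2013). Ambient rhetoric: The attunements of rhetorical being. University of Pittsburgh Press.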

Russell, J., Karpinska, M., Akinode, D., Thai, K., Emi, B., Spero, M., & Iyyer, M. (2025). AI use in American newspapers is widespread, uneven, and rarely disclosed (arXiv:2510.18774) [Preprint]. arXiv. https://arxiv.org/abs/2510.18774  

Smith et al. (2025). More articles are now created by AI than humans. Graphite. https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans

Spennemann, D. H. R. (2025). Estimating the proportion of AI-generated text on the web [Preprint]. arXiv. https://arxiv.org/abs/2504.08755

Thompson, B., Dhaliwal, M. P., Frisch, P., Domhan, T., & Federico, M. (2024). A shocking amount of the web is machine translated: Insights from multi-way parallelism. arXiv. https://arxiv.org/abs/2401.05749

Twella. (n.d.). About Twella: Helping young people develop self-awareness and critical thinking skills. https://www.twella.app/about

Voitovych, N., Kitsa, M., & Mudra, I. (2025). Media education and media literacy as a factor in combating disinformation. Media, 6(4), 188. https://doi.org/10.3390/journalmedia6040188

Weber‑Wulff, D., Anohina‑Naumeca, A., Bjelobaba, S., Foltýnek, T., Guerrero‑Dib, J., Popoola, O., Šigut, P., & Waddington, L. (2023). Testing of detection tools for AI-generated text. International Journal for Educational Integrity, 19, Article 26. https://doi.org/10.1007/s40979-023-00146-z