The uncomfortable middle: Living with AI’s promise and its price

A stylized illustration of a person standing at a fork in the road, with one side bright and abstract and the other dark and industrial, representing the promises and risks of AI.

That discomfort, I’ve decided, is the cost of staying engaged.

A note to readers: I wrote this for my fellow content strategists, rhetoric scholars, and technical writers. But if you’re thinking critically about AI’s role in knowledge work, you’re welcome here too. This is a mixed-genre piece: part research, part reflection, part questions without answers. It’s definitely not a how-to article, and I’m not selling any AI solutions. Just putting my late-night meanderings into writing.

In 1992, technical communication scholar Steven B. Katz published an article that haunts me more now than when I first read it in graduate school. “The Ethic of Expediency: Classical Rhetoric, Technology, and the Holocaust” examines how the Nazi regime’s technical documents (read: memos about transport logistics and reports on operational efficiency) used the detached, objective language of technical rationalism to obscure the genocide that was actually happening (Katz, 1992). Katz’s argument wasn’t that technical writing caused the Holocaust. What he uncovered was far more subtle and unsettling: When we prioritize efficiency, speed, and precision in our language, and when we strip out our lived experiences and humanistic ethics, we create the conditions for harm. The style of neutrality becomes a mask.

I also want to be clear that I’m not drawing an equivalence between AI layoffs and genocide. What I am saying is that the rhetorical patterns Katz (1992) identified—efficiency as the highest value and human lives abstracted into metrics—are alive and well in current AI discourse. There’s a direct relationship between the stories we tell ourselves about work, time, and money—and how easily those stories allow us to look past the human and environmental costs of AI when “efficiency” is the frame.

I think about Katz’s article a lot when I read press releases or LinkedIn posts about AI. The ethic of expediency saturates the rhetoric we’re seeing from CEOs and VCs about the magic of AI and what it can do for job efficiency, the economy, or even our own quality of life. Just last month, I went to a Claude workshop where, at the end, we spent time thinking about how AI could enhance or change our daily workflows. There was a sense of hope in our reflections, but of course, there was also the possibility of unwelcome consequences to consider, like job loss, identity reframing, or the loss of agency. That workshop conversation has stayed with me because the same hope-and-unease pattern is playing out at a much larger scale.

I use AI every day. I use it to speed up translation and localization across 40+ languages. I’ve created a series of interconnected agents that will help content operations teams complete rudimentary tasks more quickly. I experiment with agentic workflows to compress weeks of work into minutes or hours. But I also lie awake some nights wondering whether the language we’re using to talk about this technology—optimization, efficiency, productivity gains, headcount reduction—is doing the same work Katz warned us about. In fact, I just used expedient language to describe my own use of AI: “speed up” and “complete tasks more quickly” and “from weeks to hours.” This leads me to ask myself and others who are thinking critically about the AI landscape: Are we using the rhetoric of technical rationalism to obscure consequences we’d rather not say out loud?

This is the uncomfortable middle ground that most think pieces skip past. The AI discourse has bifurcated into two camps: Techno-utopians who promise AI will “augment, not replace” human work, and doomsayers who warn of mass unemployment and environmental collapse. Neither perspective tells the whole truth. Both, in their own way, deploy the ethic of expediency. One reduces workers to “resources” awaiting “upskilling,” and the other oversimplifies complex tradeoffs into absolute catastrophe. Both ignore what it actually feels like to be a content professional right now, in this exact moment…deciding whether to lean in or pull back.

I’m choosing to lean in. But not blindly, and not without insisting that we name the costs of AI in human terms, not just technical ones.

The anxiety is real because the data is real

This is not a clean replacement story. It’s a messy transition.

Those aren’t just my late-night anxieties. The broader picture backs them up. Let’s dispense with false comfort: The layoffs are happening, and they’re happening to us and to people we know and care about.

In 2025, tracking estimates suggest over 100,000 U.S. workers lost jobs that employers explicitly attributed to AI (Programs.com, 2026). Just this month, trade reporting indicates Oracle laid off approximately 30,000 employees to fund AI data center expansion—the same quarter it posted a 95% jump in net income (Tech Insider, 2026). By the end of Q1 2026, tech layoffs alone are expected to surpass 59,000, with companies like Block cutting 40% of their workforce as CEO Jack Dorsey announced that “intelligence tools have changed what it means to build and run a company” (International Business Times UK, 2026).

Listen to that language. Intelligence tools. Not “we fired 4,000 people.” The passive construction, the abstraction, the technical framing. This is the ethic of expediency doing its quiet work. Workers become headcount. Layoffs become restructuring. Human lives become efficiency gains. The memo writes itself.

A recent analysis of 180 million job postings found that roles for writers and computer graphic artists have declined for two consecutive years, down 33% for graphic artists in 2025 alone (Bloomberry, 2025). And recent college graduates across disciplines report that entry-level roles aren’t just shrinking; in many cases, they simply don’t exist. Technical communication expert Tom Johnson (2026) predicts that in 2026, “hiring numbers will likely remain flat for generalist roles, and many tech writers who leave a team might not have their positions backfilled.”

But here’s the nuance the headlines miss: A study analyzing this wave found that content writers already share 71% skill overlap with the “AI Content Strategist” roles replacing them (JobsPikr, 2026). The same research found that 55% of companies that made AI-attributed layoffs in 2025 reportedly regret the decision, with half expected to quietly rehire (JobsPikr, 2026). This is not a clean replacement story. It’s a messy transition. It’s part genuine transformation, part investor signaling, and part companies chasing headlines. Attributing a layoff to AI isn’t just an operational decision anymore; it’s a market signal. And companies know it.

The question isn’t whether AI will change our work. It already has. The question is who gets to shape that change, and whether we’ll insist on language that keeps the human and environmental consequences visible.

The environmental cost we’d rather not discuss

The lack of transparency makes informed decisions nearly impossible. Data center operators rarely distinguish between AI and non-AI workloads in environmental reporting.

There’s another dimension to the AI conversation that most content professionals avoid, perhaps because it complicates the narrative, or because it makes us complicit in something larger than career anxiety. And here, the ethic of expediency operates at an industrial scale.

AI systems carry a significant environmental footprint. Peer-reviewed research published in late 2025 estimated that AI’s carbon footprint could range from 32.6 to 79.7 million tons of CO2 emissions, with water consumption potentially reaching 312.5 to 764.6 billion liters (de Vries, 2025). That water figure is comparable to the world’s annual consumption of bottled water. Google’s data centers alone consumed 5.6 billion gallons of water in 2023, a 24% increase from the prior year (TRENDS Research & Advisory, 2026).

The water issue is particularly acute. Data centers in Texas are projected to use 49 billion gallons of water in 2025, potentially rising to 399 billion gallons by 2030 (Lincoln Institute of Land Policy, 2025). Much of this water doesn’t get recycled. It evaporates during cooling, with centers typically losing about 80% of withdrawn water to evaporation (Net Zero Insights, 2025). An assessment of over 9,000 data center facilities found that by the 2050s, nearly 45% may face high exposure to water stress (MSCI, 2025).

I won’t pretend I know how to weigh these costs against AI’s potential benefits. When I use Claude to compress a week’s worth of work into an afternoon, am I contributing to a water crisis in Arizona? Does the productivity gain justify the environmental draw? These are not rhetorical questions. And I genuinely don’t have answers. But I do wonder how much water displacement I’m causing each time I ask an LLM to complete a series of tasks. Tokens have real costs, and not just the ones associated with subscription fees or usage levels.

What I do know is that companies are making sustainability pledges while simultaneously scaling AI infrastructure that pushes consumption upward. Microsoft pledged to be water-positive by 2030, but its AI rollout has driven ever-greater resource use, with emissions now roughly 30% higher than 2020 levels (Food & Water Watch, 2025). Some reporting suggests actual emissions from major tech company data centers may be significantly higher than their official disclosures (Food & Water Watch, 2025).

The lack of transparency makes informed decisions nearly impossible. Data center operators rarely distinguish between AI and non-AI workloads in environmental reporting. Communities hosting new facilities often face non-disclosure agreements that obscure water and energy usage until after construction is approved (Lincoln Institute of Land Policy, 2025). We’re making trillion-dollar infrastructure decisions with incomplete information.

And when we take a moment to notice the language being used to describe AI’s pull on natural resources, we see terms like water usage, energy consumption, environmental footprint. These are technical terms; they’re seemingly neutral, measured, and professional. They’re doing the work Katz (1992) described: Abstracting consequences into metrics, converting rivers and aquifers into throughput, transforming communities into stakeholders. The technical vocabulary is not wrong, exactly. But it makes certain questions harder to ask. Whose water? Which communities? And what happens when the aquifer runs dry?

I’m not arguing that we should stop using AI. I’m arguing that we should stop pretending the environmental question will resolve itself, or that it’s somehow someone else’s problem to solve. As GenAI users, we have a responsibility to care about its environmental impact and to ask harder questions of our legislators, of our energy companies, and of the contractors (and subcontractors) who are executing the infrastructural plans that enable its functionality.

What I’ve learned from inside the arena

Sitting out doesn’t preserve the status quo; it just cedes the design decisions to people who may care less about quality, voice, or the human elements of content work.

So, okay. The jobs are shifting. The environmental costs are real. And yet here I am, building AI agents and teaching my team to work with these tools rather than against them. Let me explain why…and I want to be honest about the tension here.

When I describe what AI has done for my work, I catch myself reaching for the same expedient language I’ve been critiquing: efficiency, scale, time savings. It’s hard to talk about these tools without slipping into the rhetoric of optimization. But I’ll try.

That 71% overlap between content writers and AI content strategist roles isn’t a coincidence (JobsPikr, 2026). Everything I’ve learned in 15 years of content strategy—understanding the audience, structuring information, maintaining voice and tone consistency across touchpoints—matters more with AI, not less. The tools have changed, but the architecture of good content work hasn’t. What AI has taken from my daily work is mostly what I didn’t want anyway: Formatting grunt work, first-pass translation, and basic summarization. What’s left is the thought and strategy work: Taxonomy design, content modeling, stakeholder navigation, and quality judgment. I spend less time wrestling with spreadsheets and more time solving actual problems. Or at least, that’s the story I tell myself. I’m aware that it’s a convenient one.

I’m not a computer scientist. I didn’t go to school for machine learning. I studied sculpture and rhetoric. But I’ve learned enough prompt engineering, enough workflow orchestration, enough tool integration to become useful to teams that need to scale content intelligently. Six months ago, I didn’t know how to do anything beyond generating content in ChatGPT or Gemini. Now, I use different LLMs for different purposes, consciously connecting related workflows, markdown instructions, and structured content. The learning curve was real, but manageable, particularly with the privileges of time, access, and employer support. I’m aware that’s not everyone’s situation.
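
To make “connecting workflows” a little less abstract, here’s a stripped-down sketch of the kind of chain I mean: one model call translates a piece of source copy, and a second call summarizes the result so a human reviewer can spot-check it. This is illustrative only, not my production setup; the model name, prompts, and function names are placeholders.

```python
# A minimal sketch of a two-step content workflow chained through one LLM.
# Illustrative only: the model name and prompts are placeholders, not a
# production configuration. Requires the `anthropic` package and an
# ANTHROPIC_API_KEY environment variable.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-5"     # placeholder; substitute whatever model you use


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


def translate_then_summarize(source_text: str, language: str) -> dict:
    """Step 1: translate the copy. Step 2: summarize it for human review."""
    translation = ask(
        f"Translate the following product copy into {language}. "
        f"Preserve tone and terminology:\n\n{source_text}"
    )
    review_summary = ask(
        "Summarize this translated copy in two sentences so a reviewer "
        f"can spot-check it quickly:\n\n{translation}"
    )
    return {"translation": translation, "review_summary": review_summary}
```

The second step is the part I care about. It isn’t there for speed; it’s there to keep a human checkpoint in the loop, and that’s exactly the kind of design decision we cede to someone else if we sit out.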

And here’s the calculus that keeps me moving forward, even when it feels compromised: If AI integration is coming regardless, I’d rather be shaping how it arrives than reacting to its implementation by others. Sitting out doesn’t preserve the status quo; it just cedes the design decisions to people who may care less about quality, voice, or the human elements of content work. That’s not nobility. It’s a strategy. And maybe a little self-preservation.

Finding a tenable (personal) position

I’m trying to build a career that can adapt to multiple possible futures.

I don’t have a five-step framework for you. That would be dumb. And any decisions you make about AI or how to use it are personal ones. Anyone selling certainty about AI’s trajectory is selling something else. What I can offer is the story of how I’ve arrived at a position that I can live with (at least for now).

Accept complicity without surrendering agency. I’m contributing to AI’s growth by using these tools, and by extension, contributing to its environmental footprint. I’m also contributing to a more thoughtful implementation than might otherwise exist. Both things can be true. I choose to stay engaged because the alternative (total abstention) doesn’t actually improve the outcome; it just removes my voice from the conversation (and potentially reduces my skillset and future employability).

Invest in the skills AI can’t replicate easily. Strategy. Judgment. Stakeholder management. Cross-cultural communication. Deep expertise in specialized domains. Ethical reasoning. Relationship building. These aren’t just “soft skills.” They’re the substrate of work that remains stubbornly human, and they’re the skills that organizations reliably undervalue until they don’t have them. And with some industry estimates suggesting a large share of workers will need reskilling within the next few years (Digital Applied, 2026), now is the time to double down on what makes us irreplaceable.

Build leverage through documentation. I’m documenting everything: What AI does well in my workflows, where it fails, what it costs in time and attention, and what quality tradeoffs I’m making. This isn’t just professional CYA. It’s building the evidence base for better decisions. When the next round of “should we reduce headcount because AI?” conversations happen, I want to have data, not just intuitions.
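
For what it’s worth, here’s roughly the shape of one of my log entries. The schema is my own invention, a sketch rather than a standard, and the field names and values are hypothetical; adapt them to your own work.

```python
# A minimal sketch of a workflow log entry. The schema and field names are
# my own invention (a convention, not a standard); adjust them to fit.

from dataclasses import dataclass, asdict
import json


@dataclass
class AIWorkflowLogEntry:
    date: str                    # when the task ran
    task: str                    # what I asked the tool to do
    tool: str                    # which model or agent handled it
    worked_well: str             # where the output held up
    failed: str                  # where it fell short or needed rework
    minutes_spent: int           # my time: prompting, reviewing, fixing
    minutes_saved_estimate: int  # honest guess vs. doing it manually
    quality_tradeoffs: str       # what I accepted that I wouldn't have before


entry = AIWorkflowLogEntry(
    date="2026-03-12",
    task="First-pass translation of release notes into German",
    tool="LLM translation agent",
    worked_well="Terminology stayed consistent with the glossary",
    failed="Flattened the tone of two culturally specific idioms",
    minutes_spent=35,
    minutes_saved_estimate=90,
    quality_tradeoffs="Accepted simpler phrasing in two sections",
)

# Append each entry to a running JSONL file so the record stays queryable.
with open("ai_workflow_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```

Over a few months, that file stops being anecdote and starts being evidence: where the tools reliably help, where they reliably fail, and what the “savings” actually cost.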

Demand transparency without expecting perfection. I can advocate for better environmental disclosure from AI companies while still using their products. I can push for more thoughtful workforce transition policies while acknowledging that some roles will genuinely change. Values-based purity isn’t available to those of us who want to participate in shaping outcomes.

Reserve space for genuine uncertainty. The most intellectually honest position I can take is that I don’t know how this will unfold. The doomsday scenarios might be right. The techno-utopian scenarios might also be right. More than likely, the future will be messier than either scenario. Genuine augmentation exists alongside real displacement. Environmental costs are hidden behind slick rhetoric about efficiency gains. Unexpected consequences will surface alongside intended improvements. But for now (and probably always), I’m trying to build a career that can adapt to multiple possible futures. I’m not betting everything on a single prediction, but I’m also learning to operate in an intellectual and moral grey area.

The work ahead

Focus on growing skills that are uniquely human. And when you explain what you’re doing and why you’re doing it, choose words that keep the human and environmental costs visible.

I titled this piece “The uncomfortable middle” because that’s where most of us actually live. We’re not the VCs celebrating every new model release. We’re not the critics calling for moratoriums. We’re the practitioners. Just writers, strategists, designers, and researchers, trying to do good work while the ground shifts beneath our feet.

There’s no exit from this positionality, only navigation through it. The content professionals who will thrive over the next decade will be the ones who can hold multiple truths simultaneously. AI is genuinely useful and its costs are being externalized. The job market is transforming and human expertise remains essential. Environmental concerns are legitimate and total abstention isn’t a viable strategy. Leadership is often overconfident about AI’s capabilities and some of that confidence reflects real productivity gains. And some of it will prove to be an overshoot, but likely only after the layoffs have already happened.

This is the kind of double vision that writing professions have always required. We’ve always worked at the intersection of what organizations want to say and what audiences need to hear. We translate between worlds, holding tensions that others would prefer to collapse. AI adds another layer to that translation work. It doesn’t eliminate the need for it. It just further complicates the work.

Katz (1992) ended his essay with a call for professional communicators to reclaim ethics as central to our work—not as an afterthought, not as compliance, but as the core of what we do. Three decades of feminist care ethics—from Gilligan and Noddings to Tronto and Puig de la Bellacasa—have given us a vocabulary for what that looks like in practice: attention to particular people in particular contexts, responsibility understood as relational rather than transactional, and the refusal to let efficiency stand in for judgment. An ethic of care won’t cancel out an ethic of expediency. But it does give us something to counter it with, which is more than the rhetoric of optimization offers on its own. We’re the ones who choose the words. We’re sometimes the ones who decide whether the memo says headcount reduction or we’re letting people go. And sometimes… we’re the ones who can insist that efficiency is not the only value worth measuring.

If we want to live in the messy middle, this is the work: Use the tools, but ask questions and take notes. Focus on growing skills that are uniquely human. And when you explain what you’re doing and why you’re doing it, choose words that keep the human and environmental costs visible.

Here’s what it looks like for me: I’m going to keep experimenting and learning, building my agents and documenting my workflows. I’ll keep advocating for thoughtful implementation and honest cost accounting. I’ll keep watching the data and adjusting my position as evidence accumulates. I’ll keep lying awake some nights, wondering what my workday will look like even just 3-6 months from now. And I’ll keep asking the questions that expedient language is designed to disguise: Who benefits? Who pays? What are we not saying?

That discomfort, I’ve decided, is the cost of staying engaged. It’s better than the false comfort of pretending any of this is simple. Because it’s not. And it’s only going to get more complex as we stress-test new models, discover more uncomfortable truths, and learn how to live in a world where humans and AI coexist in the knowledge-making process.


Erica M. Stone, PhD, is a Principal Content Strategist specializing in highly regulated industries like fintech and healthcare. She writes about content strategy, AI integration, community, positionality, and the intersection of worth and work.


References

Bloomberry. (2025, November 3). I analyzed 180M jobs to see what jobs AI is actually replacing today. https://bloomberry.com/blog/i-analyzed-180m-jobs-to-see-what-jobs-ai-is-actually-replacing-today/

de Vries, A. (2025, December 17). The carbon and water footprints of data centers and what this could mean for artificial intelligence. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2666389925002788

Digital Applied. (2026, February 21). AI upskilling 2026: Stay relevant as 80% must retrain. https://www.digitalapplied.com/blog/ai-upskilling-workforce-guide-stay-relevant-2026

Food & Water Watch. (2025, April 9). Artificial intelligence: Big Tech’s big threat to our water and climate. https://www.foodandwaterwatch.org/2025/04/09/artificial-intelligence-water-climate/

Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Harvard University Press.

International Business Times UK. (2026, March 25). Tech layoffs surge to 59,000 in 2026 as Amazon, Meta and Block cut jobs amid AI shift. https://www.ibtimes.co.uk/ai-driven-layoffs-2026-tech-sector-1788111

JobsPikr. (2026, March). AI layoffs 2026: The ROI reality check. https://www.jobspikr.com/report/ai-layoffs-2026-roi-reality-check/

Johnson, T. (2026, January 1). 12 predictions for tech comm in 2026. I’d Rather Be Writing. https://idratherbewriting.com/blog/tech-comm-predictions-for-2026

Katz, S. B. (1992). The ethic of expediency: Classical rhetoric, technology, and the Holocaust. College English, 54(3), 255–275.

Lincoln Institute of Land Policy. (2025, October 17). Data drain: The land and water impacts of the AI boom. https://www.lincolninst.edu/publications/land-lines-magazine/articles/land-water-impacts-data-centers/

MSCI. (2025, December 9). When AI meets water scarcity: Data centers in a thirsty world. https://www.msci.com/research-and-insights/blog-post/when-ai-meets-water-scarcity-data-centers-in-a-thirsty-world/

Net Zero Insights. (2025, November 25). How AI growth is intensifying data center water consumption. https://netzeroinsights.com/resources/how-ai-intensifying-data-center-water-consumption/

Noddings, N. (2013). Caring: A relational approach to ethics and moral education (2nd ed.). University of California Press.

Programs.com. (2026, March). List of companies announcing AI-driven layoffs. https://programs.com/resources/ai-layoffs/

Puig de la Bellacasa, M. (2017). Matters of care: Speculative ethics in more than human worlds. University of Minnesota Press.

Tech Insider. (2026, April 3). Oracle layoffs 2026: 30,000 jobs cut to fund AI data centers. https://tech-insider.org/oracle-30000-layoffs-ai-data-center-restructuring-2026/

TRENDS Research & Advisory. (2026, February 8). Water implications of AI-driven digital infrastructure expansion. https://trendsresearch.org/insight/water-implications-of-ai-driven-digital-infrastructure-expansion/

Tronto, J. C. (1993). Moral boundaries: A political argument for an ethic of care. Routledge.