ct smith

docs goblin

slopaganda countermeasures: part 1

March 17, 2026

We've all seen AI slop online. We've all seen people wailing about the glut of AI slop. We've all seen people concerned about how their AI usage might impact their own ability to create content unaided. I think about this a lot, because I use Claude for so many tasks at work -- I genuinely don't want to draft technical content from scratch anymore.

My professional skills and natural abilities are complemented and rendered hyper-effective by tools like Claude Code, and I feel strongly that I'm allowed to appreciate the boon these tools have been for my professional efforts while still being concerned about my brain. The problem with these new tools is that everyone else can churn out as much content as I can, with little to no quality checks and very little background in what constitutes "good".

So I want to write a little about how you can run some slopaganda countermeasures in your own lives, both on your colleagues and yourself.

I intend for this to turn into a short series about how I'm using AI, how I'm helping to influence safe practices using AI to generate content, and what I'm doing with my brain instead when I hand over menial tasks to AI.

So let's just get to it.

everyone's a writer

In the times before consumer-grade LLM tools were available, we had content mills and endlessly keyword-stuffed nonsense content. So let's be real: low-effort content has always existed. The problem now is that generative AI has handed everyone a tool that can very quickly churn out a lot of content that, at first blush, looks competent. Part of the problem, as I see it, is that the struggle to write about something is often what produces clarity of thought. The struggle imbues the content with accuracy. If it's hard to write, it means you're trying (usually).

So, with tools like ChatGPT and Claude and all them, anyone can churn out reams of industrial-grade swill (or slop, if you prefer). The inherent friction of getting your ideas out of your brain is gone. You could have an 80,000-word novel in no time. So, in my opinion, the content is naturally going to be soulless and probably not that great once you start looking beyond the basic sentence structure.

I think the most insidious part of how competent the content looks is that people don't give it a good read before handing it off. We can't force people to proofread and sniff test stuff, though. We need other strategies.

Other folks have talked this AI slop stuff to death, so I want to focus on how documentarians and other content-concerned folks can help be good examples and lead laterally.

agents of change

You can help influence others you work with and install guardrails to ensure the quality of AI-generated or AI-influenced content outputs. There are so many guerrilla-style tactics you can use to help insinuate your own doc quality guidelines into the tools other teams are using.

I've got a list of things I've tried, or that I've heard of other folks trying, with varying degrees of success. The common thread: look around your own work situation and figure out where you can fit some unobtrusive best practices in, so everyone can do their best work.

As a documentarian, you may be in a good position to help shepherd folks into using good AI practices and save yourself work (and heartbreak) later. Just getting folks to use a style skill might save you a loooooooooot of editing.
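If you've never set one up, here's a minimal sketch of what a style skill could look like, using the SKILL.md convention Claude Code uses for skills. The skill name and every rule below are placeholders -- you'd swap in your own team's style guide:

```markdown
---
name: docs-style
description: Apply our documentation style guide when drafting or editing prose.
---

When writing or editing documentation:

- Use sentence case for headings.
- Prefer active voice and second person ("you can...").
- Spell out acronyms on first use.
- Flag any claim you cannot verify instead of asserting it.
```

Once a file like this lives in the project, anyone on the team generating docs with the tool gets nudged toward the same baseline, without you having to chase each person down individually.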

what's next

Next post, I'll talk about what I'm using AI for (and what I'm not) at work. I've talked about this fairly recently, but this is one of those things that is evolving -- so I think a quarterly update is called for. If there's anything in particular you think I should talk more about, let me know.

I didn't use generative AI when creating this post. I did use Claude Code to remind me of two custom keybindings I recently added in Ghostty, though. Hahaha.