Some Medium writers and editors do applaud the platform’s approach to AI. Eric Pierce, founder of Fanfare, Medium’s largest pop culture publication, says he doesn’t have to fend off many AI-generated submissions, and he believes the human curators behind Medium’s boost program help highlight the best of the platform’s human writing. “I can’t think of a single piece I’ve read on Medium in the past few months that even hinted at being AI-created,” he says. “Increasingly, Medium feels like a bastion of sanity amid an internet desperate to eat itself alive.”

Other writers and editors, however, say they still see plenty of AI-generated writing on the platform. Content marketing writer Marcus Musick, who edits several publications, wrote a post lamenting that an article he suspects was AI-generated went viral. (Reality Defender ran an analysis on the article in question and estimated it was 99 percent “likely manipulated.”) The story appears to have been widely read, racking up more than 13,500 “claps.”

In addition to spotting possible AI content as a reader, Musick believes he encounters it frequently as an editor. He says he rejects around 80 percent of prospective contributors each month because he suspects they’re using AI. He doesn’t use AI detectors, which he calls “useless,” relying instead on his own judgment.

While the volume of likely AI-generated content on Medium is notable, the moderation challenge the platform faces, surfacing good work while keeping junk banished, is one that has always plagued the wider web. The AI boom has simply supercharged the problem. Click farms have long been an issue, for example, but AI has handed SEO-obsessed entrepreneurs a way to swiftly resurrect zombie media outlets by filling them with AI slop. There’s a whole subgenre of YouTube hustle-culture entrepreneurs creating get-rich-quick tutorials that encourage others to churn out AI slop on platforms like Facebook, Amazon Kindle, and, yes, Medium. (Sample headline: “1-Click AI SEO Medium Empire 🤯.”)

“Medium is in the same place as the internet as a whole right now. Because AI content is so quick to generate that it is everywhere,” says plagiarism consultant Jonathan Bailey. “Spam filters, the human moderators, et cetera—those are probably the best tools they have.”

Stubblebine’s argument—that it doesn’t necessarily matter whether a platform contains a large amount of garbage, as long as it successfully amplifies good writing and limits the reach of said garbage—is perhaps more pragmatic than any attempt to wholly banish AI slop. His moderation strategy may very well be the most savvy approach.

It also suggests a future in which the Dead Internet theory comes to pass. The theory, once the domain of extremely online conspiratorial thinkers, holds that the vast majority of the internet is devoid of real people and human-created posts, clogged instead with AI-generated slop and bots. As generative AI tools grow more commonplace, platforms that give up on trying to blot out bots will incubate an online world in which human-created work becomes ever harder to find beneath the flood of AI.
