Yesterday, while scrolling through TikTok, I came across something interesting: user after user talking about “Dior bags” and other luxury brands. But they weren’t talking about fashion trends—they were discussing their theories on the mysterious SUV-sized drones sighted over New Jersey last month.
If you’re not someone who’s chronically online, this fascinating use of language may sound weird at first. You might also be wondering why social media users talk like this.
These coded expressions are called algospeak: euphemisms, deliberate misspellings, and alternative phrases that social media users craft to circumvent content moderation algorithms on platforms like TikTok, Instagram, and YouTube.
Why does algospeak exist?
At its core, algospeak is a creative form of self-censorship used by social media users, particularly content creators, to discuss sensitive topics that might otherwise get their accounts banned or their posts taken down.
These topics range from mental health to sociopolitical and geopolitical issues, all subjects that can cause a user to run afoul of a platform's automated moderation systems.
Social media platforms enforce their content moderation guidelines through automated algorithms, which can be faulty or inconsistent. Posts containing words or phrases flagged as inappropriate, sensitive, or not advertiser-friendly can incur penalties like removal, shadow banning, or demonetization.
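To make that concrete, here is a minimal sketch of how a naive keyword-based filter might work. Everything in it is hypothetical: the blocklist, the penalty label, and the example posts are mine, not any platform's actual system. Real moderation pipelines are far more elaborate, but the sketch shows why a coded substitution can slip straight past an exact-match rule.

```python
# A toy keyword-based moderator. The blocklist and examples are
# hypothetical; this is not any platform's real system.

BLOCKLIST = {"suicide", "kill"}  # hypothetical flagged keywords

def moderate(post: str) -> str:
    """Flag a post if any of its words exactly matches the blocklist."""
    words = (w.strip(".,!?\"'").lower() for w in post.split())
    return "flagged" if any(w in BLOCKLIST for w in words) else "allowed"

print(moderate("We need to talk about suicide prevention."))  # flagged
print(moderate("We need to talk about unaliving."))           # allowed
```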
For example, on TikTok, the watermelon emoji (🍉) has become a subtle symbol of solidarity with Palestine, evoking the colors of the Palestinian flag without triggering censorship.
Some examples of algospeak on TikTok
On TikTok, the use of algospeak goes far beyond emojis. Creators often replace potentially flagged words with harmless-sounding alternatives or intentional misspellings to slip past the platform’s content moderation algorithm (Steen et al., 2023).
In the image below, you’ll find a few examples of phrases and codes commonly used by content creators on TikTok. (Keep in mind that this list comes from a 2023 study, so some items may have already fallen out of fashion or been flagged by content filters.)

Creating bubbles: The disadvantages of algospeak
Algospeak isn’t just a strategy to avoid penalties—it’s also a way to foster community and solidarity among users. The shared understanding of these codes creates an in-group language, building a sense of connection in an environment that might otherwise feel isolating or hostile.
While this allows users to build communities, it can also create conversational bubbles where communication is less accessible to newcomers.
As users employ coded language to communicate sensitive topics, public discourse can become more fragmented: those who are unfamiliar with the evolving lexicon may find the discussions confusing or even unintelligible, like I did when I first heard of Dior bags.
Such cryptic communication can create barriers to understanding crucial issues, limiting or even preventing meaningful conversation. It can also restrict the reach of important discussions, particularly those related to human rights and global conflicts.
Another disadvantage of algospeak is that it may introduce accessibility challenges. Many of these coded phrases are rooted in English-speaking internet culture, leaving non-native speakers and users from other linguistic or cultural backgrounds out of the loop. This creates an uneven playing field where only those fluent in the evolving lexicon of algospeak can fully engage with the content.

How algospeak evolves
Algospeak evolves at a breakneck pace, driven by the constant tug-of-war between users and platform moderation systems. What’s considered safe today might be flagged tomorrow, pushing creators to innovate further.
For example, “unalive,” which gained popularity a couple of years ago as a euphemism for death and suicide, eventually became recognizable to moderation algorithms, prompting users to coin new terms.
This cycle underscores a fundamental flaw in automated moderation systems: they struggle to understand context. By relying on rigid, keyword-based rules, algorithms often fail to distinguish between harmful content and legitimate discussions about critical topics, inadvertently silencing meaningful conversations.
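Reusing the toy filter from earlier (again, a hypothetical sketch rather than any platform's real pipeline), the context problem becomes obvious: the rule sees only keywords, so it reaches exactly the wrong verdicts on two posts with opposite intent.

```python
# The same hypothetical exact-match filter: keywords only, no context.

BLOCKLIST = {"suicide"}

def moderate(post: str) -> str:
    words = (w.strip(".,!?\"'").lower() for w in post.split())
    return "flagged" if any(w in BLOCKLIST for w in words) else "allowed"

# A supportive, legitimate post gets flagged...
print(moderate("If you're struggling, call a suicide prevention hotline."))  # flagged

# ...while a hostile message written in algospeak sails through.
print(moderate("Go unalive yourself."))  # allowed
```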
Algospeak = free expression?
The rise of algospeak sheds light on a deeper ethical issue: Is it even possible to strike a balance between user safety and free expression on social media?
Platforms argue that strict moderation is necessary to create safe and advertiser-friendly spaces. But the reliance on algorithms may have created a chilling effect, where users censor themselves or resort to cryptic language to avoid penalties. This makes truly free conversation on these platforms more difficult, and makes it even harder for marginalized voices to be heard.
As machine learning and other technologies advance, this dance between algorithms and users will likely continue. Platforms may introduce more sophisticated tools, such as AI that is better at analyzing context, but this could also raise concerns about privacy and overreach.
In any case, users will undoubtedly find new ways to adapt, as they always have.
References:
Steen, E., Yurechko, K., & Klug, D. (2023). You can (not) say what you want: Using algospeak to contest and evade algorithmic content moderation on TikTok. Social Media + Society, 9, Article 20563051231194586. https://doi.org/10.1177/20563051231194586