Discord is about to force some of the people who use its messaging app to make a choice: Use the platform with restricted features, or prove their age. It’s a move that platforms have been slowly approaching, but Discord’s teen-by-default rollout is a stronger clampdown that could offer a glimpse of an age-gated future on the web worldwide.
Starting next month, users who don’t verify their age using a face scan or government ID will no longer be able to access age-restricted servers and channels or speak in Discord’s “stage” channels, and they will see filters on any content deemed graphic or sensitive, among other restrictions. They will only be able to skip age checks if Discord’s forthcoming age inference model determines that they’re an adult. The rules are a more extreme version of policies many services are rolling out, often in response to lawsuits and government pressure — even if the technology isn’t ready yet.
Meta, Reddit, Bluesky, and Xbox, for example, have implemented age verification requirements in response to child safety laws in countries including the UK and Australia. At a global level, Instagram set the stage in 2022 by asking underage users to take a video selfie when changing their age to over 18, then ramped up efforts in 2024 to identify teens and place them into more private accounts. It began using AI to scan for signals that a user might be lying about their age, such as someone else on the platform wishing them a happy 14th birthday.
In response to concerns about child safety, Google announced last year that it would start using machine learning to estimate the age of its US users based on the information they’ve searched for or the YouTube videos they watch. Similar to Instagram, Google places users it identifies as under 18 into a more restrictive account built for teens.
OpenAI has also rolled out an AI-powered age prediction model in ChatGPT to identify users under 18 and restrict access to certain kinds of content, including graphic violence, sexual roleplay, and dangerous viral challenges. The model works by analyzing behavior and account-level signals, such as how old a user says they are, the age of their account, and when the user is active. Other AI platforms, including Anthropic and Character.AI, are using age assurance to detect underage users as well. And in January, Roblox started requiring users to undergo age checks in order to chat with other players.
But where Roblox is heavily child-focused, Discord is popular among people of all ages. And it’s being upfront that all users, not just suspected minors, could get slapped with age checks. Its global rollout will also prevent users from getting around the restrictions by accessing the site from another region through a virtual private network (VPN). Many Discord users have already responded to the incoming change with calls to boycott the app, with some choosing to leave the platform as well as cancel their Nitro subscriptions.
The March rollout will be a clearer test of how willing people are to accept having their face scanned by an AI club bouncer or uploading an ID to hang out online, and whether they’ll swallow the risks that entails. Last year, a third-party vendor used by Discord was involved in a data breach, exposing user information and a “small number” of images of IDs used for age verification. Discord says it “immediately stopped” working with that vendor for verification following the breach.
Suzanne Bernstein, a counsel at the Electronic Privacy Information Center, tells The Verge that putting age restrictions on certain features or content isn’t a “silver bullet” for safety. “The way to protect not just kids, but everyone’s safety online, is to design products that are less manipulative and that don’t pose a risk of harm,” Bernstein says.
But age checks, not thoughtful design, are becoming platforms’ standard response to pressure from around the globe. They’ve already been operating in individual countries or behind the scenes through AI — now, they’re front and center on a major social app.