Social Platforms Failed GLAAD’s Safety Test. What’s The Lesson For Publishers?


With social platforms flunking GLAAD’s social media safety test, publishers have a chance to raise the curve by delivering the brand-safe, values-driven environments advertisers are looking for.

GLAAD’s 2025 Social Media Safety Index (SMSI) assesses the safety of six major platforms—TikTok, X, YouTube, Facebook, Instagram and Threads—for LGBTQ+ users. And the annual report reveals an obvious gap in LGBTQ+ user safety between web publishers and social media.

Earlier this year, platforms including Facebook, Instagram and YouTube rolled back protections and content moderation standards, with many of the changes targeting user discussion of LGBTQ+ issues.

For example, Meta now allows users to call LGBTQ+ people “mentally ill” across Facebook, Instagram and Threads, and YouTube has removed gender identity as a protected category in its hate speech policies. X reinstated its policy prohibiting misgendering and deadnaming trans people in 2024 after removing it in 2022—although X owner Elon Musk, who has publicly denounced his trans daughter, assured a prominent anti-trans poster that the policy would not be widely enforced.

Partly due to these platform changes, LGBTQ+ users report being exposed to more harassment and demeaning language on social media. As a result, none of the social media platforms examined in GLAAD’s report passed the organization’s safety test.

“Most Americans have values that are about pluralism and democracy, and are not in agreement with these kinds of fear-mongering and attacks on people based on their identity,” said Jenni Olson, senior director of social media safety at GLAAD.

While social platforms wrestle with inconsistent moderation and public backlash, publishers have an opportunity to reassert their value as safer, more transparent environments, especially for brands that are serious about avoiding adjacency to harmful content.

A Warning Sign in the Numbers

TikTok leads GLAAD’s 2025 SMSI rankings, but barely—scoring just 56 out of 100. At the bottom, X scored just 30, while Meta’s properties and Google’s YouTube fell somewhere in between.

Across the board, GLAAD pointed to a steady backslide in user protections: weaker enforcement, vague policy updates and, in some cases, outright permission for discriminatory content.

GLAAD singled out Meta’s policy changes as particularly surprising. “Meta has essentially repositioned itself as an anti-LGBT brand,” said Olson, “which is baffling for a company that should aim to be inclusive, if not at least neutral.”

Olson added that when a platform fails to enforce its own rules or removes protections for marginalized communities, the consequences don’t stay on the platform. They travel downstream and can even affect the perception of any brand whose ad appears alongside harmful user-generated content. In other words, platform instability has become a brand safety (or suitability) issue.

GLAAD’s report calls on platforms to bring back and actually enforce hate speech protections for LGBTQ+ users. It also calls on the platforms to improve how they train content moderators, be transparent about their content and data practices, cut back on surveillance ads and set clear rules for respectful behavior.

Why Publishers Are the Safer Bet

Of course, the industry should push social media platforms to protect their LGBTQ+ users. But with the social platforms dropping the ball on user safety, publishers have a distinct advantage.

“Platforms simply don’t offer meaningful transparency on many aspects,” said Olson. “Whether that’s content moderation or community guidelines or their enforcement reports.”

Platform moderation often relies on machine learning models, vague enforcement tiers and opaque behind-the-scenes appeals processes. In contrast, publishers can lead with transparency, especially in an industry already plagued by brand safety issues.

Publishers can show how they review content, address harmful posts and safeguard advertiser messaging. That matters more today than it did even a year ago, as brand suitability questions begin to cut deeper than basic keyword adjacency.

Publishers also have an advantage because they’re less reliant on user-generated content. While social platforms can’t control what users post and can only moderate after the fact, publishers get to decide what’s allowed on the page.

For publishers serving LGBTQ+ audiences or other underrepresented communities, this moment also presents an opportunity to secure budgets from advertisers seeking trusted, values-aligned environments.

As Olson argues, there will always be brands committed to equal rights and protections for the LGBTQ+ community. And those brands will weigh the suitability of placing their ads against any content that could be harmful to that community.

So, while it’s hard for many publishers to match the scale of social platforms, they can win ad budgets by leading on a few media quality fundamentals: defined editorial standards, transparent moderation processes and greater accountability.