Is Twitter Competitor Bluesky Failing To Protect Its Black Users?
When Elon Musk announced his bid to buy Twitter in April 2022, many users, particularly Black users, opposed the takeover. Some have since moved to other platforms, such as Mastodon, Spill, and Bluesky.
Bluesky, however, has recently raised alarm bells after a moderation policy change, made in response to a death threat against a Black woman, left many questioning the safety of the platform.
What is Bluesky?
Bluesky is a decentralized social app that started in 2019 as a project of Twitter co-founder Jack Dorsey, then Twitter’s CEO. Dorsey chose Jay Graber to lead Bluesky, and Twitter paid Bluesky to build an open social protocol for public conversation that Twitter itself could someday adopt as a client.
Bluesky became an independent company in 2021, and since Musk’s Twitter takeover in 2022, the decentralized social platform has been wholly divorced from Twitter.
Today, the invite-only online platform has over 100,000 users and a waitlist of 1.9 million people.
How does Bluesky work?
Bluesky uses the AT Protocol, an open-source framework built in-house, meaning people outside the company can see how it is built and what is being developed.
Bluesky explains the AT Protocol using the analogy of moving cities. “Every time you create an account on a social platform, it’s like moving to a new city […] on centralized social platforms, if you leave, it’s like leaving all your friends behind with no way to contact them, and leaving your house behind without being able to take anything with you.” In contrast, “the AT Protocol essentially lets people move between cities.”
According to TechCrunch, Bluesky will soon be a federated platform, which means that endless individually operated communities can exist within the open-source network.
How do people use Bluesky?
It’s used much like Twitter: you can write posts of up to 256 characters and include photos, and there are Home and Discover tabs. Unlike Twitter, however, there is no direct messaging (DM) feature.
The platform is invite-only, and at the moment, you have to sign up for a waitlist to join.
Why are Black users concerned?
Bluesky’s recent change to its moderation policies, made after a death threat against a Black woman, left many questioning the safety of the platform. Here’s what happened.
According to TechCrunch, a coding bug meant that every user in one thread got a notification whenever anyone else responded. The thread grew into a chaotic, seemingly infinite discussion board with countless subthreads, i.e., a “hellthread.” It was in this hellthread that the incident occurred.
When one user, Aveta, asked people to stop posting R. Kelly memes, another user, Alice, suggested Aveta get shoved off “somewhere real high.” Aveta, who declined to comment out of fear of harassment, described Alice’s comment as a death threat in posts on Bluesky.
Other users additionally reported Alice’s comment as a violation of Bluesky’s policy prohibiting extreme violence.
But this was not the first clash between Aveta and Alice.
Aveta, a software engineer, had been actively inviting Black people to Bluesky in the hope of recreating Black Twitter. Alice, who went by the username cererean, was seemingly unhappy about this. She made several racist comments, including one saying that Black users are welcome to create their own spaces if they don’t want to be somewhere that “reflects the demographics of the Anglosphere.”
What was Bluesky’s response?
Bluesky’s moderation team did not initially ban Alice, and CEO Jay Graber announced a change in the platform’s policies that many users felt dismissed and excused comments like Alice’s.
“We do not condone death threats and will continue to remove accounts when we believe their posts represent targeted harassment or a credible threat of violence,” she said, according to TechCrunch. “But not all heated language crosses the line into a death threat.”
In a weekend thread, she commented, “Many people use violent imagery when arguing or venting.”
Under the new policy, any post that threatens violence or physical harm, whether literal or metaphorical, will result in a temporary account suspension.
Repeat offenders will be banned from Bluesky’s server, but once Bluesky finishes the “work required for federation,” Graber said, users can move to a new server with their mutuals and other data.
Although Bluesky isn’t federated at the moment, once it is, users on any server running the AT Protocol (a networking technology created by Bluesky to power the next generation of social applications) will be able to opt in to a community labeling system that would include certain content filters.
This means a user suspended for hate speech or violent threats would still be able to engage with other servers running the AT Protocol.
How did users respond?
Users were quick to respond in Graber’s thread. Ben Perry, also known as tedcruznipples, replied, “They shouldn’t be given the opportunity to have federation and proliferate their message.”
The platform rolled out custom algorithms the day after the new moderation policy was announced, a move that again drew criticism from users.
Rudy Fraser, who created a custom algorithm for Black users called Blacksky, said, “As if a new feature would resolve the underlying issue and as if they couldn’t just ban the offending user.”
In April, before the moderation policy changes, a user called Hannah responded to a white user, Matt Yglesias, with “WE ARE GOING TO BEAT YOU WITH HAMMERS.” Although the threat was reportedly a joke, Hannah was immediately banned.
The incident resurfaced after the threat against Aveta and the moderation policy change, leaving many Black users asking why the platform is not protecting them in the same way.