Twitter’s decentralized future

By Lucas Matney

This week, Twitter CEO Jack Dorsey finally responded publicly to the company’s decision to ban President Trump from its platform, writing that Twitter had “faced an extraordinary and untenable circumstance” and that he did not “feel pride” about the decision. In the same thread, he took time to call out a nascent Twitter-sponsored initiative called “bluesky,” which is aiming to build up an “open decentralized standard for social media” that Twitter is just one part of.

Researchers involved with bluesky reveal to TechCrunch an initiative still in its earliest stages that could fundamentally shift the power dynamics of the social web.

Bluesky is aiming to build a “durable” web standard that will ultimately ensure that platforms like Twitter have less centralized responsibility in deciding which users and communities have a voice on the internet. While this could protect speech from marginalized groups, it may also upend modern moderation techniques and efforts to prevent online radicalization.

Jack Dorsey, co-founder and chief executive officer of Twitter Inc., arrives after a break during a House Energy and Commerce Committee hearing in Washington, D.C., U.S., on Wednesday, Sept. 5, 2018. Republicans pressed Dorsey for what they said may be the “shadow-banning” of conservatives during the hearing. Photographer: Andrew Harrer/Bloomberg via Getty Images

What is bluesky?

Just as Bitcoin lacks a central bank to control it, a decentralized social network protocol operates without central governance, meaning Twitter would only control its own app built on bluesky, not other applications on the protocol. The open and independent system would allow applications to see, search and interact with content across the entire standard. Twitter hopes that the project can go far beyond what the existing Twitter API offers, enabling developers to create applications with different interfaces or methods of algorithmic curation, potentially paying entities across the protocol like Twitter for plug-and-play access to different moderation tools or identity networks.
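
To make the plug-and-play idea concrete, here is a minimal sketch, in Python, of how an application built on an open protocol might swap in different moderation and curation modules. Nothing here is real bluesky code; the protocol does not exist yet, and every class and function name below is invented for illustration.

    # Illustrative sketch only: a hypothetical client on an open social protocol.
    # The protocol stays neutral; each application picks its own policies.
    from dataclasses import dataclass
    from typing import Callable, Iterable, List

    @dataclass
    class Post:
        author: str
        text: str
        server: str  # which node on the open protocol published it

    # A moderation policy and a curation algorithm are just functions the app chooses.
    ModerationPolicy = Callable[[Post], bool]        # True = keep the post
    CurationAlgorithm = Callable[[List[Post]], List[Post]]

    class ProtocolClient:
        """One application's view of the shared, protocol-wide firehose."""

        def __init__(self, firehose: Iterable[Post],
                     moderation: ModerationPolicy,
                     curation: CurationAlgorithm):
            self.firehose = firehose
            self.moderation = moderation
            self.curation = curation

        def timeline(self) -> List[Post]:
            visible = [p for p in self.firehose if self.moderation(p)]
            return self.curation(visible)

    # Two apps on the same protocol can make entirely different choices:
    strict_app = ProtocolClient(
        firehose=[],  # would be the shared network feed
        moderation=lambda p: p.server not in {"blocked.example"},
        curation=lambda posts: sorted(posts, key=lambda p: p.author),
    )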

A widely adopted, decentralized protocol is an opportunity for social networks to “pass the buck” on moderation responsibilities to a broader network, one person involved with the early stages of bluesky suggests, allowing individual applications on the protocol to decide which accounts and networks its users are blocked from accessing.

Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would also provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe.

Bluesky’s current scope is firmly in the research phase, people involved tell TechCrunch, with about 40-50 active members from different factions of the decentralized tech community surveying the software landscape and putting together proposals for what the protocol should ultimately look like. Twitter has told early members that it hopes to hire a project manager in the coming weeks to build out an independent team that will start crafting the protocol itself.

A Twitter spokesperson declined to comment on the initiative.

Bluesky’s initial members were invited by Twitter CTO Parag Agrawal early last year. It was later determined that the group should open the conversation up to folks representing some of the more recognizable decentralized network projects, including Mastodon and ActivityPub, which joined the working group hosted on the secure chat platform Element.

Jay Graber, founder of decentralized social platform Happening, was paid by Twitter to write up a technical review of the decentralized social ecosystem, an effort to “help Twitter evaluate the existing options in the space,” she tells TechCrunch.

“If [Twitter] wanted to design this thing, they could have just assigned a group of guys to do it, but there’s only one thing that this little tiny group of people could do better than Twitter, and that’s not be Twitter,” said Golda Velez, another member of the group who works as a senior software engineer at Postmates and co-founded civ.works, a privacy-centric social network for civic engagement.

The group has had some back and forth with Twitter executives on the scope of the project, eventually forming a Twitter-approved list of goals for the initiative. They define the challenges that the bluesky protocol should seek to address while also laying out what responsibilities are best left to the application creators building on the standard.

Image: Parrot.VC Twitter account (TechCrunch)

Who is involved

The pain points enumerated in the document, viewed by TechCrunch, encapsulate some of Twitter’s biggest shortcomings. They include “how to keep controversy and outrage from hijacking virality mechanisms,” as well as a desire to develop “customizable mechanisms” for moderation, though the document notes that the applications, not the overall protocol, are “ultimately liable for compliance, censorship, takedowns etc.”

“I think the solution to the problem of algorithms isn’t getting rid of algorithms — because sorting posts chronologically is an algorithm — the solution is to make it an open pluggable system by which you can go in and try different algorithms and see which one suits you or use the one that your friends like,” says Evan Henshaw-Plath, another member of the working group. He was one of Twitter’s earliest employees and has been building out his own decentralized social platform called Planetary.
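
As a rough illustration of the "open pluggable system" Henshaw-Plath describes, the sketch below treats ranking algorithms as interchangeable functions a user can choose between, with reverse-chronological ordering as just one option. The names and data shapes here are hypothetical, not taken from any existing platform.

    # Sketch: feed ranking as a user-selectable plug-in.
    from datetime import datetime
    from typing import Callable, Dict, List, NamedTuple

    class Item(NamedTuple):
        posted_at: datetime
        likes: int
        from_friend: bool
        text: str

    Ranker = Callable[[List[Item]], List[Item]]

    RANKERS: Dict[str, Ranker] = {
        # "Sorting posts chronologically is an algorithm" too.
        "chronological": lambda feed: sorted(feed, key=lambda i: i.posted_at,
                                             reverse=True),
        "most_liked": lambda feed: sorted(feed, key=lambda i: i.likes,
                                          reverse=True),
        "friends_first": lambda feed: sorted(feed,
                                             key=lambda i: (not i.from_friend,
                                                            -i.likes)),
    }

    def render_timeline(feed: List[Item], choice: str) -> List[str]:
        """Let the user pick which ranker shapes their timeline."""
        return [item.text for item in RANKERS[choice](feed)]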

His platform is based on the Secure Scuttlebutt protocol, which allows users to browse networks offline in an encrypted fashion. Early on, Planetary had been in talks with Twitter for a corporate investment as well as a personal investment from CEO Jack Dorsey, Henshaw-Plath says, but the competitive nature of the platform prompted some concern among Twitter’s lawyers, and Planetary ended up receiving an investment from Twitter co-founder Biz Stone’s venture fund Future Positive. Stone did not respond to interview requests.

After agreeing on goals, Twitter had initially hoped for the broader team to arrive at some shared consensus, but starkly different viewpoints within the group prompted Twitter to accept individual proposals from members. Some pushed Twitter to outright adopt or evolve an existing standard while others pushed for bluesky to pursue interoperability of standards early on and see what users naturally flock to.

One of the developers in the group hoping to bring bluesky onto their standard was Mastodon creator Eugen Rochko, who tells TechCrunch he sees the need for a major shift in how social media platforms operate globally.

“Banning Trump was the right decision though it came a little bit too late. But at the same time, the nuance of the situation is that maybe it shouldn’t be a single American company that decides these things,” Rochko tells us.

Like several of the other members in the group, Rochko has been skeptical at times about Twitter’s motivation with the bluesky protocol. Shortly after Dorsey’s initial announcement in 2019, Mastodon’s official Twitter account tweeted out a biting critique, writing, “This is not an announcement of reinventing the wheel. This is announcing the building of a protocol that Twitter gets to control, like Google controls Android.”

Today, Mastodon is arguably one of the most mature decentralized social platforms. Rochko claims that the network of decentralized nodes has more than 2.3 million users spread across thousands of servers. In early 2017, the platform had its viral moment on Twitter, prompting an influx of “hundreds of thousands” of new users alongside some inquisitive potential investors whom Rochko has rebuffed in favor of a donation-based model.

Image Credits: TechCrunch

Inherent risks

Not all of the attention Rochko has garnered has been welcome. In 2019, Gab, a social network favored by right-wing extremists, brought its entire platform onto the Mastodon network after integrating the platform’s open-source code, bringing Mastodon its single biggest web of users and its most undesirable liability all at once.

Rochko quickly disavowed the network and aimed to sever its ties to other nodes on the Mastodon platform and convince application creators to do the same. But a central fear of decentralization advocates was quickly realized, as the platform type’s first “success story” was a home for right-wing extremists.

This fear has been echoed in decentralized communities this week as app store owners and networks have taken another right-wing social network, Parler, off the web after violent content surfaced on the site in the lead-up to and aftermath of riots at the U.S. Capitol, leaving some developers fearful that the social network may set up home on their decentralized standard.

“Fascists are 100% going to use peer-to-peer technologies, they already are and they’re going to start using it more… If they get pushed off of mainstream infrastructure or people are surveilling them really closely, they’re going to have added motivation,” said Emmi Bevensee, a researcher studying extremist presences on decentralized networks. “Maybe the far-right gets stronger footholds on peer-to-peer before the people who think the far-right is bad do because they were effectively pushed off.”

A central concern is that commoditizing decentralized platforms through efforts like bluesky will provide a more accessible route for extremists kicked off current platforms to maintain an audience and provide casual internet users a less janky path towards radicalization.

“Peer-to-peer technology is generally not that seamless right now. Some of it is; you can buy Bitcoin in Cash App now, which, if anything, is proof that this technology is going to become much more mainstream and adoption is going to become much more seamless,” Bevensee told TechCrunch. “In the current era of this mass exodus from Parler, they’re obviously going to lose a huge amount of audience that isn’t dedicated enough to get on IPFS. Scuttlebutt is a really cool technology but it’s not as seamless as Twitter.”

Extremists adopting technologies that promote privacy and strong encryption is far from a new phenomenon; encrypted chat apps like Signal and Telegram have been at the center of such controversies in recent years. Bevensee notes the tendency of right-wing extremist networks to adopt decentralized network tech has been “extremely demoralizing” to those early developer communities — though she adds that the same technologies can and do benefit “marginalized people all around the world.”

Though people connected to bluesky’s early moves see a long road ahead for the protocol’s development and adoption, they also see an evolving landscape with Parler and President Trump’s recent deplatforming that they hope will drive other stakeholders to eventually commit to integrating with the standard.

“Right at this moment I think that there’s going to be a lot of incentive to adopt, and I don’t just mean by end users, I mean by platforms, because Twitter is not the only one having these really thorny moderation problems,” Velez says. “I think people understand that this is a critical moment.”

Snapchat permanently bans President Trump’s account

By Lucas Matney

Quite a bit has happened since Snapchat announced last week that it was indefinitely locking President Trump’s Snapchat account. After temporary bans from his Facebook, Instagram and YouTube accounts as well as a permanent ban from Twitter, Snap has decided that it will also be making its ban of the President’s Snapchat account permanent.

Though Trump’s social media preferences as a user are clear, Snapchat gave the Trump campaign a particularly effective platform to target young users who are active on the service. A permanent ban will undoubtedly complicate his future business and political ambitions as he finds himself removed from most mainstream social platforms.

Snap says it made the decision in light of repeated attempted violations of the company’s community guidelines that had been made over the past several months by the President’s account.

“Last week we announced an indefinite suspension of President Trump’s Snapchat account, and have been assessing what long term action is in the best interest of our Snapchat community. In the interest of public safety, and based on his attempts to spread misinformation, hate speech, and incite violence, which are clear violations of our guidelines, we have made the decision to permanently terminate his account,” a Snap spokesperson told TechCrunch.

Snap’s decision to permanently ban the President was first reported by Axios.

Trump circumvents Twitter ban to decry ‘unprecedented assault on free speech’

By Devin Coldewey

Following a comprehensive ban from Twitter and a number of other online services following last week’s assault on the Capitol by his followers, President Trump managed to put out a tweet in the form of a video address touching on the “calamity at the Capitol”… and, of course, his deplatforming.

In the video, Trump instructs his followers to shun violence, calling it un-American. “No true supporter of mine could ever endorse political violence,” he said, days after calling rioters “great patriots” and telling them “we love you, you’re very special” as they despoiled the House and Senate.

He pivoted after a few minutes to the topic that, after his historic second impeachment, is almost certainly foremost on his mind: being banned from his chief instrument of governance, Twitter.

“I also want to say a few words about the unprecedented assault on free speech we have seen in recent days,” he said, although the bans and other actions are all due to documented breaches of the platforms’ rules. “The efforts to censor, cancel and blacklist our fellow citizens are wrong, and they are dangerous. What is needed now is for us to listen to one another, not to silence one another.”

After having his @realdonaldtrump handle suspended by Twitter, Trump attempted to sockpuppet a few other prominent accounts of allies, but was swiftly shut down. What everyone assumed must be plans to join Parler were scuttled along with the social network itself, which has warned it may be permanently taken offline after Amazon and other internet infrastructure companies refused to host it.

In case you’re wondering how Trump was able to slip this one past Twitter’s pretty decisive ban to begin with, we were curious too.

Twitter tells TechCrunch:

This Tweet is not in violation of the Twitter Rules. As we previously made clear, other official administration accounts, including @WhiteHouse, are permitted to Tweet as long as they do not demonstrably engage in ban evasion or share content that otherwise violates the Twitter Rules.

In other words, while Trump the person was banned, Trump the head of the Executive branch may still have some right, in the remaining week he holds the office, to utilize Twitter as a way of communicating matters of importance to the American people.

This gives a somewhat unfortunate impression of a power move, as Twitter has put itself in the position of determining what is a worthwhile transmission and what is a rabble-rousing incitement to violence. I’ve asked the company to clarify how it is determined whether what Trump does on this account is considered ban evasion.

Meanwhile, almost simultaneous with Trump’s surprise tweet, Twitter founder Jack Dorsey unloaded 13 tweets worth of thoughts about the situation:

I believe this was the right decision for Twitter. We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety. Offline harm as a result of online speech is demonstrably real, and what drives our policy and enforcement above all.

That said, having to ban an account has real and significant ramifications. While there are clear and obvious exceptions, I feel a ban is a failure of ours ultimately to promote healthy conversation. And a time for us to reflect on our operations and the environment around us.

Jack neither reaches any real conclusions nor illuminates any new plans, but it’s clear he is thinking real hard about this. As he notes, however, it’ll take a lot of work to establish the “one humanity working together” he envisions as a sort of stretch goal for Twitter and the internet in general.

YouTube puts a temporary freeze on uploads to Trump’s channel

By Natasha Lomas

YouTube has been the slowest of the big social media platforms to react to the threat of letting president Trump continue to use its platform as a megaphone to whip up insurrection in the wake of the attack on the US Capitol last week. But it’s now applied a temporary upload ban.

In a short Twitter thread today, the Google-owned service said it had removed new content uploaded to Trump’s YouTube channel “in light of concerns about the ongoing potential violence”.

It also said it’s applied a first strike — triggering a temporary upload ban for at least seven days.

At the time of writing the verified Donald J Trump YouTube channel has some 2.78M subscribers.

“Given the ongoing concerns about violence, we will also be indefinitely disabling comments on President Trump’s channel, as we’ve done to other channels where there are safety concerns found in the comments section,” YouTube adds.

2/ Given the ongoing concerns about violence, we will also be indefinitely disabling comments on President Trump’s channel, as we’ve done to other channels where there are safety concerns found in the comments section. https://t.co/1aBENHGU5z

— YouTubeInsider (@YouTubeInsider) January 13, 2021

We reached out to YouTube with questions about the content that was removed and how it will determine whether to extend the ban on Trump’s ability to post to its platform beyond seven days.

A spokeswoman confirmed that content uploaded to the channel on January 12 had been taken down for violating its policies on inciting violence, with the platform saying it perceived an increased risk of violence in light of recent events and Trump’s earlier remarks.

She did not confirm the specific content of the video that triggered the takedown and strike.

According to YouTube, the platform is applying its standard ‘three strikes’ policy — whereby, if a channel receives three strikes within a 90-day period, it gets permanently suspended. Under this policy a first strike earns around a week’s suspension, a second strike earns around two weeks and a third strike triggers a termination of the channel.
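
For illustration, the escalation YouTube describes can be written down as a small piece of logic. The sketch below assumes a rolling 90-day window and the approximate suspension lengths mentioned above; it is not YouTube's actual implementation, and the data shapes are invented.

    # Sketch of a "three strikes in 90 days" escalation policy.
    from datetime import datetime, timedelta
    from typing import List

    STRIKE_WINDOW = timedelta(days=90)
    SUSPENSIONS = {1: timedelta(days=7), 2: timedelta(days=14)}  # 3rd strike = termination

    def channel_status(strikes: List[datetime], now: datetime) -> str:
        """Count strikes still inside the rolling 90-day window."""
        active = [s for s in strikes if now - s <= STRIKE_WINDOW]
        if len(active) >= 3:
            return "terminated"
        if active:
            ban_until = max(active) + SUSPENSIONS[len(active)]
            if now < ban_until:
                return f"uploads frozen until {ban_until:%Y-%m-%d}"
        return "in good standing"

    print(channel_status([datetime(2021, 1, 12)], datetime(2021, 1, 13)))
    # -> uploads frozen until 2021-01-19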

At the time of writing, Trump’s official YouTube channel has a series of recent uploads — including five clips from a speech he gave at the Mexican border wall, where he lauded the “successful” completion of his 2016 campaign pledge to ‘build the wall’.

In one of these videos, entitled “President Trump addresses the events of last week”, Trump characterizes supporters who attacked the US Capitol as a “mob” — and claims his administration “believes in the rule of law, not in violence or rioting” — before segueing into a series of rambling comments about the pandemic and vaccine development.

The clip ends with an entreaty by Trump for “our nation to heal”, for “peace and for calm”, and for respect for law enforcement — with the president claiming people who work in law enforcement form the backbone of the “MAGA agenda”.

An earlier clip of Trump speaking to reporters before he left for the tour of the border wall is also still viewable on the channel.

In it the president attacks the process to impeach him a second time as “a continuation of the greatest witch-hunt in the history of politics”. Here Trump name-checks Nancy Pelosi and Chuck Schumer — in what sounds like a veiled but targeted threat.

“[For them] to continue on this path, I think it’s causing tremendous danger to our country and it’s causing tremendous anger,” he says, before tossing a final caveat at reporters that “I want no violence”. (But, well, if you have to add such a disclaimer what does that say about the sentiments you know you’re whipping up?)

While YouTube has opted for a temporary freeze on Trump’s megaphone, Twitter banned the president for good last week after one too many violations of its civic integrity policy.

Facebook has also imposed what it describes as an “indefinite” suspension — leaving open the possibility that it could in future restore Trump’s ability to use its tools to raise hell.

To date, YouTube has managed to avoid being the primary target of ire for those criticizing social media platforms for providing Trump with a carve out from their rules of conduct and a mainstream platform to abuse, bully, lie and (most recently) whip up insurrection.

However the temporary freeze on his account comes after civil rights groups had threatened to organize an advertiser boycott of its platform.

Per Reuters, the Stop Hate for Profit (SHP) campaign — which previously led a major advertisers boycott of Facebook last summer — had demanded that YouTube take down Trump’s verified channel.

“If YouTube does not agree with us and join the other platforms in banning Trump, we’re going to go to the advertisers,” one of SHP’s organizers, Jim Steyer, told the news agency.

In its official comments about the enforcement action against president Trump, YouTube makes no mention of any concern about ramifications from its own advertisers. Though it has faced earlier boycotts from advertisers over hateful and offensive content.

YouTube also claims it consistently enforces its policies, regardless of who owns the channel — and says it makes no exceptions for public figures.

However the platform has been known to reverse a three strike termination — recently reinstating the channel of UK broadcaster TalkRadio, for example, after it received a third strike related to coronavirus misinformation.

In that case the channel’s reinstatement was reported to have followed an intervention by Rupert Murdoch, chairman of TalkRadio’s owner, News Corp. UK ministers had also defended the channel’s right to debate the merits of government policy.

In Trump’s case there are a dwindling number of (GOP) politicians willing to ride to his defense in light of the shocking events in Washington last week and continued violent threats being made online by his supporters.

However, concern is far more widespread about the massive market power that puts tech platforms in a position to take unilateral action and shut down the US president’s ability to broadcast to millions of people.

Earlier this week Germany’s chancellor, Angela Merkel, called Twitter’s ban on Trump “problematic”, while lawmakers elsewhere in Europe have said it must lead to regulatory consequences for big tech.

So whatever his wider legacy, Trump certainly looks set to have a lasting policy impact on the tech giants he is now busy railing at for putting him on mute.

The Capitol riot and its aftermath make the case for tech regulation more urgent, but no simpler

By Jonathan Shieber

Last week and throughout the weekend, technology companies took the historic step of deplatforming the president of the United States in the wake of a riot in which the US Capitol was stormed by a collection of white nationalists, QAnon supporters, and right wing activists.

The decision to remove Donald Trump, his fundraising and moneymaking apparatus, and a large portion of his supporters from their digital homes because of their incitements to violence in the nation’s Capitol on January 6th and beyond, has led a chorus of voices to call for the regulation of the giant tech platforms.

They argue that private companies shouldn’t have the sole power to erase the digital footprint of a sitting president.

But there’s a reason why the legislative hearings in Congress, and the pressure from the president, have not created any new regulations. And there’s also a reason why — despite all of the protestations from the president and his supporters — no lawsuits have effectively been brought against the platforms for their decisions.

The law, for now, is on their side.

The First Amendment and freedom of speech (for platforms)

Let’s start with the First Amendment. The protections of speech afforded to American citizens under the First Amendment only apply to government efforts to limit speech. While the protection of all speech is assumed as something enshrined in the foundations of American democracy, the founders appear to have only wanted to shield speech from government intrusions.

That position makes sense if you’re a band of patriots trying to ensure that a monarch or dictator can’t abuse government power to silence its citizens or put its thumb on the lever in the marketplace of ideas.

The thing is, that marketplace of ideas is always open, but publishers and platforms have the freedom to decide what they want to sell into it. Ben Franklin would never have published pro-monarchist sentiments on his printing presses, but he would probably have let Thomas Paine have free rein.

So, the First Amendment doesn’t protect an individual’s right to access any platform and say whatever the hell they want. In fact, it protects businesses in many cases from having their freedom of speech violated by having the government force them to publish something they don’t want to on their platforms.

Section 230 and platform liability 

BuT WhAt AbOUt SeCTiOn 230, one might ask (and if you do, you’re not alone)?

Canceling conservative speech is hostile to the free speech foundation America was built on.

There is no reason why social media organizations that pick & choose which speech they allow to be protected by the liability protections in 47 US Code Section 230.

230 must be repealed

— Greg Abbott (@GregAbbott_TX) January 10, 2021

Unfortunately, for Abbott and others who believe that repealing Section 230 would open the door for less suppression of speech by online platforms, they’re wrong.

First, the cancellation of speech by businesses isn’t actually hostile to the foundation America was built on. If a group doesn’t like the way it’s being treated in one outlet, it can try and find another. Essentially, no one can force a newspaper to print their letter to the editor.

Second, users’ speech isn’t what is protected under Section 230; it protects platforms from liability for that speech, which indirectly makes it safe for users to speak freely.

Where things get complicated is in the difference between the letter to an editor in a newspaper and a tweet on Twitter, post on Facebook, or blog on Medium (or WordPress). And this is where U.S. Code Section 230 comes into play.

Right now, Section 230 protects all of these social media companies from legal liability for the stuff that people publish on their platforms (unlike publishers). The gist of the law is that since these companies don’t actively edit what people post on the platforms, but merely provide a distribution channel for that content, then they can’t be held accountable for what’s in the posts.

The companies argue that they’re exercising their own rights to freedom of speech through the algorithms they’ve developed to highlight certain pieces of information or entertainment, or in removing certain pieces of content. And their broad terms of service agreements also provide legal shields that allow them to act with a large degree of impunity.

Repealing Section 230 would make platforms more restrictive rather than less restrictive about who gets to sell their ideas in the marketplace, because it would open up the tech companies to lawsuits over what they distribute across their platforms.

One of the authors of the legislation, Senator Ron Wyden, thinks repeal is an existential threat to social media companies. “Were Twitter to lose the protections I wrote into law, within 24 hours its potential liabilities would be many multiples of its assets and its stock would be worthless,” Senator Wyden wrote back in 2018. “The same for Facebook and any other social media site. Boards of directors should have taken action long before now against CEOs who refuse to recognize this threat to their business.”

Others believe that increased liability for content would actually be a powerful weapon to bring decorum to online discussions. As Joe Nocera argues in Bloomberg BusinessWeek today:

“… I have come around to an idea that the right has been clamoring for — and which Trump tried unsuccessfully to get Congress to approve just weeks ago. Eliminate Section 230 of the Communications Decency Act of 1996. That is the provision that shields social media companies from legal liability for the content they publish — or, for that matter, block.

The right seems to believe that repealing Section 230 is some kind of deserved punishment for Twitter and Facebook for censoring conservative views. (This accusation doesn’t hold up upon scrutiny, but let’s leave that aside.) In fact, once the social media companies have to assume legal liability — not just for libel, but for inciting violence and so on — they will quickly change their algorithms to block anything remotely problematic. People would still be able to discuss politics, but they wouldn’t be able to hurl anti-Semitic slurs. Presidents and other officials could announce policies, but they wouldn’t be able to spin wild conspiracies.”

Conservatives and liberals crowing for the removal of Section 230 protections may find that it would reinstitute a level of comity online, but the fringes will be even further marginalized. If you’re a free speech absolutist, that may or may not be the best course of action.

What mechanisms can legislators use beyond repealing Section 230? 

Beyond the blunt instrument that is repealing Section 230, legislators could take other steps to mandate that platforms carry speech and continue to do business with certain kinds of people and platforms, however odious their views or users might be.

Many of these steps are outlined in this piece from Daphne Keller on “Who do you sue?” from the Hoover Institution.

Most of them hinge on some reinterpretation of older laws relating to commerce and the provision of services by utilities, or on the “must-carry” requirements put in place in the early days of 20th century broadcasting when radio and television were distributed over airwaves provided by the federal government.

These older laws involve either designating internet platforms as “essential, unavoidable, and monopolistic services to which customers should be guaranteed access”; or treating the companies like the railroad industry and mandating compulsory access, requiring tech companies to accept all users and not modify any of their online speech.

Other avenues could see lawmakers use variations on the laws designed to limit the power of channel owners to edit the content they carried — including things like the fairness doctrine from the broadcast days or net neutrality laws that are already set to be revisited under the Biden Administration.

Keller notes that the existing body of laws “does not currently support must-carry claims against user-facing platforms like Facebook or YouTube, because Congress emphatically declined to extend it to them in the 1996 Telecommunications Act.”

These protections are distinct from Section 230, but their removal would have similar, dramatic consequences on how social media companies, and tech platforms more broadly, operate.

“[The] massive body of past and current federal communications law would be highly relevant,” Keller wrote. “For one thing, these laws provide the dominant and familiar model for US regulation of speech and communication intermediaries. Any serious proposal to legislate must-carry obligations would draw on this history. For another, and importantly for plaintiffs in today’s cases, these laws have been heavily litigated and are still being litigated today. They provide important precedent for weighing the speech rights of individual users against those of platforms.”

The establishment of some of these “must-carry” mandates for platforms would go a long way toward circumventing or refuting platforms’ First Amendment claims, because some cases have already been decided against cable carriers in ways that could correspond to claims against platforms.

This is already happening, so what could legislation look like?

At this point the hypothetical scenario that Keller sketched out in her essay, where private actors throughout the technical stack have excluded speech (although the legality of the speech is contested), has, in fact, happened.

The question is whether the deplatforming of the president and services that were spreading potential calls to violence and sedition, is a one-off; or a new normal where tech companies will act increasingly to silence voices that they — or a significant portion of their user base — disagree with.

Lawmakers in Europe, seeing the actions from U.S. companies over the last week, aren’t wasting any time in drafting their own responses and increasing their calls for more regulation.

In Europe, that regulation is coming in the form of the Digital Services Act, which we wrote about at the end of last year.

On the content side, the Commission has chosen to limit the DSA’s regulation to speech that’s illegal (e.g., hate speech, terrorism propaganda, child sexual exploitation, etc.) — rather than trying to directly tackle fuzzier “legal but harmful” content (e.g., disinformation), as it seeks to avoid inflaming concerns about impacts on freedom of expression.

Although a beefed up self-regulatory code on disinformation is coming next year, as part of a wider European Democracy Action Plan. And that (voluntary) code sounds like it will be heavily pushed by the Commission as a mitigation measure platforms can put toward fulfilling the DSA’s risk-related compliance requirements.

EU lawmakers do also plan on regulating online political ads in time for the next pan-EU elections, under a separate instrument (to be proposed next year) and are continuing to push the Council and European parliament to adopt a 2018 terrorism content takedown proposal (which will bring specific requirements in that specific area).

Europe has also put in place rules for very large online platforms that have more stringent requirements around how they approach and disseminate content, but regulators on the continent are having a hard time enforcing them.

Keller believes that some of those European regulations could align with thinking about competition and First Amendment rights in the context of access to the “scarce” communication channels — those platforms whose size and scope mean that there are few competitive alternatives.

Two approaches that Keller thinks would require the least regulatory lift, and would perhaps be the most tenable for platforms, involve pushing platforms to make room for “disfavored” speech while telling them that they don’t have to promote it or give it any ranking.

Under this solution, the platforms would be forced to carry the content, but could limit it. For instance, Facebook would be required to host any posts that don’t break the law, but it doesn’t have to promote them in any way — letting them sink below the stream of constantly updating content that moves across the platform.

“On this model, a platform could maintain editorial control and enforce its Community Guidelines in its curated version, which most users would presumably prefer. But disfavored speakers would not be banished entirely and could be found by other users who prefer an uncurated experience,” Keller writes. “Platforms could rank legal content but not remove it.”

Perhaps the regulation that Keller is most bullish on is one that she calls the “magic APIs” scenario. Similar to the “unbundling” requirements from telecommunications companies, this regulation would force big tech companies to license their hard-to-duplicate resources to new market entrants. In the Facebook or Google context, this would mean requiring the companies open up access to their user generated content, and other companies could launch competing services with new user interfaces and content ranking and removal policies, Keller wrote.

“Letting users choose among competing ‘flavors’ of today’s mega-platforms would solve some First Amendment problems by leaving platforms’ own editorial decisions undisturbed,” Keller writes.
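
A toy version of that unbundling scenario might look like the sketch below: a hypothetical mandated API exposes the platform's user-generated content, and competing frontends apply their own ranking and removal policies on top of it. Every endpoint, class and field name here is invented for illustration.

    # Sketch of the "magic APIs" / unbundling idea: shared content, competing frontends.
    from typing import Callable, List, NamedTuple

    class Post(NamedTuple):
        id: str
        text: str
        engagement: int
        flagged_illegal: bool

    def fetch_all_posts() -> List[Post]:
        """Stand-in for a mandated platform API exposing user content."""
        return [
            Post("1", "local news link", engagement=40, flagged_illegal=False),
            Post("2", "unlawful threat", engagement=900, flagged_illegal=True),
        ]

    class CompetingFrontend:
        """A third-party 'flavor' of the platform with its own policies."""

        def __init__(self, rank: Callable[[Post], float],
                     remove: Callable[[Post], bool]):
            self.rank = rank
            self.remove = remove

        def feed(self) -> List[Post]:
            posts = [p for p in fetch_all_posts() if not self.remove(p)]
            return sorted(posts, key=self.rank, reverse=True)

    # One frontend removes only illegal content and down-ranks high-engagement posts;
    # another could make entirely different editorial choices on the same data.
    calm_flavor = CompetingFrontend(rank=lambda p: -p.engagement,
                                    remove=lambda p: p.flagged_illegal)
    print([p.id for p in calm_flavor.feed()])  # -> ['1']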

Imperfect solutions are better than none 

It’s clear to speech advocates on both the left and the right that having technology companies control what is and is not permissible on the world’s largest communications platforms is untenable and that better regulation is needed.

When the venture capitalists who have funded these services — and whose politics lean toward the mercenarily libertarian — are calling for some sort of regulatory constraints on the power of the technology platforms they’ve created, it’s clear things have gone too far. Even if the actions of the platforms are entirely justified.

However, in these instances, much of the speech that’s been taken down is clearly illegal. To the point that even free speech services like Parler have deleted posts from their service for inciting violence.

The deplatforming of the president brings up the same points that were raised back in 2017 when Cloudflare, the service that stands out for being more tolerant of despicable speech than nearly any other platform, basically erased the Daily Stormer.

“I know that Nazis are bad, the content [on The Daily Stormer] was so incredibly repulsive, it’s stomach turning how bad it is,” Prince said at the time. “But I do believe that the best way to battle bad speech is with good speech, I’m skeptical that censorship is the right scheme.

“I’m worried the decision we made with respect to this one particular site is not particularly principled but neither was the decision that most tech companies made with respect to this site or other sites. It’s important that we know there is convention about how we create principles and how contraptions are regulated in the internet tech stack,” Prince continued.

“We didn’t just wake up and make some capricious decision, but we could have and that’s terrifying. The internet is a really important resource for everyone, but there’s a very limited set of companies that control it and there’s such little accountability to us that it really is quite a dangerous thing.”

Scraped Parler data is a metadata gold mine

By Zack Whittaker

Embattled social media platform Parler is offline after Apple, Google and Amazon pulled the plug on the site after the violent riot at the U.S. Capitol last week that left five people dead.

But while the site is gone (for now), millions of posts published to the site since the riot are not.

A lone hacker scraped millions of posts, videos and photos published to the site after the riot but before the site went offline on Monday, preserving a huge trove of potential evidence for law enforcement investigating the attempted insurrection by many who allegedly used the platform to plan and coordinate the breach of the Capitol.

The hacker and internet archivist, who goes by the online handle @donk_enby, scraped the social network and uploaded copies to the Internet Archive, which hosts old and historical versions of web pages.

In a tweet, @donk_enby said she scraped data from Parler that included deleted and private posts, and the videos contained “all associated metadata.”

metadata such as https://t.co/f5y6AzZ3km pic.twitter.com/95cXeCbZo6

— crash override (@donk_enby) January 10, 2021

Metadata is information about a file — such as when it was made and on what device. This information is usually embedded in the file itself. The scraped videos from Parler appear to also include the precise location data of where the videos were taken. That metadata could be a gold mine of evidence for authorities investigating the Capitol riot, which may tie some rioters to their Parler accounts or help police unmask rioters based on their location data.

Most web services remove metadata when you upload your photos and videos, but Parler apparently didn’t.
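
To show why that omission matters, here is a short sketch of how location and device metadata can be read out of a media file using the exiftool command-line utility (assumed to be installed); the file name is hypothetical.

    # Sketch: pull creation time, device model and GPS coordinates from a media file.
    import json
    import subprocess

    def read_metadata(path: str) -> dict:
        """Return selected metadata tags from the file, if present."""
        result = subprocess.run(
            ["exiftool", "-json", "-n",           # -n = numeric GPS values
             "-CreateDate", "-Model", "-GPSLatitude", "-GPSLongitude", path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)[0]

    meta = read_metadata("clip_from_upload.mp4")  # hypothetical file
    print(meta.get("GPSLatitude"), meta.get("GPSLongitude"))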

Parler quickly became the social network of choice after President Trump was deplatformed from Twitter and Facebook for inciting the riot on January 6. But the tech giants said Parler violated their rules by not having a content moderation policy — which is what drew many users to the site.

Many of the posts made calls to “burn down [Washington] D.C.,” while others called for violence and the execution of Vice President Mike Pence.

Already several rioters have been arrested and charged with breaking into the Capitol building. Many of the rioters weren’t wearing masks (the pandemic notwithstanding), making it easier for them to be identified. But thanks to Parler’s own security blunder, many more could soon face an unwelcome knock at the door.

Parler is officially offline after AWS suspension

By Darrell Etherington

True to its word, Amazon Web Services (AWS) suspended services to Parler, the right-wing-focused social network that proved a welcoming home for pro-Trump users who called for violence at the nation’s Capitol and beyond. The service suspension went into effect overnight after a 24-hour warning from AWS, which means that if you now go to Parler’s web address you’re greeted with a message saying the requested domain can’t be reached.

Parler’s community had been surging after the permanent suspension of Trump’s official accounts from Twitter and Facebook last week, a move that also saw a number of accounts tweeting similar invective and encouragement of violence aligned with Trump’s sentiments removed from those platforms. Apple and Google then removed Parler from their respective app stores for violations of their own terms of service, and AWS followed suit with its own suspension notice.

The company has suggested that it will rebuild its own infrastructure from scratch in order to contend with the various suspensions, but meanwhile other alternative social media sites that continue to exist, and that have typically catered to a more right-wing audience, like Gab, are seeing the benefits of Parler’s deplatforming. Gab has previously seen its hosting revoked, and been removed from Google Play for issues around hate-speech dissemination.

Europe seizes on social media’s purging of Trump to bang the drum for regulation

By Natasha Lomas

Big tech’s decision to pull the plug on president Donald Trump’s presence on their platforms, following his supporters’ attack on the US Capitol last week, has been seized on in Europe as proof — if proof were needed — that laws have not kept pace with tech market power and platform giants must face consequences over the content they amplify and monetize.

Writing in Politico, the European Commission’s internal market commissioner, Thierry Breton, dubs the January 6 strike at the heart of the US political establishment social media’s ‘9/11’ moment — aka the day the whole world woke up to the real-world impact of unchecked online hate and lies.

Since then Trump has been booted from a number of digital services, and the conservative social media app Parler has also been ejected from the App Store and Google Play over a failure to moderate violent threats, after Trump supporters flocked to the app in the wake of Facebook’s and Twitter’s crackdown.

At the time of writing, Parler is also poised to be booted by its hosting provider AWS, while Stripe has reportedly pulled the plug on Trump’s ability to use its payment tools to fleece supporters. (Although when this reporter asked in November whether Trump was breaching its terms of service by using its payment tools for his ‘election defense fund’, Stripe ignored TechCrunch’s emails…)

“If there was anyone out there who still doubted that online platforms have become systemic actors in our societies and democracies, last week’s events on Capitol Hill is their answer. What happens online doesn’t just stay online: It has — and even exacerbates — consequences ‘in real life’ too,” Breton writes.

“Last week’s insurrection marked the culminating point of years of hate speech, incitement to violence, disinformation and destabilization strategies that were allowed to spread without restraint over well-known social networks. The unrest in Washington is proof that a powerful yet unregulated digital space — reminiscent of the Wild West — has a profound impact on the very foundations of our modern democracies.”

The European Commission proposed a major update to the rules for digital services and platform giants in December, when it laid out the Digital Services Act (DSA) and Digital Markets Act — saying it’s time to level the regulatory playing field by ensuring that content and activity that’s illegal offline is similarly sanctioned online.

The Commission’s proposal also seeks to address the market power of tech giants with proposals for additional oversight and extra rules for the largest platforms that have the potential to cause the greatest societal harm.

Unsurprisingly, then, Breton has seized on the chaotic scenes in Washington to push this already-formed tech policy plan — with his eye on a domestic audience of European governments and elected members of the European Parliament whose support is needed to pass the legislation and reboot the region’s digital rules.

“The fact that a CEO can pull the plug on POTUS’s loudspeaker without any checks and balances is perplexing. It is not only confirmation of the power of these platforms, but it also displays deep weaknesses in the way our society is organized in the digital space,” he warns.

“These last few days have made it more obvious than ever that we cannot just stand by idly and rely on these platforms’ good will or artful interpretation of the law. We need to set the rules of the game and organize the digital space with clear rights, obligations and safeguards. We need to restore trust in the digital space. It is a matter of survival for our democracies in the 21st century.”

The DSA will force social media to clean up its act on content and avoid the risk of arbitrary decision-making by giving platforms “clear obligations and responsibilities to comply with these laws, granting public authorities more enforcement powers and ensuring that all users’ fundamental rights are safeguarded”, Breton goes on to argue.

The commissioner also addresses US lawmakers directly — calling for Europe and the US to join forces on Internet regulation and engage in talks aimed at establishing what he describes as “globally coherent principles”, suggesting the DSA as a starting point for discussions. So he’s not wasting the opportunity of #MAGA-induced chaos to push a geopolitical agenda for EU tech policy too.

Last month the Commission signalled a desire to work with the incoming Biden administration on a common approach to tech governance, saying it hoped US counterparts would work with it to shape global standards for technologies like AI and to force big tech to be more responsible, among other areas. And recent events in Washington do seem to be playing into that hand — although it remains to be seen how the incoming Biden administration will approach regulating big tech.

“The DSA, which has been carefully designed to answer all of the above considerations at the level of our Continent, can help pave the way for a new global approach to online platforms — one that serves the general interest of our societies. By setting a standard and clarifying the rules, it has the potential to become a paramount democratic reform serving generations to come,” Breton concludes.

Twitter’s decision to (finally) pull the plug on Trump also caught the eye of UK minister Matt Hancock, the former secretary of state for the digital brief (now the health secretary). Speaking to the BBC this weekend, he suggested the unilateral decision “raises questions” about how big tech is regulated that would result in “consequences”.

“The scenes, clearly encouraged by President Trump — the scenes at the Capitol — were terrible — and I was very sad to see that because American democracy is such a proud thing. But there’s something else that has changed, which is that social media platforms are making editorial decisions now. That’s clear because they’re choosing who should and shouldn’t have a voice on their platform,” he told the Andrew Marr program.

The BBC reports that Hancock also told Sky News Twitter’s ban on Trump means social media platforms are taking editorial decisions — which he said “raises questions about their editorial judgements and the way that they’re regulated”.

Hancock’s remarks are noteworthy because back in 2018, during his time as digital minister, he said the government would legislate to introduce a statutory code of conduct on social media platforms forcing them to act against online abuse.

More than two years later, the UK’s safety-focused plan to regulate the Internet has still not been put before parliament — but late last year ministers committed to introducing an Online Safety Bill this year.

Under the plan, the UK’s media regulator, Ofcom, will gain new powers to oversee tech platforms — including the ability to levy fines for non-compliance with a safety-focused duty of care of up to 10% of a company’s annual turnover.

The proposal covers a wide range of digital services, not just social media. Larger platforms are also slated to have the greatest responsibility for moderating content and activity. And — at least in its current form — the proposed law is intended to apply not just to content that’s illegal under UK law but also the fuzzier category of ‘harmful’ content.

That’s something the European Commission proposal has steered clear of — with more subjective issues like disinformation set to be tackled via a beefed-up (but still voluntary) code of practice, instead of being baked into digital services legislation. So online speech looks set to be one area of looming regulatory divergence in Europe, with the UK now outside the bloc.

Last year, the government said larger social media platforms — such as Facebook, TikTok, Instagram and Twitter — are likely to “need to assess the risk of legal content or activity on their services with ‘a reasonably foreseeable risk of causing significant physical or psychological harm to adults’” under the forthcoming Online Safety Bill.

“They will then need to make clear what type of ‘legal but harmful’ content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently,” it added, suggesting the UK will in fact legislate to force platforms to make ‘editorial’ decisions.

The consequences Hancock thus suggests are coming for tech platforms look rather akin to the ‘editorial’ decisions they have been making in recent days.

That said, the uncomfortable difference he seems to be articulating is between tech platforms that have massive unilateral power to silence the US president at a stroke, at a point of their own choosing, and tech platforms being made to comply with a pre-defined, rules-based order set by legislators and regulators.

San Francisco police are prepping for a pro-Trump rally at Twitter headquarters

By Jonathan Shieber

San Francisco police are preparing for a pro-Trump protest at Twitter’s headquarters, a building which has been essentially abandoned since the start of the pandemic last year, with most Twitter employees working remotely.

The potential protest comes days after Twitter banned the president from using its service — his favorite form of communication to millions of followers — following what the company called his continued incitements to violence in the wake of the January 6 assault on the Capitol last week by a mob of his followers.

“The San Francisco Police Department is aware of the possibility of a demonstration on the 1300 block of Market Street (Twitter) tomorrow, Monday January 11, 2021. SFPD has been in contact with representatives from Twitter. We will have sufficient resources available to respond to any demonstrations as well as calls for service citywide,” a police department spokesperson wrote in an email. “The San Francisco Police Department is committed to facilitating the public’s right to First Amendment expressions of free speech. We ask that everyone exercising their First Amendment rights be considerate, respectful, and mindful of the safety of others.”

The San Francisco Chronicle, which first reported the preparations from SF police, noted that posts on a popular internet forum for Trump supporters who have relocated from Reddit called for the president’s adherents to protest his Twitter ban outside of the company’s headquarters on Monday.

Twitter is one of several tech companies to deplatform the president and many of his supporters in the wake of the riot at the Capitol on Wednesday.

 

Stripe reportedly joins the tech platforms booting President Trump from their services

By Jonathan Shieber

It might be easier at this point to ask which tech platforms President Donald Trump can still use.

Payment-processing company Stripe is the latest tech company to kick Donald Trump off of its platform, according to a report in The Wall Street Journal.

That means the president’s campaign website and online fundraising arms will no longer have access to the payment processor’s services, cutting off the Trump campaign from receiving donations.

Sources told the Journal that the reason for the company’s decision was the violation of company policies against encouraging violence.

The move comes as the president has remained largely silent through the official channels at his disposal in the wake of last week’s riot at the Capitol building.

While Trump has been silent, technology companies have been busy repudiating the president’s support by cutting off access to a range of services.

The deplatforming of the president has effectively removed Trump from all social media outlets including Snap, Facebook, Twitter, Pinterest, Spotify and TikTok.

The technology companies that power most financial transactions online have also blocked the president. Shopify and PayPal were the first to take action against the extremists among President Trump’s supporters who participated in the riot.

As we wrote earlier this week, PayPal has been deactivating the accounts of some groups of Trump supporters who were using the money-transfer fintech to coordinate payments to underwrite the rioters’ actions on Capitol Hill.

The company has been taking steps against far-right activists for a while. After the Charlottesville protests and subsequent rioting in 2017, the company banned a spate of far-right organizations. From what TechCrunch can glean, these bans have so far not extended directly to the president himself.

On Thursday, Shopify announced that it was removing the storefronts for both the Trump campaign and Trump’s personal brand. That’s an evolution on policy for the company, which years ago said that it would not moderate its platform, but in recent years has removed some controversial stores, such as some right-wing shops in 2018.

Now, Stripe has joined the actions against the president, cutting off a lucrative source of income for his political operations.

As the Journal reported, the Trump campaign launched a fundraising blitz to raise money for the slew of lawsuits that the president brought against states around the country. The lawsuits were almost all defeated, but the effort did bring in hundreds of millions of dollars for the Republican party.

 

Apple suspends Parler from App Store

By Sarah Perez

Apple confirmed that it has suspended the conservative social media app Parler from the App Store, shortly after Google banned it from Google Play. The app, which became a home to Trump supporters and several high-profile conservatives in the days leading up to the Capitol riots, had been operating in violation of Apple’s rules.

The company tells TechCrunch,

We have always supported diverse points of view being represented on the App Store, but there is no place on our platform for threats of violence and illegal activity. Parler has not taken adequate measures to address the proliferation of these threats to people’s safety. We have suspended Parler from the App Store until they resolve these issues.

In the wake of its decision, Apple sent Parler’s developers the following note,

To the developers of the Parler app,

Thank you for your response regarding dangerous and harmful content on Parler. We have determined that the measures you describe are inadequate to address the proliferation of dangerous and objectionable content on your app.

Parler has not upheld its commitment to moderate and remove harmful or dangerous content encouraging violence and illegal activity, and is not in compliance with the App Store Review Guidelines.

In your response, you referenced that Parler has been taking this content “very seriously for weeks.” However, the processes Parler has put in place to moderate or prevent the spread of dangerous and illegal content have proved insufficient. Specifically, we have continued to find direct threats of violence and calls to incite lawless action in violation of Guideline 1.1 – Safety – Objectionable Content.

Your response also references a moderation plan “for the time being,” which does not meet the ongoing requirements in Guideline 1.2 – Safety – User Generated content. While there is no perfect system to prevent all dangerous or hateful user content, apps are required to have robust content moderation plans in place to proactively and effectively address these issues. A temporary “task force” is not a sufficient response given the widespread proliferation of harmful content.

For these reasons, your app will be removed from the App Store until we receive an update that is compliant with the App Store Review Guidelines and you have demonstrated your ability to effectively moderate and filter the dangerous and harmful content on your service.

Regards,

App Review Board

Conservative commentator and Parler investor Dan Bongino posted about Apple’s decision on the site,

The tech tyrants at Apple have pulled the app from their App Store. Apple is no different than the Chinese communist party in their preference for totalitarian thought control. I’m proud of the remaining liberty-loving people of this great country. And I’m embarrassed, and horrified by the tech totalitarians who’ve taken control of it.

Bongino was among those recently suspended from Twitter. He noted, however, that he had no intention to return to the site.

While Parler is no longer available through the App Store at present, it seems it will still be accessible to those who have already downloaded it. As The New York Times noted earlier this week,

If Apple pulls Parler from the App Store, people would not be able to download the app to their iPhones or iPads. People who had already downloaded the Parler iPhone app would still be able to use it, but the company would not be able to update the app, meaning it would eventually be rendered obsolete as Apple updated the iPhone software.

But Parler’s future remains more uncertain than most, as there’s a growing push inside Amazon to pull the plug on Parler, too.

Apple’s App Store guidelines require apps that host user-generated content to have moderation policies in place to remove content that incites violence.

Despite these policies, neither Apple nor Google had taken action to remove Parler in prior weeks, even though Trump supporters and other far-right users had used the app to call for violence and to organize their plans to storm the Capitol. The insurrection left five people dead and more than 50 police officers injured, with more than a dozen people facing federal charges and more arrests emerging as suspects are identified.

Image Credits: Parler via the App Store

BuzzFeed News on Friday reported that Parler had received a letter from Apple warning that the app would be removed from the App Store within 24 hours unless the company submitted a content moderation improvement plan.

Apple’s notice read:

We have received numerous complaints regarding objectionable content in your Parler service, accusations that the Parler app was used to plan, coordinate, and facilitate the illegal activities in Washington D.C. on January 6, 2021 that led (among other things) to loss of life, numerous injuries, and the destruction of property. The app also appears to continue to be used to plan and facilitate yet further illegal and dangerous activities.

(TechCrunch additionally confirmed BuzzFeed’s reporting.)

Parler CEO John Matze posted about Apple’s ultimatum to his own Parler account, saying he would not cave to “those authoritarians who hate free speech.” Earlier today, it was noted that the service had reportedly removed a post from Trump associate Lin Wood over calls for violence against Vice President Mike Pence.

Ahead of its removal, Parler had ranked No. 1 in News on the iPhone App Store and No. 13 Overall, according to data from App Annie. On Friday, it was ranking as high as No. 1, at times, on the iPhone’s Top Charts of free non-game apps, though final data was not available.

Image Credits: App Annie

Currently, the app is hosted by Amazon Web Services (AWS), but it appears to be in violation of the AWS Acceptable Use Policy, which could serve as grounds for its removal.

The collective action of tech company employees is playing a key role in some of the decisions being made regarding Trump and his supporters’ access to platforms to communicate and organize in the days following the Capitol riots. According to The Washington Post, for example, over 350 Twitter employees signed a letter urging CEO Jack Dorsey and other execs to permanently suspend Trump’s account before the company followed through.

Trump has now lost his ability to post to Facebook, Twitter, Snapchat, and Twitch, to name a few. Meanwhile, Parler’s removal from both app stores will limit the reach of the more radical and violent Trump supporter movement to some extent, forcing it to more obscure corners of the web. However, many argue these measures have come too late, as the damage not only to the Capitol, but to the nation’s psyche as a whole, has already been done.

 

Parler reportedly removed posts by Trump affiliate Lin Wood calling for execution of VP Mike Pence

By Jonathan Shieber

It seems that even the “free speech” social network Parler has its limits.

The social network that has attracted scores of conservative commentators because of its commitment to free speech has taken down several posts from Trump affiliate Lin Wood, according to a report in Mediaite.

In one of the posts removed from the social media platform, Wood called for the execution of Vice President Mike Pence.

Wood adds on Parler that he is currently serving a Twitter suspension. pic.twitter.com/JqvF0VM2gu

— Zachary Petrizzo (@ZTPetrizzo) January 7, 2021

In a statement to Mediaite, Parler chief executive John Matze confirmed that the service had taken action against Wood’s posts to the platform.

“Yes, some of his parleys that violated our rules were taken down,” Matze told Mediaite. “Including the ones you are talking about.”

The move from Parler is significant because it marks one of the first instances of a high-profile conservative figure having content removed from the service.

Parler, despite its reputation as a social platform dedicated to free speech, does have some rules governing content.

And, as Mediaite flagged, the posts from Wood likely ran afoul of a rule in the company’s terms of service that states “reported parleys, comments, or messages sent using our service will be deemed a violation of these Guidelines if they contain: an explicit or implicit encouragement to use violence, or to commit a lawless action, such that: (a) the Parleyer intends his or her speech to result in the use of violence or lawless action, and (b) the imminent use of violence or lawless action is the likely result of the parley, comment, or message.”

Wood, whose account remains active on Parler, had his Twitter account suspended on Thursday, as Forbes reported at the time.

Meanwhile, the incitements to execute Pence seem to have been an animating factor for at least some of the rioters who stormed the Capitol building on Wednesday. Reuters Photo News Editor Jim Bourg tweeted about hearing at least three different rioters hoping to “find Vice President Mike Pence and execute him by hanging him from a Capitol Hill tree as a traitor.”

I heard at least 3 different rioters at the Capitol say that they hoped to find Vice President Mike Pence and execute him by hanging him from a Capitol Hill tree as a traitor. It was a common line being repeated. Many more were just talking about how the VP should be executed. https://t.co/fxHREouEWF

— Jim Bourg (@jimbourg) January 8, 2021

 

Parler jumps to No. 1 on App Store after Facebook and Twitter ban Trump

By Jonathan Shieber

Users are surging on small, conservative social media platforms after President Donald Trump’s ban from the world’s largest social networks, even as those platforms are seeing access throttled by the app marketplaces of tech’s biggest players.

Parler, a social network that mimics Twitter, is now the number one app in Apple’s App Store, and Gab, another conservative-backed service, claimed that it was seeing an explosion in the number of signups to its web-based platform as well.

Parler’s ballooning user base comes at a potentially perilous time for the company. It has already been removed from Google’s Play store and Apple is considering suspending the social media app as well if it does not add some content moderation features.

Both Parler and Gab have billed themselves as havens for free speech, with what’s perhaps the most lax content moderation online. In the past, the two companies have left up content posted by an alleged Russian disinformation campaign and allowed users to traffic in conspiracy theories that other social media platforms have shut down.

The expectation with these services is that users on the platforms are in charge of muting and blocking trolls or offensive content, but, by their nature, those who join these platforms will generally find themselves among like-minded users.

Their user counts might be surging, but would-be adopters may soon have a hard time finding the services.

On Friday night, Google said that it would be removing Parler from its Play Store immediately, suspending the app until the developers committed to a moderation and enforcement policy that could handle objectionable content on the platform.

In a statement to TechCrunch, a Google spokesperson said:

“In order to protect user safety on Google Play, our longstanding policies require that apps displaying user-generated content have moderation policies and enforcement that removes egregious content like posts that incite violence. All developers agree to these terms and we have reminded Parler of this clear policy in recent months. We’re aware of continued posting in the Parler app that seeks to incite ongoing violence in the US. We recognize that there can be reasonable debate about content policies and that it can be difficult for apps to immediately remove all violative content, but for us to distribute an app through Google Play, we do require that apps implement robust moderation for egregious content. In light of this ongoing and urgent public safety threat, we are suspending the app’s listings from the Play Store until it addresses these issues.”

On Friday, BuzzFeed News reported that Parler had received a letter from Apple informing it that the app would be removed from the App Store within 24 hours unless the company submitted an update with a moderation improvement plan. Parler CEO John Matze confirmed the action from Apple in a post on his Parler account, where he shared a screenshot of Apple’s notification.

“We want to be clear that Parler is in fact responsible for all the user generated content present on your service and for ensuring that this content meets App Store requirements for the safety and protection of our users,” text from the screenshot reads. “We won’t distribute apps that present dangerous and harmful content.”

Parler is backed by the conservative billionaire heiress Rebekah Mercer, according to a November report in The Wall Street Journal. Founded in 2018, the service has experienced spikes in user adoption with every clash between mainstream social media companies and the outgoing President Trump. In November, Parler boasted some 10 million users, according to the Journal.

Users like Fox Business anchor Maria Bartiromo and the conservative talk show host Dan Bongino, a wildly popular figure on Facebook who is also an investor in Parler, have joined the platform. In the Journal article Bongino called the company “a collective middle finger to the tech tyrants.”

It’s worth noting that Parler and Gab aren’t the only companies to see user numbers soar after the Trump bans. MeWe Network, OANN, Newsmax and Rumble have also seen adoption climb, according to data from the analytics company Apptopia.

The company noted that Parler was the No. 1 app on the iOS App Store for two days, surging from No. 18 on Thursday and No. 592 on Wednesday. Overall, the app was the 10th most downloaded social media app of 2020, with 8.1 million new installs.

“It is an event driven app though,” a company analyst noted. “After events like the election, BLM protests, Twitter first applying labels to Trump’s Tweets, we see bursts of downloads and usage but it will then drop off.”

Sarah Perez and Lucas Matney contributed additional reporting to this article. 

 

Why Twitter says it banned President Trump

By Taylor Hatmaker

Twitter permanently banned the U.S. president Friday, taking a dramatic step to limit Trump’s ability to communicate with his followers. That decision, made in light of his encouragement for Wednesday’s violent invasion of the U.S. Capitol, might seem sudden for anyone not particularly familiar with his Twitter presence.

In reality, Twitter gave Trump many, many second chances over his four years as president, keeping him on the platform due to the company’s belief that speech by world leaders is in the public interest, even if it breaks the rules.

Now that Trump’s gone for good, we have a pretty interesting glimpse into the policy decision making that led Twitter to bring the hammer down on Friday. The company first announced Trump’s ban in a series of tweets from its @TwitterSafety account but also linked to a blog post detailing its thinking.

In that deep dive, the company explains that it gave Trump one last chance after suspending and then reinstating his account for violations made on Wednesday. But the following day, a pair of tweets the president made pushed him over the line. Twitter said those tweets, pictured below, were not examined on a standalone basis, but rather in the context of his recent behavior and this week’s events.

“… We have determined that these Tweets are in violation of the Glorification of Violence Policy and the user @realDonaldTrump should be immediately permanently suspended from the service,” Twitter wrote.

Screenshot via Twitter

This is how the company explained its reasoning, point by point:

  • “President Trump’s statement that he will not be attending the Inauguration is being received by a number of his supporters as further confirmation that the election was not legitimate and is seen as him disavowing his previous claim made via two Tweets (1, 2) by his Deputy Chief of Staff, Dan Scavino, that there would be an ‘orderly transition’ on January 20th.
  • “The second Tweet may also serve as encouragement to those potentially considering violent acts that the Inauguration would be a ‘safe’ target, as he will not be attending.
  • “The use of the words ‘American Patriots’ to describe some of his supporters is also being interpreted as support for those committing violent acts at the US Capitol.
  • “The mention of his supporters having a ‘GIANT VOICE long into the future’ and that ‘They will not be disrespected or treated unfairly in any way, shape or form!!!’ is being interpreted as further indication that President Trump does not plan to facilitate an ‘orderly transition’ and instead that he plans to continue to support, empower, and shield those who believe he won the election.
  • “Plans for future armed protests have already begun proliferating on and off-Twitter, including a proposed secondary attack on the US Capitol and state capitol buildings on January 17, 2021.”

All of that is pretty intuitive, though his most fervent supporters aren’t likely to agree. Ultimately these decisions, as much as they do come down to stated policies, involve a lot of subjective analysis and interpretation. Try as social media companies might to let algorithms make the hard calls for them, the buck stops with a group of humans trying to figure out the best course of action.

Twitter’s explanation here offers a rare, totally transparent glimpse into how social networks decide what stays and what goes. It’s a big move for Twitter — one that many people reasonably believe should have been made months, if not years, ago — and it’s useful to have what is so often an inscrutable, high-level decision-making process laid out plainly and publicly for all to see.

President Trump responds to Twitter account ban in tweet storm from @POTUS account

By Lucas Matney

After Twitter took the major step Friday of permanently banning President Trump’s @realdonaldtrump account, the President aimed to get the last word in through his government account, @POTUS, which has a fraction of the followers of his personal account but still offered him a megaphone on the service for a few last tweets.

The tweets were deleted within minutes by Twitter, which does not allow banned individuals to circumvent a full ban by tweeting from alternate accounts.

In screenshots captured by TechCrunch, Trump responds to the account ban by accusing Twitter employees of conspiring with his political opponents. “As I have been saying for a long time, Twitter has gone further and further in banning free speech, and tonight, Twitter employees have coordinated with the Democrats and Radical Left in removing my account from their platform, to silence me — and YOU, the 75,000,000 great patriots who voted for me.”

The President’s rant further contends that he will soon be joining a new platform or starting his own. Many expected Trump to join the right-wing social media site Parler following the ban, though on Friday afternoon that app was removed from the Google Play Store, and Parler said Apple had threatened to suspend it as well.

“We have been negotiating with various other sites, and will have a big announcement soon, while we also look at the possibilities of building out our own platform in the near future,” other tweets from the @POTUS account read in part.

The same messages were later tweeted from the President’s @Teamtrump campaign account, which Twitter subsequently suspended.

In a statement to TechCrunch, a Twitter spokesperson wrote, “As we’ve said, using another account to try to evade a suspension is against our rules. We have taken steps to enforce this with regard to recent Tweets from the @POTUS account. For government accounts, such as @POTUS and @WhiteHouse, we will not suspend those accounts permanently but will take action to limit their use.”

Parler removed from Google Play store as Apple App Store suspension reportedly looms

By Lucas Matney

Shortly after Twitter announced Friday afternoon that it was permanently suspending President Trump’s account, Google said that it was removing Parler, a conservative social media app, from its Play Store immediately, suspending the app until the developers committed to a moderation and enforcement policy that could handle objectionable content on the platform.

In a statement to TechCrunch, a Google spokesperson said:

“In order to protect user safety on Google Play, our longstanding policies require that apps displaying user-generated content have moderation policies and enforcement that removes egregious content like posts that incite violence. All developers agree to these terms and we have reminded Parler of this clear policy in recent months. We’re aware of continued posting in the Parler app that seeks to incite ongoing violence in the US. We recognize that there can be reasonable debate about content policies and that it can be difficult for apps to immediately remove all violative content, but for us to distribute an app through Google Play, we do require that apps implement robust moderation for egregious content. In light of this ongoing and urgent public safety threat, we are suspending the app’s listings from the Play Store until it addresses these issues.”

Parler’s Play Store page is currently down.

The conservative platform garnered attention this week after posts surfaced detailing threats of violence and planning around Wednesday’s chaotic Capitol building riot, which led to the deaths of five people, including a Capitol Police officer. While more mainstream social media sites raced to take down violent content related to the riots, death threats and violence were easy to find across the Parler platform.

The app hosts accounts from a variety of conservative figures including many in the President’s family, though not the President himself.

On Friday, BuzzFeed News reported that Parler had received a letter from Apple informing it that the app would be removed from the App Store within 24 hours unless the company submitted an update with a moderation improvement plan. Parler CEO John Matze confirmed the action from Apple in a post on his Parler account, where he shared a screenshot of Apple’s notification.

“We want to be clear that Parler is in fact responsible for all the user generated content present on your service and for ensuring that this content meets App Store requirements for the safety and protection of our users,” text from the screenshot reads. “We won’t distribute apps that present dangerous and harmful content.”

The app remains available in the App Store, though users are currently complaining of technical issues.

We have reached out to Apple for additional comment.

Twitter permanently bans President Trump

By Taylor Hatmaker

Twitter permanently removed the president of the United States from its platform Friday, citing concerns over the “risk of further incitement of violence” and Trump’s previous transgressions.

“In the context of horrific events this week, we made it clear on Wednesday that additional violations of the Twitter Rules would potentially result in this very course of action,” Twitter wrote. “… We made it clear going back years that these accounts are not above our rules and cannot use Twitter to incite violence.”

Trump will not be able to get around Twitter’s ban by making a new account or using an alias, a Twitter spokesperson clarified to TechCrunch. If the president attempts to evade his suspension, any account he uses will also be subject to a ban for breaking Twitter’s rules.

Update: Trump appeared to do just that Friday night, popping up on @POTUS. “We will not be SILENCED! Twitter is not about FREE SPEECH,” Trump tweeted through that account, indicating that his team might build his own platform in the “near future.”

Twitter emphasized that it made the threat of an impending ban clear and called this week’s events “horrific.” While Trump had previously broken the platform’s rules, the company maintained his account under its special guidance for world leaders and information in the public interest.

In an in-depth breakdown, Twitter published the assessments of Trump’s tweets that led to his suspension. Two of his tweets on Thursday appear to have pushed the account over the edge, and Twitter interpreted them as potentially inciting violence in the context of the week’s events.

On Wednesday, Twitter suspended President Trump’s account until he deleted three tweets that the company flagged as violating its rules. Trump’s account was set to reactivate 12 hours after those deletions, and he returned to the platform on Thursday night with a video in which he appeared to concede his election loss for the first time.

Trump crossed a line with Twitter when he failed to condemn a group of his supporters who staged a violent riot at the Capitol building while Congress met to certify the election results. In one tweet, Trump shared a video in which he gently encouraged the group to return home, while reassuring his agitated followers that he loved them and that they were “special.”

At that time, Twitter said that Trump’s tweets contained “repeated and severe violations” of its policy on civic integrity and threatened that any future violations would result in “permanent suspension” of the president’s account.

After close review of recent Tweets from the @realDonaldTrump account and the context around them we have permanently suspended the account due to the risk of further incitement of violence.https://t.co/CBpE1I6j8Y

— Twitter Safety (@TwitterSafety) January 8, 2021

Wednesday, January 6:

  • 1 PM ET: Trump wraps up a rally near the White House protesting the legitimate election results. During the event he urges attendees to march toward Congress.
  • 2:15 PM: Trump supporters breach the interior of the Capitol building.
  • 4:15 PM: Trump tweets a video gently telling rioters that they need to go home and “we love you.”
  • 5 PM: Twitter places a large warning label on the video.
  • 6 PM: Trump tweets again, failing to denounce the violence and urging his supporters to “Remember this day forever!”
  • 7 PM: Twitter locks Trump out of his account until he deletes three tweets and waits out a 12-hour period.

As a result of the unprecedented and ongoing violent situation in Washington, D.C., we have required the removal of three @realDonaldTrump Tweets that were posted earlier today for repeated and severe violations of our Civic Integrity policy. https://t.co/k6OkjNG3bM

— Twitter Safety (@TwitterSafety) January 7, 2021

Thursday, January 7:

  • Evening: Trump’s 12-hour lockout expires, and he returns to Twitter with a scripted video condemning the riot and appearing to concede his election loss for the first time.

Friday, January 8:

  • 9:45 AM: Trump tweets again with a less conciliatory tone, declaring that anyone who voted for him will “not be treated unfairly in any way, shape or form!!!”
  • 10:45 AM: Trump tweets that he will not attend President-elect Joe Biden’s inauguration.
  • 6:20 PM: Twitter announces that @realDonaldTrump is suspended permanently.

While Facebook initially took more drastic action against Trump’s account in the aftermath of Wednesday’s chaotic siege on Capitol Hill, Twitter has a longer history of friction with the outgoing president. In early 2020, Twitter’s decision to add a contextual label to a Trump tweet calling mail-in voting “fraudulent” prompted the president to craft a retaliatory though largely toothless executive order targeting social media companies.

Trump held the same grudge through the end of the year, trying to push a doomed repeal of Section 230 of the Communications Decency Act — the law that protects online companies from liability for user-generated content — through Congress in increasingly unusual ways.

Twitter’s move Friday to suspend the sitting U.S. president from its platform is a historic decision — and one the company avoided making for the last four years. In the wake of Wednesday’s insurrectionist violence, and Trump’s role in inciting it, tech’s biggest social networks appear to have at last had enough.

But as with election conspiracies, dangerous COVID-19 misinformation and the camo-clad extremists who attacked the Capitol this week, it’s too late to undo the chaos that real-time Trump unleashed over the last four years, 280 characters at a time.

Reddit bans r/donaldtrump following violence at the US Capitol

By Brian Heater

Days after a mob broke into the U.S. Capitol amid protests against Donald Trump’s 2020 election loss, another major social media platform has banned a popular pro-Trump forum. Reddit this morning confirmed that it shut down the r/donaldtrump subreddit due to “repeated policy violations” on the forum since Wednesday.

A spokesperson for the site tells TechCrunch:

Reddit’s site-wide policies prohibit content that promotes hate, or encourages, glorifies, incites, or calls for violence against groups of people or individuals. In accordance with this, we have been proactively reaching out to moderators to remind them of our policies and to offer support or resources as needed. We have also taken action to ban the community r/donaldtrump given repeated policy violations in recent days regarding the violence at the U.S. Capitol.

While r/donaldtrump didn’t approach the infamy of some other pro-Trump subreddits, the forum was an active hub for the Stop the Steal movement. The subreddit also encouraged people to attend the D.C. rally that turned into deadly violence at the Capitol: a side banner on r/donaldtrump showed an image of President Trump as Uncle Sam. “POTUS wants you in D.C. on 1/06/21,” it read.

Reddit declined to provide more specifics about why the subreddit was removed, but did note that its actions against r/donaldtrump were isolated and no other subreddits were banned today. The subreddit was not related to President Trump in any official capacity.

Prior to the invasion of the Capitol, many Trump supporters attended Trump’s own event, a rally near the White House. While that event remained stationary around a stage, Trump eventually encouraged attendees to march toward Congress to continue expressing their outrage at the election results.

Reddit has historically hosted large pro-Trump communities known for their toxic behavior and open violent threats against public figures. Last June, Reddit banned the most prominent of those, the controversial subreddit r/The_Donald, amid a larger sweep of pages. A year prior, Reddit had quarantined r/The_Donald, making it more difficult to discover and requiring an opt-in screen for anyone seeking to visit.

Reddit’s decision to close the hub for Trump supporters follows several other de-platformings on major services, including Facebook, Twitch, Twitter, Instagram, Snapchat and Shopify.

Chris Krebs and Alex Stamos have started a cyber consulting firm

By Zack Whittaker

Former U.S. cybersecurity official Chris Krebs and former Facebook chief security officer Alex Stamos have founded a new cybersecurity consulting firm, which already has its first client: SolarWinds.

The two have been hired as consultants to help the Texas-based software maker recover from a devastating breach by suspected Russian hackers, who used the company’s software to plant backdoors in thousands of organizations and to infiltrate at least 10 U.S. federal agencies and several Fortune 500 businesses.

At least the Treasury, State and Energy departments have been confirmed breached, in what has been described as likely the most significant espionage campaign against the U.S. government in years. And while the U.S. government has already pinned the blame on Russia, the scale of the intrusions is not likely to be known for some time.

Krebs was one of the most senior cybersecurity officials in the U.S. government, most recently serving as director of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) from 2018 until he was fired by President Trump for his efforts to debunk false election claims — many of which came from the president himself. Stamos, meanwhile, joined the Stanford Internet Observatory after holding senior cybersecurity positions at Facebook and Yahoo. He also consulted for Zoom amid a spate of security problems.

In an interview with the Financial Times, which broke the story, Krebs said it could take years before the hackers are ejected from infiltrated systems.

SolarWinds chief executive Sudhakar Ramakrishna acknowledged in a blog post that the company had brought on the consultants to help it be “transparent with our customers, our government partners, and the general public in both the near-term and long-term about our security enhancements.”

Trump returns to Twitter with what sounds like a concession speech

By Taylor Hatmaker

It’s been a long couple of days for the country, but President Trump only had to wait 12 hours before returning to his social network of choice.

In an uncharacteristically scripted three-ish minute speech, the president denounced the “heinous attack” on the Capitol. “The demonstrators who infiltrated the Capitol have defiled the seat of American democracy,” Trump said, warning the individuals involved that they will “pay.”

The previous day, Trump directed a crowd of his supporters to march to the Capitol. After that event turned into a violent riot that disrupted Congress as it worked to certify election results, Trump encouraged the rioters, telling them they were “special” and “we love you” in a video posted to Twitter.

pic.twitter.com/csX07ZVWGe

— Donald J. Trump (@realDonaldTrump) January 8, 2021

After yesterday’s video, Twitter locked Trump’s account and required him to delete a handful of tweets before having his access restored. On Thursday, Facebook froze his account for the remainder of his time in office.

Noting that he had explored “every legal avenue” to stay in power, Trump appeared to throw in the towel Thursday in his undemocratic crusade to overturn the legitimate results of the American election.

In the video, Trump concedes for the first time, saying that he will willingly leave office on January 20. “My focus now turns to ensuring a smooth, orderly and seamless transition of power,” Trump said.

 
