Facebook’s dating feature expands after a regulatory delay, we review the new Amazon Echo and President Donald Trump has an on-the-nose Twitter password. This is your Daily Crunch for October 22, 2020.
The big story: Facebook Dating comes to Europe
Back in February, Facebook had to call off the European launch date of its dating service after failing to provide the Irish Data Protection Commission with enough advance notice of the launch. Now it seems the regulator has given Facebook the go-ahead.
Facebook Dating (which launched in the U.S. last year) allows users to create a separate dating profile, identify secret chats and go on video dates.
As for any privacy and regulatory concerns, the commission told us, “Facebook has provided detailed clarifications on the processing of personal data in the context of the Dating feature … We will continue to monitor the product as it launches across the EU this week.”
The tech giants
Amazon Echo review: Well-rounded sound — This year’s redesign centers on another audio upgrade.
Facebook adds hosting, shopping features and pricing tiers to WhatsApp Business — Facebook is launching a way to shop for and pay for goods and services in WhatsApp chats, and it said it will finally start to charge companies using WhatsApp for Business.
Spotify takes on radio with its own daily morning show — The new program will combine news, pop culture, entertainment and music personalized to the listener.
Startups, funding and venture capital
Chinese live tutoring app Yuanfudao is now worth $15.5 billion — The homework tutoring app founded in 2012 has surpassed Byju’s as the most valuable edtech company in the world.
E-bike subscription service Dance closes $17.7M Series A, led by HV Holtzbrinck Ventures — The founders of SoundCloud launched their e-bike service three months ago.
Freelancer banking startup Lili raises $15M — It’s only been a few months since Lili announced its $10 million seed round, and it’s already raised more funding.
Advice and analysis from Extra Crunch
How unicorns helped venture capital get later, and bigger — Q3 2020 was a standout period for how high late-stage money stacked up compared to cash available to younger startups.
Ten Zurich-area investors on Switzerland’s 2020 startup outlook — According to official estimates, the number of new Swiss startups has skyrocketed by 700% since 1996.
Four quick bites and obituaries on Quibi (RIP 2020-2020) — What we can learn from Quibi’s amazing, instantaneous, billions-of-dollars failure.
(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)
President Trump’s Twitter accessed by security expert who guessed password “maga2020!” — After logging into President Trump’s account, the researcher said he alerted Homeland Security and the password was changed.
For the theremin’s 100th anniversary, Moog unveils the gorgeous Claravox Centennial — With a walnut cabinet, brass antennas and a plethora of wonderful knobs and dials, the Claravox looks like it emerged from a prewar recording studio.
Announcing the Agenda for TC Sessions: Space 2020 — Our first-ever dedicated space event is happening on December 16 and 17.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
“The Social Dilemma” is opening eyes and changing digital lives for Netflix bingers across the globe. The filmmakers explore social media and its effects on society, raising some crucial points about impacts on mental health, politics and the myriad ways firms leverage user data. It interweaves interviews from industry executives and developers who discuss how social sites can manipulate human psychology to drive deeper engagement and time spent within the platforms.
Despite the glaring issues present with social media platforms, people still crave digital attention, especially during a pandemic, where in-person connections are strained if not impossible.
So, how can the industry change for the better? Here are three ways social media should adapt to create happier and healthier interpersonal connections and news consumption.
On most platforms, like Facebook and Instagram, the company determines some of the information presented to users. This opens the platform to manipulation by bad actors and raises questions about who exactly is dictating what information is seen and what is not. What are the motivations behind those decisions? And some of the platforms dispute their role in this process, with Mark Zuckerberg saying in 2019, “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.”
Censorship concerns can be addressed with a restructured type of social platform. For example, consider a platform that does not rely on advertiser dollars. If a social platform is free for basic users but monetized by a subscription model, there is no need to use an information-gathering algorithm to determine which news and content are served to users.
This type of platform is not a ripe target for manipulation because users only see information from people they know and trust, not advertisers or random third parties. Manipulation on major social channels happens frequently when people create zombie accounts to flood content with fake “likes” and “views” to inflate its apparent popularity. It’s commonly exposed as a tactic for election meddling, where agents use social media to promote false statements. This type of action exploits a fundamental flaw of social algorithms that use AI to decide when and what to censor, as well as what to promote.
The issues raised by “The Social Dilemma” should reinforce the need for social platforms to self-regulate their content and user dynamics and operate ethically. They should review their most manipulative technologies that cause isolation, depression and other issues and instead find ways to promote community, progressive action and other positive attributes.
A major change required to bring this about is to eliminate or reduce in-platform advertising. An ad-free model means the platform does not need to aggressively push unsolicited content from unsolicited sources. When ads are the main driver for a platform, then the social company has a vested interest in using every psychological and algorithm-based trick to keep the user on the platform. It’s a numbers game that puts profit over users.
More people multiplied by more time on the site equals ad exposure and ad engagement and that means revenue. An ad-free model frees a platform from trying to elicit emotional responses based on a user’s past actions, all to keep them trapped on the site, perhaps to an addictive degree.
A common form of clickbait is found on the typical social search page. A user clicks on an image or preview video that suggests a certain type of content, but upon clicking they are brought to unrelated content. It’s a technique that can be used to spread misinformation, which is especially dangerous for viewers who rely on social platforms for their news consumption, instead of traditional outlets. According to the Pew Research Center, 55% of adults get their news from social media “often” or “sometimes.” This causes a significant problem when clickbait articles make it easier to offer distorted “fake news” stories.
Unfortunately, when users engage with clickbait content, they are effectively “voting” for that information. That seemingly innocuous action creates a financial reason for others to create and disseminate further clickbait. Social media platforms should aggressively ban or limit clickbait. Management at Facebook and other firms often counter with a “free speech” argument when it comes to stopping clickbait. However, they should consider that the intent is not to act as censors stopping controversial topics, but to protect users from false content. It’s about cultivating trust and information sharing, which is much easier to accomplish when post content is backed by facts.
“The Social Dilemma” is rightfully an important film that encourages a vital dialogue about the role social media and social platforms play in everyday life. The industry needs to change to create more engaged and genuine spaces for people to connect without preying on human psychology.
A tall order, but one that should benefit both users and platforms in the long term. Social media still creates important digital connections and functions as a catalyst for positive change and discussion. It’s time for platforms to take note and take responsibility for these needed changes, and opportunities will arise for smaller, emerging platforms taking a different, less-manipulative approach.
Facebook’s external body of decision makers will start reviewing cases about what stays on the platform and what goes beginning today.
The new system will elevate some of the platform’s content moderation decisions to a new group called the Facebook Oversight Board, which will make decisions and influence precedents about what kind of content should and shouldn’t be allowed.
But as we’ve reported previously, the board’s decisions won’t just magically enact changes on the platform. Instead of setting policy independently, each recommended platform policy change from the oversight board will get kicked back to Facebook, which will “review that guidance” and decide what changes, if any, to make.
The oversight board’s specific case decisions will remain, but that doesn’t mean they’ll really be generalized out to the social network at large. Facebook says it is “committed to enforcing the Board’s decisions on individual pieces of content, and to carefully considering and transparently responding to any policy recommendations.”
The group’s focus on content taken down rather than content already allowed on the social network will also skew its purview. While a vocal subset of its conservative critics in Congress might disagree, Facebook’s real problems are about what stays online — not what gets taken down.
Whether it’s violent militias connecting and organizing, political figures spreading misleading lies about voting or misinformation from military personnel that fuels an ethnic cleansing, content that spreads on Facebook has the power to reshape reality in extremely dangerous ways.
Noting the criticism, Facebook claims that decisions about content still up on Facebook are “very much in scope from Day 1” because the company can directly refer those cases to the Oversight Board. But with Facebook itself deciding which cases to elevate, that’s another major strike against the board’s independence from the outset.
Facebook says that the board will focus on reviewing content removals initially because of the way its existing systems are set up, but it aims “to bring all types of content outlined in the bylaws into scope as quickly as possible.”
According to Facebook, anyone who has appealed “eligible” Facebook and Instagram content moderation decisions and has already gone through the normal appeal process will get a special ID that they can take to the Oversight Board website to submit their case.
Facebook says the board will decide which cases to consider, pulling from a combination of user-appealed cases and cases that Facebook will send its way. The full slate of board members, announced in May, grew out of four co-chairs that Facebook itself named to the board. The international group of 20 includes former journalists, U.S. appeals court judges, digital rights activists, the ex-prime minister of Denmark and one member from the Cato Institute, the libertarian think tank.
“We expect them to make some decisions that we, at Facebook, will not always agree with – but that’s the point: they are truly autonomous in their exercise of independent judgment,” the company wrote in May.
Critics disagree. Facebook skeptics from every corner have seized on the oversight effort, calling it a charade and pointing out that its decisions aren’t really binding.
Facebook was not happy when a group of prominent critics calling itself the “Real Facebook Oversight Board” launched late last month. And earlier this year, a tech watchdog group called for the board’s five U.S.-based members to demand they be given more real power or resign.
Facebook also faced a backlash when it said the Oversight Board, which has been in the works for years, wouldn’t be up and running until “late fall.” But with just weeks to go before election day, Facebook has suddenly scrambled to get new policies and protections in place on issues that it’s dragged its feet on for years — the Oversight Board included, apparently.
A Dutch security researcher says he accessed President Trump’s @realDonaldTrump Twitter account last week by guessing his password: “maga2020!”.
Victor Gevers, a security researcher at the GDI Foundation and chair of the Dutch Institute for Vulnerability Disclosure, which finds and reports security vulnerabilities, told TechCrunch he guessed the president’s account password and was successful on the fifth attempt.
The account was not protected by two-factor authentication, granting Gevers access to the president’s account.
After logging in, he emailed US-CERT, a division of Homeland Security’s cyber unit, the Cybersecurity and Infrastructure Security Agency (CISA), to disclose the security lapse; TechCrunch has seen the email. Gevers said the president’s Twitter password was changed shortly after.
A screenshot from inside Trump’s Twitter account. (Image: Victor Gevers)
It’s the second time Gevers has gained access to Trump’s Twitter account.
The first time was in 2016, when Gevers and two others extracted and cracked Trump’s password from the 2012 LinkedIn breach. The researchers took his password, “yourefired” (his catchphrase from the television show “The Apprentice”), and found it let them into his Twitter account. Gevers reported the breach to local authorities in the Netherlands, with suggestions on how Trump could improve his password security. One of the passwords he suggested at the time was “maga2020!”, he said. Gevers said he “did not expect” the password to work years later.
Dutch news outlet Vrij Nederland first reported the story.
In a statement, Twitter spokesperson Ian Plunkett said: “We’ve seen no evidence to corroborate this claim, including from the article published in the Netherlands today. We proactively implemented account security measures for a designated group of high-profile, election-related Twitter accounts in the United States, including federal branches of government.”
Twitter said last month that it would tighten the security on the accounts of political candidates and government accounts, including encouraging but not mandating the use of two-factor authentication.
Trump’s account is said to be locked down with extra protections after he became president, though Twitter has not said publicly what those protections entail. His account was untouched by hackers who broke into Twitter’s network in July in order to abuse an “admin tool” to hijack high-profile accounts and spread a cryptocurrency scam.
A spokesperson for the White House and the Trump campaign did not immediately comment. White House deputy press secretary Judd Deere reportedly said the story is “absolutely not true,” but declined to comment on the president’s social media security. A spokesperson for CISA did not immediately confirm the report.
“It’s unbelievable that a man that can cause international incidence and crash stock markets with his Tweets has such a simple password and no two-factor authentication,” said Alan Woodward, a professor at the University of Surrey. “Bearing in mind his account was hacked in 2016 and he was saying only a couple of days ago that no one is hacked the irony is vintage 2020.”
Updated with Twitter’s comment, and corrected the name of the publication that first reported the news.
Facebook has been making a big play to be a go-to partner for small and medium businesses that use the internet to interface with the wider world, and its messaging platform WhatsApp, with some 50 million businesses and 175 million people messaging them (and more than 2 billion users overall), has been a central part of that pitch.
Now, the company is making three big additions to WhatsApp to fill out that proposition.
It’s launching a way to shop for and pay for goods and services in WhatsApp chats; it’s going head to head with the hosting providers of the world with a new product called Facebook Hosting Services to host businesses’ online assets and activity; and — in line with its expanding product range — Facebook said it will finally start to charge companies using WhatsApp for Business.
Facebook announced the news in a short blog post light on details. We have reached out to the company for more information on pricing, availability of the services, and whether Facebook will provide hosting itself or work with third parties, and we will update this post as we learn more.
Here is what we know for now:
In-chat Shopping. Companies are already using WhatsApp to present product information and initiate discussions for transactions. One of the more recent developments in that area was the addition of QR codes and the ability to share catalog links in chats, added in July. At the same time, Facebook has been expanding the ways that businesses can display what they are selling on Facebook and Instagram, most recently with the launch in August of Facebook Shop, following a similar product rollout on Instagram before that.
Today’s move sounds like a new way for businesses, in turn, to use WhatsApp to link through to those Facebook-native catalogs as well as other products, and then let customers purchase items while still staying in the chat.
At the same time, Facebook will be making it possible for merchants to add “buy” buttons in other places that will take shoppers to WhatsApp chats to complete the purchase. “We also want to make it easier for businesses to integrate these features into their existing commerce and customer solutions,” it notes. “This will help many small businesses who have been most impacted in this time.”
Although Facebook is not calling this WhatsApp Pay, it seems that this is the next step ahead for the company’s ambitions to bring payments into the chat flow of its messaging app. That has been a long and winding road for the company, which finally launched WhatsApp Payments, using Facebook Pay, in Brazil in June of this year, only to have it shut down by regulators for failing to meet their requirements. (The plan has been to expand it to India, Indonesia and Mexico next.)
Facebook Hosting Services: These will be available in the coming months, though there is no specific date to share right now. “We’re sharing our plans now while we work with our partners to make these services available,” the company said in a statement to TechCrunch.
No! This is not about Facebook taking on AWS. Or… not yet, at least? The idea here appears to be specifically aimed at selling hosting services to the kind of SMBs that already use Facebook and WhatsApp messaging: businesses that either already use hosting services for their online assets, whether those are online stores or something else, or are finding themselves needing to for the first time, now that business is all about being “online.”
“Today, all businesses using our API are using either an on-premise solution or leverage a solutions provider, both of which require costly servers to maintain,” Facebook said. “With this change, businesses will be able to choose to use Facebook’s own secure hosting infrastructure for free, which helps remove a costly item for every company that wants to use the WhatsApp Business API, including our business service providers, and will help them all save money.” It added that it will share more info about where data will be hosted closer to launch.
This is a very interesting move, since the SMB hosting market is pretty fragmented, with a number of companies, including the likes of GoDaddy, DreamHost, HostGator, Bluehost and many others, also offering these services. That fragmentation spells opportunity for a huge company like Facebook with a global profile, a burgeoning number of connections through to other online services for these SMBs, and a pretty extensive network of data centers around the world that it’s built for itself and can now use to provide services to others — which is, indeed, a pretty strong parallel with how Amazon and AWS have done business.
Facebook already has an “app store” of sorts of partners it works with to provide marketing and related services to businesses using its platform. It looks like it plans to expand this, and will sell the hosting alongside all of that, with the kicker that hosting natively on Facebook will speed up how everything works.
“Providing this option will make it easier for small and medium size businesses to get started, sell products, keep their inventory up to date, and quickly respond to messages they receive – wherever their employees are,” it notes.
Charging tiers: As you would expect, to encourage more adoption, Facebook has not been charging for WhatsApp Business up to now, but it has charged for some WhatsApp business messages — for example when businesses send a boarding pass or e-commerce receipt to a customer over Facebook’s rails. (These prices vary and a list of them is published here.) Now, with more services coming into the mix, and businesses tying their fates more strongly to how well they are performing on Facebook’s platforms, it’s no surprise to see Facebook converting that into a pay-to-play scenario.
“What we’ve heard over the past couple years is how the conversational nature of business messaging is really valuable to people. So in the future we may look at ways to update how we charge businesses that better reflect how it’s used,” the company told us. It’s important to note that this will relate to how businesses send messages. “As always, it’s free for people to send a business a message,” Facebook added.
Frustratingly, there seems so far to be no detail on which services will be charged, nor how much, nor when, so this is more of a warning than a new requirement.
“We will charge business customers for some of the services we offer, which will help WhatsApp continue building a business of our own while we provide and expand free end-to-end encrypted text, video and voice calling for more than two billion people,” it notes.
For those who might find that annoying but are concerned about an ever-encroaching data monster, there is a plus side: it will, at the least, help WhatsApp and Facebook stick to their age-old commitment to stay away from advertising as a business model.
The new services come at a time when Facebook is doubling down on providing services for businesses, spurred in no small part by the coronavirus pandemic, which has driven physical retailers and others to close their actual doors, shifting their focus to using the internet and mobile services to connect with and sell to customers.
Citing that very trend, last month the company’s COO Sheryl Sandberg announced the Facebook Business Suite, bringing together all of the tools it has been building for companies to better leverage Facebook, Instagram and WhatsApp profiles both to advertise themselves as well as communicate with and sell to customers. And the fact that Sandberg was leading the announcement says something about how Facebook is prioritizing this: it’s striking while the iron is hot with companies using its platform, but it sees/hopes that business services can be a key way to diversify its business model while also helping buffer it, since many businesses building Pages may also advertise.
Facebook has also been building more functionality across Facebook and Instagram specifically aimed at helping power users and businesses leverage the two in a more efficient way. Adding in more tools to WhatsApp is the natural progression of all of this.
To be sure, as we pointed out earlier this year, even while there is a lot of very informal use of WhatsApp by businesses all around the world, WhatsApp Business remains a fairly small product, most popular in India and Brazil. Facebook launching more tools for how to use it will potentially drive more business not just in those markets but help the company convert more businesses to using it in other places, too.
Smaller businesses have been on Facebook’s radar for a while now. Even before the pandemic hit, in many cases retailers or restaurants do not have websites of their own, opting for a Facebook Page or Instagram Profile as their URL and primary online interface with the world; and even when they do have standalone sites, they are more likely to update people and spread the word about what they are doing on social media than via their own URLs.
Facebook has also made a video to help demonstrate how it sees WhatsApp Business in action, which you can watch here:
Facebook’s dating bolt-on to its eponymous social networking service has finally launched in Europe, more than nine months after an earlier planned launch was pulled at the last minute over privacy concerns.
From today, European Facebook users in Austria, Belgium, Bulgaria, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Croatia, Hungary, Ireland, Italy, Lithuania, Luxembourg, Latvia, Malta, Netherlands, Poland, Portugal, Romania, Sweden, Slovenia, Slovakia, Iceland, Liechtenstein, Norway, Spain, Switzerland and the UK can opt into Facebook Dating by creating a profile at facebook.com/dating.
Among the dating product’s main features are the ability to share Stories on your profile; a Secret Crush feature that lets you select up to nine of your Facebook friends or Instagram followers who you’d like to date (without them knowing unless they also add you — when you then get a match notification); the ability to see people with similar interests if you add your Facebook Events and Groups to your Dating profile; and a video chat feature called Virtual Dates.
Image credit: Facebook
Of course if you opt in to Facebook Dating you’re going to be plugging even more of your personal data into Facebook’s people profiling machine. And it was concerns about how the dating product would be processing European users’ information that led to a regulatory intervention by the company’s lead data regulator in the EU, the Irish Data Protection Commission (DPC).
Back in February Facebook agreed to postpone the regional launch of Facebook Dating after the DPC’s agents paid a visit to its Dublin office — saying Facebook had not provided it with enough advance warning of the product launch, nor adequate documentation about how it would work.
More than nine months later the regulator seems satisfied it now understands how Facebook Dating is processing people’s personal data — although it also says it will be monitoring the EU launch.
Additionally, the DPC says Facebook has made some changes to the product in light of concerns it raised (full details below).
Deputy commissioner, Graham Doyle, told TechCrunch: “As you will recall, the DPC became aware of Facebook’s plans to launch Facebook Dating a number of days prior to its planned launch in February of this year. Further to the action taken by the DPC at the time (which included an on-site inspection and a number of queries and concerns being put to Facebook), Facebook has provided detailed clarifications on the processing of personal data in the context of the Dating feature. Facebook has also provided details of changes that they have made to the product to take account of the issues raised by the DPC. We will continue to monitor the product as it launches across the EU this week.”
“Much earlier engagement on such projects is imperative going forward,” he added.
Since the launch of Facebook’s dating product in 20 countries around the world — including the US and a number of markets in Asia and LatAm — the company says more than 1.5 billion matches have been “created”.
In a press release about the European launch, Facebook writes that it has “built Dating with safety, security and privacy at the forefront”, adding: “We worked with experts in these areas to provide easy access to safety tips and build protections into Facebook Dating, including the ability to report and block anyone, as well as stopping people from sending photos, links, payments or videos in messages.”
It also links to an update about Facebook Dating’s privacy which emphasizes the product is an “opt-in experience”. This document includes a section explaining how use of the product impacts Facebook’s data collection and the ads users see across its suite of products.
“Facebook Dating may suggest matches for you based on your activities, preferences and information in Dating and other Facebook Products,” it writes. “We may also use your activity in Dating to personalize your experience, including ads you may see, across Facebook Products. The exception to this is your religious views and the gender(s) you are interested in dating, which will not be used to personalize your experience on other Facebook Products.”
One key privacy-related change flowing from the DPC intervention looks to be that Facebook has committed to excluding the use of Dating users’ religious and sexual orientation information for ad targeting purposes.
Under EU law this type of personal information is classed as ‘special category’ data — and consent to process it requires a higher bar of explicit consent from the user. (And Facebook probably didn’t want to harsh Dating users’ vibe with pop-ups asking them to agree to ads targeting them for being gay or Christian, for example.)
Asked about the product changes, the DPC confirmed a number of changes related to special category data, along with some additional clarifications.
Here’s the full list of “changes and clarifications”:
Using social networks to connect with neighbors and local services has surged during the Covid-19 pandemic, and Facebook — with 2.7 billion users globally — is now looking at how it can tap into that in a more direct way. In the same week that Nextdoor was reported to be gearing up to go public, Facebook has started to test a Nextdoor clone, Neighborhoods, which suggests Facebook-generated Neighborhood groups (with a capital N, more on that below) local to you, where you can connect with people, activities and things being sold in the area.
“More than ever, people are using Facebook to participate in their local communities. To help make it easier to do this, we are rolling out a limited test of Neighborhoods, a dedicated space within Facebook for people to connect with their neighbors,” said a spokesperson in a written statement provided to TechCrunch.
Facebook said that Neighborhoods currently is live only in Calgary, Canada, where it is being tested before getting rolled out more broadly.
The feature — which appears in the Menu of the main Facebook app, alongside tiles for Marketplace, Groups, Friends, Pages, Events and the rest — was first seen widely via a post on Twitter from social media strategy guy Matt Navarra, who in turn had been tipped off by a social media strategist from Calgary, Leon Grigg from Grigg Digital.
From Grigg’s public screenshots, it appears that Neighborhood groups — that is, local groups that are part of this new Neighborhood feature — are like those on Nextdoor, based on actual geographical areas on a map.
From the looks of it, these Neighborhood groups appear to be triggered to “open” once enough people in the area have joined, just like on Nextdoor. But unlike those on Nextdoor, and unlike Facebook groups, they are not created, built and run by admins, nor do they have “Community Ambassadors” (Nextdoor’s term). They are instead generated by Facebook itself.
Facebook said it will also suggest other local groups, although it’s not clear if these will simply be other Neighborhood groups, or local Groups that already exist on the platform, nor what this would mean for all those neighborhood Groups (small n) were Facebook’s new feature to launch more widely. We’re asking and will update as we hear back.
For now, Neighborhood groups require more permissions from you, the user, and seem to be presented to you rather than something you would find organically, as you might a Group today.
Screenshots from Grigg’s Facebook post also show that after you click on Neighborhoods, you are asked to confirm your location to Facebook (sharing your location data being also a way to provide more data points for the company to profile you for advertising and marketing purposes).
It then suggests a Neighborhood to you to join, and also provides a list of other Neighborhood groups that are nearby, plus some ground rules for good behavior. If a Neighborhood isn’t live yet because not enough people have joined, you can invite more people to join it.
Facebook notes that when you post in a Neighborhood group, people see your specific Neighborhood profile and your posts there, but it doesn’t automatically mean they see your normal Facebook profile. You can change what gets seen in privacy settings.
Facebook then takes you through some suggested posts that you might make for other Neighborhoods, or to populate yours once it is live. (Examples in the screenshots include sharing pictures of carved pumpkins, and offering tips on local places.)
Through Neighborhoods, Facebook is doubling down on one of the most popular ways that the social network is already being used — and by an increasing number of people, one of the only ways that it’s being used these days — via Groups, which bypass your own social graph and connect you with other kinds of communities.
Earlier this month during Facebook’s Communities Summit, CEO Mark Zuckerberg said that there were more than 1.8 billion people engaging with Groups at least once a month on the social network, with more than 70 million group admins and moderators putting in unpaid hours to manage them (hello, fellow mods and admins).
“We’re going to make communities as central to the FB experience as friends and family,” Zuckerberg said back in 2019 and repeated again this month.
As Sarah pointed out back in 2014, when Groups had a mere 500 million users and community was not yet at the core of Facebook’s mission statement, using Facebook Groups can feel like being on a whole different social network, where you establish connections with people outside your personal “social graph” of friends, family and colleagues, and connect more broadly with specific communities, whether they are based on where you live or on a shared interest.
That role has only grown in 2020, with many people turning to local groups during the Covid-19 global health pandemic to connect with local resources, mutual aid groups, and simply to check in with each other.
Or, to complain: my own local group, which I help admin, did all of the above, but also served as a place for people to virtually hand-wring about the crowded (and illegal) festival atmosphere in the local park, and then to galvanise feedback and support, which helped us as a community present the problem to our local councillors and get the situation (sort of, finally) resolved.
A lot of Groups usage is, at its best, organic rather than prompted or productized by Facebook, so with Neighborhoods the company appears to be exploring ways to dig into that role more proactively.
That may not be a surprise. On one side, consider how many people have decided to stop sharing as much on Facebook as before, and the role that Facebook has been playing in the great misinformation-disguised-as-news heist of the century. On the other, consider how Facebook has been building out its Marketplace and providing more resources for local businesses to spur them to advertise. Building an anchor for all that with Neighborhoods makes complete commercial sense.
The timing of the feature is also notable for another reason. While Facebook is vast in size and scope compared to Nextdoor, the latter has found a kind of groove in recent times. The public swing towards looking for more local resources online has meant that Nextdoor, fighting its own bad reputation as a place where people go to confirm their worst fears, make racist comments in the name of public service, and look for lost pets, has found a second life.
Moves like building neighborhood assistance programs and taking a public stand on social issues have helped Nextdoor reinvent itself as the good guy. Now covering some 268,000 neighborhoods, the company is riding that wave and reportedly eyeing a public listing via SPAC at a $4 billion – $5 billion valuation.
Yes, maybe that’s just a button compared to the full suit that is Facebook. But given that Facebook already has so many of the threads of a Nextdoor-type product already there on its platform, it’s a no-brainer that it would try to knit them together.
Google Photos is reviving its photo printing subscription service and introducing same-day prints. The company earlier this year had briefly tested a new program that used A.I. to suggest the month’s 10 best photos, which were then shipped to your home automatically. But Google ended the test on June 30.
During the trial, Google had offered users a $7.99 per month subscription that would automatically select 10 photos from one of three themes, including people and pets, landscapes, or “a little bit of everything” mix. The 4×6 photos were printed on matte, white cardstock with a 1/8-inch border.
Image Credits: Google
The new subscription, launching soon, leverages feedback from the early tests to now give users more control over which prints they receive and how they look. It also drops the price to $6.99 per month, including shipping and before tax.
With the new Premium Print Series, as the subscription is called, Google Photos will use machine learning techniques to pick 10 of your recent photos to print. But users can edit the photo selection and they can choose either a matte or glossy finish or add a border before the photos ship.
The photos can optionally be turned into postcards, thanks to the cardstock paper backing, Google notes.
Subscribers can also opt to skip a month and can easily cancel the service, if they’re no longer using it.
This updated version of the service was recently discovered by reverse engineer Jane Manchun Wong, who detailed the new customization options and the lower price point.
“Google Photos is working on ‘Premium Print Series’, a subscription service for shipping prints of your photos that Google suggested for you monthly for $6.99/month… nice! the finish and border are adjustable. you can also skip a month of prints if you’d like” — Jane Manchun Wong (@wongmjane) October 7, 2020
Google says the Premium Print Series will make its way to Google Photos users in the next few weeks.
The company today is also launching same-day printing at Walgreens, available immediately. This expands Google Photos’ existing same-day options, which already included same-day pickup from CVS and Walmart.
Using the Google Photos app, customers can now order 4×6, 5×7, or 8×10 photo prints for same-day pickup at Walgreens. This nearly doubles the number of stores offering same-day prints to Google Photos users, Google says.
Image Credits: Google
The launch of the expanded photo printing services and subscription comes at a time when people are traveling less often, due to the pandemic, and are attending fewer large events where photo-taking may take place — like parties or concerts, for example.
But even if times have changed, people are continuing to take photos — though they may not be posting them across social media in order to avoid judgment. The subject of the photos may have changed, too, to now include more family, pets and nature scenes, instead of large, crowded places or big social gatherings, for instance.
The nostalgia for pre-pandemic times could see users turning to prints to help them relive fond memories, too.
Google didn’t say exactly when the new subscription will launch, but said users should be able to access the feature in the coming weeks.
Instagram is today introducing a new way for creators to make money. The company is now rolling out badges in Instagram Live to an initial group of over 50,000 creators, who will be able to offer their fans the ability to purchase badges during their live videos to stand out in the comments and show their support.
The idea to monetize using fan badges is not unique to Instagram. Other live streaming platforms, including Twitch and YouTube, have similar systems. Facebook Live also allows fans to purchase stars on live videos, as a virtual tipping mechanism.
Instagram users will see three options to purchase a badge during live videos: badges that cost $0.99, $1.99, or $4.99.
On Instagram Live, badges will not only call attention to the fans’ comments, they also unlock special features, Instagram says. This includes a placement on a creator’s list of badge holders and access to a special heart badge.
The badges and list make it easier for creators to quickly see which fans are supporting their efforts, and give them a shout-out, if desired.
Image Credits: Instagram
To kick off the rollout of badges, Instagram says it will also temporarily match creator earnings from badge purchases during live videos, starting in November. Creators @ronnebrown and @youngezee are among those testing badges.
The company says it’s not taking a revenue share at launch, but as it expands its test of badges it will explore revenue share in the future.
“Creators push culture forward. Many of them dedicate their life to this, and it’s so important to us that they have easy ways to make money from their content,” said Instagram COO Justin Osofsky, in a statement. “These are additional steps in our work to make Instagram the single best place for creators to tell their story, grow their audience, and make a living,” he added.
Additionally, Instagram today is expanding access to its IGTV ads test to more creators. This program, introduced this spring, allows creators to earn money by including ads alongside their videos. Today, creators keep at least 55% of that revenue, Instagram says.
The introduction of badges and IGTV ads was previously announced, with Instagram saying earlier this year that it would test the former with a small group of creators.
The changes follow what’s been a period of rapid growth on Instagram’s live video platform, as creators and fans sheltered at home during the coronavirus pandemic, which had cancelled live events, large meetups, concerts, and more.
During the pandemic’s start, for example, Instagram said Live creators saw a 70% increase in video views from February to March 2020. In Q2, Facebook also reported monthly active user growth (from 2.99B in Q1 to 3.14B) that it said reflected increased engagement from consumers who were spending more time at home.
College graduates this year (and perhaps in the near-term) have been looking for work in what is one of the most challenging job markets in a decade due to the coronavirus and its impacts on the economy and how people can interact with each other. Today, a startup that’s helping them with that job hunting process is announcing a big round of funding to grow its business.
Handshake, which provides a platform for college-aged students to register their interest and skills and search for suitable work, and for recruiters to search for candidates and advertise entry-level openings, has raised $80 million in a growth round of funding.
Handshake is not disclosing its valuation but a reliable source close to the startup said that the valuation has more than doubled since its last round. That was at $275 million, putting the likely valuation now between $550 million and $600 million.
The company has been around since 2014 and has built its profile in part as a more inclusive version of LinkedIn aimed at people just starting out in the job market, and it’s using the funding to double down on that.
It now covers 17 million job seekers, 1,000 institutions of higher learning and nearly 500,000 employers, with partnerships with some 120 minority-serving institutions, which include Historically Black Colleges and Universities, and Hispanic Serving Institutions in the U.S., to help them and their students better tackle the job-hunting-recruitment market.
And this year, Handshake has been using its latest funding — which actually closed in November 2019 — to broaden its network to include community colleges, and to expand its virtual events services.
The Series D is being led by GGV and also includes participation from all of its existing investors. Handshake already had an illustrious list of backers: its last round, a $40 million Series C in 2018, was led by EQT and also included the Chan Zuckerberg Initiative, Omidyar Network and Reach Capital, as well as True Ventures, Kleiner Perkins, Lightspeed Venture Partners, Spark Capital and KPCB Edge.
Garrett Lord, Handshake’s CEO who co-founded the company with Scott Ringwelski (CTO) and Ben Christensen (a board member), said that the coronavirus has not just impacted the job market, but also the job-hunting market.
“The pandemic, as you can imagine, has really reshaped the hiring economy,” he said. “Companies can no longer go to campus to recruit” — traditionally a huge part of how companies connect with those just entering the job market, by way of events where they can meet many people en masse — “so we’ve seen an unprecedented shift to virtual recruiting.”
Virtual events had, he added, been gaining popularity “prior to Covid,” but suddenly they became the only game in town. He said that some 20,000 employers have so far run virtual recruitment events at institutions using the Handshake platform. These take the form of online mixers and fairs, which offer five 30-minute group sessions of up to 50 students each, where recruiters give presentations and talk with students; and/or 10-minute 1:1 meetings between students and up to 15 recruiters.
All well and good, except that the job market itself is still rocky. Lord said that there was a 20-30% drop in listings at the start of the pandemic, led by sectors like hospitality, while those still hiring pulled back from proactive campus recruitment. Now, seven months on, many of those employers realize that they have to remain visible and are slowly coming back.
“They need Handshake more than ever before, to replace boots on ground experience with digital and immersive experiences,” Lord said.
While managing the macroeconomic contraction, Handshake’s expansion this year to include community colleges has been a huge deal.
There has long been a perceived prestige and expertise divide between two-year and four-year institutions. But as our concept of higher education continues to evolve, with many students foregoing college altogether or opting for vocational programs that don’t require four years of study at a university or college, and with college becoming ever more expensive, it’s about time that platforms helping one tier of students also help the other.
And for its investors, at a time when companies are not just talking about building more diverse workforces but putting money where their mouths are, and internalizing that effecting change sometimes requires being proactive, Handshake is a compelling startup to invest in.
“Since its founding, Handshake has been laser focused on delivering on its vision to democratize job opportunity by connecting employers with job seeking students at institutions of higher education, and has built a rich network of 17 million job seekers, 1,000 institutions of higher learning and nearly 500,000 employers,” said Jeff Richards, Managing Partner of GGV, in a statement. “We’re delighted to join forces with the Handshake team to help the company further expand its impact by delivering innovative, industry-leading recruitment solutions and expanding into new markets.”
Vectary, a design platform for 3D and Augmented Reality (AR), has raised a $7.3 million round led by European fund EQT Ventures. Existing investor BlueYard (Berlin) also participated.
Vectary makes high-quality 3D design more accessible for consumers, garnering over one million creators worldwide, and has more than a thousand digital agencies and creative studios as users.
With the coronavirus pandemic shifting more people online, Vectary says it has seen a 300% increase in AR views as more businesses start showcasing their products in 3D and AR.
Vectary was founded in 2014 by Michal Koor (CEO) and Pavol Sovis (CTO), who were both from the design and technology worlds.
The complexity of using and sharing content created by traditional 3D design tools has been a barrier to the adoption of 3D, which is what Vectary addresses.
Although Microsoft, Facebook and Apple are making it easier for consumers, the creative tools remain lacking. Vectary believes that seamless 3D/AR content creation and sharing will be key to mainstream adoption.
Designers and creatives can use Vectary to apply 2D design on a 3D object in Figma or Sketch; create 3D customizers in Webflow with Embed API; and add 3D interactivity to decks.
TikTok returns to Pakistan, Apple launches a music-focused streaming station and SpaceX launches more Starlink satellites. This is your Daily Crunch for October 19, 2020.
The big story: Pakistan un-bans TikTok
The Pakistan Telecommunication Authority blocked the video app 11 days ago, over what it described as “immoral,” “obscene” and “vulgar” videos. The authority said today that it’s lifting the ban after negotiating with TikTok management.
“The restoration of TikTok is strictly subject to the condition that the platform will not be used for the spread of vulgarity/indecent content & societal values will not be abused,” it continued.
This isn’t the first time this year the country tried to crack down on digital content. Pakistan announced new internet censorship rules this year, but rescinded them after Facebook, Google and Twitter threatened to leave the country.
The tech giants
Apple launches a US-only music video station, Apple Music TV — The new music video station offers a free, 24-hour live stream of popular music videos and other music content.
Google Cloud launches Lending DocAI, its first dedicated mortgage industry tool — The tool is meant to help mortgage companies speed up the process of evaluating a borrower’s income and asset documents.
Facebook introduces a new Messenger API with support for Instagram — The update means businesses will be able to integrate Instagram messaging into the applications and workflows they’re already using in-house to manage their Facebook conversations.
Startups, funding and venture capital
SpaceX successfully launches 60 more Starlink satellites, bringing total delivered to orbit to more than 800 — That makes 835 Starlink satellites launched thus far, though not all of those are operational.
Singapore tech-based real estate agency Propseller raises $1.2M seed round — Propseller combines a tech platform with in-house agents to close transactions more quickly.
Ready Set Raise, an accelerator for women built by women, announces third class — Ready Set Raise has changed its programming to be more focused on a “realistic fundraising process” vetted by hundreds of women.
Advice and analysis from Extra Crunch
Are VCs cutting checks in the closing days of the 2020 election? — Several investors told TechCrunch they were split about how they’re making these decisions.
Disney+ UX teardown: Wins, fails and fixes — With the help of Built for Mars founder and UX expert Peter Ramsey, we highlight some of the things Disney+ gets right and things that should be fixed.
Late-stage deals made Q3 2020 a standout VC quarter for US-based startups — Investors backed a record 88 megarounds of $100 million or more.
(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)
US charges Russian hackers blamed for Ukraine power outages and the NotPetya ransomware attack — Prosecutors said the group of hackers, who work for the Russian GRU, are behind the “most disruptive and destructive series of computer attacks ever attributed to a single group.”
Stitcher’s podcasts arrive on Pandora with acquisition’s completion — SiriusXM today completed its previously announced $325 million acquisition of podcast platform Stitcher from E.W. Scripps, and has now launched Stitcher’s podcasts on Pandora.
Original Content podcast: It’s hard to resist the silliness of ‘Emily in Paris’ — The show’s Paris is a fantasy, but it’s a fantasy that we’re happy to visit.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
Social media platforms have repeatedly found themselves in the United States government’s crosshairs over the last few years, as it has been progressively revealed just how much power they really wield, and to what purposes they’ve chosen to wield it. But unlike, say, a firearm or drug manufacturer, there is no designated authority that says what these platforms can and can’t do. So who regulates them? You might say everyone and no one.
Now, it must be made clear at the outset that these companies are by no means “unregulated,” in that no legal business in this country is unregulated. For instance Facebook, certainly a social media company, received a record $5 billion fine last year for failure to comply with rules set by the FTC. But not because the company violated its social media regulations — there aren’t any.
Facebook and others are bound by the same rules that most companies must follow, such as generally agreed-upon definitions of fair business practices, truth in advertising, and so on. But industries like medicine, energy, alcohol and automotive have additional rules, indeed entire agencies, specific to them; not so with social media companies.
I say “social media” rather than “tech” because the latter is much too broad a concept to have a single regulator. Although Google and Amazon (and Airbnb, and Uber, and so on) need new regulation as well, they may require a different specialist, like an algorithmic accountability office or online retail antitrust commission. (Inasmuch as tech companies act within regulated industries, such as Google in broadband, they are already regulated as such.)
Social media can roughly be defined as platforms where people sign up to communicate and share messages and media, and that’s quite broad enough already without adding in things like ad marketplaces, competition quashing and other serious issues.
Who, then, regulates these social media companies? For the purposes of the U.S., there are four main directions from which meaningful limitations or policing may emerge, but each one has serious limitations, and none was actually created for the task.
The Federal Communications Commission and Federal Trade Commission are what people tend to think of when “social media” and “regulation” are used in a sentence together. But one is a specialist — not the right kind, unfortunately — and the other a generalist.
The FCC, unsurprisingly, is primarily concerned with communication, but due to the laws that created it and grant it authority, it has almost no authority over what is being communicated. The sabotage of net neutrality has complicated this somewhat, but even the faction of the Commission dedicated to the backwards stance adopted during this administration has not argued that the messages and media you post are subject to their authority. They have indeed called for regulation of social media and big tech — but are for the most part unwilling and unable to do so themselves.
The Commission’s mandate is explicitly the cultivation of a robust and equitable communications infrastructure, which these days primarily means fixed and mobile broadband (though increasingly satellite services as well). The applications and businesses that use that broadband, though they may be affected by the FCC’s decisions, are generally speaking none of the agency’s business, and it has repeatedly said so.
The only potentially relevant exception is the much-discussed Section 230 of the Communications Decency Act (an amendment to the sprawling Communications Act), which waives liability for companies when illegal content is posted to their platforms, as long as those companies make a “good faith” effort to remove it in accordance with the law.
But this part of the law doesn’t actually grant the FCC authority over those companies or define good faith, and there’s an enormous risk of stepping into unconstitutional territory, because a government agency telling a company what content it must keep up or take down runs full speed into the First Amendment. That’s why although many think Section 230 ought to be revisited, few take seriously Trump’s feeble executive actions along these lines.
The agency did announce that it will be reviewing the prevailing interpretation of Section 230, but until there is some kind of established statutory authority or Congress-mandated mission for the FCC to look into social media companies, it simply can’t.
The FTC is a different story. As watchdog over business practices at large, it has a similar responsibility toward Twitter as it does toward Nabisco. It doesn’t have rules about what a social media company can or can’t do any more than it has rules about how many flavors of Cheez-It there should be. (There are industry-specific “guidelines” but these are more advisory about how general rules have been interpreted.)
On the other hand, the FTC is very much the force that comes into play should Facebook misrepresent how it shares user data, or Nabisco overstate the amount of real cheese in its crackers. The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.
You can thank the FTC for the now-familiar, carefully worded statements that avoid any real claims or responsibilities: “We take security very seriously” and “we think we have the best method” and that sort of thing — so pretty much everything that Mark Zuckerberg says. Companies and executives are trained to do this to avoid tangling with the FTC: “Taking security seriously” isn’t enforceable, but saying “user data is never shared” certainly is.
In some cases this can still have an effect, as in the $5 billion fine recently dropped into Facebook’s lap (though for many reasons that was actually not very consequential). It’s important to understand that the fine was for breaking binding promises the company had made — not for violating some kind of social-media-specific regulations, because again, there really aren’t any.
The last point worth noting is that the FTC is a reactive agency. Although it certainly has guidelines on the limits of legal behavior, it doesn’t have rules that when violated result in a statutory fine or charges. Instead, complaints filter up through its many reporting systems and it builds a case against a company, often with the help of the Justice Department. That makes it slow to respond compared with the lightning-fast tech industry, and the companies or victims involved may have moved beyond the point of crisis while a complaint is being formalized there. Equifax’s historic breach and minimal consequences are an instructive case.
So: While the FCC and FTC do provide important guardrails for the social media industry, it would not be accurate to say they are its regulators.
States are increasingly battlegrounds for the frontiers of tech, including social media companies. This is likely due to frustration with partisan gridlock in Congress that has left serious problems unaddressed for years or decades. Two good examples of states that lost their patience are California’s new privacy rules and Illinois’s Biometric Information Privacy Act (BIPA).
The California Consumer Privacy Act (CCPA) was arguably born out of the ashes of other attempts at a national level to make companies more transparent about their data collection policies, like the ill-fated Broadband Privacy Act.
California officials decided that if the feds weren’t going to step up, there was no reason the state shouldn’t at least look after its own. By convention, state laws that offer consumer protections are generally given priority over weaker federal laws — this is so a state isn’t prohibited from taking measures for its citizens’ safety while the slower machinery of Congress grinds along.
The resulting law, very briefly stated, creates formal requirements for disclosures of data collection, methods for opting out of them, and also grants authority for enforcing those laws. The rules may seem like common sense when you read them, but they’re pretty far out there compared to the relative freedom tech and social media companies enjoyed previously. Unsurprisingly, they have vocally opposed the CCPA.
BIPA has a somewhat similar origin, in that a particularly far-sighted state legislature created a set of rules in 2008 limiting companies’ collection and use of biometric data like fingerprints and facial recognition. It has proven to be a huge thorn in the side of Facebook, Microsoft, Amazon, Google and others that have taken for granted the ability to analyze a user’s biological metrics and use them for pretty much whatever they want.
Many lawsuits have been filed alleging violations of BIPA, and while few have produced notable punishments, they have been invaluable in forcing the companies to admit on the record exactly what they’re doing, and how. Sometimes it’s quite surprising! The optics are terrible, and tech companies have lobbied (fortunately, with little success) to have the law replaced or weakened.
What’s crucially important about both of these laws is that they force companies to, in essence, choose between universally meeting a new, higher standard for something like privacy, or establishing a tiered system whereby some users get more privacy than others. The thing about the latter choice is that once people learn that users in Illinois and California are getting “special treatment,” they start asking why Mainers or Puerto Ricans aren’t getting it as well.
In this way state laws exert outsize influence, forcing companies to make changes nationally or globally because of decisions that technically only apply to a small subset of their users. You may think of these states as being activists (especially if their attorneys general are proactive), or simply ahead of the curve, but either way they are making their mark.
This is not ideal, however, because taken to the extreme, it produces a patchwork of state laws created by local authorities that may conflict with one another or embody different priorities. That, at least, is the doomsday scenario predicted almost universally by companies in a position to lose out.
State laws act as a test bed for new policies, but tend to only emerge when movement at the federal level is too slow. Although they may hit the bullseye now and again, like with BIPA, it would be unwise to rely on a single state or any combination among them to miraculously produce, like so many simian legislators banging on typewriters, a comprehensive regulatory structure for social media. Unfortunately, that leads us to Congress.
What can be said about the ineffectiveness of Congress that has not already been said, again and again? Even in the best of times few would trust these people to establish reasonable, clear rules that reflect reality. Congress simply is not the right tool for the job, because of its stubborn and willful ignorance on almost all issues of technology and social media, its countless conflicts of interest and its painful sluggishness — sorry, deliberation — in actually writing and passing any bills, let alone good ones.
Companies oppose state laws like the CCPA while calling for national rules because they know that it will take forever and there’s more opportunity to get their finger in the pie before it’s baked. National rules, in addition to coming far too late, are much more likely also to be watered down and riddled with loopholes by industry lobbyists. (This is indicative of the influence these companies wield over their own regulation, but it’s hardly official.)
But Congress isn’t a total loss. In moments of clarity it has established expert agencies like those in the first item, which have Congressional oversight but are otherwise independent, empowered to make rules, and kept technically — if somewhat limply — nonpartisan.
Unfortunately, the question of social media regulation is too recent for Congress to have empowered a specialist agency to address it. Social media companies don’t fit neatly into any of the categories that existing specialists regulate, something made plainly evident by the present attempt to stretch Section 230 beyond the breaking point just to put someone on the beat.
Laws at the federal level are not to be relied on for regulation of this fast-moving industry, as the current state of things shows more than adequately. And until a dedicated expert agency or something like it is formed, it’s unlikely that anything spawned on Capitol Hill will do much to hold back the Facebooks of the world.
Of course, however central it considers itself to be, the U.S. is only a part of a global ecosystem of various and shifting priorities, leaders and legal systems. But in a sort of inside-out version of state laws punching above their weight, laws that affect a huge part of the world except the U.S. can still have a major effect on how companies operate here.
The most obvious example is the General Data Protection Regulation, or GDPR, a set of rules, or rather augmentation of existing rules dating to 1995, that has begun to change the way some social media companies do business.
But this is only the latest step in a fantastically complex, decades-long process that must harmonize the national laws and needs of the EU member states in order to provide the clout it needs to compel adherence to the international rules. Red tape seldom bothers tech companies, which rely on bottomless pockets to plow through or in-born agility to dance away.
Although the tortoise may eventually in this case overtake the hare in some ways, at present the GDPR’s primary hindrance is not merely the complexity of its rules, but the lack of decisive enforcement of them. Each country’s Data Protection Agency acts as a node in a network that must reach consensus in order to bring the hammer down, a process that grinds slow and exceedingly fine.
When the blow finally lands, though, it may be a heavy one, outlawing entire practices at an industry-wide level rather than simply extracting pecuniary penalties these immensely rich entities can shrug off. There is space for optimism as cases escalate and involve heavy hitters like antitrust laws in efforts that grow to encompass the entire “big tech” ecosystem.
The rich tapestry of European regulations is really too complex of a topic to address here in the detail it deserves, and also reaches beyond the question of who exactly regulates social media. Europe’s role in that question of, if you will, speaking slowly and carrying a big stick promises to produce results on a grand scale, but for the purposes of this article it cannot really be considered an effective policing body.
(TechCrunch’s EU regulatory maven Natasha Lomas contributed to this section.)
As you can see, the regulatory ecosystem in which social media swims is more or less free of predators. The most dangerous are the small, agile ones — state legislatures — that can take a bite before the platforms have had a chance to brace for it. The other regulators are either too slow, too compromised or too involved (or some combination of the three) to pose a real threat. For this reason it may be necessary to introduce a new, but familiar, species: the expert agency.
As noted above, the FCC is the most familiar example of one of these, though its role is so fragmented that one could be forgiven for forgetting that it was originally created to ensure the integrity of the telephone and telegraph system. Why, then, is it the expert agency for orbital debris? That’s a story for another time.
What is clearly needed is the establishment of an independent expert agency or commission in the U.S., at the federal level, that has statutory authority to create and enforce rules pertaining to the handling of consumer data by social media platforms.
Like the FCC (and somewhat like the EU’s DPAs), this should be officially nonpartisan — though like the FCC it will almost certainly vacillate in its allegiance — and should have specific mandates on what it can and can’t do. For instance, it would be improper and unconstitutional for such an agency to say this or that topic of speech should be disallowed from Facebook or Twitter. But it would be able to say that companies need to have a reasonable and accessible definition of the speech they forbid, and likewise a process for auditing and contesting takedowns. (The details of how such an agency would be formed and shaped is well beyond the scope of this article.)
Even the likes of the FAA lag behind industry changes, such as the upsurge in drones that necessitated a hasty revisit of existing rules, or the huge increase in commercial space launches. But that’s a feature, not a bug. These agencies are designed not to act unilaterally based on the wisdom and experience of their leaders, but are required to perform or solicit research, consult with the public and industry alike, and create evidence-based policies grounded in, or at least addressing, a minimum of sufficiently objective data.
Sure, that didn’t really work with net neutrality, but I think you’ll find that industries have been unwilling to capitalize on this temporary abdication of authority by the FCC because they see that the Commission’s current makeup is fighting a losing battle against voluminous evidence, public opinion and common sense. They see the writing on the wall and understand that under this system it can no longer be ignored.
With an analogous authority for social media, the evidence could be made public, the intentions for regulation plain, and the shareholders — that is to say, users — could make their opinions known in a public forum that isn’t owned and operated by the very companies they aim to rein in.
Without such an authority these companies and their activities — the scope of which we have only the faintest clue about — will remain in a blissful limbo, picking and choosing which rules to abide by and which to fulminate and lobby against. We must help them decide, and weigh our own priorities against theirs. They have already abused the naïve trust of their users across the globe — perhaps it’s time we asked them to trust us for once.
A New York Post story forces social platforms to make (and in Twitter’s case, reverse) some difficult choices, Sony announces a new 3D display and fitness startup Future raises $24 million. This is your Daily Crunch for October 16, 2020.
The big story: Twitter walks back New York Post decision
A recent New York Post story about a cache of emails and other data supposedly originating from a laptop belonging to Joe Biden’s son Hunter looked suspect from the start, and more holes have emerged over time. But it’s also put the big social media platforms in an awkward position, as both Facebook and Twitter took steps to limit the ability of users to share the story.
Twitter, in particular, took a more aggressive stance, blocking links to and images of the Post story because it supposedly violated the platform’s “hacked materials policy.” This led to predictable complaints from Republican politicians, and even Twitter’s CEO Jack Dorsey said that blocking links in direct messages without an explanation was “unacceptable.”
As a result, the company said it’s changing the aforementioned hacked materials policy. It will no longer remove hacked content unless it’s been shared directly by hackers or those “acting in direct concert with them.” Otherwise, it will label tweets to provide context. As of today, it’s also allowing users to share links to the Post story.
The tech giants
Sony’s $5,000 3D display (probably) isn’t for you — The company is targeting creative professionals with its new Spatial Reality Display.
EU’s Google-Fitbit antitrust decision deadline pushed into 2021 — EU regulators now have until January 8, 2021 to take a decision.
Startups, funding and venture capital
Elon Musk’s Las Vegas Loop might only carry a fraction of the passengers it promised — Planning files reviewed by TechCrunch seem to show that The Boring Company’s Loop system will not be able to move anywhere near the number of people the company agreed to.
Future raises $24M Series B for its $150/mo workout coaching app amid at-home fitness boom — Future offers a pricey subscription that virtually teams users with a real-life fitness coach.
Lawmatics raises $2.5M to help lawyers market themselves — The San Diego startup is building marketing and CRM software for lawyers.
Advice and analysis from Extra Crunch
How COVID-19 and the resulting recession are impacting female founders — The sharp decline in available capital is slowing the pace at which women are founding new companies in the COVID-19 era.
Startup founders set up hacker homes to recreate Silicon Valley synergy — Hacker homes feel like a nostalgic attempt to recreate some of the synergies COVID-19 wiped out.
Private equity firms can offer enterprise startups a viable exit option — The IPO-or-acquisition question isn’t always an either/or proposition.
(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)
FAA streamlines commercial launch rules to keep the rockets flying — With rockets launching in greater numbers and variety, and from more providers, it makes sense to get a bit of the red tape out of the way.
We need universal digital ad transparency now — Fifteen researchers propose a new standard for advertising disclosures.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
Twitter has taken another step back from its initial decision to block users from sharing links to or images of a New York Post story reporting on emails and other data supposedly originating on a laptop belonging to Democratic presidential nominee Joe Biden’s son Hunter.
The story, which alleged that Hunter Biden had set up a meeting between a Ukrainian energy firm and his father back when Biden was vice president, looked shaky from the start, and more holes have emerged over time. Both Facebook and Twitter took action to slow its spread — but Twitter seemed to take the more aggressive stance, not just including warning labels whenever someone shared the story, but actually blocking links.
These moves have drawn a range of criticism. There have been predictable cries of censorship from Republican politicians and pundits, but there have also been suggestions that Facebook and Twitter inadvertently drew more attention to the story. And even Twitter’s CEO Jack Dorsey suggested that it was “unacceptable” to block links in DMs without an explanation.
Casey Newton, on the other hand, argued that the platforms had successfully slowed the story’s spread: “The truth had time to put its shoes on before Rudy Giuliani’s shaggy-dog story about a laptop of dubious origin made it all the way around the world.”
Twitter initially justified its approach by citing its hacked materials policy, then later said it was blocking the Post article for including “personal and private information — like email addresses and phone numbers — which violate our rules.”
The controversy did prompt Twitter to revise its hacked materials policy, so that content and links obtained through dubious means will now come with a label, rather than being removed entirely, unless it’s being shared directly by hackers or those “acting in concert with them.”
And now, as first reported by The New York Times, Twitter is also allowing users to share links to the Post story itself (something I’ve confirmed through my own Twitter account).
Why the reversal? Again, the official justification for blocking the link was to prevent the spread of private information, so the company said that the story has now spread so widely, online and in the press, that the information can no longer be considered private.
Dear Mr. Zuckerberg, Mr. Dorsey, Mr. Pichai and Mr. Spiegel: We need universal digital ad transparency now!
The negative social impacts of discriminatory ad targeting and delivery are well-known, as are the social costs of disinformation and exploitative ad content. The prevalence of these harms has been demonstrated repeatedly by our research. At the same time, the vast majority of digital advertisers are responsible actors who are only seeking to connect with their customers and grow their businesses.
Many advertising platforms acknowledge the seriousness of the problems with digital ads, but they have taken different approaches to confronting those problems. While we believe that platforms need to continue to strengthen their vetting procedures for advertisers and ads, it is clear that this is not a problem advertising platforms can solve by themselves, as they themselves acknowledge. The vetting being done by the platforms alone is not working; public transparency of all ads, including ad spend and targeting information, is needed so that advertisers can be held accountable when they mislead or manipulate users.
Our research has shown:
While it doesn’t take the place of strong policies and rigorous enforcement, we believe transparency of ad content, targeting and delivery can effectively mitigate many of the potential harms of digital ads. Many of the largest advertising platforms agree; Facebook, Google, Twitter and Snapchat all have some form of an ad archive. The problem is that many of these archives are incomplete, poorly implemented, hard to access by researchers and have very different formats and modes of access. We propose a new standard for universal ad disclosure that should be met by every platform that publishes digital ads. If all platforms commit to the universal ad transparency standard we propose, it will mean a level playing field for platforms and advertisers, data for researchers and a safer internet for everyone.
The public deserves full transparency of all digital advertising. We want to acknowledge that what we propose will be a major undertaking for platforms and advertisers. However, we believe that the social harms currently being borne by users everywhere vastly outweigh the burden universal ad transparency would place on ad platforms and advertisers. Users deserve real transparency about all ads they are bombarded with every day. We have created a detailed description of what data should be made transparent that you can find here.
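To make the shape of such a standard concrete, a single disclosure record might carry fields along these lines. This is purely an illustrative sketch — the field names and structure here are assumptions, not drawn from the researchers’ actual specification:

```python
from dataclasses import dataclass, field

@dataclass
class AdDisclosure:
    """Hypothetical universal ad-transparency record (illustrative fields only)."""
    ad_id: str
    platform: str
    advertiser: str
    content_url: str             # link to an archived copy of the ad creative
    spend_usd_low: float         # platforms often report spend as a range
    spend_usd_high: float
    impressions: int
    targeting: dict = field(default_factory=dict)  # criteria the advertiser chose
    delivery: dict = field(default_factory=dict)   # audience actually reached

# Example record for a hypothetical political ad
record = AdDisclosure(
    ad_id="abc-123",
    platform="ExamplePlatform",
    advertiser="Example PAC",
    content_url="https://archive.example/ads/abc-123",
    spend_usd_low=1000.0,
    spend_usd_high=1500.0,
    impressions=250_000,
    targeting={"geo": ["US-PA"], "age": "18-65+"},
    delivery={"geo": {"US-PA": 0.97}},
)
```

The key point the letter makes is that both the targeting criteria *chosen* and the delivery *achieved* would be disclosed, since the two can diverge in ways that matter for accountability.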
We researchers stand ready to do our part. The time for universal ad transparency is now.
Jason Chuang, Mozilla
Kate Dommett, University of Sheffield
Laura Edelson, New York University
Erika Franklin Fowler, Wesleyan University
Michael Franz, Bowdoin College
Archon Fung, Harvard University
Sheila Krumholz, Center for Responsive Politics
Ben Lyons, University of Utah
Gregory Martin, Stanford University
Brendan Nyhan, Dartmouth College
Nate Persily, Stanford University
Travis Ridout, Washington State University
Kathleen Searles, Louisiana State University
Rebekah Tromble, George Washington University
Abby Wood, University of Southern California
Dee Goens and Jacob Horne have both the exact and precisely opposite background that you’d expect to see from two people building a way for creators to build a sustainable economy for their followers to participate in. Coinbase, crypto-hack projects at university, KPMG, Merrill Lynch. But where’s the art?
“Believe it or not, I used to have dreams of being a rapper,” laughs Goens. “There’s a SoundCloud out there somewhere. With that passion you explore the inner workings of the music industry. I would excitedly ask industry friends about the advance and 360 deal models only to realize they were completely broken.”
And, while many may be well-intentioned, these deal structures often exploit artists, in many cases taking the majority of an artist’s ownership. “I grew curious why artists were unable to resource themselves from their community in an impactful way — but instead were forced to seek out potentially predatory relationships. To me, this was bullshit.”
Horne says that he’d always wanted to create a fashion brand.
“I always thought a fashion brand would be something I’d do after crypto,” he tells me. “I love crypto but it felt overly focused on just finance and felt like it was missing something. Then I started to play with the idea of combining these two passions and starting Saint Fame.”
While at Coinbase, Horne hacked on Saint Fame, a side project that leveraged some of the ideas on display in Zora. It was a marketplace that allowed people to sell and trade items with cryptocurrency, buying intermediate variable-value tokens redeemable for future goods.
“I realized that culture itself was shaped and built upon an old financial system that is systemically skewed against artists and communities,” says Horne. “The operating system of ownership was built in the 1600s with the Dutch East India Trading Company and early Nation States. Like what the fuck is up with that?”
“We have the internet now, we can literally create and share information to billions of people all at once, and the ownership system is the same as when people had to get on a boat for six months to send a letter. It’s time for an upgrade. Any community on the internet should be able to come together, with capital, and work towards any shared vision. That starts with empowering creators and artists to create and own the culture they’re creating. In the long term this moves to internet communities taking on societal endeavors.”
The answer that they’re working on is called Zora. It’s a marketplace with two main components but one philosophy: sustainable economics for creators.
All too often creators reap the rewards for their work only once, while the secondary economy continues to generate value out of their reach. Think of an artist, for example, who creates a piece and sells it for market value. That’s great, but thereafter, every ounce of work that the artist puts into future work, into building a name and a brand and a community for themselves, puts additional value into that piece. The artist never sees a dime from that, relying instead on the value of future releases to pay dividends on the work.
Image Credits: Zora
That’s basically the way it has always worked. I have a little background in this as I used to exhibit and was involved in running a gallery and my father is a fine artist. If he sells a painting today for $300, gets a lot better, more popular and more valued over time, the owner of that painting may re-sell it for hundreds or thousands more. He will never see a dime of that. And God forbid that an artist like him gets too locked into the gallery system, which slices off enormous chunks of the value of a piece for a square of wall space and the marketing cachet of a curator or storefront.
The same story can be told across the recording industry, fashion, sports and even social media. Lots of middle-people and lots of vigs to pay. And, unsurprisingly, the same creators of color that drive so much of The Culture are the biggest losers, hands down.
The primary Zora product is a market that allows creators or artists to launch products and then continue to participate in their second market value.
Here’s how the Zora team explains it:
On Zora, creators have the ability to set two prices: start price and max price. As community members buy and sell a token, it moves the price up or down. This makes the price dynamic as it opens price discovery on the items by the market. When people buy the token it moves the price closer to its maximum. When they sell, it moves closer to its minimum.
For an excited community like Jeff [Staple’s], this new dynamic price can cause a quick increase in the value of his sneakers. As a creator, they capture the value from selling on a price curve as well as getting a take on trading fees from the market which they now own. What used to trade on StockX is now about to trade on a creator owned market.
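The dynamic the team describes resembles a simple bonding curve. Here is a minimal, hypothetical sketch of that mechanic — Zora’s actual curve and parameters aren’t specified here, so the linear form, the class name and the numbers below are all assumptions for illustration:

```python
class LinearBondingCurve:
    """Toy model of a dynamic price that moves between a start and a max price.

    Each purchase moves the quoted price toward the maximum;
    each sale moves it back toward the start price.
    """

    def __init__(self, start_price: float, max_price: float, capacity: int):
        self.start_price = start_price
        self.max_price = max_price
        self.capacity = capacity  # tokens outstanding when price hits the max
        self.sold = 0

    def price(self) -> float:
        # Linear interpolation between start and max, based on tokens sold.
        fraction = self.sold / self.capacity
        return self.start_price + fraction * (self.max_price - self.start_price)

    def buy(self) -> float:
        cost = self.price()
        self.sold += 1
        return cost

    def sell(self) -> float:
        self.sold -= 1
        return self.price()


# A hypothetical 30-item drop priced between $100 and $500
market = LinearBondingCurve(start_price=100.0, max_price=500.0, capacity=30)
p0 = market.price()        # 100.0 at launch
for _ in range(15):
    market.buy()
p_half = market.price()    # 300.0 once half the run has been bought
```

The creator’s upside in this model comes from selling up the curve and from fees on secondary trades, rather than from a single fixed-price drop.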
There have been some early successes. Designer and marketer Jeff Staple launched a run of 30 Coca-Cola x Staple SB Dunk customs by Reverseland and their value is trending up around 234% since release. A Benji Taylor x Kevin Doan vinyl figure is up 210%.
I have seen some other stabs at this. When he was still at StockX, founder Josh Luber launched their Initial Product Offerings, a Blind Dutch Auction system that allowed the market to set a price for an item, with some of the cut of pricing above market going back to the manufacturer or brand making the offering. The focus there was brands versus individual creators (though they did launch with a Ben Baller slide). Allowing brands to tap into second market value for limited goods is a lot less of a revolution play, but the thesis is similar. I thought that was a good idea then, and I like it even better when it’s being used to democratize rather than maximize returns.
Side note: I love that this team is messing around with interesting ideas like dogfooding their own marketplace with the value of being in their own TestFlight group. I’m sort of like, is that allowed, but at the same time it’s dope and I’ve never seen anything like it.
Zora was founded in May of 2020 (right in the middle of this current panny-palooza). The team is Goens (Creators and Community), Horne (Product), Slava Kim (Design), Dai Hovey (Engineering), Ethan Daya (Engineering) and Tyson Battistella (Engineering).
Zora has raised a $2 million seed round led by Kindred Ventures, with participation from Trevor McFedries of Brud, Alice Lloyd George, Jeff Staple, Coinbase Ventures and others.
But this idea that physical goods or even digitally packaged works have to exist as finite containers of value is not a given either. Goens and Horne are pushing to challenge that too with the first big new product for Zora: community tokens. Built on Ethereum, the $RAC token is the first of its kind from Zora. André Allen Anjos, stage name RAC, is a Portuguese-American musician and producer who makes remixes that stream on the web, original music and has had commercial work featured in major brand ads.
Though he is popular and has a following in the tens of thousands, RAC is not a social media superpower. The token distribution and subsequent activity in trades and sales is purely driven by the buy-in that his fans feel. This is a key learning for a lot of players in this new economy: raw numbers are the social media equivalent of a billboard that people drive by. It may get you eyeballs, but it doesn’t guarantee action. The modern creator is living in a house with their fans, offering them access and interacting via Discord and Snap and comments.
Image Credits: Zora
But those houses are all other people’s houses, which leads into the reason that Zora is launching a token.
The token drop serves multiple purposes:
The future of Zora most immediately involves spinning up a self-service version of the marketplace, allowing creators and entrepreneurs to launch their products without a direct partnership and onboarding. There are many, many uncertainties here and the team has a lot of challenges ahead on the traction and messaging front. But as mentioned, some early releases have shown promise, and the philosophy is sound and much needed. As the creator universe/passion economy/whatever you call it depends on how old you are/fandom merchant wave rises, there is definitely an opportunity to rethink how the value of their contributions are assigned and whether there is a way to turn the long-term labor of building a community into long-term value.
The last traded price of RAC’s tape, BOY, by the way? $3,713, up 18,465%.
Instead of blocking such content/links from being shared on its service it says it will label tweets to “provide context”.
Wider Twitter rules against posting private information, synthetic and manipulated media, and non-consensual nudity all still apply — so it could still, for example, remove links to hacked material if the content being linked to violates other policies. But just tweeting a link to hacked materials isn’t an automatic takedown anymore.
Over the last 24 hours, we’ve received significant feedback (from critical to supportive) about how we enforced our Hacked Materials Policy yesterday. After reflecting on this feedback, we have decided to make changes to the policy and how we enforce it.
— Vijaya Gadde (@vijaya) October 16, 2020
The move comes hard on the heels of the company’s decision to restrict sharing of a New York Post article this week — which reported on claims that laptop hardware left at a repair shop contained emails and other data belonging to Hunter Biden, the son of U.S. presidential candidate Joe Biden.
The decision by Twitter to restrict sharing of the Post article attracted vicious criticism from high-profile Republican voices — with the likes of Senator Josh Hawley tweeting that the company is “now censoring journalists”.
Twitter’s hacked materials policy does explicitly allow “reporting on a hack, or sharing press coverage of hacking”, but the company subsequently clarified that it had acted because the Post article contained “personal and private information — like email addresses and phone numbers — which violate our rules”. (Plus the Post wasn’t reporting on a hack, but rather on the claimed discovery of a cache of emails, and on the emails themselves.)
At the same time the Post article itself is highly controversial. The scenario of how the data came to be in the hands of a random laptop repair shop, which then chose to hand it over to a key Trump ally, strains credulity — bearing the hallmarks of an election-targeting disinformation operation, as we explained on Wednesday.
Given questions over the quality of the Post’s fact-checking and journalistic standards in this case, Twitter’s decision to restrict sharing of the article actually appears to have helped reduce the spread of disinformation — even as it attracted flak to the company for censoring ‘journalism’.
(It has also since emerged that the hard drive in question was manufactured shortly before the laptop was claimed to have been dropped off at the shop. So the most likely scenario is that Hunter Biden’s iCloud was hacked and doctored emails planted on the drive, where the data could be ‘discovered’ and leaked to the press in a ham-fisted attempt to influence the U.S. presidential election. But Twitter is clearly uncomfortable that enforcing its policy led to accusations of censoring journalists.)
In a tweet thread explaining the change to its policy, Twitter’s legal, policy and trust & safety lead, Vijaya Gadde, writes: “We want to address the concerns that there could be many unintended consequences to journalists, whistleblowers and others in ways that are contrary to Twitter’s purpose of serving the public conversation.”
She also notes that when the hacked materials policy was first introduced, in 2018, Twitter had fewer tools for policy enforcement than it does now, saying: “We’ve recently added new product capabilities, such as labels to provide people with additional context. We are no longer limited to Tweet removal as an enforcement action.”
Twitter began adding contextual labels to policy-breaching tweets by US president Donald Trump earlier this year, rather than remove his tweets altogether. It has continued to expand usage of these contextual signals — such as by adding fact-checking labels to certain conspiracy theory tweets — giving itself a ‘more speech to counteract bad speech’ enforcement tool vs the blunt instrument of tweet takedowns/account bans (which it has also applied recently to the toxic conspiracy theory group, QAnon).
“We believe that labeling Tweets and empowering people to assess content for themselves better serves the public interest and public conversation. The Hacked Material Policy is being updated to reflect these new enforcement capabilities,” Gadde also says, adding: “Content moderation is incredibly difficult, especially in the critical context of an election. We are trying to act responsibly & quickly to prevent harms, but we’re still learning along the way.”
The updated policy is clearly not a free-for-all, given all other Twitter Rules against hacked material apply (such as doxxing). Though there’s a question of whether tweets linking to the Post article would still be taken down under the updated policy if the story did indeed contain personal info (which remains against Twitter’s policy).
But the new ‘third way’ policy for hacked materials does potentially leave Twitter’s platform as a conduit for the spread of political disinformation — in instances where it’s been credulously laundered by the press. (Albeit, Twitter can justifiably point the finger of blame at poor journalist standards at that point.)
The new policy also raises the question of how Twitter will determine whether or not a person is working ‘in concert’ with hackers. Just spitballing here, but if — say — on the eve of the poll, Trump were to share some highly dubious information that smeared his key political rival and which he said he’d been handed by Russian president Vladimir Putin, would Twitter step in and remove it?
We can only hope we don’t have to find out.
FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.
In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.
At issue is the legal protections for platforms when they decide what content to allow and what to block. Some say they are clearly protected by the First Amendment (this is how it is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.
Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.
In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.
In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.
“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”
The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)
Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just one part of a much larger and important body of law.)
Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.
The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”
Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the conservative faction, it may be said that this rulemaking is a foregone conclusion; the net neutrality debacle showed that these Commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.
The process will be just as drawn out and public as previous ones, however, which means a cavalcade of comments may once again show the FCC ignoring public opinion, experts, and lawmakers alike as it invents or eliminates its roles as it sees fit. Be ready to share your feedback with the FCC, but there’s no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point the urgency of reinterpreting the law to the White House’s liking may change considerably.
YouTube today joined social media platforms like Facebook and Twitter in taking more direct action to prohibit the distribution of conspiracy theories like QAnon.
The company announced that it is expanding its hate and harassment policies to ban videos “that [target] an individual or group with conspiracy theories that have been used to justify real-world violence,” according to a statement.
YouTube specifically pointed to videos that harass or threaten someone by claiming they are complicit in the false conspiracy theories promulgated by adherents to QAnon.
YouTube isn’t going as far as either of the other major social media outlets in establishing an outright ban on videos or articles that promote the outlandish conspiracies, instead focusing on material that targets individuals.
“As always, context matters, so news coverage on these issues or content discussing them without targeting individuals or protected groups may stay up,” the company said in a statement. “We will begin enforcing this updated policy today, and will ramp up in the weeks to come.”
It’s the latest step in social media platforms’ efforts to combat the spread of disinformation and conspiracy theories that are increasingly linked to violence and terrorism in the real world.
In 2019, the FBI for the first time identified fringe conspiracy theories like QAnon as a domestic terrorist threat. Adherents to QAnon falsely claim that famous celebrities and Democratic politicians are part of a secret, Satanic, child-molesting cabal plotting to undermine Donald Trump.
In July, Twitter banned 7,000 accounts associated with the conspiracy theory, and last week Facebook announced a ban on the distribution of QAnon-related materials and propaganda across its platforms.
These actions by the social media platforms may be too little, too late, considering how widely the conspiracy theories have spread — and the damage they’ve already done, thanks to incidents like the attack on a pizza parlor in Washington, DC that landed the gunman in prison.
The recent steps at YouTube followed earlier efforts to stem the distribution of conspiracy theories by making changes to its recommendation algorithm to avoid promoting conspiracy-related materials.
However, as TechCrunch noted previously, it was over the course of 2018 and 2019 that QAnon conspiracies really took root; it’s now a shockingly mainstream political belief system with its own Congressional candidates.
So much for YouTube’s vaunted 70% drop in views coming from the company’s search and discovery systems. The company said that when it looked at QAnon content, it saw the number of views coming from non-subscribed recommendations dropping by over 80% since January 2019.
YouTube noted that it may take additional steps going forward as it looks to combat conspiracy theories that lead to real-world violence.
“Due to the evolving nature and shifting tactics of groups promoting these conspiracy theories, we’ll continue to adapt our policies to stay current and remain committed to taking the steps needed to live up to this responsibility,” the company said.