Hello friends, and welcome back to Week in Review.
Last week, we dove into the truly bizarre machinations of the NFT market. This week, we’re talking about something that’s a little bit more impactful on the current state of the web — Apple’s NeuralHash kerfuffle.
In the past month, Apple did something it generally has done an exceptional job avoiding — the company made what seemed to be an entirely unforced error.
In early August — seemingly out of nowhere** — the company announced that by the end of the year they would be rolling out a technology called NeuralHash that actively scanned the libraries of all iCloud Photos users, seeking out image hashes that matched known images of child sexual abuse material (CSAM). For obvious reasons, the on-device scanning could not be opted out of.
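To illustrate the general technique at issue — matching image hashes against a list of known hashes — here is a toy perceptual-hash sketch. This is emphatically not Apple's NeuralHash, which uses a neural network to produce hashes robust to resizing and recompression; the `average_hash` function and the distance threshold below are illustrative assumptions only:

```python
# Toy perceptual-hash matching sketch. NOT Apple's NeuralHash algorithm:
# this uses a naive "average hash" over a grayscale grid purely to show
# how fuzzy hash matching differs from exact cryptographic matching.

def average_hash(pixels):
    """Hash a grayscale image (list of rows of 0-255 ints) to a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: is the pixel brighter than the image's mean?
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_hash(image_hash, known_hashes, threshold=2):
    # Perceptual hashes tolerate small edits to an image, so matching
    # uses a distance threshold rather than exact equality.
    return any(hamming_distance(image_hash, k) <= threshold
               for k in known_hashes)
```

The key property — that a slightly altered image still matches — is also what researchers probed when they found NeuralHash collisions.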
This announcement was not coordinated with other major consumer tech giants; Apple pushed forward alone.
Researchers and advocacy groups had almost universally negative feedback for the effort, raising concerns that it could create new abuse vectors, allowing actors like governments to detect on-device information they regarded as objectionable. As my colleague Zach noted in a recent story, “The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.”
(The announcement also reportedly generated some controversy inside of Apple.)
The issue — of course — wasn’t that Apple was looking for ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors towards similar solutions), and was doing so without external public input about possible ramifications or necessary safeguards.
Long story short, over the past month researchers discovered Apple’s NeuralHash wasn’t as airtight as hoped, and the company announced Friday that it was delaying the rollout “to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is read and seen by as few people as possible, and it’s clear why they’d want that. It’s a major embarrassment for Apple, and as with any delayed rollout like this, it’s a sign that their internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue that they were tackling. This isn’t really a dig at Apple’s team building this so much as it’s a dig at Apple for trying to solve a problem like this inside the Apple Park vacuum while adhering to its annual iOS release schedule.
Image Credits: Bryce Durbin / TechCrunch
Apple is increasingly looking to make privacy a key selling point for the iOS ecosystem, and as a result of this productization, has pushed development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when they shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.
You obviously can’t tap public opinion for every product update, but perhaps wide-ranging and trail-blazing security and privacy features should be treated a bit differently than the average product update. Apple’s lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raises some questions about whether the company fully respects how the choices they make for iOS affect the broader internet.
Delaying the feature’s rollout is a good thing, but let’s all hope they take that time to reflect more broadly as well.
** Though the announcement was a surprise to many, Apple’s development of this feature wasn’t coming completely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some methods of encryption in some of its biggest markets.
Back in October of 2020, then United States AG Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India and Japan in signing a letter raising major concerns about how implementations of encryption tech posed “significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children.” The letter effectively called on tech industry companies to get creative in how they tackled this problem.
Here are the TechCrunch news stories that especially caught my eye this week:
LinkedIn kills Stories
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely won’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’ll be suspending the feature at the end of the month. RIP.
FAA grounds Virgin Galactic over questions about Branson flight
While all appeared to go swimmingly for Richard Branson’s trip to space last month, the FAA has some questions regarding why the flight seemed to unexpectedly veer so far off the cleared route. The FAA is preventing the company from further launches until they find out what the deal is.
Apple buys a classical music streaming service
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.
TikTok parent company buys a VR startup
It isn’t a huge secret that ByteDance and Facebook have been trying to copy each other’s success at times, but many probably weren’t expecting TikTok’s parent company to wander into the virtual reality game. The Chinese company bought the startup Pico, which makes consumer VR headsets for China and enterprise VR products for North American customers.
Twitter tests an anti-abuse ‘Safety Mode’
The same features that make Twitter an incredibly cool product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Their latest solution is more individual user controls, which Twitter is testing out with a new “safety mode” which pairs algorithmic intelligence with new user inputs.
Some of my favorite reads from our Extra Crunch subscription service this week:
Our favorite startups from YC’s Demo Day, Part 1
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”
“…Yesterday, the TechCrunch team covered the first half of this batch, as well as the startups with one-minute pitches that stood out to us. We even podcasted about it! Today, we’re doing it all over again. Here’s our full list of all startups that presented on the record today, and below, you’ll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go ‘oh wait, what’s this?’”
All the reasons why you should launch a credit card
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users….”
Cohort analysis is a way of evaluating your business that involves grouping customers into “cohorts” and observing how they behave over time. A commonly used approach is monthly cohort analysis, where customers are grouped by the month they signed up, allowing you to observe how someone who joined in November compares to someone who signed up the month before.
Cohort analysis gives you a multivariable, forward-looking view of your business compared to more simple and static values like averages or totals.
Cohort analysis is flexible and can be used to analyze a variety of performance metrics including revenue, acquisition costs and churn.
Let’s imagine you’re the CMO of the “Bluetooth Coffee Company.” You sell a tech-enabled “coffee composer” that brews coffee, tracks consumption and orders replacement coffee when users are running low. The longer your customers are subscribers, the more money you make. You recently ran a Black Friday feature on a popular deals site and you’re interested to know if you should run it again.
The chart below is a simple analysis you might do to gauge your marketing performance. It shows the total customers added each month, and a clear spike in November following the Black Friday promotion. At first glance, things look good — you brought in more than double the monthly customers in November compared to October.
Image Credits: Sagard & Portage Ventures
But before you rebook the promotion, you should ask if these new Black Friday customers are as valuable as they seem. Comparing monthly cohorts is a good way to find out.
Below is a monthly cohort analysis of new customers between September 2020 and February 2021. Like our previous chart, we’ve listed the monthly cohort size, but we’ve also included the customer engagement rate (calculated by dividing daily active users by monthly active users, or DAU/MAU) for each month (M1 is month 1, M2 is month 2 and so on).
This analysis lets us see how the customer engagement of each monthly cohort compares to the next.
Image Credits: Sagard & Portage Ventures
From the figures above, we see that most cohorts have a customer engagement rate of 42%-46% in their first month (M1), meaning 42%-46% of new customers use the coffee composer every day. The November cohort, however, has materially lower engagement (M1, 30%), and it remains lower in subsequent months (M2, 26%; M3, 27%). Interestingly, the engagement drop is confined to the November cohort, returning to normal with the December cohort (M1, 45%).
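The cohort grouping and DAU/MAU engagement calculation described above can be sketched in a few lines of Python. The data layout and numbers here are hypothetical, and for simplicity DAU is averaged over days that saw any activity:

```python
# Minimal sketch of monthly cohort engagement (DAU/MAU), per the
# walkthrough above. Customer IDs and dates are hypothetical.
from collections import defaultdict
from datetime import date

def month_index(signup, day):
    """1-based cohort month: M1 is the signup month, M2 the next, etc."""
    return (day.year - signup.year) * 12 + (day.month - signup.month) + 1

def cohort_engagement(signups, activity):
    """signups: {customer: signup date}; activity: iterable of (customer, day).

    Returns {(cohort 'YYYY-MM', 'M<n>'): DAU/MAU}, where DAU is the average
    count of distinct daily actives (over active days, a simplification)
    and MAU the count of distinct monthly actives."""
    daily = defaultdict(set)    # (cohort, Mn, day) -> active customers
    monthly = defaultdict(set)  # (cohort, Mn) -> active customers
    for cust, day in activity:
        signup = signups[cust]
        key = (signup.strftime('%Y-%m'), f'M{month_index(signup, day)}')
        daily[key + (day,)].add(cust)
        monthly[key].add(cust)
    rates = {}
    for key, users in monthly.items():
        days = [d for d in daily if d[:2] == key]
        dau = sum(len(daily[d]) for d in days) / len(days)
        rates[key] = dau / len(users)
    return rates
```

For example, a November 2020 cohort of two customers where one is active two days and the other one day yields an M1 engagement rate of 0.75 under this simplified averaging.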
It’s been a long time coming, but Facebook is finally feeling some heat from Europe’s much-trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.
The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), which began being applied in May 2018.
Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.
A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.
The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.
Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).
In sum, the DPC found a range of transparency infringements by WhatsApp — spanning articles 5(1)(a); 12, 13 and 14 of the GDPR.
In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.
In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:
“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.”
It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.
The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.
So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.
…system to add years until this fine will actually be paid – but at least it's a start… 10k cases per year to go!
— Max Schrems (@maxschrems) September 2, 2021
Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it rapped the social network’s knuckles over a historical security breach with a fine of $550k.
WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.
Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.
And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.
The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, who are also of course Internet companies.
Under the EU’s flagship data protection regulation, decisions on cross-border cases require agreement from all affected regulators across the 27 Member States. The GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU), but objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions), as has happened here in the WhatsApp case.
Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to the draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.
Through that (admittedly rather painful) joint-working, the DPC was required to increase the size of the fine issued to WhatsApp. This mirrors what happened with its draft Twitter decision, where the DPC had also suggested an even smaller penalty in the first instance.
While there is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it’s taken well over half a year to hash out all the disputes about WhatsApp’s lossy hashing and so forth — the fact that ‘corrections’ are being made to its decisions, and that conclusions can land (if not jointly agreed, then at least via a consensus pushed through by the EDPB), is a sign that the process, while slow and creaky, is working.
Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations — with some accusing the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (by the issues it doesn’t open an enquiry into, or complaints it simply drops or ignores). Its loudest critics argue it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU. The conclusion that accompanies that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.
But while it’s true that a $267M penalty is still the equivalent of a parking ticket for Facebook, orders to change how such adtech giants are able to process people’s information have the potential to be a far more significant correction on problematic business models. Again, though, time will be needed to tell.
In a statement on the WhatsApp decision today, noyb, the privacy advocacy group founded by long-time European privacy campaigner Max Schrems, said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”
Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.
In further remarks, Schrems and noyb said: “WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”
Facebook is getting into fantasy sports and other types of fantasy games. The company this morning announced the launch of Facebook Fantasy Games in the U.S. and Canada on the Facebook app for iOS and Android. Some games are described as “simpler” versions of the traditional fantasy sports games already on the market, while others allow users to make predictions associated with popular TV series, like “Survivor” or “The Bachelorette.”
The first game to launch is Pick & Play Sports, in partnership with Whistle Sports, where fans get points for correctly predicting the winner of a big game, the points scored by a top player, or other events that unfold during the match. Players can also earn bonus points for building a streak of correct predictions over several days. This game is arriving today.
Image Credits: Facebook
In the months ahead, it will be followed by other games in sports, TV and pop culture, including Fantasy Survivor, where players choose a set of Castaways from the popular CBS TV show to join their fantasy team, and Fantasy “The Bachelorette,” where fans will pick a group of men from the suitors vying for the Bachelorette’s heart and get points based on their actions and events that take place during the show. Other upcoming sports-focused games include MLB Home Run Picks, where players pick the team that they think will hit the most home runs, and LaLiga Winning Streak, where fans predict the team that will win that day.
In addition to top players being featured on leaderboards, games have a social component for those who want to play with friends.
Image Credits: Facebook
Players can create their own fantasy league with friends to compete with one another or against other fans, either publicly or privately. League members can compare scores with each other and will have a place where they can share picks, reactions and comments. This league area resembles a private group on Facebook, as it offers its own compose box for posting only to members and its own dedicated feed. However, the page is designed to support groups with specific buttons to “play” or view the “leaderboard,” among others.
The addition of fantasy games could help Facebook increase the time users spend on its app at a time when the company is facing significant competition in social, namely from TikTok. According to App Annie, the average monthly time spent per user in TikTok grew faster than other top social apps in 2020, including by 70% in the U.S., surpassing Facebook.
Facebook had dabbled in the idea of becoming a second screen companion for live events in the past, but in a different way than fantasy sports and games. Instead, its R&D division tested Venue, which worked as a way for fans to comment on live events which were hosted in the app by well-known personalities.
The new league games will be available from the bookmark menu on the mobile app and in News Feed through notifications.
TikTok is expanding its in-app parental controls feature, Family Pairing, with educational resources designed to help parents better support their teenage users, the company announced this morning. The pairing feature, which launched to global users last year, allows parents of teens aged 13 and older to connect their accounts with the child’s so the parent can set controls related to screen time use, who the teen can direct message, and more. But the company heard from teens that they also want their voices to be heard when it comes to parents’ involvement in their digital life.
To create the new educational content, TikTok partnered with the online safety nonprofit, Internet Matters. The organization developed a set of resources in collaboration with teens that aim to offer parents tips about navigating the TikTok landscape and teenage social media usage in general.
Teens said they want parents to understand the rules they’re setting when they use features like Family Pairing and they want them to be open to having discussions about the time teens spend online. And while teens don’t mind when parents set boundaries, they also want to feel they’ve earned some level of trust from the adults in their life.
The older teens get, the more autonomy they want to have on their own device and social networks, as well. They may even tell mom or dad that they don’t want them to follow them on a given platform.
This doesn’t necessarily mean the teen is up to no good, the new resources explain to parents. The teens just want to feel like they can hang out with their friends online without being so closely monitored. This has become an important part of the online experience today, in the pandemic era, where many younger people are spending more time at home instead of socializing with friends in real-life or participating in other in-person group activities.
Image Credits: TikTok
Teens said they also want to be able to come to parents when something goes wrong, without fearing that they’ll be harshly punished or that the parent will panic about the situation. The teens know there’ll be consequences if they break the rules, but they want parents to work through other tough situations with them and devise solutions together, not just react in anger.
All this sounds like straightforward, common sense advice, but parents on TikTok often have varying degrees of comfort with their teens’ digital life and use of social networks, so some basic guidelines explaining what teens want and feel make sense to include. That said, the parents who are technically savvy enough to enable a parental control feature like Family Pairing may already be clued into best practices.
Image Credits: TikTok
In addition, this sort of teen-focused privacy and safety content is designed to help TikTok better establish itself as a platform working to protect its younger users — an increasingly necessary stance in light of the potential regulation that big tech has been trying to get ahead of as of late. TikTok, for instance, announced in August it would roll out more privacy protections for younger teens aimed at making the app safer. Facebook, Google and YouTube also did the same.
TikTok says parents or guardians who have currently linked their account to a teen’s account via the Family Pairing feature will receive a notification that prompts them to find out more about the teens’ suggestions and how to approach those conversations about digital literacy and online safety. Parents who sign up and enable Family Pairing for the first time will also be guided to the resources.
This is the second post in a series on the Facebook monopoly. The first post explored how the U.S. Federal Trade Commission should define the Facebook monopoly. I am inspired by Cloudflare’s recent post explaining the impact of Amazon’s monopoly in its industry.
Perhaps it was a competitive tactic, but I genuinely believe it more a patriotic duty: guideposts for legislators and regulators on a complex issue. My generation has watched with a combination of sadness and trepidation as legislators who barely use email question the leading technologists of our time about products that have long pervaded our lives in ways we don’t yet understand.
I, personally, and my company both stand to gain little from this — but as a participant in the latest generation of social media upstarts, and as an American concerned for the future of our democracy, I feel a duty to try.
Mark Zuckerberg has reached his Key Largo moment.
In May 1972, executives of the era’s preeminent technology company — AT&T — met at a secret retreat in Key Largo, Florida. Their company was in crisis.
At the time, Ma Bell’s breathtaking monopoly consisted of a holy trinity: Western Electric (the vast majority of phones and cables used for American telephony), the lucrative long distance service (for both personal and business use) and local telephone service, which the company subsidized in exchange for its monopoly.
Over the next decade, all three government branches — legislators, regulators and the courts — parried with AT&T’s lawyers as the press piled on, battering the company’s reputation in the process. By 1982, a consent decree forced AT&T’s dismantling. The biggest company on earth withered to 30% of its book value and seven independent “Baby Bell” regional operating companies. AT&T’s brand would live on, but the business as the world knew it was dead.
Mark Zuckerberg is, undoubtedly, the greatest technologist of our time. For over 17 years, he has outgunned, outsmarted and outperformed like no software entrepreneur before him. Earlier this month, the U.S. Federal Trade Commission refiled its sweeping antitrust case against Facebook.
Its own holy trinity of Facebook Blue, Instagram and WhatsApp is under attack. All three government branches — legislators, regulators and the courts — are gaining steam in their fight, and the press is piling on, battering the company’s reputation in the process. Facebook, the AT&T of our time, is at the brink. For so long, Zuckerberg has told us all to move fast and break things. It’s time for him to break Facebook.
If Facebook does exist to “make the world more open and connected, and not just to build a company,” as Zuckerberg wrote in the 2012 IPO prospectus, he will spin off Instagram and WhatsApp now so that they have a fighting chance. It would be the ultimate Zuckerbergian chess move. Zuckerberg would lose voting control and thus power over all three entities, but in his action he would successfully scatter the opposition. The rationale is simple.
I write this as an admirer; I genuinely believe much of the criticism Zuckerberg has received is unfair. Facebook faces Sisyphean tasks. The FTC will not let Zuckerberg sneeze without an investigation, and the company has failed to innovate.
Given no chance to acquire new technology and talent, how can Facebook survive over the long term? In 2006, Terry Semel of Yahoo offered $1 billion to buy Facebook. Zuckerberg reportedly remarked, “I just don’t know if I want to work for Terry Semel.” Even if the FTC were to allow it, this generation of founders will not sell to Facebook. Unfair or not, Mark Zuckerberg has become Terry Semel.
It is not a matter of if; it is a matter of when.
In a speech on the floor of Congress in 1890, Senator John Sherman, the founding father of the modern American antitrust movement, famously said, “If we will not endure a king as a political power, we should not endure a king over the production, transportation and sale of any of the necessities of life. If we would not submit to an emperor, we should not submit to an autocrat of trade with power to prevent competition and to fix the price of any commodity.”
This is the sentiment driving the building resistance to Facebook’s monopoly, and it shows no sign of abating. Zuckerberg has proudly called Facebook the fifth estate. In the U.S., we only have four estates.
All three branches of the federal government are heating up their pursuit. In the Senate, an unusual bipartisan coalition is emerging, with Senators Amy Klobuchar (D-MN), Mark Warner (D-VA), Elizabeth Warren (D-MA) and Josh Hawley (R-MO) each waging a war from multiple fronts.
In the House, Speaker Nancy Pelosi (D-CA) has called Facebook “part of the problem.” Lina Khan’s FTC is likewise only getting started, with unequivocal support from a White House that feels burned by Facebook’s disingenuous lobbying. The Department of Justice will join, too, aided by state attorneys general. And the courts will continue to turn the wheels of justice, slowly but surely.
In the wake of Facebook co-founder Chris Hughes’ scathing 2019 New York Times op-ed, Zuckerberg said that Facebook’s immense size allows it to spend more on trust and safety than Twitter makes in revenue.
“If what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference,” Zuckerberg said.
This could be true, but it does not prove that the concentration of such power in one man’s hands is consistent with U.S. public policy. And the centralized operations could be rebuilt easily in standalone entities.
Time and time again, whether on Holocaust denial, election propaganda or vaccine misinformation, Zuckerberg has struggled to make quick judgments when presented with the information his trust and safety team uncovers. And even before a decision is made, the structure of the team disincentivizes it from even measuring anything that could harm Facebook’s brand. This is inherently inconsistent with U.S. democracy. The New York Times’ army of reporters will not stop uncovering scandal after scandal, contradicting Zuckerberg’s narrative. The writing is on the wall.
Facebook Blue, Instagram and WhatsApp all face existential threats. Pressure from the government will stifle Facebook’s efforts to right the ship.
For so long, Facebook has dominated the social media industry. But if you ask Chinese technology executives about Facebook today, they quote Tencent founder Pony Ma: “When a giant falls, his corpse will still be warm for a while.”
Facebook’s decline begins with its brand. The endless, cascading scandals of the last decade have irreparably harmed its image. Younger users refuse to adopt the flagship Facebook Blue. The company’s internal polling on two key metrics — good for the world (GFW) and cares about users (CAU) — shows Facebook’s reputation is in tatters. Talent is fleeing, too; Instacart alone recently poached 55 Facebook executives.
In 2012 and 2014, Instagram and WhatsApp were real dangers. Facebook extinguished both through acquisition. Yet today they represent the company’s two most promising, underutilized assets. They are the underinvested telephone networks of our time.
Weeks ago, Instagram head Adam Mosseri announced that the company no longer considers itself a photo-sharing app. Instead, its focus is entertainment. In other words, as the media widely reported, Instagram is changing to compete with TikTok.
TikTok’s strength represents an existential threat. U.S. children 4 to 15 already spend over 80 minutes a day on ByteDance’s TikTok, and it’s just getting started. Its demographics are quickly expanding beyond teenagers, as social products’ audiences always do. For Instagram, it could be too little too late — as a part of Facebook, Instagram cannot acquire the technology and retain the talent it needs to compete with TikTok.
Imagine Instagram acquisitions of Squarespace to bolster its e-commerce offerings, or Etsy to create a meaningful marketplace. As a part of Facebook, Instagram is strategically adrift.
Likewise, a standalone WhatsApp could easily be a $100 billion market cap company. WhatsApp has a proud legacy of robust security offerings, but its brand has been tarnished by associations with Facebook. Discord’s rise represents a substantial threat, and WhatsApp has failed to innovate to account for this generation’s desire for community-driven messaging. Snapchat, too, is in many ways a potential WhatsApp killer; its young users use photography and video as a messaging medium. Facebook’s top augmented reality talents are leaving for Snapchat.
With 2 billion monthly active users, WhatsApp could be a privacy-focused alternative to Facebook Blue, and it would logically introduce expanded profiles, photo-sharing capabilities and other features that would strengthen its offerings. Inside Facebook, WhatsApp has suffered from underinvestment as a potential threat to Facebook Blue and Messenger. Shareholders have suffered for it.
Beyond Instagram and WhatsApp, Facebook Blue itself is struggling. Q2’s earnings may have skyrocketed, but the increase in revenue hid a troubling sign: The average price per ad rose 47%, while the number of ads delivered grew by just 6%. This means Facebook is struggling to find new places to run its ads. Why? The core social graph of Facebook is too old.
I fondly remember the day Facebook came to my high school; I have thousands of friends on the platform. I do not use Facebook anymore — not for political reasons, but because my friends have left. A decade ago, hundreds of people wished me happy birthday every year. This year it was 24, half of whom are over the age of 50. And I’m 32 years old. Teen girls run the social world, and many of them don’t even have Facebook on their phones.
Zuckerberg’s newfound push into the metaverse has been well covered, but the question remains: Why wouldn’t a Facebook serious about the metaverse acquire Roblox? Of course, the FTC would currently never allow it.
Facebook’s current clunky attempt at a hardware solution, with an emphasis on the workplace, shows little sign of promise. The launch was hardly propitious, as CNN reported, “While Bosworth, the Facebook executive, was in the middle of describing how he sees Workrooms as a more interactive way to gather virtually with coworkers than video chat, his avatar froze midsentence, the pixels of its digital skin turning from flesh-toned to gray. He had been disconnected.”
This is not the indomitable Facebook of yore. This is graying Facebook, freezing midsentence.
Zuckerberg’s control of 58% of Facebook’s voting shares has forestalled a typical Wall Street reckoning, but investors are tiring of his unilateral power. Many justifiably believe the company is more valuable as the sum of its parts. The success of AT&T’s breakup is a case in point.
Five years after AT&T’s 1984 breakup, AT&T and the Baby Bells’ value had doubled compared to AT&T’s pre-breakup market capitalization. Pressure from Japanese entrants battered Western Electric’s market share, but greater competition in telephony spurred investment and innovation among the Baby Bells.
AT&T turned its focus to competing with IBM and preparing for the coming information age. A smaller AT&T became more nimble, ready to focus on the future rather than dwell on the past.
Standalone Facebook Blue, Instagram and WhatsApp could drastically change their futures by attracting talent and acquiring new technologies.
Zuckerberg has always been one step ahead. And when he wasn’t, he was famously unprecious: “Copying is faster than innovating.” If he really believes in Facebook’s mission and recognizes that the situation cannot possibly get any better from here, he will copy AT&T’s solution before it is forced upon him.
Regulators are tying Zuckerberg’s hands behind his back as the company weathers body blows and uppercuts from Beijing to Silicon Valley. As Zuckerberg’s idol Augustus Caesar might have once said, carpe diem. It’s time to break Facebook.
Facebook is looking to create a standalone advisory committee for election-related policy decisions, according to a new report from The New York Times. The company has reportedly approached a number of policy experts and academics it is interested in recruiting for the group, which could give the company cover for some of its most consequential choices.
The group, which the Times characterizes as a commission, would potentially be empowered to weigh in on issues like election misinformation and political advertising — two of Facebook’s biggest policy headaches. Facebook reportedly plans for the commission to be in place for the 2022 U.S. midterm elections and could announce its formation as soon as this fall.
Facebook’s election commission could be modeled after the Oversight Board, the company’s first experiment in quasi-independent external decision making. The Oversight Board began reviewing cases in October of last year, but didn’t gear up in time to impact the flood of election misinformation that swept the platform during the U.S. presidential election. Initially, the board could only make policy rulings based on material that was already removed from Facebook.
The company touts the independence of the Oversight Board, and while it does operate independently, Facebook created the group and appointed its four original co-chairs. The Oversight Board is able to set policy precedents and make binding per-case moderation rulings, but ultimately its authority comes from Facebook itself, which at any point could decide to ignore the board’s decisions.
A similar external policy-setting body focused on elections would be very politically useful for Facebook. The company is a frequent target for both Republicans and Democrats, with the former claiming Facebook censors conservatives disproportionately and the latter calling attention to Facebook’s long history of incubating conspiracies and political misinformation.
Neither side was happy when Facebook decided to suspend political advertising after the election — a gesture that failed to address the exponential spread of organic misinformation. Facebook asked the Oversight Board to review its decision to suspend former President Trump, though the board ultimately kicked its most controversial case back to the company itself.
The Y Combinator application season is upon us. I have been through YC a couple of times and have reviewed thousands of applications as a volunteer in later years.
Typically, you hear advice focused on ways to improve your YC application so it gets accepted. Here are some tips on what not to do and why so many YC applications get rejected. I’ve also put down some advice about what else to anticipate and take into consideration as you navigate the application process.
In short, don’t overthink your application, and keep it simple and straightforward.
When in doubt, read YC’s instructions and answer the question literally. Avoid verbose marketing lingo and keep answers short and concise.
The best applications are often those made at the last minute, because applicants do not overthink their responses and toil over details they think need to be shoved into a question. While I do not recommend submitting applications at the deadline because the system has had issues receiving submissions, you can capture the essence of last-minute submissions by being clear and concise.
Remember that your application should be good enough to get an interview, not win a prize. Go back to work instead of spending more time perfecting an application.
YC experiments frequently. For this batch and the last, there was an early deadline that would give accepted teams access to YC before the batch officially began. Applying early gives you an opportunity to land an interview in the early round and to update your application to be considered in the standard round.
The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.
Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.
He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal, Edwards publicly announced that he was deleting his account with the social network, accusing Facebook of not complying with the country’s privacy laws.
An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.
If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.
Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.
But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.
For months now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.
Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.
A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.
The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.
Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.
Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.
Oliver Dowden, the UK Minister for Digital, Culture, Media and Sport, says that the UK will break away from GDPR, and will no longer require cookie warnings, other than those posing a 'high risk'.https://t.co/2ucnppHrIm pic.twitter.com/RRUdpJumYa
— dan barker (@danbarker) August 25, 2021
“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.
The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world class privacy standards and replace them with worse protections for data.
If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.
It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.
We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.
Data protection experts are already warning of a regulatory stooge. The Telegraph, for its part, suggests Edwards is seen by government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.
In a particularly eyebrow raising detail, the newspaper goes on to report that government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.
All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.
In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”
The lurking iceberg for government is of course that if it wades in and rips up a carefully balanced, gold standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its/the market’s choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.
You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to the lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…
UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.
Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.
The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.
This is because its current data adequacy deal with the bloc — which allows EU citizens’ data to continue flowing freely to the UK — is precariously placed: It was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR.
So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy.
Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”. Moreover, the adequacy deal is also the first with a baked in sunset clause — meaning it will automatically expire in four years.
So even if the Commission avoids taking proactive action over slipping privacy standards in the UK there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.
The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.
Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Financial Centre and Colombia.
Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.
“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.
As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).
So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.
Everyone hates cookie banners, sure, but that’s a case for strengthening not weakening people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns so that Internet users don’t constantly have to affirm they want their information protected. Instead the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free for all on people’s data.
A London-headquartered startup called LOVE, valued at $17 million following its pre-seed funding, aims to redefine how people stay in touch with close family and friends. The company is launching a messaging app that offers a combination of video calling as well as asynchronous video and audio messaging, in an ad-free, privacy-focused experience with a number of bells and whistles, including artistic filters and real-time transcription and translation features.
But LOVE’s bigger differentiator may not be its product alone, but rather the company’s mission.
LOVE aims for its product direction to be guided by its user base in a democratic fashion as opposed to having the decisions made about its future determined by an elite few at the top of some corporate hierarchy. In addition, the company’s longer-term goal is ultimately to hand over ownership of the app and its governance to its users, the company says.
These concepts have emerged as part of bigger trends towards a sort of “Web 3.0,” or next phase of internet development, where services are decentralized, user privacy is elevated, data is protected and transactions take place on digital ledgers, like a blockchain, in a more distributed fashion.
LOVE’s founders are proponents of this new model, including serial entrepreneur Samantha Radocchia, who previously founded three companies and was an early advocate for the blockchain as the co-founder of Chronicled, an enterprise blockchain company focused on the pharmaceutical supply chain.
As someone who’s been interested in emerging technology since her days of writing her anthropology thesis on currency exchanges in “Second Life’s” virtual world, she’s now faculty at Singularity University, where she’s given talks about blockchain, AI, Internet of Things, Future of Work, and other topics. She’s also authored an introductory guide to the blockchain with her book “Bitcoin Pizza.”
Co-founder Christopher Schlaeffer, meanwhile, held a number of roles at Deutsche Telekom, including chief product & innovation officer, corporate development officer and chief strategy officer, where he along with Google execs introduced the first mobile phone to run Android. He was also chief digital officer at the telecommunication services company VEON.
The two crossed paths after Schlaeffer had already begun the work of organizing a team to bring LOVE to the public, which includes co-founders Chief Technologist Jim Reeves, also previously of VEON, and Chief Designer Timm Kekeritz, previously an interaction designer at international design firm IDEO in San Francisco, design director at IXDS and founder of design consultancy Raureif in Berlin, among other roles.
Image Credits: LOVE
What attracted her to join as CEO, Radocchia explained, was the potential to create a new company that upholds more positive values than what’s often seen today — in fact, the brand name “LOVE” is a reference to this aim. She was also interested in the potential to think through what she describes as “new business models that are not reliant on advertising or harvesting the data of our users.”
To that end, LOVE plans to monetize without any advertising. While the company isn’t ready to explain its business model in full, it would involve users opting in to services through granular permissions and membership, we’re told.
“We believe our users will much rather be willing to pay for services they consciously use and grant permissions to in a given context than have their data used for an advertising model which is simply not transparent,” says Radocchia.
LOVE expects to share more about the model next year.
As for the LOVE app itself, it’s a fairly polished mobile messenger offering an interesting combination of features. Like any other video chat app, you can video call with friends and family, either in one-on-one calls or in groups. Currently, LOVE supports up to five call participants, but expects to expand that as it scales. The app also supports video and audio messaging for asynchronous conversations. There are already tools that offer this sort of functionality on the market, of course — like WhatsApp, with its support for audio messages, or video messenger Marco Polo. But they don’t offer quite the same expanded feature set.
Image Credits: LOVE
For starters, LOVE limits its video messages to 60 seconds, for brevity’s sake. (As anyone who’s used Marco Polo knows, videos can become a bit rambling, which makes it harder to catch up when you’re behind on group chats.) In addition, LOVE allows you to both watch the video content as well as read the real-time transcription of what’s being said — the latter which comes in handy not only for accessibility’s sake, but also for those times you want to hear someone’s messages but aren’t in a private place to listen or don’t have headphones. Conversations can also be translated into 50 languages.
“A lot of the traditional communication or messenger products are coming from a paradigm that has always been text-based,” explains Radocchia. “We’re approaching it completely differently. So while other platforms have a lot of the features that we do, I think that…the perspective that we’ve approached it has completely flipped it on its head,” she continues. “As opposed to bolting video messages on to a primarily text-based interface, [LOVE is] actually doing it in the opposite way and adding text as a sort of a magically transcribed add-on — and something that you never, hopefully, need to be typing out on your keyboard again,” she adds.
The app’s user interface, meanwhile, has been designed to encourage eye-to-eye contact with the speaker to make conversations feel more natural. It does this by way of design elements where bubbles float around as you’re speaking and the bubble with the current speaker grows to pull your focus away from looking at yourself. The company is also working with the curator of London’s Serpentine Gallery, Hans Ulrich Obrist, to create new filters that aren’t about beautification or gimmicks, but are instead focused on introducing a new form of visual expression that makes people feel more comfortable on camera.
For the time being, this has resulted in a filter that slightly abstracts your appearance, almost in the style of animation or some other form of visual arts.
The app offers end-to-end encryption and automatically deletes content after seven days — except for messages you yourself recorded, if you’ve chosen to save them as “memorable moments.”
“One of our commitments is to privacy and the right-to-forget,” says Radocchia. “We don’t want to be or need to be storing any of this information.”
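LOVE hasn’t published how its retention system works under the hood, but the rule as described — delete after seven days unless the author saved the message — is simple to model. Purely as an illustration (the message schema and field names below are hypothetical, not LOVE’s actual implementation), a time-based expiry check might look like this:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7)

def expired(message, now=None):
    """True if a message is past the seven-day window and was not
    saved by its author as a "memorable moment"."""
    now = now or datetime.now(timezone.utc)
    if message.get("saved_by_author"):
        return False
    return now - message["sent_at"] > RETENTION

def purge(messages, now=None):
    """Keep only messages still within retention, plus saved ones."""
    return [m for m in messages if not expired(m, now)]
```

The point of the sketch is the asymmetry Radocchia describes: deletion is the default, and retention requires an explicit opt-in from the person who recorded the message.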
LOVE has been soft-launched on the App Store, where it’s been used with a number of testers and is working to organically grow its user base through an onboarding invite mechanism that asks users to invite at least three people to join. This same onboarding process also carefully explains why LOVE asks for permissions — like using speech recognition to create subtitles.
LOVE says its valuation is around $17 million USD following pre-seed investments from a combination of traditional startup investors and strategic angel investors across a variety of industries, including tech, film, media, TV and financial services. The company will raise a seed round this fall.
The app is currently available on iOS, but an Android version will arrive later in the year. (Note that LOVE does not currently support the iOS 15 beta software, where it has issues with speech transcription and in other areas. That should be resolved next week, following an app update now in the works.)
Just days after Elastic announced the acquisition of build.security, the company is making yet another security acquisition. As part of its second-quarter earnings announcement this afternoon, Elastic disclosed that it is acquiring Vancouver, Canada-based security vendor CMD. Financial terms of the deal are not being publicly disclosed.
CMD‘s technology provides runtime security for cloud infrastructure, helping organizations gain better visibility into processes that are running. The startup was founded in 2016 and has raised $21.6 million in funding to date. The company’s last round was a $15 million Series B that was announced in 2019, led by GV.
Elastic CEO and co-founder Shay Banon told TechCrunch that his company will be welcoming the employees of CMD into his company, but did not disclose precisely how many would be coming over. CMD CEO and co-founder Santosh Krishan and his fellow co-founder Jake King will both be taking executive roles within Elastic.
Both build.security and CMD are set to become part of Elastic’s security organization. The two technologies will be integrated into the Elastic Stack platform that provides visibility into what an organization is running, as well as security insights to help limit risk. Elastic has been steadily growing its security capabilities in recent years, acquiring Endgame Security in 2019 for $234 million.
Banon explained that, as organizations increasingly move to the cloud and make use of Kubernetes, they are looking for more layers of introspection and protection for Linux. That’s where CMD’s technology comes in. CMD’s security service is built with an open source technology known as eBPF. With eBPF, it’s possible to hook into a Linux operating system for visibility and security control. Work is currently ongoing to extend eBPF for Windows workloads, as well.
CMD isn’t the only startup that has been building based on eBPF. Isovalent, which announced a $29 million Series A round led by Andreessen Horowitz and Google in November 2020, is also active in the space. The Linux Foundation also recently announced the creation of an eBPF Foundation, with the participation of Facebook, Google, Microsoft, Netflix and Isovalent.
Fundamentally, Banon sees a clear alignment between what CMD was building and what Elastic aims to deliver for its users.
“We have a saying at Elastic – while you observe, why not protect?” Banon said. “With CMD if you look at everything that they do, they also have this deep passion and belief that it starts with observability.”
It will take time for Elastic to integrate the CMD technology into the Elastic Stack, though it won’t be too long. Banon noted that one of the benefits of acquiring a startup is that it’s often easier to integrate than a larger, more established vendor.
“With all of these acquisitions that we make we spend time integrating them into a single product line,” Banon said.
That means Elastic needs to take the technology that other companies have built and fold it into its stack and that sometimes can take time, Banon explained. He noted that it took two years to integrate the Endgame technology after that acquisition.
“Typically that lends itself to us joining forces with smaller companies with really innovative technology that can be more easily taken and integrated into our stack,” Banon said.
To celebrate its ten year anniversary, Messenger today announced a handful of new features: poll games, word effects, contact sharing, and birthday gifting via Facebook Pay. But beyond the fun features, Facebook has been testing a way to add voice and video calls back into the Facebook app, rather than on Messenger.
“We are testing audio and video calls within the Facebook app messaging experience so people can make and receive calls regardless of which app they’re using,” a representative from Facebook told TechCrunch. “This will give people on Facebook easy ways to connect with their communities where they already are.”
Although earlier in Facebook’s history the Messenger app operated as a standalone experience, Facebook tells us that it’s now starting to see Messenger less as a separate entity and more as an underlying technology that can help power many of the new experiences Facebook is developing.
“We’ve been focused more on real-time experiences — Watch Together, Rooms, Live Audio Rooms — and we’ve started to think of Messenger as a connective tissue regardless of the surface,” a Facebook spokesperson told us. “This is a test, but the bigger vision is for us to unlock content and communities that may not be accessible in Messenger, and that the Facebook app is going to become more about shared real-time experiences,” they added.
Given the company’s move in recent months to integrate its underlying communication infrastructure, it stands to reason that Facebook would ultimately add more touchpoints for accessing its new Messenger-powered features inside the desktop app, as well. When asked for comment on this point, the spokesperson said the company didn’t have any details to share at this time. However, they noted that the test is part of Facebook’s broader vision to enable more real-time experiences across its services.
Despite the new integrations, the standalone version of Messenger isn’t going away.
Facebook says that people who want a more “full-featured” messaging, audio and video calling experience should continue to use Messenger.
Image Credits: Messenger
As for today’s crop of new features — including polls, word effects, contact sharing, and others — the goal is to celebrate Messenger’s ability to keep people in touch with their family and friends.
To play the new poll games, users can tap “Polls” in their group chat and select the “Most Likely To” tab — then, they can choose from questions like “most likely to miss their flight?” or “most likely to give gifts on their own birthday?”, select names of chat participants to be included as potential answers, and send the poll.
Contact sharing will make it easier to share others’ Facebook contacts through Messenger, while birthday gifting lets users send birthday-themed payments on Messenger via Facebook Pay. There will also be other “birthday expression tools,” including a birthday song soundmoji, “Messenger is 10!” sticker pack, a new balloon background, a message effect, and AR effect to celebrate Messenger’s double-digit milestone.
Image Credits: Messenger
Meanwhile, word effects lets users manually input a phrase, and any time they send a message with that phrase, an accompanying emoji will float across the screen. In an example, Messenger showed the phrase “happy birthday” accompanied with a word effect of confetti emojis flooding the screen. (That one’s pretty tame, but this could be a remarkable application of the poop emoji.) The company only shared a “sneak peek” of this feature, as it’s not rolling out immediately.
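As described, word effects amount to a phrase-to-emoji trigger map checked against each outgoing message. Here is a minimal sketch of that idea in Python; the function names and case-insensitive substring matching are illustrative assumptions, not Messenger’s actual implementation:

```python
# Hypothetical sketch of a "word effects" feature: a user maps trigger phrases
# to emoji effects, and each outgoing message is checked for those phrases.
# This is an illustration of the concept, not Messenger's real behavior.

def build_effects(phrase_to_emoji):
    """Normalize trigger phrases for case-insensitive matching."""
    return {phrase.lower(): emoji for phrase, emoji in phrase_to_emoji.items()}

def effects_for_message(message, effects):
    """Return the emoji effects to play for a given outgoing message."""
    text = message.lower()
    return [emoji for phrase, emoji in effects.items() if phrase in text]

effects = build_effects({"happy birthday": "🎉"})
print(effects_for_message("Happy Birthday, Jane!", effects))  # ['🎉']
```

In practice the client would presumably debounce or limit triggers, but the core lookup is this simple.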
In total, Facebook is announcing ten features, most of which will begin rolling out today.
Messenger has come a long way over the past decade.
Ten years ago, Facebook acqui-hired a small group messaging start-up called Beluga, started by three former Google employees (apparently, a functional group thread was a white whale back then — simpler times). Several months later, the company unveiled Messenger, a standalone messaging app.
But three years into Messenger’s existence, it was no longer an optional add-on to the Facebook experience, but a mandatory download for anyone who wanted to keep up with their friends on the go. Facebook removed the option to send messages within its flagship app, directing users to use Messenger instead. Facebook’s reasoning behind this, the company told TechCrunch at the time, was that it wanted to eliminate the confusion of having two different mobile messaging systems. Just months earlier, Facebook had spent $19 billion to acquire WhatsApp and woo international users. Though removing Messenger from the Facebook app was controversial, the app reached 1.2 billion users three years later in 2017.
Today, Facebook has declared that it wants to evolve into a “metaverse” company, and on the same day as the antitrust filing last week, Mark Zuckerberg unveiled a product that applies virtual reality in an impressively boring way: helping people attend work meetings. This metaverse would be enabled by technologies built by Facebook’s platform team, noted Vice President of Messenger Stan Chudnovsky. However, he added that people in the metaverse will still need platforms like Messenger.
“I don’t think messaging is going anywhere, even in the metaverse, because asynchronous communication is going to continue to exist,” Chudnovsky said. People will still need to send messages to those who aren’t currently available to chat, he explained. Plus, Chudnovsky believes this sort of communication will become even more popular with the launch of the metaverse, as the technology will help to serve as a bridge between your phone, real life, and the metaverse.
“If anything, it’s gonna happen more, not less, because messaging is that thing that just continues to grow with every new platform leap,” he said.
Additional reporting: Sarah Perez
Following its announcement late last month, Facebook’s new 128GB model of the Oculus Quest 2 is now available to buy. You can purchase the VR headset from the company’s website for the same $299 price as the previous 64GB base model. “Long story short? We’ve created this 128GB model so that players can easily store and access more games and apps on a single device,” Facebook says of the new variant.
Facebook announced the 128GB model at the same time it issued a voluntary recall of the Quest 2 to address an issue with the original face insert that came with the headset. The company temporarily stopped selling the Quest 2 for about a month so that it could add a new silicone face cover inside the box of each new unit. If you’re a current Quest 2 owner, you can request that Facebook send you the new silicone cover by visiting the My Devices section of your account settings. The new 128GB model also comes with the silicone cover inside the box.
Editor’s note: This post originally appeared on Engadget.
Instagram is ditching the “swipe-up” link in Instagram Stories starting on August 30. The popular feature has historically given businesses and high-profile creators a way to direct their Story’s viewers to a website where they could learn more about a product, read an article, sign up for a service, or do anything else the creator wanted to promote. In place of the “swipe up” call-to-action, Instagram users who previously had access to the feature will instead be able to use the new Link Sticker, the company says.
This sticker had been in testing starting in June with a small handful of users, the company said. But on August 30, it will begin to roll out more broadly.
App researcher Jane Manchun Wong first noticed the announcement, which warned creators of the plan to shut down swipe-up links.
IG said the swipe up links will go away starting from Aug 30 and that I should use the “link sticker”
… but I searched my Stories Sticker sheet and I’m not seeing the link sticker at all (not rolled out to me).
Does that mean I’ll lose the ability to add links to my Stories?
— Jane Manchun Wong (@wongmjane) August 23, 2021
Instagram says it will begin to convert those who currently have access to the swipe-up link to the Link Sticker starting on August 30, 2021. This will include businesses and creators who are either verified or who have met the threshold for follower count. (While Instagram doesn’t publicly comment on this count, it’s widely reported to be at least 10,000 followers.)
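The eligibility criteria described above boil down to a simple gate: a verified account, or one past a follower threshold. The sketch below illustrates that rule; note that the 10,000-follower figure is the widely reported number, not one Instagram has confirmed, and the function itself is purely hypothetical:

```python
# Illustrative sketch of Instagram's reported Link Sticker eligibility rule.
# The 10,000-follower threshold is widely reported, NOT confirmed by Instagram.
REPORTED_FOLLOWER_THRESHOLD = 10_000

def can_use_link_sticker(is_verified: bool, follower_count: int) -> bool:
    """Verified accounts qualify outright; others need enough followers."""
    return is_verified or follower_count >= REPORTED_FOLLOWER_THRESHOLD

print(can_use_link_sticker(is_verified=False, follower_count=12_000))  # True
print(can_use_link_sticker(is_verified=True, follower_count=500))      # True
print(can_use_link_sticker(is_verified=False, follower_count=500))     # False
```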
The new Link Sticker has a couple of key advantages over the older “swipe-up” link.
For starters, it offers greater creator control over their Stories.
Like polls, questions and location stickers, the Link Sticker lets creators toggle between different styles, resize the sticker, and then place it anywhere on the Story for maximum engagement. In addition, viewers will now be able to react and reply to posts that have the Link Sticker attached, just like any other Story. Before, that sort of feedback wasn’t possible on posts with the swipe-up link, Instagram noted.
While there isn’t a change to who will gain access to the Link Sticker for now, Instagram says it’s evaluating whether or not to expand link access to more accounts in the future. The decision to expand access is one that has to be made carefully, however, as it could impact the app’s integrity and safety. For instance, if Link Sticker were to be adopted by bad actors, it could be used to spread misinformation or post spam. The shift to the Link Sticker is the first step in making it possible to broaden access to link sharing in Stories, if Instagram chooses to go that route.
Overall, the move away from a gesture to sticker is more in line with Instagram’s current creative direction, where interactive features are added to posts in the form of stickers. The new Link Sticker will join others already available in the app, including stickers for donations, music, and polls.
Both Facebook and Snap offer tools that allow developers to build out augmented reality (AR) experiences and features for their own respective family of apps. Now, TikTok is looking to do the same. The company recently launched a new creative toolset called TikTok Effect Studio, currently in private beta testing, which will allow its own developer community to build AR effects for TikTok’s short-form video app.
On a new website titled “Effect House,” TikTok asks interested developers to sign up for early access to Effect Studio.
On the form provided, developers fill out their name, email, TikTok account info, company, and level of experience with building for AR, as well as examples of their work. The website also asks if they’re using a Mac or PC (presumably to gauge which desktop platform to prioritize), and whether they would test Effect House for work or for personal use.
TikTok is launching an Effects Studio in beta
— Matt Navarra (@MattNavarra) August 14, 2021
TikTok confirmed to TechCrunch that the website launched earlier in August, but said the project itself is still in the early stages of testing in only a few select markets, one of which is the U.S.
The company couldn’t offer a timeframe as to when these tools would become more broadly available. Instead, TikTok characterized Effect Studio as an early “experiment,” adding that some of its experiments don’t always make it to launch. Plus, other experiments may undergo significant changes between their early beta phases and what later becomes a public product.
That said, the launch of an AR toolset would make TikTok more competitive with industry rivals, which today rely on creative communities to expand their apps’ feature sets with new effects and experiences. Snap, for example, launched a $3.5 million fund last year directed toward Snapchat AR Lens creation. Meanwhile, at Facebook’s F8 developer conference in June, the company announced it had grown its Spark AR platform to over 600,000 creators across 190 countries, making it the largest mobile AR platform worldwide.
Image Credits: screenshot of TikTok website
TikTok, too, has been increasing its investment in developer tools over the past couple of years. However, its focus as of late has been on toolkits aimed at third-party developers who want to integrate more closely with TikTok in their own apps. Today, TikTok’s developer website provides access to tools that allow app makers to add TikTok features to their apps like user authentication flows, sound sharing, and others that allow users to publish videos from a third-party editing app out to TikTok.
The new TikTok Effect Studio isn’t meant to be used with third-party apps, however.
Instead, it’s about building AR experiences (and possibly other creative effects) that would be provided to TikTok users directly in the consumer-facing video app.
Though TikTok was willing to confirm its broader goals for Effect Studio, the company declined to share specific details about the exact tools that may be included, citing the project’s early days.
“We’re always thinking about new ways to bring value to our community and enrich the TikTok experience,” a TikTok spokesperson told TechCrunch. “Currently, we’re experimenting with ways to give creators additional tools to bring their creative ideas to life for the TikTok community,” they added.
Hello friends, and welcome back to Week in Review!
I’m back from a very fun and rehabilitative couple weeks away from my phone, my Twitter account and the news cycle. That said, I actually really missed writing this newsletter, and while Greg did a fantastic job while I was out, I won’t be handing over the reins again anytime soon. Plenty happened this week and I struggled to zero in on a single topic to address, but I finally chose to focus on Bezos’s Blue Origin suing NASA.
I was going to write about OnlyFans for the newsletter this week and their fairly shocking move to ban sexually explicit content from their site in a bid to stay friendly with payment processors, but alas I couldn’t help myself and wrote an article for ole TechCrunch dot com instead. Here’s a link if you’re curious.
Now, I should also note that while I was on vacation I missed all of the conversation surrounding Apple’s incredibly controversial child sexual abuse material detection software that really seems to compromise the perceived integrity of personal devices. I’m not alone in finding this to be a pretty worrisome development despite Apple’s intention of staving off a worse alternative. Hopefully, one of these weeks I’ll have the time to talk with some of the folks in the decentralized computing space about how our monolithic reliance on a couple tech companies operating with precious little consumer input is very bad. In the meantime, I will point you to some reporting from TechCrunch’s own Zack Whittaker on the topic which you should peruse because I’m sure it will be a topic I revisit here in the future.
Now then! Onto the topic at hand.
Federal government agencies don’t generally inspire much adoration. While great things have been accomplished at the behest of ample federal funding and the tireless work of civil servants, most agencies are treated as bureaucratic bloat and aren’t generally seen as anything worth passionately defending. Among the public and technologists in particular, NASA occupies a bit more of a sacred space. The American space agency has generally been a source of bipartisan enthusiasm, as has its goal to return astronauts to the lunar surface by 2024.
Which brings us to some news this week. While so much digital ink was spilled on Jeff Bezos’s little jaunt to the edge of space, cowboy hat, champagne and all, there’s been less fanfare around his space startup’s lawsuit against NASA, which we’ve now learned will delay the development of a new lunar lander by months, potentially throwing into doubt NASA’s goal of returning astronauts to the moon’s surface on schedule.
Bezos’s upstart Blue Origin is protesting the fact that it was not awarded a government contract, while Elon Musk’s SpaceX earned a $2.89 billion contract to build a lunar lander. This contract wasn’t just recently awarded, either; SpaceX won it back in April, and Blue Origin had already filed a complaint with the Government Accountability Office. That happened before Bezos penned an open letter promising a $2 billion discount for NASA, which had seen its hopes of awarding multiple contracts dashed by congressional budget cuts. None of these maneuverings proved convincing enough for the folks at NASA, pushing Bezos’s space startup to sue the agency.
This little feud has prompted Twitter users with long memories to dig up a gem from a 2019 Bezos speech — as transcribed by Gizmodo — highlighting Bezos’s own distaste for how bureaucracy and greed have hampered NASA’s ability to reach for the stars:
“To the degree that big NASA programs become seen as jobs programs and that they have to be distributed to the right states where the right Senators live, and so on. That is going to change the objective. Now your objective is not to, you know, whatever it is, to get a man to the moon or a woman to the moon, but instead to get a woman to the moon while preserving X number of jobs in my district. That is a complexifier, and not a healthy one… […]
Today, there would be, you know, three protests, and the losers would sue the federal government because they didn’t win. It’s interesting, but the thing that slows things down is procurement. It’s become the bigger bottleneck than the technology, which I know for a fact for all the well meaning people at NASA is frustrating.
A Blue Origin spokesperson called the suit an “attempt to remedy the flaws in the acquisition process found in NASA’s Human Landing System.” But the lawsuit really seems to highlight how crucial this deal is to Blue Origin’s ability to lock down top talent. Whether the startup can handle the reputational risk of suing NASA and delaying America’s return to the moon seems to be a question very much worth asking.
Photo: ROBYN BECK/AFP via Getty Images
Here are the TechCrunch news stories that especially caught my eye this week:
OnlyFans bans “sexually explicit content”
A lot of people had visceral reactions to OnlyFans killing off what seems to be a pretty big chunk of its business by outlawing “sexually explicit content” on the platform. It seems the decision was reached as a result of banking and payment partners leaning on the company.
Musk “unveils” the “Tesla Bot”
I truly struggle to even call this news, but I’d be remiss not to highlight how Elon Musk had a guy dress up in a spandex outfit and walk around doing the robot and spawned hundreds of news stories about his new “Tesla Bot.” While there certainly could be a product opportunity here for Tesla at some point, I would bet all of the dogecoin in the world that his prototype “coming next year” either never arrives or falls hilariously short of expectations.
Facebook drops a VR meeting simulator
This week, Facebook released one of its better virtual reality apps, a workplace app designed to help people host meetings inside virtual reality. To be clear, no one really asked for this, but the company made a full-court PR press for the app, which will help headset owners simulate the pristine experience of sitting in a conference room.
Yes, this looks dumb. But avatar-based work apps are coming for your Zooms, and Facebook made a pretty convincing one here. https://t.co/aGvOW6zm8U
— Lucas Matney (@lucasmtny) August 19, 2021
Social platforms wrestle with Taliban presence
Following the Taliban takeover of Afghanistan, social media platforms are being pushed to clarify their policies around accounts operated by identified Taliban members. It’s put some of the platforms in a hairy situation.
Facebook releases content transparency report
This week, Facebook released its first-ever content transparency report, highlighting which content on the site had the most reach over a given three-month period. Compared to lists highlighting which posts get the most engagement on the platform, lists generally populated by right-wing influencers and news sources, the list of posts with the most reach seems pretty benign.
Safety regulators open inquiry into Tesla Autopilot
While Musk talks about building a branded humanoid robot, U.S. safety regulators are concerned with why Tesla vehicles on Autopilot are crashing into so many parked emergency response vehicles.
Image Credits: Nigel Sussman
Some of my favorite reads from our Extra Crunch subscription service this week:
The Nuro EC-1
“…Dave Ferguson and Jiajun Zhu aren’t the only Google self-driving project employees to launch an AV startup, but they might be the most underrated. Their company, Nuro, is valued at $5 billion and has high-profile partnerships with leaders in retail, logistics and food including FedEx, Domino’s and Walmart. And, they seem to have navigated the regulatory obstacle course with success — at least so far…”
A VC shares 5 keys to pitching VCs
“The success of a fundraising process is entirely dependent on how well an entrepreneur can manage it. At this stage, it is important for founders to be honest, straightforward and recognize the value meetings with venture capitalists and investors can bring beyond just the monetary aspect…”
A crash course on corporate development
“…If you’re going to get acquired, chances are you’re going to spend a lot of time with corporate development teams. With a hot stock market, mountains of cash and cheap debt floating around, the environment for acquisitions is extremely rich.”
Thanks for reading! Until next week…
Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.
The app industry continues to grow, with a record 218 billion downloads and $143 billion in global consumer spend in 2020. Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage has surged ahead of the time spent watching live TV: the average American watches 3.7 hours of live TV per day but now spends four hours per day on their mobile device.
Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.
This Week in Apps offers a way to keep up with this fast-moving industry in one place with the latest from the world of apps, including news, updates, startup fundings, mergers and acquisitions, and suggestions about new apps and games to try, too.
Do you want This Week in Apps in your inbox every Saturday? Sign up here: techcrunch.com/newsletters
(Photo Illustration by Jakub Porzycki/NurPhoto via Getty Images)
Creator platform OnlyFans is getting out of the porn business. The company announced this week it will begin to prohibit any “sexually explicit” content starting on October 1, 2021 — a decision it claimed would ensure the long-term sustainability of the platform. The news angered a number of impacted creators who weren’t notified ahead of time and who’ve come to rely on OnlyFans as their main source of income.
However, word is that OnlyFans was struggling to find outside investors, despite its sizable user base, due to the adult content it hosts. Some VC firms are prohibited from investing in adult content businesses, while others may be concerned over other matters — like how NSFW content could have limited interest from advertisers and brand partners. They may have also worried about OnlyFans’ ability to successfully restrict minors from using the app, in light of what appears to be forthcoming increased regulation of online businesses. Plus, porn companies face a number of other issues, too. They have to continually ensure they’re not hosting illegal content like child sex abuse material, revenge porn or content from sex trafficking victims — the latter of which has led to lawsuits against other large porn companies.
The news followed a big marketing push for OnlyFans’ porn-free (SFW) app, OFTV, which circulated alongside reports that the company was looking to raise funds at a $1 billion+ valuation. OnlyFans may not have technically needed the funding to operate its current business — it handled more than $2 billion in sales in 2020 and keeps 20%. Rather, the company may have seen there’s more opportunity to cater to the “SFW” creator community, now that it has big names like Bella Thorne, Cardi B, Tyga, Tyler Posey, Blac Chyna, Bhad Bhabie and others on board.
The TikTok logo is seen on an iPhone 11 Pro max. Image Credits: Nur Photo/Getty Images
Earlier this month, Senators Amy Klobuchar (D-MN) and John Thune (R-SD) sent a letter to TikTok CEO Shou Zi Chew saying they were “alarmed” by the app’s recent privacy policy change allowing it to collect biometric data, and demanding to know what information TikTok will be collecting and what it plans to do with the data. This isn’t the first time TikTok has gotten in trouble for excessive data collection. Earlier this year, the company paid out $92 million to settle a class-action lawsuit that claimed TikTok had unlawfully collected users’ biometric data and shared it with third parties.
South Korea’s GS Retail Co. Ltd will buy Delivery Hero’s food delivery app Yogiyo in a deal valued at 800 billion won ($685 million USD). Yogiyo is the second-largest food delivery app in South Korea, with a 25% market share.
Gaming platform Roblox acquired a Discord rival, Guilded, which allows users to have text and voice conversations, organize communities around events and calendars and more. Deal terms were not disclosed. Guilded had raised $10.2 million in venture funding. Roblox’s stock fell 7% this week after the company reported earnings that failed to meet Wall Street expectations.
Travel app Hopper raised $175 million in a Series G round of funding led by GPI Capital, valuing the business at over $3.5 billion. The company raised a similar amount just last year, but is now benefiting from renewed growth in travel following COVID-19 vaccinations and lifting restrictions.
Indian quiz app maker Zupee raised $30 million in a Series B round of funding led by Silicon Valley-based WestCap Group and Tomales Bay Capital. The round values the company at $500 million, up 5x from last year.
Danggeun Market, the publisher of South Korea’s hyperlocal community app Karrot, raised $162 million in a Series D round of funding led by DST Global. The round values the business at $2.7 billion and will be used to help the company launch its own payments platform, Karrot Pay.
Bangalore-based fintech app Smallcase raised $40 million in Series C funding round led by Faering Capital and Premji Invest, with participation from existing investors, as well as Amazon. The Robinhood-like app has over 3 million users who are transacting about $2.5 billion per year.
Social listening app Earbuds raised $3 million in Series A funding led by Ecliptic Capital. Founded by NFL star Jason Fox, the app lets anyone share their favorite playlists, livestream music like a DJ or comment on others’ music picks.
U.S. neobank app One raised $40 million in Series B funding led by Progressive Investment Company (the insurance giant’s investment arm), bringing its total raise to date to $66 million. The app offers all-in-one banking services and budgeting tools aimed at middle-income households who manage their finances on a weekly basis.
Indian travel booking app ixigo is looking to raise Rs 1,600 crore in its initial public offering, The Economic Times reported this week.
Trading app Robinhood disappointed in its first quarterly earnings as a publicly traded company, when it posted a net loss of $502 million, or $2.16 per share, larger than Wall Street forecasts. This overshadowed its beat on revenue ($565 million versus $521.8 million expected) and its more than doubling of MAUs to 21.3 million in Q2. Also of note, the company said dogecoin made up 62% of its crypto revenue in Q2.
Image Credits: Polycam
3D scanning software maker Polycam launched a new 3D capture tool, Photo Mode, that allows iPhone and iPad users to capture professional-quality 3D models with just an iPhone. While the app’s scanner before had required the use of the lidar sensor built into newer devices like the iPhone 12 Pro and iPad Pro models, the new Photo Mode feature uses just an iPhone’s camera. The resulting 3D assets are ready to use in a variety of applications, including 3D art, gaming, AR/VR and e-commerce. Data export is available in over a dozen file formats, including .obj, .gtlf, .usdz and others. The app is a free download on the App Store, with in-app purchases available.
Jiobit, the tracking dongle acquired by family safety and communication app Life360, this week partnered with emergency response service Noonlight to offer Jiobit Protect, a premium add-on that offers Jiobit users access to an SOS Mode and Alert Button that work with the Jiobit mobile app. SOS Mode can be triggered by a child’s caregiver when they detect — through notifications from the Jiobit app — that a loved one may be in danger. They can then reach Noonlight’s dispatcher who can facilitate a call to 911 and provide the exact location of the person wearing the Jiobit device, as well as share other details, like allergies or special needs, for example.
When your app redesign goes wrong…
Prominent App Store critic Kosta Eleftheriou shut down his FlickType iOS app this week after too many frustrations with App Review. He cited rejections that incorrectly argued that his app required more access than it did — something he had successfully appealed and overturned years ago. Attempted follow-ups with Apple were ignored, he said.
With the hasty U.S. military withdrawal from Afghanistan underway after two decades occupying the country, social media platforms have a complex new set of policy decisions to make.
The Taliban has been social media-savvy for years, but social media companies will face new questions as the notoriously brutal, repressive group seeks to present itself as Afghanistan’s legitimate governing body to the rest of the world. Given its ubiquity among political leaders and governments, social media will likely play an even more central role for the Taliban as it seeks to cement control and move toward governing.
Facebook has taken some early precautions to protect its users from potential reprisals as the Taliban seizes power. Through Twitter, Facebook’s Nathaniel Gleicher announced a set of new measures the platform rolled out over the last week. The company added a “one-click” way for people in Afghanistan to instantly lock their accounts, hiding posts on their timeline and preventing anyone they aren’t friends with from downloading or sharing their profile picture.
4/ We’ve launched a one-click tool for people in Afghanistan to quickly lock down their account. When their profile is locked, people who aren’t their friends can’t download or share their profile photo or see posts on their timeline. pic.twitter.com/pUANh5uBgn
— Nathaniel Gleicher (@ngleicher) August 19, 2021
Facebook also removed the ability for users to view and search anyone’s friends list for people located in Afghanistan. On Instagram, pop-up alerts will provide Afghanistan-based users with information on how to quickly lock down their accounts.
The Taliban has long been banned on Facebook under the company’s rules against dangerous organizations. “The Taliban is sanctioned as a terrorist organization under US law… This means we remove accounts maintained by or on behalf of the Taliban and prohibit praise, support, and representation of them,” a Facebook spokesperson told the BBC.
The Afghan Taliban is actually not designated as a foreign terrorist organization by the U.S. State Department, but the Taliban operating out of Pakistan has held that designation since 2010. While it doesn’t appear on the list of foreign terrorist organizations, the Afghanistan-based Taliban is defined as a terror group according to economic sanctions that the U.S. put in place after 9/11.
While the Taliban is also banned from Facebook-owned WhatsApp, the platform’s end-to-end encryption makes enforcing those rules on WhatsApp more complex. WhatsApp is ubiquitous in Afghanistan, and both the Afghan military and the Taliban have relied on the chat app to communicate in recent years. Though Facebook doesn’t allow the Taliban on its platforms, the group turned to WhatsApp to communicate its plans to seize control to the Afghan people and discourage resistance in what was a shockingly swift and frictionless sprint to power. The Taliban even set up a WhatsApp number as a sort of helpline for Afghans to report violence or crime, but Facebook quickly shut down the account.
Earlier this week, Facebook’s VP of content policy Monika Bickert noted that even if the U.S. ultimately removes the Taliban from its list of sanctioned terror groups, the platform would reevaluate and make its own decision. “… We would have to do a policy analysis on whether or not they nevertheless violate our dangerous organizations policy,” Bickert said.
Like Facebook, YouTube maintains that the Taliban is banned from its platform. YouTube’s own decision also appears to align with sanctions and could be subject to change if the U.S. approach to the Taliban shifts.
“YouTube complies with all applicable sanctions and trade compliance laws, including relevant U.S. sanctions,” a YouTube spokesperson told TechCrunch. “As such, if we find an account believed to be owned and operated by the Afghan Taliban, we terminate it. Further, our policies prohibit content that incites violence.”
On Twitter, Taliban spokesperson Zabihullah Mujahid has continued to share regular updates about the group’s activities in Kabul. Another Taliban representative, Qari Yousaf Ahmadi, also freely posts on the platform. Unlike Facebook and YouTube, Twitter doesn’t have a blanket ban on the group but will enforce its policies on a post-by-post basis.
If the Taliban expands its social media footprint, other platforms might be facing the same set of decisions. TikTok did not respond to TechCrunch’s request for comment, but previously told NBC that it considers the Taliban a terrorist organization and does not allow content that promotes the group.
The Taliban doesn’t appear to have a foothold beyond the most mainstream social networks, but it’s not hard to imagine the former insurgency turning to alternative platforms to remake its image as the world looks on.
While Twitch declined to comment on what it might do if the group were to use the platform, it does have a relevant policy that takes “off-service conduct” into account when banning users. That policy was designed to address reports of abusive behavior and sexual harassment among Twitch streamers.
The new rules also apply to accounts linked to violent extremism, terrorism, or other serious threats, whether those actions take place on or off Twitch. That definition would likely preclude the Taliban from establishing a presence on the platform, even if the U.S. lifts sanctions or changes its terrorist designations in the future.
Disinformation, misinformation, infotainment, algowars — if the debates over the future of media during the past few decades have meant anything, they’ve at least left a pungent imprint on the English language. There’s been a lot of invective and fear over what social media is doing to us, from our individual psychologies and neurologies to wider concerns about the strength of democratic societies. As Joseph Bernstein put it recently, the shift from “wisdom of the crowds” to “disinformation” has indeed been an abrupt one.
What is disinformation? Does it exist, and if so, where is it and how do we know we are looking at it? Should we care about what the algorithms of our favorite platforms show us as they strive to squeeze the prune of our attention? It’s just those sorts of intricate mathematical and social science questions that got Noah Giansiracusa interested in the subject.
Giansiracusa, a professor at Bentley University in Boston, is trained in mathematics (focusing his research in areas like algebraic geometry), but he’s also had a penchant for looking at social topics through a mathematical lens, such as connecting computational geometry to the Supreme Court. Most recently, he published a book called How Algorithms Create and Prevent Fake News to explore some of the challenging questions around the media landscape today and how technology is exacerbating and ameliorating those trends.
I hosted Giansiracusa on a Twitter Space recently, and since Twitter hasn’t made it easy to listen to these talks afterwards (ephemerality!), I figured I’d pull out the most interesting bits of our conversation for you and posterity.
This interview has been edited and condensed for clarity.
Danny Crichton: How did you decide to research fake news and write this book?
Noah Giansiracusa: One thing I noticed is there’s a lot of really interesting sociological, political science discussion of fake news and these types of things. And then on the technical side, you’ll have things like Mark Zuckerberg saying AI is going to fix all these problems. It just seemed like, it’s a little bit difficult to bridge that gap.
Everyone’s probably heard this recent quote of Biden saying, “they’re killing people,” in regards to misinformation on social media. So we have politicians speaking about these things where it’s hard for them to really grasp the algorithmic side. Then we have computer science people that are really deep in the details. So I’m kind of sitting in between; I’m not a real hardcore computer science person. So I think it’s a little easier for me to just step back and get the bird’s-eye view.
At the end of the day, I just felt I kind of wanted to explore some more interactions with society where things get messy, where the math is not so clean.
Crichton: Coming from a mathematical background, you’re entering a contentious area where a lot of people have written from a lot of different angles. What are people getting right in this area, and where have they perhaps missed some nuance?
Giansiracusa: There’s a lot of incredible journalism, I was blown away at how a lot of journalists really were able to deal with pretty technical stuff. But I would say one thing that maybe they didn’t get wrong, but kind of struck me was, there’s a lot of times when an academic paper comes out, or even an announcement from Google or Facebook or one of these tech companies, and they’ll kind of mention something, and the journalist will maybe extract a quote, and try to describe it, but they seem a little bit afraid to really try to look and understand it. And I don’t think it’s that they weren’t able to, it really seems like more of an intimidation and a fear.
One thing I’ve experienced a ton as a math teacher is people are so afraid of saying something wrong and making a mistake. And this goes for journalists who have to write about technical things, they don’t want to say something wrong. So it’s easier to just quote a press release from Facebook or quote an expert.
One thing that’s so fun and beautiful about pure math is that you don’t really worry about being wrong; you just try ideas and see where they lead and you see all these interactions. When you’re ready to write a paper or give a talk, you check the details. But most of math is this creative process where you’re exploring, and you’re just seeing how ideas interact. You’d think my training as a mathematician would make me apprehensive about making mistakes and very precise, but it kind of had the opposite effect.
Second, a lot of these algorithmic things are not as complicated as they seem. I’m not sitting there implementing them, and I’m sure programming them is hard. But just at the big-picture level, so many of these things nowadays are based on deep learning. So you have some neural net; it doesn’t really matter to me as an outsider what architecture they’re using. All that really matters is: what are the predictors — basically, what are the variables that you feed this machine learning algorithm — and what is it trying to output? Those are things that anyone can understand.
Crichton: One of the big challenges I think of analyzing these algorithms is the lack of transparency. Unlike, say, the pure math world which is a community of scholars working to solve problems, many of these companies can actually be quite adversarial about supplying data and analysis to the wider community.
Giansiracusa: It does seem there’s a limit to what anyone can deduce just by looking from the outside.
So a good example is with YouTube, teams of academics wanted to explore whether the YouTube recommendation algorithm sends people down these conspiracy theory rabbit holes of extremism. The challenge is that because this is the recommendation algorithm, it’s using deep learning, it’s based on hundreds and hundreds of predictors based on your search history, your demographics, the other videos you’ve watched and for how long — all these things. It’s so customized to you and your experience, that all the studies I was able to find use incognito mode.
So they’re basically a user who has no search history, no information and they’ll go to a video and then click the first recommended video then the next one. And let’s see where the algorithm takes people. That’s such a different experience than an actual human user with a history. And this has been really difficult. I don’t think anyone has figured out a good way to algorithmically explore the YouTube algorithm from the outside.
Honestly, the only way I think you could do it is just kind of like an old-school study where you recruit a whole bunch of volunteers, put a tracker on their computers and say, “Hey, just live life the way you normally do with your histories and everything and tell us the videos that you’re watching.” So it’s been difficult to get past the fact that a lot of these algorithms, almost all of them, I would say, are so heavily based on your individual data. We don’t know how to study that in the aggregate.
And it’s not just that me or anyone else on the outside who has trouble because we don’t have the data. It’s even people within these companies who built the algorithm and who know how the algorithm works on paper, but they don’t know how it’s going to actually behave. It’s like Frankenstein’s monster: they built this thing, but they don’t know how it’s going to operate. So the only way I think you can really study it is if people on the inside with that data go out of their way and spend time and resources to study it.
Crichton: There are a lot of metrics used around evaluating misinformation and determining engagement on a platform. Coming from your mathematical background, do you think those measures are robust?
Giansiracusa: People try to debunk misinformation. But in the process, they might comment on it, retweet it or share it, and that counts as engagement. So with a lot of these measurements of engagement, are they really looking at positive engagement, or just all engagement? It kind of all gets lumped together.
This happens in academic research, too. Citations are the universal metric of how successful research is. Well, really bogus things like Wakefield’s original autism-and-vaccines paper got tons of citations. A lot of them were people citing it because they thought it was right, but a lot were scientists who were debunking it; they cite it in their papers to say, we demonstrate that this theory is wrong. But somehow a citation is a citation, so it all counts toward the success metric.
So I think that’s a bit of what’s happening with engagement. If I post something on my comments saying, “Hey, that’s crazy,” how does the algorithm know if I’m supporting it or not? They could use some AI language processing to try but I’m not sure if they are, and it’s a lot of effort to do so.
Crichton: Lastly, I want to talk a bit about GPT-3 and the concern around synthetic media and fake news. There’s a lot of fear that AI bots will overwhelm media with disinformation — how scared or not scared should we be?
Giansiracusa: Because my book really grew out of a class, I wanted to try to stay impartial, and just kind of inform people and let them reach their own decisions. I decided to try to cut through that debate and really let both sides speak. I think the newsfeed algorithms and recommendation algorithms do amplify a lot of harmful stuff, and that is devastating to society. But there’s also a lot of amazing progress in using algorithms productively and successfully to limit fake news.
There’s these techno-utopians, who say that AI is going to fix everything, we’ll have truth-telling, and fact-checking and algorithms that can detect misinformation and take it down. There’s some progress, but that stuff is not going to happen, and it never will be fully successful. It’ll always need to rely on humans. But the other thing we have is kind of irrational fear. There’s this kind of hyperbolic AI dystopia where algorithms are so powerful, kind of like singularity type of stuff that they’re going to destroy us.
When deepfakes first hit the news in 2018, and when GPT-3 was released a couple of years ago, there was a lot of fear that, “Oh shit, this is gonna make all our problems with fake news and understanding what’s true in the world much, much harder.” And I think now that we have a couple of years of distance, we can see that they’ve made it a little harder, but not nearly as much as we expected. And the main issue is kind of more psychological and economic than anything.
So the original authors of GPT-3 have a research paper introducing the algorithm, and one of the things they did was a test where they pasted some text in and expanded it to an article, and then they had some volunteers evaluate and guess which article was the algorithmically generated one and which was the human-written one. They reported that the volunteers got very, very close to 50% accuracy, which means barely better than random guessing. So that sounds, you know, both amazing and scary.
But if you look at the details, they were extending something like a one-line headline to a paragraph of text. If you tried to do a full Atlantic-length or New Yorker-length article, you’re gonna start to see the discrepancies; the thought is going to meander. The authors of this paper didn’t mention this, they just kind of did their experiment and said, “Hey, look how successful it is.”
So it looks convincing, they can make these impressive articles. But here’s the main reason, at the end of the day, why GPT-3 hasn’t been so transformative as far as fake news and misinformation and all this stuff is concerned. It’s because fake news is mostly garbage. It’s poorly written, it’s low quality, it’s so cheap and fast to crank out, you could just pay your 16-year-old nephew to just crank out a bunch of fake news articles in minutes.
It’s not so much that math helped me see this. It’s just that somehow, the main thing we’re trying to do in mathematics is to be skeptical. So you have to question these things and be a little skeptical.