
Apple’s dangerous path

By Lucas Matney

Hello friends, and welcome back to Week in Review.

Last week, we dove into the truly bizarre machinations of the NFT market. This week, we’re talking about something that’s a little bit more impactful on the current state of the web — Apple’s NeuralHash kerfuffle.

If you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny


the big thing

In the past month, Apple did something it generally has done an exceptional job avoiding — the company made what seemed to be an entirely unforced error.

In early August — seemingly out of nowhere** — the company announced that by the end of the year it would roll out a technology called NeuralHash that would actively scan the libraries of all iCloud Photos users, seeking out image hashes that matched known images of child sexual abuse material (CSAM). For obvious reasons, the on-device scanning could not be opted out of.

This announcement was not coordinated with other major consumer tech giants; Apple pushed forward on it alone.

Researchers and advocacy groups had almost uniformly negative feedback for the effort, raising concerns that it could create new channels of abuse for actors like governments to detect on-device information that they regarded as objectionable. As my colleague Zach noted in a recent story, “The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.”

(The announcement also reportedly generated some controversy inside of Apple.)

The issue — of course — wasn’t that Apple was looking for ways to prevent the proliferation of CSAM while making as few device security concessions as possible. The issue was that Apple was unilaterally making a massive choice that would affect billions of customers (while likely pushing competitors towards similar solutions), and was doing so without external public input about possible ramifications or necessary safeguards.

Long story short, over the past month researchers discovered Apple’s NeuralHash wasn’t as airtight as hoped, and the company announced Friday that it was delaying the rollout “to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”

Having spent several years in the tech media, I will say that the only reason to release news on a Friday morning ahead of a long weekend is to ensure that the announcement is read and seen by as few people as possible, and it’s clear why they’d want that. It’s a major embarrassment for Apple, and as with any delayed rollout like this, it’s a sign that their internal teams weren’t adequately prepared and lacked the ideological diversity to gauge the scope of the issue that they were tackling. This isn’t really a dig at Apple’s team building this so much as it’s a dig at Apple trying to solve a problem like this inside the Apple Park vacuum while adhering to its annual iOS release schedule.

Image Credits: Bryce Durbin / TechCrunch

Apple is increasingly looking to make privacy a key selling point for the iOS ecosystem, and as a result of this productization, it has pushed development of privacy-centric features towards the same secrecy its surface-level design changes command. In June, Apple announced iCloud+ and raised some eyebrows when it shared that certain new privacy-centric features would only be available to iPhone users who paid for additional subscription services.

You obviously can’t tap public opinion for every product update, but perhaps wide-ranging and trail-blazing security and privacy features should be treated a bit differently than the average product update. Apple’s lack of engagement with research and advocacy groups on NeuralHash was pretty egregious and certainly raises some questions about whether the company fully respects how the choices it makes for iOS affect the broader internet.

Delaying the feature’s rollout is a good thing, but let’s all hope they take that time to reflect more broadly as well.

** Though the announcement was a surprise to many, Apple’s development of this feature wasn’t coming completely out of nowhere. Those at the top of Apple likely felt that the winds of global tech regulation might be shifting towards outright bans of some methods of encryption in some of its biggest markets.

Back in October of 2020, then United States AG Bill Barr joined representatives from the UK, New Zealand, Australia, Canada, India and Japan in signing a letter raising major concerns about how implementations of encryption tech posed “significant challenges to public safety, including to highly vulnerable members of our societies like sexually exploited children.” The letter effectively called on tech industry companies to get creative in how they tackled this problem.


other things

Here are the TechCrunch news stories that especially caught my eye this week:

LinkedIn kills Stories
You may be shocked to hear that LinkedIn even had a Stories-like product on their platform, but if you did already know that they were testing Stories, you likely won’t be so surprised to hear that the test didn’t pan out too well. The company announced this week that they’ll be suspending the feature at the end of the month. RIP.

FAA grounds Virgin Galactic over questions about Branson flight
While all appeared to go swimmingly for Richard Branson’s trip to space last month, the FAA has some questions regarding why the flight seemed to unexpectedly veer so far off the cleared route. The FAA is preventing the company from further launches until they find out what the deal is.

Apple buys a classical music streaming service
While Spotify makes news every month or two for spending a massive amount acquiring a popular podcast, Apple seems to have eyes on a different market for Apple Music, announcing this week that they’re bringing the classical music streaming service Primephonic onto the Apple Music team.

TikTok parent company buys a VR startup
It isn’t a huge secret that ByteDance and Facebook have been trying to copy each other’s success at times, but many probably weren’t expecting TikTok’s parent company to wander into the virtual reality game. The Chinese company bought the startup Pico, which makes consumer VR headsets for China and enterprise VR products for North American customers.

Twitter tests an anti-abuse ‘Safety Mode’
The same features that make Twitter an incredibly cool product for some users can also make the experience awful for others, a realization that Twitter has seemingly been very slow to make. Its latest solution is more individual user controls, which Twitter is testing out with a new “Safety Mode” that pairs algorithmic intelligence with new user inputs.


extra things

Some of my favorite reads from our Extra Crunch subscription service this week:

Our favorite startups from YC’s Demo Day, Part 1 
“Y Combinator kicked off its fourth-ever virtual Demo Day today, revealing the first half of its nearly 400-company batch. The presentation, YC’s biggest yet, offers a snapshot into where innovation is heading, from not-so-simple seaweed to a Clearco for creators….”

…Part 2
“…Yesterday, the TechCrunch team covered the first half of this batch, as well as the startups with one-minute pitches that stood out to us. We even podcasted about it! Today, we’re doing it all over again. Here’s our full list of all startups that presented on the record today, and below, you’ll find our votes for the best Y Combinator pitches of Day Two. The ones that, as people who sift through a few hundred pitches a day, made us go ‘oh wait, what’s this?’”

All the reasons why you should launch a credit card
“… if your company somehow hasn’t yet found its way to launch a debit or credit card, we have good news: It’s easier than ever to do so and there’s actual money to be made. Just know that if you do, you’ve got plenty of competition and that actual customer usage will probably depend on how sticky your service is and how valuable the rewards are that you offer to your most active users….”


Thanks for reading, and again, if you’re reading this on the TechCrunch site, you can get this in your inbox from the newsletter page, and follow my tweets @lucasmtny

Lucas Matney

UK now expects compliance with children’s privacy design code

By Natasha Lomas

In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.

The age appropriate design code came into force on September 2 last year; however, the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance to give organizations time to adapt their services.

But from today it expects the standards of the code to be met.

Services where the code applies can include connected toys and games and edtech but also online retail and for-profit online services such as social media and video sharing platforms which have a strong pull for minors.

Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy hostile defaults).

The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.

Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.

The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.

The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.

The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.

“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”

It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronics Communications Regulation].”

In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”

“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”

“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.

The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.

The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.

In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.

In July, Instagram said it would default teens to private accounts — doing so for under 18s in certain countries which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.

A few days later TikTok also said it would add more privacy protections for teens. Though it had also made earlier changes limiting privacy defaults for under 18s.

Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt-in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.

The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.

And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.

The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush, safety-first stance toward users, also with a big focus on kids (and there it’s also being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’ COPPA, for example).

In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”

“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.

And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls with the clear caveat that such tools must “respect the child’s privacy and best interests”).

The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.

Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.

Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further steerage to organizations which are in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will support, with Bonner suggesting it could be actually “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.

Children’s safety online has been a huge focus for UK policymakers in recent years, although the wider (and long in train) Online Safety (née Harms) Bill remains at the draft law stage.

An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.

But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned). 

The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.” 

At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.

For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.

That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.

So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. Which is quite a contradiction vs the data minimization push on the design code.

The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content or pro-suicide postings, or cyberbullying and CSAM.

The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.

Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.

Complying with the ICO’s design standards may therefore actually be the easy bit.

 

LOVE unveils a modern video messaging app with a business model that puts users in control

By Sarah Perez

A London-headquartered startup called LOVE, valued at $17 million following its pre-seed funding, aims to redefine how people stay in touch with close family and friends. The company is launching a messaging app that offers a combination of video calling as well as asynchronous video and audio messaging, in an ad-free, privacy-focused experience with a number of bells and whistles, including artistic filters and real-time transcription and translation features.

But LOVE’s bigger differentiator may not be its product alone, but rather the company’s mission.

LOVE aims for its product direction to be guided by its user base in a democratic fashion, as opposed to having decisions about its future made by an elite few at the top of some corporate hierarchy. In addition, the company’s longer-term goal is ultimately to hand over ownership of the app and its governance to its users, the company says.

These concepts have emerged as part of bigger trends towards a sort of “Web 3.0,” or next phase of internet development, where services are decentralized, user privacy is elevated, data is protected and transactions take place on digital ledgers, like a blockchain, in a more distributed fashion.

LOVE’s founders are proponents of this new model, including serial entrepreneur Samantha Radocchia, who previously founded three companies and was an early advocate for the blockchain as the co-founder of Chronicled, an enterprise blockchain company focused on the pharmaceutical supply chain.

As someone who’s been interested in emerging technology since her days of writing her anthropology thesis on currency exchanges in “Second Life’s” virtual world, she’s now faculty at Singularity University, where she’s given talks about blockchain, AI, Internet of Things, Future of Work, and other topics. She’s also authored an introductory guide to the blockchain with her book “Bitcoin Pizza.”

Co-founder Christopher Schlaeffer, meanwhile, held a number of roles at Deutsche Telekom, including chief product & innovation officer, corporate development officer and chief strategy officer, where he along with Google execs introduced the first mobile phone to run Android. He was also chief digital officer at the telecommunication services company VEON.

The two crossed paths after Schlaeffer had already begun the work of organizing a team to bring LOVE to the public, which includes co-founders Chief Technologist Jim Reeves, also previously of VEON, and Chief Designer Timm Kekeritz, previously an interaction designer at international design firm IDEO in San Francisco, design director at IXDS and founder of design consultancy Raureif in Berlin, among other roles.

Image Credits: LOVE

Radocchia explained that what attracted her to join as CEO was the potential to create a new company that upholds more positive values than what’s often seen today; in fact, the brand name “LOVE” is a reference to this aim. She was also interested in the potential to think through what she describes as “new business models that are not reliant on advertising or harvesting the data of our users.”

To that end, LOVE plans to monetize without any advertising. While the company isn’t ready to explain its business model in full, it would involve users opting in to services through granular permissions and membership, we’re told.

“We believe our users will much rather be willing to pay for services they consciously use and grant permissions to in a given context than have their data used for an advertising model which is simply not transparent,” says Radocchia.

LOVE expects to share more about the model next year.

As for the LOVE app itself, it’s a fairly polished mobile messenger offering an interesting combination of features. Like any other video chat app, you can video call with friends and family, either in one-on-one calls or in groups. Currently, LOVE supports up to five call participants, but expects to expand that as it scales. The app also supports video and audio messaging for asynchronous conversations. There are already tools that offer this sort of functionality on the market, of course — like WhatsApp, with its support for audio messages, or video messenger Marco Polo. But they don’t offer quite the same expanded feature set.

Image Credits: LOVE

For starters, LOVE limits its video messages to 60 seconds, for brevity’s sake. (As anyone who’s used Marco Polo knows, videos can become a bit rambling, which makes it harder to catch up when you’re behind on group chats.) In addition, LOVE allows you to both watch the video content as well as read the real-time transcription of what’s being said — the latter of which comes in handy not only for accessibility’s sake, but also for those times you want to hear someone’s messages but aren’t in a private place to listen or don’t have headphones. Conversations can also be translated into 50 languages.

“A lot of the traditional communication or messenger products are coming from a paradigm that has always been text-based,” explains Radocchia. “We’re approaching it completely differently. So while other platforms have a lot of the features that we do, I think that…the perspective that we’ve approached it has completely flipped it on its head,” she continues. “As opposed to bolting video messages on to a primarily text-based interface, [LOVE is] actually doing it in the opposite way and adding text as a sort of a magically transcribed add-on — and something that you never, hopefully, need to be typing out on your keyboard again,” she adds.

The app’s user interface, meanwhile, has been designed to encourage eye-to-eye contact with the speaker to make conversations feel more natural. It does this by way of design elements where bubbles float around as you’re speaking and the bubble with the current speaker grows to pull your focus away from looking at yourself. The company is also working with the curator of the Serpentine Gallery in London, Hans Ulrich Obrist, to create new filters that aren’t about beautification or gimmicks, but are instead focused on introducing a new form of visual expression that makes people feel more comfortable on camera.

For the time being, this has resulted in a filter that slightly abstracts your appearance, almost in the style of animation or some other form of visual arts.

The app claims to use end-to-end encryption and the automatic deletion of its content after seven days — except for messages you yourself recorded, if you’ve chosen to save them as “memorable moments.”

“One of our commitments is to privacy and the right-to-forget,” says Radocchia. “We don’t want to be or need to be storing any of this information.”

LOVE has been soft-launched on the App Store, where it’s being used by a number of testers, and the company is working to organically grow its user base through an onboarding invite mechanism that asks users to invite at least three people to join. This same onboarding process also carefully explains why LOVE asks for permissions — like using speech recognition to create subtitles.

LOVE says its valuation is around $17 million USD following pre-seed investments from a combination of traditional startup investors and strategic angel investors across a variety of industries, including tech, film, media, TV and financial services. The company will raise a seed round this fall.

The app is currently available on iOS, but an Android version will arrive later in the year. (Note that LOVE does not currently support the iOS 15 beta software, where it has issues with speech transcription and in other areas. That should be resolved next week, following an app update now in the works.)

This Week in Apps: OnlyFans bans sexual content, SharePlay delayed, TikTok questioned over biometric data collection

By Sarah Perez

Welcome back to This Week in Apps, the weekly TechCrunch series that recaps the latest in mobile OS news, mobile applications and the overall app economy.

The app industry continues to grow, with a record 218 billion downloads and $143 billion in global consumer spend in 2020. Consumers last year also spent 3.5 trillion minutes using apps on Android devices alone. And in the U.S., app usage surged ahead of the time spent watching live TV. Currently, the average American watches 3.7 hours of live TV per day, but now spends four hours per day on their mobile devices.

Apps aren’t just a way to pass idle hours — they’re also a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus. In 2020, investors poured $73 billion in capital into mobile companies — a figure that’s up 27% year-over-year.

This Week in Apps offers a way to keep up with this fast-moving industry in one place with the latest from the world of apps, including news, updates, startup fundings, mergers and acquisitions, and suggestions about new apps and games to try, too.

Do you want This Week in Apps in your inbox every Saturday? Sign up here: techcrunch.com/newsletters

Top Stories

OnlyFans to ban sexually explicit content

(Photo Illustration by Jakub Porzycki/NurPhoto via Getty Images)

Creator platform OnlyFans is getting out of the porn business. The company announced this week it will begin to prohibit any “sexually explicit” content starting on October 1, 2021 — a decision it claimed would ensure the long-term sustainability of the platform. The news angered a number of impacted creators who weren’t notified ahead of time and who’ve come to rely on OnlyFans as their main source of income.

However, word is that OnlyFans was struggling to find outside investors, despite its sizable user base, due to the adult content it hosts. Some VC firms are prohibited from investing in adult content businesses, while others may be concerned over other matters — like how NSFW content could have limited interest from advertisers and brand partners. They may have also worried about OnlyFans’ ability to successfully restrict minors from using the app, in light of what appears to be soon-to-come increased regulation for online businesses. Plus, porn companies face a number of other issues, too. They have to continually ensure they’re not hosting illegal content like child sex abuse material, revenge porn or content from sex trafficking victims — the latter of which has led to lawsuits against other large porn companies.

The news followed a big marketing push for OnlyFans’ porn-free (SFW) app, OFTV, which circulated alongside reports that the company was looking to raise funds at a $1 billion+ valuation. OnlyFans may not have technically needed the funding to operate its current business — it handled more than $2 billion in sales in 2020 and keeps 20%. Rather, the company may have seen there’s more opportunity to cater to the “SFW” creator community, now that it has big names like Bella Thorne, Cardi B, Tyga, Tyler Posey, Blac Chyna, Bhad Bhabie and others on board.

U.S. lawmakers demand info on TikTok’s plans for biometric data collection

The TikTok logo is seen on an iPhone 11 Pro max. Image Credits: Nur Photo/Getty Images

U.S. lawmakers are challenging TikTok on its plans to collect biometric data from its users. TechCrunch first reported on TikTok’s updated privacy policy in June, where the company gave itself permission to collect biometric data in the U.S., including users’ “faceprints and voiceprints.” When reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.

Earlier this month, Senators Amy Klobuchar (D-MN) and John Thune (R-SD) sent a letter to TikTok CEO Shou Zi Chew, which said they were “alarmed” by the change, and demanded to know what information TikTok will be collecting and what it plans to do with the data. This wouldn’t be the first time TikTok got in trouble for excessive data collection. Earlier this year, the company paid out $92 million to settle a class-action lawsuit that claimed TikTok had unlawfully collected users’ biometric data and shared it with third parties.

Weekly News

Platforms: Apple

Image Credits: Apple

  • ⭐ Apple told developers that some of the features it announced as coming in iOS 15 won’t be available at launch. This includes one of the highlights of the new OS, SharePlay, a feature that lets people share music, videos and their screen over FaceTime calls. Other features that will come in later releases include Wallet’s support for ID cards, the App Privacy report and others that have yet to make it to beta releases.
  • Apple walked back its controversial Safari changes with the iOS 15 beta 6 update. Apple’s original redesign had shown the address bar at the bottom of the screen, floating atop the page’s content. Now the tab bar will appear below the page’s content, offering access to its usual set of buttons as when it was at the top. Users can also turn off the bottom tab bar now and revert to the old, Single Tab option that puts the address bar back at the top as before.
  • In response to criticism over its new CSAM detection technology, Apple said the version of NeuralHash that was reverse-engineered by a developer, Asuhariet Ygvar, was a generic version, and not the complete version that will roll out later this year.
  • The Verge dug through over 800 documents from the Apple-Epic trial to find the best emails, which included dirt on a number of other companies like Netflix, Hulu, Sony, Google, Nintendo, Valve, Microsoft, Amazon and more. These offered details on things like Netflix’s secret arrangement to pay only 15% of revenue, how Microsoft also quietly offers a way for some companies to bypass its full cut, how Apple initially saw the Amazon Appstore as a threat and more.

Platforms: Google

  • A beta version of the Android Accessibility Suite app (12.0.0) which rolled out with the fourth Android beta release added something called “Camera Switches” to Switch Access, a toolset that lets you interact with your device without using the touchscreen. Camera Switches allows users to navigate their phone and use its features by making face gestures, like a smile, open mouth, raised eyebrows and more.
  • Google announced its Pixel 5a with 5G, the latest A-series Pixel phone, will arrive on August 27, offering IP67 water resistance, long-lasting Adaptive Battery, Pixel’s dual-camera system and more, for $449. The phone makes Google’s default Android experience available at a lower price point than the soon to arrive Pixel 6.
  • An unredacted complaint from the Apple-Epic trial revealed that Google had quietly paid developers hundreds of millions of dollars via a program known as “Project Hug,” (later “Apps and Games Velocity Program”) to keep their games on the Play Store. Epic alleges Google launched the program to keep developers from following its lead by moving their games outside the store.

Augmented Reality

  • Snap on Thursday announced it hired its first VP of Platform Partnerships to lead AR, Konstantinos Papamiltiadis (“KP”). The new exec will lead Snap’s efforts to onboard partners, including individual AR creators building via Lens Studio as well as large companies that incorporate Snapchat’s camera and AR technology (Camera Kit) into their apps. KP will join in September, and report to Ben Schwerin, SVP of Content and Partnerships.

Fintech

  • Crypto exchange Coinbase will enter the Japanese market through a new partnership with Japanese financial giant Mitsubishi UFJ Financial Group (MUFG). The company said it plans to launch other localized versions of its existing global services in the future.

Social

Image Credits: Facebook

  • Facebook launched a “test” of Facebook Reels in the U.S. on iOS and Android. The new feature brings the Reels experience to Facebook, allowing users to create and share short-form video content directly within the News Feed or within Facebook Groups. Instagram Reels creators can also now opt in to have their Reels featured on users’ News Feed. The company is heavily investing in its battle with TikTok, even pledging that some portion of its $1 billion creator fund will go toward Facebook Reels.
  • Twitter’s redesign of its website and app was met with a lot of backlash from users and accessibility experts alike. The company’s choices add more visual contrast between various elements and may have helped those with low vision. But for others, the contrast is causing strain and headaches. Experts believe accessibility isn’t a one-size-fits-all situation, and Twitter should have introduced tools that allowed people to adjust their settings to their own needs.
  • The pro-Trump Twitter alternative Gettr’s lack of moderation has allowed users to share child exploitation images, according to research from the Stanford Internet Observatory’s Cyber Policy Center.
  • Pinterest rolled out a new set of more inclusive search filters that allow people to find styles for different types of hair textures — like coily, curly, wavy, straight, as well as shaved or bald and protective styles. 

Photos

  • Photoshop for iPad gained new image correction tools, including the Healing Brush and Magic Wand, and added support for connecting an iPad to external monitors via HDMI or USB-C. The company also launched a Photoshop Beta program on the desktop.

Messaging

  • WhatsApp is being adopted by the Taliban to spread its message across Afghanistan, despite being on Facebook’s list of banned organizations. The company says it’s proactively removing Taliban content — but that may be difficult to do since WhatsApp’s E2E encryption means it can’t read people’s texts. This week, Facebook shut down a Taliban helpline in Kabul, which allowed civilians to report violence and looting, but some critics said this wasn’t actually helping local Afghans, as the group was now in effect governing the region.
  • WhatsApp is also testing a new feature that will show a large preview when sharing links, which some suspect may launch around the time when the app adds the ability to have the same account running on multiple devices.

Streaming & Entertainment

  • Netflix announced it’s adding spatial audio support on iPhone and iPad on iOS 14, joining other streamers like HBO Max, Disney+ and Peacock that have already pledged to support the new technology. The feature will be available to toggle on and off in the Control Center, when it arrives.
  • Blockchain-powered streaming music service Audius partnered with TikTok to allow artists to upload their songs using TikTok’s new SoundKit in just one click.
  • YouTube’s mobile app added new functionality that allows users to browse a video’s chapters, and jump into the chapter they want directly from the search page.
  • Spotify’s Anchor app now allows users in global markets to record “Music + Talk” podcasts, where users can combine spoken word recordings with any track from Spotify’s library of 70 million songs for a radio DJ-like experience.
  • Podcasters are complaining that Apple’s revamped Podcasts platform is not working well, reports The Verge. Podcasts Connect has been buggy, and sports a confusing interface that has led to serious user errors (like entire shows being archived). And listeners have complained about syncing problems and podcasts they already heard flooding their libraries.

Dating

  • Tinder announced a new feature that will allow users to voluntarily verify their identity on the platform, which will allow the company to cross-reference sex offender registry data. Previously, Tinder would only check this database when a user signed up for a paid subscription with a credit card.

Gaming

Image Source: The Pokémon Company

  • Pokémon Unite will come to iOS and Android on September 22, The Pokémon Company announced during a livestream this week. The strategic battle game first launched on Nintendo Switch in late July.
  • Developer Konami announced a new game, Castlevania: Grimoire of Souls, which will come exclusively to Apple Arcade. The game is described as a “full-fledged side-scrolling action game,” featuring a roster of iconic characters from the classic game series. The company last year released another version of Castlevania on the App Store and Google Play.
  • Dragon Ball Z: Dokkan Battle has now surpassed $3 billion in player spending since its 2015 debut, reported Sensor Tower. The game from Bandai Namco took 20 months to reach the figure after hitting the $2 billion milestone in 2019. The new landmark sees the game joining other top-grossers, including Clash Royale, Lineage M and others.
  • Sensor Tower’s mobile gaming advertising report revealed data on top ad networks in the mobile gaming market, and their market share. It also found puzzle games were among the top advertisers on gaming-focused networks like Chartboost, Unity, IronSource and Vungle. On less game-focused networks, mid-core games were top titles, like Call of Duty: Mobile and Top War. 

Image Credits: Sensor Tower

Health & Fitness

  • Apple is reportedly scaling back HealthHabit, an internal app for Apple employees that allowed them to track fitness goals, talk to clinicians and coaches at AC Wellness (a doctors’ group Apple works with) and manage hypertension. According to Insider, 50 employees had been tasked to work on the project.
  • Samsung launched a new product for Galaxy smartphones in partnership with healthcare nonprofit The Commons Project that allows U.S. users to save a verifiable copy of their vaccination card in the Samsung Pay digital wallet.

Image Credits: Samsung

Government & Policy

  • China cited 43 apps, including Tencent’s WeChat and an e-reader from Alibaba, for illegally transferring user data. The regulator said the apps had transferred users’ location data and contact lists and harassed them with pop-up windows. The apps have until August 25 to make changes before being punished.

Security & Privacy

  • A VICE report reveals a fascinating story about a jailbreaking community member who had served as a double agent by spying for Apple’s security team. Andrey Shumeyko, whose online handles included JVHResearch and YRH04E, would advertise leaked apps, manuals and stolen devices on Twitter and Discord. He would then tell Apple things like which Apple employees were leaking confidential info, which reporters would talk to leakers, who sold stolen iPhone prototypes and more. Shumeyko decided to share his story because he felt Apple took advantage of him and didn’t compensate him for the work.

Funding and M&A

💰 South Korea’s GS Retail Co. Ltd will buy Delivery Hero’s food delivery app Yogiyo in a deal valued at 800 billion won ($685 million USD). Yogiyo is the second-largest food delivery app in South Korea, with a 25% market share.

💰 Gaming platform Roblox acquired a Discord rival, Guilded, which allows users to have text and voice conversations, organize communities around events and calendars and more. Deal terms were not disclosed. Guilded raised $10.2 million in venture funding. Roblox’s stock fell by 7% after the company reported earnings this week, after failing to meet Wall Street expectations.

💰 Travel app Hopper raised $175 million in a Series G round of funding led by GPI Capital, valuing the business at over $3.5 billion. The company raised a similar amount just last year, but is now benefiting from renewed growth in travel following COVID-19 vaccinations and lifting restrictions.

💰 Indian quiz app maker Zupee raised $30 million in a Series B round of funding led by Silicon Valley-based WestCap Group and Tomales Bay Capital. The round values the company at $500 million, up 5x from last year.

💰 Danggeun Market, the publisher of South Korea’s hyperlocal community app Karrot, raised $162 million in a Series D round of funding led by DST Global. The round values the business at $2.7 billion and will be used to help the company launch its own payments platform, Karrot Pay.

💰 Bangalore-based fintech app Smallcase raised $40 million in Series C funding round led by Faering Capital and Premji Invest, with participation from existing investors, as well as Amazon. The Robinhood-like app has over 3 million users who are transacting about $2.5 billion per year.

💰 Social listening app Earbuds raised $3 million in Series A funding led by Ecliptic Capital. Founded by NFL star Jason Fox, the app lets anyone share their favorite playlists, livestream music like a DJ or comment on others’ music picks.

💰 U.S. neobank app One raised $40 million in Series B funding led by Progressive Investment Company (the insurance giant’s investment arm), bringing its total raise to date to $66 million. The app offers all-in-one banking services and budgeting tools aimed at middle-income households who manage their finances on a weekly basis.

Public Markets

📈Indian travel booking app ixigo is looking to raise Rs 1,600 crore in its initial public offering, The Economic Times reported this week.

📉Trading app Robinhood disappointed in its first quarterly earnings as a publicly traded company, when it posted a net loss of $502 million, or $2.16 per share, larger than Wall Street forecasts. This overshadowed its beat on revenue ($565 million versus $521.8 million expected) and its more than doubling of MAUs to 21.3 million in Q2.  Also of note, the company said dogecoin made up 62% of its crypto revenue in Q2.

Downloads

Polycam (update)

Image Credits: Polycam

3D scanning software maker Polycam launched a new 3D capture tool, Photo Mode, that allows iPhone and iPad users to capture professional-quality 3D models with just an iPhone. While the app’s scanner previously required the lidar sensor built into newer devices like the iPhone 12 Pro and iPad Pro models, the new Photo Mode feature uses just an iPhone’s camera. The resulting 3D assets are ready to use in a variety of applications, including 3D art, gaming, AR/VR and e-commerce. Data export is available in over a dozen file formats, including .obj, .gltf, .usdz and others. The app is a free download on the App Store, with in-app purchases available.

Jiobit (update)

Jiobit, the tracking dongle acquired by family safety and communication app Life360, this week partnered with emergency response service Noonlight to offer Jiobit Protect, a premium add-on that offers Jiobit users access to an SOS Mode and Alert Button that work with the Jiobit mobile app. SOS Mode can be triggered by a child’s caregiver when they detect — through notifications from the Jiobit app — that a loved one may be in danger. They can then reach Noonlight’s dispatcher who can facilitate a call to 911 and provide the exact location of the person wearing the Jiobit device, as well as share other details, like allergies or special needs, for example.

Tweets

When your app redesign goes wrong…

Image Credits: Twitter.com

Prominent App Store critic Kosta Eleftheriou shut down his FlickType iOS app this week after too many frustrations with App Review. He cited rejections that incorrectly argued that his app required more access than it did — something he had successfully appealed and overturned years ago. Attempted follow-ups with Apple were ignored, he said. 

Image Credits: Twitter.com

Anyone have app ideas?

Social platforms wrestle with what to do about the Taliban

By Taylor Hatmaker

With the hasty U.S. military withdrawal from Afghanistan underway after two decades occupying the country, social media platforms have a complex new set of policy decisions to make.

The Taliban has been social media-savvy for years, but social media companies will face new questions as the notoriously brutal, repressive group seeks to present itself as Afghanistan’s legitimate governing body to the rest of the world. Given its ubiquity among political leaders and governments, social media will likely play an even more central role for the Taliban as it seeks to cement control and move toward governing.

Facebook has taken some early precautions to protect its users from potential reprisals as the Taliban seizes power. Through Twitter, Facebook’s Nathaniel Gleicher announced a set of new measures the platform rolled out over the last week. The company added a “one-click” way for people in Afghanistan to instantly lock their accounts, hiding posts on their timeline and preventing anyone they aren’t friends with from downloading or sharing their profile picture.

4/ We’ve launched a one-click tool for people in Afghanistan to quickly lock down their account. When their profile is locked, people who aren’t their friends can’t download or share their profile photo or see posts on their timeline. pic.twitter.com/pUANh5uBgn

— Nathaniel Gleicher (@ngleicher) August 19, 2021

Facebook also removed the ability for users to view and search anyone’s friends list for people located in Afghanistan. On Instagram, pop-up alerts will provide Afghanistan-based users with information on how to quickly lock down their accounts.

The Taliban has long been banned on Facebook under the company’s rules against dangerous organizations. “The Taliban is sanctioned as a terrorist organization under US law… This means we remove accounts maintained by or on behalf of the Taliban and prohibit praise, support, and representation of them,” a Facebook spokesperson told the BBC.

The Afghan Taliban is actually not designated as a foreign terrorist organization by the U.S. State Department, but the Taliban operating out of Pakistan has held that designation since 2010. While it doesn’t appear on the list of foreign terrorist organizations, the Afghanistan-based Taliban is defined as a terror group according to economic sanctions that the U.S. put in place after 9/11.

While the Taliban is also banned from Facebook-owned WhatsApp, the platform’s end-to-end encryption makes enforcing those rules on WhatsApp more complex. WhatsApp is ubiquitous in Afghanistan and both the Afghan military and the Taliban have relied on the chat app to communicate in recent years. Though Facebook doesn’t allow the Taliban on its platforms, the group turned to WhatsApp to communicate its plans to seize control to the Afghan people and discourage resistance in what was a shockingly swift and frictionless sprint to power. The Taliban even set up a WhatsApp number as a sort of help line for Afghans to report violence or crime, but Facebook quickly shut down the account.

Earlier this week, Facebook’s VP of content policy Monika Bickert noted that even if the U.S. does ultimately remove the Taliban from its lists of sanctioned terror groups, the platform would reevaluate and make its own decision. “… We would have to do a policy analysis on whether or not they nevertheless violate our dangerous organizations policy,” Bickert said.

Like Facebook, YouTube maintains that the Taliban is banned from its platform. YouTube’s own decision also appears to align with sanctions and could be subject to change if the U.S. approach to the Taliban shifts.

“YouTube complies with all applicable sanctions and trade compliance laws, including relevant U.S. sanctions,” a YouTube spokesperson told TechCrunch. “As such, if we find an account believed to be owned and operated by the Afghan Taliban, we terminate it. Further, our policies prohibit content that incites violence.”

On Twitter, Taliban spokesperson Zabihullah Mujahid has continued to share regular updates about the group’s activities in Kabul. Another Taliban representative, Qari Yousaf Ahmadi, also freely posts on the platform. Unlike Facebook and YouTube, Twitter doesn’t have a blanket ban on the group but will enforce its policies on a post-by-post basis.

If the Taliban expands its social media footprint, other platforms might be facing the same set of decisions. TikTok did not respond to TechCrunch’s request for comment, but previously told NBC that it considers the Taliban a terrorist organization and does not allow content that promotes the group.

The Taliban doesn’t appear to have a foothold beyond the most mainstream social networks, but it’s not hard to imagine the former insurgency turning to alternative platforms to remake its image as the world looks on.

While Twitch declined to comment on what it might do if the group were to use the platform, it does have a relevant policy that takes “off-service conduct” into account when banning users. That policy was designed to address reports of abusive behavior and sexual harassment among Twitch streamers.

The new rules also apply to accounts linked to violent extremism, terrorism, or other serious threats, whether those actions take place on or off Twitch. That definition would likely preclude the Taliban from establishing a presence on the platform, even if the U.S. lifts sanctions or changes its terrorist designations in the future.

Apple’s CSAM detection tech is under fire — again

By Zack Whittaker

Apple has encountered monumental backlash to a new child sexual abuse material (CSAM) detection technology it announced earlier this month. The system, which Apple calls NeuralHash, has yet to be activated for its billion-plus users, but the technology is already facing heat from security researchers who say the algorithm is producing flawed results.

NeuralHash is designed to identify known CSAM on a user’s device without having to possess the image or know its contents. Because a user’s photos stored in iCloud are end-to-end encrypted so that even Apple can’t access the data, NeuralHash instead scans for known CSAM on a user’s device, which Apple claims is more privacy friendly, since it limits the scanning to just photos rather than all of a user’s files, as other companies do.

Apple does this by looking for images on a user’s device whose hash — a string of letters and numbers that can uniquely identify an image — matches a hash provided by child protection organizations like NCMEC. If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for a manual review before the account owner is reported to law enforcement. Apple says the chance of a false positive is about one in one trillion accounts.
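
To make the threshold mechanics concrete, here is a minimal, purely illustrative sketch in Python. It is not Apple’s NeuralHash implementation; the helper names and the known-hash set are hypothetical, and it only shows the idea described above of flagging an account for review once the number of matching hashes reaches 30.

```python
# Illustrative sketch only; not Apple's NeuralHash. Names and data are hypothetical.
REVIEW_THRESHOLD = 30  # Apple's stated number of matches before manual review

def count_matches(device_image_hashes, known_csam_hashes):
    """Count how many on-device image hashes appear in the set of known hashes."""
    return sum(1 for h in device_image_hashes if h in known_csam_hashes)

def should_flag_for_review(device_image_hashes, known_csam_hashes):
    """Flag an account for manual review only once the match count reaches the threshold."""
    return count_matches(device_image_hashes, known_csam_hashes) >= REVIEW_THRESHOLD
```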

But security experts and privacy advocates have expressed concern that the system could be abused by highly resourced actors, like governments, to implicate innocent victims or to manipulate the system to detect other materials that authoritarian nation states find objectionable. NCMEC called critics the “screeching voices of the minority,” according to a leaked memo distributed internally to Apple staff.

Last night, Asuhariet Ygvar reverse-engineered Apple’s NeuralHash into a Python script and published the code to GitHub, allowing anyone to test the technology regardless of whether they have an Apple device. In a Reddit post, Ygvar said NeuralHash “already exists” in iOS 14.3 as obfuscated code, but that he was able to reconstruct the technology to help other security researchers understand the algorithm better before it’s rolled out to iOS and macOS devices later this year.

It didn’t take long before others tinkered with the published code and soon came the first reported case of a “hash collision,” which in NeuralHash’s case is where two entirely different images produce the same hash. Cory Cornelius, a well-known research scientist at Intel Labs, discovered the hash collision. Ygvar confirmed the collision a short time later.

Hash collisions can be a death knell to systems that rely on cryptography to keep them secure. Over the years several well-known cryptographic hashing algorithms, like MD5 and SHA-1, were retired after collision attacks rendered them ineffective.
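
For readers who want to see why collisions matter, here is a tiny sketch using Python’s standard hashlib module. It is not NeuralHash (which is a perceptual hash rather than a cryptographic one); it only illustrates the assumption such systems rely on, that equal digests mean identical content, which is exactly what a collision breaks.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest as a hex string; matching systems compare these strings."""
    return hashlib.sha256(data).hexdigest()

# A matching system treats equal digests as proof of identical content.
print(digest(b"photo-bytes-1") == digest(b"photo-bytes-2"))  # False for different inputs

# A hash collision is precisely the case where two *different* inputs
# produce the same digest, making the comparison above wrongly return True.
```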

Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a couple of hours.”

When reached, an Apple spokesperson declined to comment on the record. But in a background call where reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision and argued that the protections it puts in place — such as a manual review of photos before they are reported to law enforcement — are designed to prevent abuses. Apple also said that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year.

It’s not just civil liberties groups and security experts that are expressing concern about the technology. A senior lawmaker in the German parliament sent a letter to Apple chief executive Tim Cook this week saying that the company is walking down a “dangerous path” and urged Apple not to implement the system.

Facebook is bringing end-to-end encryption to Messenger calls and Instagram DMs

By Carly Page

Facebook has extended the option of using end-to-end encryption for Messenger voice calls and video calls.

End-to-end encryption (E2EE) — a security feature that prevents third parties from eavesdropping on calls and chats — has been available for text conversations on Facebook’s flagship messaging service since 2016. Although the company has faced pressure from governments to roll back its end-to-end encryption plans, Facebook is now extending this protection to both voice and video calls on Messenger, which means that “nobody else, including Facebook, can see or listen to what’s sent or said.”

“End-to-end encryption is already widely used by apps like WhatsApp to keep personal conversations safe from hackers and criminals,” Ruth Kricheli, director of product management for Messenger, said in a blog post on Friday. “It’s becoming the industry standard and works like a lock and key, where just you and the people in the chat or call have access to the conversation.”
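
As a rough illustration of the lock-and-key principle Kricheli describes, here is a minimal sketch using the PyNaCl library. It is not Facebook’s implementation; it only shows that when each participant holds a key pair, only the intended recipient can decrypt a message, while any server relaying the ciphertext cannot.

```python
# Minimal E2EE sketch using PyNaCl (pip install pynacl); illustration only, not Messenger's code.
from nacl.public import PrivateKey, Box

# Each participant generates a key pair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello from Alice")

# Only Bob, holding his private key, can decrypt; a relay server only ever sees ciphertext.
assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"hello from Alice"
```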

Facebook has some other E2EE features in the works, too. It’s planning to start public tests of end-to-end encryption for group chats and calls in Messenger in the coming weeks and is also planning a limited test of E2EE for Instagram direct messages. Those involved in the trial will be able to opt-in to end-to-end encrypted messages and calls for one-on-one conversations carried out on the photo-sharing platform.

Beyond encryption, the social networking giant is also updating its expiring messages feature, which is similar to the ephemeral messages feature available on Facebook-owned WhatsApp. It’s now offering more options for people in the chat to choose the amount of time before all new messages disappear, from as few as five seconds to as long as 24 hours.

“People expect their messaging apps to be secure and private, and with these new features, we’re giving them more control over how private they want their calls and chats to be,” Kricheli added.

News of Facebook ramping up its E2EE rollout plans comes just days after the company changed its privacy settings — again.

Signal now lets you choose disappearing messages by default for new chats

By Taylor Hatmaker

The encrypted chat app Signal is adding a few new options for users looking to lock down their messages. The app will now allow anyone to turn on a default timer for disappearing messages, automatically applying the settings to any newly initiated conversations.

Signal’s disappearing messages option deletes chats for both the sender and receiver after a set amount of time passes. Previously, you had to toggle the option on and select an interval for each individual conversation, which made it easy to overlook the extra privacy feature if you had a lot of chats going at once.

Shout out to everyone who's been asking for this for a while.

Now you can set a default disappearing message timer in Signal. All new groups you create or new conversations you initiate will be preconfigured with it. https://t.co/QGWv6DTx6V

— Moxie Marlinspike (@moxie) August 10, 2021

Signal is also adding more options for how long disappearing messages stick around before evaporating. The app’s users can now select an interval up to four weeks and as short as 30 seconds. You can even lower that to a single second in the app’s custom time options.

On any chat app, it’s important to remember that disappearing messages vanish from the user interface, but that doesn’t mean they’re gone for good. Anything you share online can live on indefinitely via screenshots or through someone taking a photo of an app’s screen with another device.

Signal wants its users to keep this in mind, noting that the disappearing message options are best for saving storage space and keeping conversation history to a minimum, just in case. “This is not for situations where your contact is your adversary,” the company wrote in a blog post.

The app remains one of the most popular end-to-end encrypted messaging options to date, and earlier this year even managed to absorb some WhatsApp users who grew skittish over data-sharing policy changes at Facebook.

The privacy-minded messaging app is very well regarded for its strong feature set and the company’s independence, though Signal remains relatively small compared to Facebook’s own end-to-end encrypted WhatsApp, which the company acquired in 2014. As of December 2020, Signal boasted around 20 million monthly active users, while WhatsApp hit 2 billion users early last year.

WhatsApp Has a Secure Fix for One of Its Biggest Drawbacks

By Lily Hay Newman
Starting with a beta that launches today, you’ll no longer have to route all your messages through your smartphone.

WhatsApp’s Fight With India Has Global Implications

By Varsha Bansal
The country’s “traceability” requirement would undermine the privacy of the encrypted messaging app’s users far beyond its borders.