The FBI has warned that the Chinese government is using both in-person and digital techniques to intimidate, silence and harass U.S.-based Uyghur Muslims.
The Chinese government has long been accused of human rights abuses over its treatment of the Uyghur population and other mostly Muslim ethnic groups in China’s Xinjiang region. More than a million Uyghurs have been detained in internment camps, according to a United Nations human rights committee, and many other Uyghurs have been targeted and hacked by state-backed cyberattacks. China has repeatedly denied the claims.
In recent months, the Chinese government has become increasingly aggressive in its efforts to shut down foreign critics, including those based in the United States and other Western democracies. These efforts have now caught the attention of the FBI.
In an unclassified bulletin, the FBI warned that officials are using transnational repression — a term that refers to foreign government transgression of national borders through physical and digital means to intimidate or silence members of diaspora and exile communities — in an attempt to compel compliance from U.S.-based Uyghurs and other Chinese refugees and dissidents, including Tibetans, Falun Gong members, and Taiwan and Hong Kong activists.
“Threatened consequences for non-compliance routinely include detainment of a U.S.-based person’s family or friends in China, seizure of China-based assets, sustained digital and in-person harassment, Chinese government attempts to force repatriation, computer hacking and digital attacks, and false representation online,” the FBI bulletin warns.
The bulletin was reported by video surveillance news site IPVM.
The FBI highlighted four instances of U.S.-based individuals facing harassment. In one case from June, the Chinese government imprisoned dozens of family members of six U.S.-based Uyghur journalists in retaliation for their continued reporting on China and its repression of Uyghurs for the U.S. government-funded news service Radio Free Asia. The bulletin said that between 2019 and March 2021, Chinese officials used WeChat to call and text a U.S.-based Uyghur to discourage her from publicly discussing Uyghur mistreatment. Members of this person’s family were later detained in Xinjiang detention camps.
“The Chinese government continues to conduct this activity, even as the U.S. government has sanctioned Chinese officials and increased public and diplomatic messaging to counter China’s human rights and democratic abuses in Xinjiang over the past year,” the FBI states. “This transnational repression activity violates US laws and individual rights.”
The FBI has urged U.S. law enforcement personnel, as well as members of the public, to report any suspected incidents of Chinese government harassment.
The Federal Trade Commission has unanimously voted to ban the spyware maker SpyFone and its chief executive Scott Zuckerman from the surveillance industry, the first order of its kind, after the agency accused the company of harvesting mobile data on thousands of people and leaving it on the open internet.
The agency said SpyFone “secretly harvested and shared data on people’s physical movements, phone use, and online activities through a hidden device hack,” allowing the spyware purchaser to “see the device’s live location and view the device user’s emails and video chats.”
SpyFone is one of many so-called “stalkerware” apps that are marketed under the guise of parental control but are often used by spouses to spy on their partners. The spyware works by being surreptitiously installed on someone’s phone, often without their permission, to steal their messages, photos, web browsing history, and real-time location data. The FTC also charged that the spyware maker exposed victims to additional security risks because the spyware runs at the “root” level of the phone, which allows the spyware to access off-limits parts of the device’s operating system. A premium version of the app included a keylogger and “live screen viewing,” the FTC says.
But the FTC said that SpyFone’s “lack of basic security” exposed those victims’ data, because of an unsecured Amazon cloud storage server that was spilling the data its spyware was collecting from more than 2,000 victims’ phones. SpyFone said it partnered with a cybersecurity firm and law enforcement to investigate, but the FTC says it never did.
Practically, the ban means SpyFone and its CEO Zuckerman are barred from “offering, promoting, selling, or advertising any surveillance app, service, or business,” making it harder for the company to operate. But FTC Commissioner Rohit Chopra said in a separate statement that stalkerware makers should also face criminal sanctions under U.S. computer hacking and wiretap laws.
The FTC has also ordered the company to delete all the data it “illegally” collected, and, also for the first time, notify victims that the app had been secretly installed on their devices.
In a statement, the FTC’s consumer protection chief Samuel Levine said: “This case is an important reminder that surveillance-based businesses pose a significant threat to our safety and security.”
The EFF, which two years ago launched the Coalition Against Stalkerware — a coalition of companies that detect, combat and raise awareness of stalkerware — praised the FTC’s order. “With the FTC now turning its focus to this industry, victims of stalkerware can begin to find solace in the fact that regulators are beginning to take their concerns seriously,” said EFF’s Eva Galperin and Bill Budington in a blog post.
This is the FTC’s second order against a stalkerware maker. In 2019, the FTC settled with Retina-X after the company was hacked several times and eventually shut down.
Over the years, several other stalkerware makers were either hacked or inadvertently exposed their own systems, including mSpy, Mobistealth, and Flexispy. Another stalkerware maker, ClevGuard, left thousands of hacked victims’ phone data on an exposed cloud server.
If you or someone you know needs help, the National Domestic Violence Hotline (1-800-799-7233) provides 24/7 free, confidential support to victims of domestic abuse and violence. If you are in an emergency situation, call 911.
Did you receive a notification and want to tell your story? You can contact this reporter on Signal and WhatsApp at +1 646-755-8849 or by email at email@example.com.
It’s been a long time coming but Facebook is finally feeling some heat from Europe’s much trumpeted data protection regime: Ireland’s Data Protection Commission (DPC) has just announced a €225 million (~$267M) fine for WhatsApp.
The Facebook-owned messaging app has been under investigation by the Irish DPC, its lead data supervisor in the European Union, since December 2018 — several months after the first complaints were fired at WhatsApp over how it processes user data under Europe’s General Data Protection Regulation (GDPR), once the regulation began being applied in May 2018.
Despite receiving a number of specific complaints about WhatsApp, the investigation undertaken by the DPC that’s been decided today was what’s known as an “own volition” enquiry — meaning the regulator selected the parameters of the investigation itself, choosing to fix on an audit of WhatsApp’s ‘transparency’ obligations.
A key principle of the GDPR is that entities which are processing people’s data must be clear, open and honest with those people about how their information will be used.
The DPC’s decision today (which runs to a full 266 pages) concludes that WhatsApp failed to live up to the standard required by the GDPR.
Its enquiry considered whether or not WhatsApp fulfils transparency obligations to both users and non-users of its service (WhatsApp may, for example, upload the phone numbers of non-users if a user agrees to it ingesting their phone book which contains other people’s personal data); as well as looking at the transparency the platform offers over its sharing of data with its parent entity Facebook (a highly controversial issue at the time the privacy U-turn was announced back in 2016, although it predated GDPR being applied).
In sum, the DPC found a range of transparency infringements by WhatsApp — spanning articles 5(1)(a); 12, 13 and 14 of the GDPR.
In addition to issuing a sizeable financial penalty, it has ordered WhatsApp to take a number of actions to improve the level of transparency it offers users and non-users — giving the tech giant a three-month deadline for making all the ordered changes.
In a statement responding to the DPC’s decision, WhatsApp disputed the findings and dubbed the penalty “entirely disproportionate” — as well as confirming it will appeal, writing:
“WhatsApp is committed to providing a secure and private service. We have worked to ensure the information we provide is transparent and comprehensive and will continue to do so. We disagree with the decision today regarding the transparency we provided to people in 2018 and the penalties are entirely disproportionate. We will appeal this decision.”
It’s worth emphasizing that the scope of the DPC enquiry which has finally been decided today was limited to only looking at WhatsApp’s transparency obligations.
The regulator was explicitly not looking into wider complaints — which have also been raised against Facebook’s data-mining empire for well over three years — about the legal basis WhatsApp claims for processing people’s information in the first place.
So the DPC will continue to face criticism over both the pace and approach of its GDPR enforcement.
…system to add years until this fine will actually be paid – but at least it's a start… 10k cases per year to go!
— Max Schrems (@maxschrems) September 2, 2021
Indeed, prior to today, Ireland’s regulator had only issued one decision in a major cross-border case addressing ‘Big Tech’ — against Twitter when, back in December, it knuckle-tapped the social network over a historical security breach with a fine of $550k.
WhatsApp’s first GDPR penalty is, by contrast, considerably larger — reflecting what EU regulators (plural) evidently consider to be a far more serious infringement of the GDPR.
Transparency is a key principle of the regulation. And while a security breach may indicate sloppy practice, systematic opacity towards people whose data your adtech empire relies upon to turn a fat profit looks rather more intentional; indeed, it’s arguably the whole business model.
And — at least in Europe — such companies are going to find themselves being forced to be up front about what they’re doing with people’s data.
The WhatsApp decision will rekindle the debate about whether the GDPR is working effectively where it counts most: Against the most powerful companies in the world, who are also of course Internet companies.
Under the EU’s flagship data protection regulation, decisions on cross-border cases require agreement from all affected regulators across the 27 Member States. The GDPR’s “one-stop-shop” mechanism seeks to streamline the regulatory burden for cross-border businesses by funnelling complaints and investigations via a lead regulator (typically where a company has its main legal establishment in the EU) — but objections can be raised to that lead supervisory authority’s conclusions (and any proposed sanctions), as has happened here in the WhatsApp case.
Ireland originally proposed a far more low-ball penalty of up to €50M for WhatsApp. However other EU regulators objected to the draft decision on a number of fronts — and the European Data Protection Board (EDPB) ultimately had to step in and take a binding decision (issued this summer) to settle the various disputes.
Through that (admittedly rather painful) joint working, the DPC was required to increase the size of the fine issued to WhatsApp — mirroring what happened with its draft Twitter decision, where the DPC had also suggested an even smaller penalty in the first instance.
While there is a clear time cost in settling disputes between the EU’s smorgasbord of data protection agencies — the DPC submitted its draft WhatsApp decision to the other DPAs for review back in December, so it has taken well over half a year to hash out all the disputes over WhatsApp’s lossy hashing and so forth — the fact that ‘corrections’ are being made to its decisions, and that conclusions can land (if not jointly agreed, then at least via a consensus pushed through by the EDPB), is a sign that the process, while slow and creaky, is working.
Even so, Ireland’s data watchdog will continue to face criticism for its outsized role in handling GDPR complaints and investigations. Some accuse the DPC of essentially cherry-picking which issues to examine in detail (by its choice and framing of cases) and which to elide entirely (by the issues it doesn’t open an enquiry into, or the complaints it simply drops or ignores) — with its loudest critics arguing it’s therefore still a major bottleneck on effective enforcement of data protection rights across the EU. The associated conclusion of that critique is that tech giants like Facebook are still getting a pretty free pass to violate Europe’s privacy rules.
But while it’s true that a $267M penalty is still the equivalent of a parking ticket for Facebook, orders to change how such adtech giants are able to process people’s information have the potential to be a far more significant correction on problematic business models. Again, though, time will be needed to tell.
In a statement on the WhatsApp decision today, noyb — the privacy advocacy group founded by long-time European privacy campaigner Max Schrems — said: “We welcome the first decision by the Irish regulator. However, the DPC gets about ten thousand complaints per year since 2018 and this is the first major fine. The DPC also proposed an initial €50M fine and was forced by the other European data protection authorities to move towards €225M, which is still only 0.08% of the turnover of the Facebook Group. The GDPR foresees fines of up to 4% of the turnover. This shows how the DPC is still extremely dysfunctional.”
Schrems also noted that he and noyb still have a number of pending cases before the DPC — including on WhatsApp.
In further remarks, Schrems and noyb said: “WhatsApp will surely appeal the decision. In the Irish court system this means that years will pass before any fine is actually paid. In our cases we often had the feeling that the DPC is more concerned with headlines than with actually doing the hard groundwork. It will be very interesting to see if the DPC will actually defend this decision fully, as it was basically forced to make this decision by its European counterparts. I can imagine that the DPC will simply not put many resources on the case or ‘settle’ with WhatsApp in Ireland. We will monitor this case closely to ensure that the DPC is actually following through with this decision.”
In the UK, a 12-month grace period for compliance with a design code aimed at protecting children online expires today — meaning app makers offering digital services in the market which are “likely” to be accessed by children (defined in this context as users under 18 years old) are expected to comply with a set of standards intended to safeguard kids from being tracked and profiled.
The age appropriate design code came into force on September 2 last year however the UK’s data protection watchdog, the ICO, allowed the maximum grace period for hitting compliance to give organizations time to adapt their services.
But from today it expects the standards of the code to be met.
Services where the code applies can include connected toys and games and edtech but also online retail and for-profit online services such as social media and video sharing platforms which have a strong pull for minors.
Among the code’s stipulations are that a level of ‘high privacy’ should be applied to settings by default if the user is (or is suspected to be) a child — including specific provisions that geolocation and profiling should be off by default (unless there’s a compelling justification for such privacy hostile defaults).
The code also instructs app makers to provide parental controls while also providing the child with age-appropriate information about such tools — warning against parental tracking tools that could be used to silently/invisibly monitor a child without them being made aware of the active tracking.
Another standard takes aim at dark pattern design — with a warning to app makers against using “nudge techniques” to push children to provide “unnecessary personal data or weaken or turn off their privacy protections”.
The full code contains 15 standards but is not itself baked into legislation — rather it’s a set of design recommendations the ICO wants app makers to follow.
The regulatory stick to make them do so is that the watchdog is explicitly linking compliance with its children’s privacy standards to passing muster with wider data protection requirements that are baked into UK law.
The risk for apps that ignore the standards is thus that they draw the attention of the watchdog — either through a complaint or proactive investigation — with the potential of a wider ICO audit delving into their whole approach to privacy and data protection.
“We will monitor conformance to this code through a series of proactive audits, will consider complaints, and take appropriate action to enforce the underlying data protection standards, subject to applicable law and in line with our Regulatory Action Policy,” the ICO writes in guidance on its website. “To ensure proportionate and effective regulation we will target our most significant powers, focusing on organisations and individuals suspected of repeated or wilful misconduct or serious failure to comply with the law.”
It goes on to warn it would view a lack of compliance with the kids’ privacy code as a potential black mark against (enforceable) UK data protection laws, adding: “If you do not follow this code, you may find it difficult to demonstrate that your processing is fair and complies with the GDPR [General Data Protection Regulation] or PECR [Privacy and Electronic Communications Regulations].”
In a blog post last week, Stephen Bonner, the ICO’s executive director of regulatory futures and innovation, also warned app makers: “We will be proactive in requiring social media platforms, video and music streaming sites and the gaming industry to tell us how their services are designed in line with the code. We will identify areas where we may need to provide support or, should the circumstances require, we have powers to investigate or audit organisations.”
“We have identified that currently, some of the biggest risks come from social media platforms, video and music streaming sites and video gaming platforms,” he went on. “In these sectors, children’s personal data is being used and shared, to bombard them with content and personalised service features. This may include inappropriate adverts; unsolicited messages and friend requests; and privacy-eroding nudges urging children to stay online. We’re concerned with a number of harms that could be created as a consequence of this data use, which are physical, emotional and psychological and financial.”
“Children’s rights must be respected and we expect organisations to prove that children’s best interests are a primary concern. The code gives clarity on how organisations can use children’s data in line with the law, and we want to see organisations committed to protecting children through the development of designs and services in accordance with the code,” Bonner added.
The ICO’s enforcement powers — at least on paper — are fairly extensive, with GDPR, for example, giving it the ability to fine infringers up to £17.5M or 4% of their annual worldwide turnover, whichever is higher.
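That “whichever is higher” cap is simply the maximum of two figures. As a minimal sketch of the arithmetic (the turnover amounts below are hypothetical, purely for illustration, not drawn from any ICO decision):

```python
def max_uk_gdpr_fine(annual_worldwide_turnover_gbp: float) -> float:
    """Upper-tier UK GDPR cap: the higher of a £17.5M flat ceiling
    or 4% of annual worldwide turnover."""
    flat_cap = 17_500_000                                 # £17.5M flat ceiling
    turnover_cap = 0.04 * annual_worldwide_turnover_gbp   # 4% of turnover
    return max(flat_cap, turnover_cap)

# Hypothetical smaller firm (£100M turnover): the flat cap dominates.
small_firm_cap = max_uk_gdpr_fine(100_000_000)
# Hypothetical large platform (£70B turnover): 4% of turnover dominates.
large_firm_cap = max_uk_gdpr_fine(70_000_000_000)
```

In practice the £17.5M flat figure is the binding ceiling for most app makers; only a company with more than £437.5M in annual worldwide turnover (17.5M ÷ 0.04) crosses into the 4% regime.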
The watchdog can also issue orders banning data processing or otherwise requiring changes to services it deems non-compliant. So apps that choose to flout the children’s design code risk setting themselves up for regulatory bumps or worse.
In recent months there have been signs some major platforms have been paying mind to the ICO’s compliance deadline — with Instagram, YouTube and TikTok all announcing changes to how they handle minors’ data and account settings ahead of the September 2 date.
In July, Instagram said it would default teens to private accounts — doing so for under 18s in certain countries, which the platform confirmed to us includes the UK — among a number of other child-safety focused tweaks. Then in August, Google announced similar changes for accounts on its video sharing platform, YouTube.
A few days later TikTok also said it would add more privacy protections for teens. Though it had also made earlier changes limiting privacy defaults for under 18s.
Apple also recently got itself into hot water with the digital rights community following the announcement of child safety-focused features — including a child sexual abuse material (CSAM) detection tool which scans photo uploads to iCloud; and an opt in parental safety feature that lets iCloud Family account users turn on alerts related to the viewing of explicit images by minors using its Messages app.
The unifying theme underpinning all these mainstream platform product tweaks is clearly ‘child protection’.
And while there’s been growing attention in the US to online child safety and the nefarious ways in which some apps exploit kids’ data — as well as a number of open probes in Europe (such as this Commission investigation of TikTok, acting on complaints) — the UK may be having an outsized impact here given its concerted push to pioneer age-focused design standards.
The code also combines with incoming UK legislation which is set to apply a ‘duty of care’ on platforms to take a broad-brush safety-first stance toward users, also with a big focus on kids (and there it’s also being broadly targeted to cover all children, rather than just applying to kids under 13 as with the US’ COPPA, for example).
In the blog post ahead of the compliance deadline expiring, the ICO’s Bonner sought to take credit for what he described as “significant changes” made in recent months by platforms like Facebook, Google, Instagram and TikTok, writing: “As the first-of-its kind, it’s also having an influence globally. Members of the US Senate and Congress have called on major US tech and gaming companies to voluntarily adopt the standards in the ICO’s code for children in America.”
“The Data Protection Commission in Ireland is preparing to introduce the Children’s Fundamentals to protect children online, which links closely to the code and follows similar core principles,” he also noted.
And there are other examples in the EU: France’s data watchdog, the CNIL, looks to have been inspired by the ICO’s approach — issuing its own set of child-protection focused recommendations this June (which also, for example, encourage app makers to add parental controls with the clear caveat that such tools must “respect the child’s privacy and best interests”).
The UK’s focus on online child safety is not just making waves overseas but sparking growth in a domestic compliance services industry.
Last month, for example, the ICO announced the first clutch of GDPR certification scheme criteria — including two schemes which focus on the age appropriate design code. Expect plenty more.
Bonner’s blog post also notes that the watchdog will formally set out its position on age assurance this autumn — so it will be providing further steerage to organizations which are in scope of the code on how to tackle that tricky piece, although it’s still not clear how hard a requirement the ICO will support, with Bonner suggesting it could involve actually “verifying ages or age estimation”. Watch that space. Whatever the recommendations are, age assurance services are set to spring up with compliance-focused sales pitches.
An earlier attempt by UK lawmakers to bring in mandatory age checks to prevent kids from accessing adult content websites — dating back to 2017’s Digital Economy Act — was dropped in 2019 after widespread criticism that it would be both unworkable and a massive privacy risk for adult users of porn.
But the government did not drop its determination to find a way to regulate online services in the name of child safety. And online age verification checks look set to be — if not a blanket, hardened requirement for all digital services — increasingly brought in by the backdoor, through a sort of ‘recommended feature’ creep (as the ORG has warned).
The current recommendation in the age appropriate design code is that app makers “take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users”, suggesting they: “Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.”
At the same time, the government’s broader push on online safety risks conflicting with some of the laudable aims of the ICO’s non-legally binding children’s privacy design code.
For instance, while the code includes the (welcome) suggestion that digital services gather as little information about children as possible, in an announcement earlier this summer UK lawmakers put out guidance for social media platforms and messaging services — ahead of the planned Online Safety legislation — that recommends they prevent children from being able to use end-to-end encryption.
That’s right; the government’s advice to data-mining platforms — which it suggests will help prepare them for requirements in the incoming legislation — is not to use ‘gold standard’ security and privacy (e2e encryption) for kids.
So the official UK government messaging to app makers appears to be that, in short order, the law will require commercial services to access more of kids’ information, not less — in the name of keeping them ‘safe’. Which is quite a contradiction vs the data minimization push on the design code.
The risk is that a tightening spotlight on kids’ privacy ends up being fuzzed and complicated by ill-thought-through policies that push platforms to monitor kids to demonstrate ‘protection’ from a smorgasbord of online harms — be it adult content, pro-suicide postings, cyberbullying or CSAM.
The law looks set to encourage platforms to ‘show their workings’ to prove compliance — which risks resulting in ever closer tracking of children’s activity, retention of data — and maybe risk profiling and age verification checks (that could even end up being applied to all users; think sledgehammer to crack a nut). In short, a privacy dystopia.
Such mixed messages and disjointed policymaking seem set to pile increasingly confusing — and even conflicting — requirements on digital services operating in the UK, making tech businesses legally responsible for divining clarity amid the policy mess — with the simultaneous risk of huge fines if they get the balance wrong.
Complying with the ICO’s design standards may therefore actually be the easy bit.
Apple’s plan to digitize your wallet is slowly taking shape. What started with boarding passes and venue tickets later became credit cards, subway tickets, and student IDs. Next on Apple’s list to digitize are driver’s licenses and state IDs, which it plans to support in its iOS 15 update expected out later this year.
But to get there it needs help from state governments, since it’s the states that issue driver’s licenses and other forms of state identification, and every state issues IDs differently. Apple said today it has so far secured two states, Arizona and Georgia, to support digital driver’s licenses and state IDs.
Connecticut, Iowa, Kentucky, Maryland, Oklahoma, and Utah are expected to follow, but a timeline for rolling out wasn’t given.
Apple said in June that it would begin supporting digital licenses and IDs, and that the TSA would be the first agency to begin accepting a digital license from an iPhone at several airports, since only a state ID is required for traveling by air domestically within the United States. The TSA will allow you to present your digital wallet by tapping it on an identity reader. Apple says the feature is secure and doesn’t require handing over or unlocking your phone.
The digital license and ID data is stored on your iPhone but a driver’s license must be verified by the participating state. That has to happen at scale and speed to support millions of drivers and travelers while preventing fake IDs from making it through.
The goal of digitizing licenses and IDs is convenience, rather than fixing a problem. But the move hasn’t exactly drawn confidence from privacy experts, who bemoan Apple’s lack of transparency about how it built this technology and what it ultimately gets out of it.
Apple still has not said much about how the digital ID technology works, or what data the state obtains as part of the process to enroll a digital license. Apple is working on a new security verification feature that takes selfies to validate the user. That’s not to say these systems are inherently problematic, but there are privacy questions that Apple will have to address down the line.
But the fragmented picture of digital licenses and IDs across the U.S. isn’t likely to get less murky overnight, even after Apple enters the picture. A recent public records request by MuckRock showed Apple was in contact with some states as early as 2019 about bringing digital licenses and IDs to iPhones, including California and Illinois, yet neither state has been announced by Apple today.
TikTok is expanding its in-app parental controls feature, Family Pairing, with educational resources designed to help parents better support their teenage users, the company announced this morning. The pairing feature, which launched to global users last year, allows parents of teens aged 13 and older to connect their accounts with the child’s so the parent can set controls related to screen time, who the teen can direct message, and more. But the company heard from teens that they also want their voices to be heard when it comes to parents’ involvement in their digital life.
To create the new educational content, TikTok partnered with the online safety nonprofit, Internet Matters. The organization developed a set of resources in collaboration with teens that aim to offer parents tips about navigating the TikTok landscape and teenage social media usage in general.
Teens said they want parents to understand the rules they’re setting when they use features like Family Pairing and they want them to be open to having discussions about the time teens spend online. And while teens don’t mind when parents set boundaries, they also want to feel they’ve earned some level of trust from the adults in their life.
The older teens get, the more autonomy they want to have on their own device and social networks, as well. They may even tell mom or dad that they don’t want them to follow them on a given platform.
This doesn’t necessarily mean the teen is up to no good, the new resources explain to parents. The teens just want to feel like they can hang out with their friends online without being so closely monitored. This has become an important part of the online experience today, in the pandemic era, when many younger people are spending more time at home instead of socializing with friends in real life or participating in other in-person group activities.
Teens said they also want to be able to come to parents when something goes wrong, without fearing that they’ll be harshly punished or that the parent will panic about the situation. The teens know there’ll be consequences if they break the rules, but they want parents to work through other tough situations with them and devise solutions together, not just react in anger.
All of this sounds like straightforward, common-sense advice, but parents on TikTok have varying degrees of comfort with their teens’ digital lives and use of social networks, so it makes sense to include some basic guidelines that explain what teens want and feel. That said, the parents who are technically savvy enough to enable a parental control feature like Family Pairing may already be clued into best practices.
Image Credits: TikTok
In addition, this sort of teen-focused privacy and safety content is also designed to help TikTok better establish itself as a platform working to protect its younger users — an increasingly necessary stance in light of the potential regulation that big tech has been trying to get ahead of, as of late. TikTok, for instance, announced in August it would roll out more privacy protections for younger teens aimed at making the app safer. Facebook, Google and YouTube have done the same.
TikTok says parents or guardians who have already linked their account to a teen’s account via the Family Pairing feature will receive a notification prompting them to find out more about the teens’ suggestions and how to approach conversations about digital literacy and online safety. Parents who sign up and enable Family Pairing for the first time will also be guided to the resources.
Google is infamous for spinning up products and killing them off, often in very short order. It’s an annoying enough habit when it’s stuff like messaging apps and games. But the tech giant’s ambitions stretch into many domains that touch human lives these days. Including, most directly, healthcare. And — it turns out — so does Google’s tendency to kill off products that its PR has previously touted as ‘life saving’.
To wit: Following a recent reconfiguration of Google’s health efforts — reported earlier by Business Insider — the tech giant confirmed to TechCrunch that it is decommissioning its clinician support app, Streams.
The app, which Google Health PR bills as a “mobile medical device”, was developed back in 2015 by DeepMind, an AI division of Google — and has been used by the UK’s National Health Service in the years since, with a number of Trusts inking deals with DeepMind Health to roll out Streams to their clinicians.
At the time of writing, one NHS Trust — London’s Royal Free — is still using the app in its hospitals.
But, presumably, not for too much longer since Google is in the process of taking Streams out back to be shot and tossed into its deadpool — alongside the likes of its ill-fated social network, Google+, and Internet balloon company Loon, to name just two of a frankly endless list of now defunct Alphabet/Google products.
Other NHS Trusts we contacted which had previously rolled out Streams told us they have already stopped using the app.
University College London NHS Trust confirmed to TechCrunch that it severed ties with Google Health earlier this year.
“Our agreement with Google Health (initially DeepMind) came to an end in March 2021 as originally planned. Google Health deleted all the data it held at the end of the [Streams] project,” a UCL NHS Trust spokesperson told TechCrunch.
Imperial College Healthcare NHS Trust also told us it stopped using Streams this summer (in July) — and said patient data is in the process of being deleted.
“Following the decommissioning of Streams at the Trust earlier this summer, data that has been processed by Google Health to provide the service to the Trust will be deleted and the agreement has been terminated,” a spokesperson said.
“As per the data sharing agreement, any patient data that has been processed by Google Health to provide the service will be deleted. The deletion process is started once the agreement has been terminated,” they added, saying the contractual timeframe for Google deleting patient data is six months.
Another Trust, Taunton & Somerset, also confirmed its involvement with Streams had already ended.
The Streams deals DeepMind inked with NHS Trusts were for five years so these contracts were likely approaching the end of their terms, anyway.
Contract extensions would have had to be agreed by both parties. And Google’s decision to decommission Streams may be factoring in a lack of enthusiasm from involved Trusts to continue using the software — although if that’s the case it may, in turn, be a reflection of Trusts’ perceptions of Google’s weak commitment to the project.
Neither side is saying much publicly.
But as far as we’re aware the Royal Free is the only NHS Trust still using the clinician support app as Google prepares to cut off Streams’ life support.
The Streams story has plenty of wrinkles, to put it politely.
For one thing, despite being developed by Google’s AI division — and despite DeepMind founder Mustafa Suleyman saying the goal for the project was to find ways to integrate AI into Streams so the app could generate predictive healthcare alerts — it doesn’t involve any artificial intelligence.
An algorithm in Streams alerts doctors to the risk of a patient developing acute kidney injury but relies on an existing AKI (acute kidney injury) algorithm developed by the NHS. So Streams essentially digitized and mobilized existing practice.
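For illustration only, here is a loose Python sketch of the kind of rule-based check such an alert involves, based on the KDIGO-style creatinine-ratio staging that the NHS AKI algorithm uses. The thresholds are simplified and the function name is ours, not anything from Streams or the actual NHS specification:

```python
def aki_stage(current_creatinine: float, baseline_creatinine: float) -> int:
    """Return a KDIGO-style acute kidney injury stage (0 = no alert)
    from the ratio of a patient's current serum creatinine to their
    baseline value. Simplified illustration only: the real NHS
    algorithm also handles absolute rises, missing baselines and
    time windows."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3  # stage 3: creatinine at least tripled
    if ratio >= 2.0:
        return 2  # stage 2: creatinine at least doubled
    if ratio >= 1.5:
        return 1  # stage 1: creatinine up 50% or more
    return 0      # no alert
```

The point being that the "intelligence" in Streams was this sort of fixed threshold logic, not a learned model.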
As a result, it always looked odd that an AI division of an adtech giant would be so interested in building, provisioning and supporting clinician support software over the long term. But then — as it panned out — neither DeepMind nor Google were in it for the long haul at the patient’s bedside.
DeepMind and the NHS Trust it worked with to develop Streams (the aforementioned Royal Free) started out with wider ambitions for their partnership — as detailed in an early 2016 memo we reported on, which set out a five year plan to bring AI to healthcare. Plus, as we noted above, Suleyman kept up the push for years — writing later in 2019 that: “Streams doesn’t use artificial intelligence at the moment, but the team now intends to find ways to safely integrate predictive AI models into Streams in order to provide clinicians with intelligent insights into patient deterioration.”
A key misstep for the project emerged in 2017 — through press reporting of a data scandal, as details of the full scope of the Royal Free-DeepMind data-sharing partnership were published by New Scientist (which used a freedom of information request to obtain contracts the pair had not made public).
The UK’s data protection watchdog went on to find that the Royal Free had not had a valid legal basis when it passed information on millions of patients to DeepMind during the development phase of Streams.
Which perhaps explains DeepMind’s eventually cooling ardour for a project it had initially thought — with the help of a willing NHS partner — would provide it with free and easy access to a rich supply of patient data for it to train up healthcare AIs which it would then be, seemingly, perfectly positioned to sell back into the self same service in future years. Price tbc.
No one involved in that thought had properly studied the detail of UK healthcare data regulation, clearly.
Or — most importantly — bothered to consider fundamental patient expectations about their private information.
So it was not actually surprising when, in 2018, DeepMind announced that it was stepping away from Streams — handing the app (and all its data) to Google Health — Google’s internal health-focused division — which went on to complete its takeover of DeepMind Health in 2019. (Although it was still shocking, as we opined at the time.)
It was Google Health that Suleyman suggested would be carrying forward the work to bake AI into Streams, writing at the time of the takeover that: “The combined experience, infrastructure and expertise of DeepMind Health teams alongside Google’s will help us continue to develop mobile tools that can support more clinicians, address critical patient safety issues and could, we hope, save thousands of lives globally.”
A particular irony attached to the Google Health takeover bit of the Streams saga is the fact that DeepMind had, when under fire over its intentions toward patient data, claimed people’s medical information would never be touched by its adtech parent.
Until, of course, it went on to hand the whole project off to Google — and then lauded the transfer as great news for clinicians and patients!
Google’s takeover of Streams meant NHS Trusts that wanted to continue using the app had to ink new contracts directly with Google Health. And all those who had rolled out the app did so. It’s not like they had much choice if they did want to continue.
Again, jump forward a couple of years and it’s Google Health now suddenly facing a major reorg — with Streams in the frame for the chop as part of Google’s perpetually reconfiguring project priorities.
It is quite the ignominious ending to an already infamous project.
DeepMind’s involvement with the NHS had previously been seized upon by the UK government — with former health secretary, Matt Hancock, trumpeting an AI research partnership between the company and Moorfields Eye Hospital as an exemplar of the kind of data-driven innovation he suggested would transform healthcare service provision in the UK.
Luckily for Hancock he didn’t pick Streams as his example of great “healthtech” innovation. (Moorfields confirmed to us that its research-focused partnership with Google Health is continuing.)
The hard lesson here appears to be don’t bet the nation’s health on an adtech giant that plays fast and loose with people’s data and doesn’t think twice about pulling the plug on digital medical devices as internal politics dictate another chair-shuffling reorg.
Patient data privacy advocacy group, MedConfidential — a key force in warning over the scope of the Royal Free’s DeepMind data-sharing deal — urged Google to ditch the spin and come clean about the Streams cock-up, once and for all.
“Streams is the Windows Vista of Google — a legacy it hopes to forget,” MedConfidential’s Sam Smith told us. “The NHS relies on trustworthy suppliers, but companies that move on after breaking things create legacy problems for the NHS, as we saw with WannaCry. Google should admit the decision, delete the data, and learn that experimenting on patients is regulated for a reason.”
Despite the Information Commissioner’s Office’s 2017 finding that the Royal Free’s original data-sharing deal with DeepMind was improper, it’s notable that the London Trust stuck with Streams — continuing to pass data to DeepMind.
The original patient data-set that was shared with DeepMind without a valid legal basis was never ordered to be deleted. Nor — presumably — has it since been deleted. Hence the call for Google to delete the data now.
Ironically the improperly acquired data should (in theory) finally get deleted — once contractual timeframes for any final back-up purges elapse — but only because it’s Google itself planning to switch off Streams.
The Royal Free confirmed to us that it is still using Streams, even as Google spins the dial on its commercial priorities for the umpteenth time and decides it’s not interested in this particular bit of clinician support, after all.
We put a number of questions to the Trust — including about the deletion of patient data — none of which it responded to.
Instead, two days later, it sent us this one-line statement which raises plenty more questions — saying only that: “The Streams app has not been decommissioned for the Royal Free London and our clinicians continue to use it for the benefit of patients in our hospitals.”
It is not clear how long the Trust will be able to use an app Google is decommissioning. Nor how wise that might be for patient safety — such as if the app won’t get necessary security updates, for example.
We’ve also asked Google how long it will continue to support the Royal Free’s usage — and when it plans to finally switch off the service. As well as which internal group will be responsible for any SLA requests coming from the Royal Free as the Trust continues to use software Google Health is decommissioning — and will update this report with any response. (Earlier a Google spokeswoman told us the Royal Free would continue to use Streams for the ‘near future’ — but she did not offer a specific end date.)
In press reports this month on the Google Health reorg — covering an internal memo first obtained by Business Insider — teams working on various Google health projects were reported to be being split up to other areas, including some set to report into Google’s search and AI teams.
So which Google group will take over responsibility for the handling of the SLA with the Royal Free, as a result of the Google Health reshuffle, is an interesting question.
In earlier comments, Google’s spokeswoman told us the new structure for its reconfigured health efforts — which are still being badged ‘Google Health’ — will encompass all its work in health and wellness, including Fitbit, as well as AI health research, Google Cloud and more.
On Streams specifically, she said the app hasn’t made the cut because when Google assimilated DeepMind Health it decided to focus its efforts on another digital offering for clinicians — called Care Studio — which it’s currently piloting with two US health systems (namely: Ascension & Beth Israel Deaconess Medical Center).
And anyone who’s ever tried to use a Google messaging app will surely have strong feelings of déjà vu on reading that…
DeepMind’s co-founder, meanwhile, appears to have remained blissfully ignorant of Google’s intentions to ditch Streams in favor of Care Studio — tweeting back in 2019 as Google completed the takeover of DeepMind Health that he had been “proud to be part of this journey”, and also touting “huge progress delivered already, and so much more to come for this incredible team”.
In the end, Streams isn’t being ‘supercharged’ (or levelled up, to use current faddish political parlance) with AI — as his 2019 blog post had envisaged — Google is simply taking it out of service. Like it did with Reader or Allo or Tango or Google Play Music… well, the list goes on.
Suleyman’s own story contains some wrinkles, too.
He is no longer at DeepMind but has himself been ‘folded into’ Google — joining as a VP of artificial intelligence policy, after initially being placed on an extended leave of absence from DeepMind.
In January, allegations that he had bullied staff were reported by the WSJ. And then, earlier this month, Business Insider expanded on that — reporting follow up allegations that there had been confidential settlements between DeepMind and former employees who had worked under Suleyman and complained about his conduct (although DeepMind denied any knowledge of such settlements).
In a statement to Business Insider, Suleyman apologized for his past behavior — and said that in 2019 he had “accepted feedback that, as a co-founder at DeepMind, I drove people too hard and at times my management style was not constructive”, adding that he had taken time out to start working with a coach and that that process had helped him “reflect, grow and learn personally and professionally”.
We asked Google if Suleyman would like to comment on the demise of Streams — and on his employer’s decision to kill the project — given his high hopes for the project and all the years of work he put into the health push. But the company did not engage with that request.
We also offered Suleyman the chance to comment directly. We’ll update this story if he responds.
The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.
Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.
He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal, Edwards publicly announced that he was deleting his account with the social network — accusing Facebook of not complying with the country’s privacy laws.
An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.
If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.
Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.
But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.
For months now, ministers have been eyeing how to rework the UK’s current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn’t saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.
Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.
A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.
The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.
Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.
Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.
“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.
The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world class privacy standards and replace them with worse protections for data.
If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.
It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.
We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens’ privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.
Data protection experts are already warning of a regulatory stooge.
The Telegraph, meanwhile, suggests Edwards is seen by the government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.
In a particularly eyebrow-raising detail, the newspaper goes on to report that the government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.
All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.
In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”
The lurking iceberg for the government is of course that if it wades in and rips up a carefully balanced, gold standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its/the market’s choosing — it’s setting the UK up for a post-Brexit future of domestic data misuse scandals.
You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the private sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…
UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.
Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.
The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.
This is because its current data adequacy deal with the bloc — which allows EU citizens’ data to continue flowing freely to the UK — is precariously placed: It was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR.
So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy.
Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”. Moreover, the adequacy deal is also the first with a baked in sunset clause — meaning it will automatically expire in four years.
So even if the Commission avoids taking proactive action over slipping privacy standards in the UK there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.
The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.
Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, and the Dubai International Finance Centre and Colombia.
Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.
“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.
As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).
So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.
Everyone hates cookie banners, sure, but that’s a case for strengthening not weakening people’s privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns so that Internet users don’t constantly have to affirm they want their information protected. Instead the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on people’s data.
China’s top legislative body has passed a sweeping new data privacy law: The Personal Information Protection Law (PIPL) is set to take effect on November 1.
It was proposed last year — signalling an intent by China’s communist leaders to crack down on unscrupulous data collection in the commercial sphere by putting legal restrictions on user data collection.
The new law requires app makers to offer users options over how their information is or isn’t used, such as the ability not to be targeted for marketing purposes or to have marketing based on personal characteristics, according to Xinhua.
It also places requirements on data processors to obtain consent from individuals in order to be able to process sensitive types of data such as biometrics, medical and health data, financial information and location data.
Apps that illegally process user data risk having their services suspended or terminated.
Any Western companies doing business in China which involves processing citizens’ personal data must grapple with the law’s extraterritorial jurisdiction — meaning foreign companies will face regulatory requirements such as the need to assign local representatives and report to supervisory agencies in China.
On the surface, core elements of China’s new data protection regime mirror requirements long baked into European Union law — where the General Data Protection Regulation (GDPR) provides citizens with a comprehensive set of rights wrapping their personal data, including putting a similarly high bar on consent to process what EU law refers to as ‘special category data’, such as health data (although elsewhere there are differences in what personal information is considered the most sensitive by the respective data laws).
The GDPR is also extraterritorial in scope.
But the context in which China’s data protection law will operate is also of course very different — not least given how the Chinese state uses a vast data-gathering operation to keep tabs on and police the behavior of its own citizens.
Any limits the PIPL might place on Chinese government departments’ ability to collect data on citizens — state organs were covered in draft versions of the law — may be little more than window-dressing to provide a foil for continued data collection by the Chinese Communist Party (CCP)’s state security apparatus while further consolidating its centralized control over government.
It also remains to be seen how the CCP could use the new data protection rules to further regulate — some might say tame — the power of the domestic tech sector.
It has been cracking down on the sector in a number of ways, using regulatory changes as leverage over giants like Tencent. Earlier this month, for example, Beijing filed a civil suit against the tech giant — citing claims that its messaging app WeChat’s ‘youth mode’ does not comply with laws protecting minors.
The PIPL provides the Chinese regime with plenty more attack surface to put strictures on local tech companies.
Nor is it wasting any time in attacking data-mining practices that are commonplace among Western tech giants but now look likely to face growing friction if deployed by companies within China.
Reuters notes that the National People’s Congress marked the passage of the law today by publishing an op-ed from state media outlet People’s Court Daily which lauds the legislation and calls for entities that use algorithms for “personalized decision making” — such as recommendation engines — to obtain user consent first.
Quoting the op-ed, it writes: “Personalization is the result of a user’s choice, and true personalized recommendations must ensure the user’s freedom to choose, without compulsion. Therefore, users must be given the right to not make use of personalized recommendation functions.”
There is growing concern over algorithmic targeting outside China, too, of course.
In Europe, lawmakers and regulators have been calling for tighter restrictions on behavioral advertising — as the bloc is in the process of negotiating a swathe of new digital regulations that will expand its power to regulate the sector, such as the proposed Digital Markets Act and Digital Services Act.
Regulating the Internet is clearly the new geopolitical battleground as regions compete to shape the future of data flows to suit their respective economic, political and social goals.
For the first time, Google has published the number of geofence warrants it’s historically received from U.S. authorities, providing a rare glimpse into how frequently these controversial warrants are issued.
The figures, published Thursday, reveal that Google has received thousands of geofence warrants each quarter since 2018, and at times accounted for about one-quarter of all U.S. warrants that Google receives. The data shows that the vast majority of geofence warrants are obtained by local and state authorities, with federal law enforcement accounting for just 4% of all geofence warrants served on the technology giant.
According to the data, Google received 982 geofence warrants in 2018, 8,396 in 2019, and 11,554 in 2020. But the figures only provide a small glimpse into the volume of warrants received, and do not break down how often Google pushes back on overly broad requests. A spokesperson for Google would not comment on the record.
Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project (STOP), which led a coalition of dozens of civil rights groups in lobbying for the release of these figures, commended Google for publishing them.
“Geofence warrants are unconstitutionally broad and invasive, and we look forward to the day they are outlawed completely,” said Cahn.
Geofence warrants are also known as “reverse-location” warrants, since they seek to identify people of interest who were in the vicinity of a crime at the time it was committed. Police do this by asking a court to order Google, which stores vast amounts of location data to drive its advertising business, to turn over details of who was in a given geographic area, such as a radius of a few hundred feet, at a certain point in time, to help identify potential suspects.
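To make the mechanics concrete, here is a minimal sketch of the spatial-plus-temporal filter a geofence request implies: a circle of a given radius around a point, bounded by a time window. The record format and function names here are hypothetical, purely for illustration; this says nothing about how Google's systems actually work internally.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(records, center, radius_m, t_start, t_end):
    """Return the device records that fall inside the circle during the window."""
    return [
        rec for rec in records
        if t_start <= rec["ts"] <= t_end
        and haversine_m(rec["lat"], rec["lon"], center[0], center[1]) <= radius_m
    ]
```

The breadth critics object to is visible in the shape of the query itself: it selects every device in the circle, not a named suspect.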
Google has long shied away from providing these figures, in part because geofence warrants are largely thought to be unique to Google. Law enforcement has long known that Google stores vast troves of location data on its users in a database called Sensorvault, first revealed by The New York Times in 2019.
Sensorvault is said to hold detailed location data on “at least hundreds of millions of devices worldwide,” collected from users’ phones when they use an Android device with location data switched on, or Google services like Google Maps and Google Photos, and even Google search results. In 2018, the Associated Press reported that Google could still collect users’ locations even when their location history is “paused.”
But critics have argued that geofence warrants are unconstitutional because they compel Google to turn over data on everyone who happened to be in the same geographic area, not just a suspect.
Worse, these warrants have been known to ensnare entirely innocent people.
TechCrunch reported earlier this year that Minneapolis police used a geofence warrant to identify individuals accused of sparking violence in the wake of the police killing of George Floyd last year. One person on the ground, who was filming and documenting the protests, had his location data requested by police simply for being close to the violence. NBC News reported last year on a Gainesville, Fla. resident whose information was handed over by Google to police investigating a burglary; he was able to prove his innocence thanks to an app on his phone that tracked his fitness activity.
Although the courts have yet to deliberate widely on the legality of geofence warrants, some states are drafting laws to push back against them. New York lawmakers proposed a bill last year that would ban geofence warrants in the state, amid fears that police could use these warrants to target protesters — as happened in Minneapolis.
Cahn, who helped introduce the New York bill last year, said the newly released data will “help spur lawmakers to outlaw the technology.”
“Let’s be clear, the number of geofence warrants should be zero,” he said.
TikTok’s plans to collect biometric identifiers from its users have prompted concern among U.S. lawmakers, who are demanding the company reveal exactly what information it collects and what it plans to do with that data. Sens. Amy Klobuchar (D-MN) and John Thune (R-SD) raised the concerns in a letter to the company.
Klobuchar and Thune’s letter asks TikTok to explicitly explain what constitutes a “faceprint” and “voiceprint”, as well as to explain how this data will be used and how long it will be retained. The senators also quizzed TikTok on whether any data is gathered for users under the age of 18; whether it makes any inferences about its users based on the biometric data it collects; and to provide a list of all third parties that have access to the data.
“The coronavirus pandemic led to an increase in online activity, which has magnified the need to protect consumers’ privacy,” the letter reads. “This is especially true for children and teenagers, who comprise more than 32% of TikTok’s active users and have relied on online applications such as TikTok for entertainment and for interaction with their friends and loved ones.”
TikTok has been given until August 25 to respond to the lawmakers’ questions. A TikTok spokesperson did not immediately comment.
This isn’t the first time TikTok’s data collection practices have come under scrutiny. Earlier this year, the company paid out $92 million to settle a class-action lawsuit claiming it unlawfully collected users’ biometric data and shared it with third parties. This came after the FTC in 2019 slapped TikTok with a $5.7 million fine for violating the Children’s Online Privacy Protection Act (COPPA), which requires apps to receive parental permission before collecting a minor’s data.
The most critical phase in a marketing team’s mix and overall multichannel strategy happens after you press send on an email campaign: the post-send and performance pillars of email marketing.
During this phase, marketers should gather metrics and data to guide insights impacting future emails and entire marketing campaigns. Email metrics can influence ad messaging and social posts and guide the design, content and product marketing teams. When used strategically, these metrics increase email programs’ ROI while raising marketing channel and workflow efficiency and effectiveness.
As one of the most lucrative channels for reaching target audiences — for every dollar invested in email marketing, brands receive $36 in return — email enables brands to reach their core consumer base: email subscribers. These subscribers have opted in to email touch points because they want to hear from the brand, and by applying insights via analytics, marketers can optimize marketing spend and messaging to hit business goals.
Email shapes marketing strategy and enables better overall business success. It’s the lifeblood of an effective multichannel campaign. However, Apple’s Mail Privacy Protection — announced earlier this summer with its iOS 15 update — threatens to eliminate the metrics and data associated with email. Just as they adjusted to the evolution from print to digital, marketers must now pivot to accommodate this disruption to remain competitive — and successful.
According to the Litmus Email Client Market Share, in 2020, Apple iPhone, Apple Mail and Apple iPad accounted for nearly half of all email opens. The loss of these insights will create marketing roadblocks for segmented and personalized touch points. Marketers and businesses must prepare by adjusting email strategy and processes before the update takes effect.
Companies and consumers have talked about privacy quite a bit lately. Companies fearing breaches, reputation damage and potentially lost revenue want to protect consumer data. Consumer awareness of privacy concerns has grown, too.
In a 2021 survey, over half the respondents expressed more concern about online privacy than in 2020. Consumers expect brands to demonstrate trustworthiness before they willingly share sensitive personal information.
Recognizing an increased desire for better privacy control, Apple revealed new privacy protections in its iOS 15 update, including its Mail Privacy Protection. Apple Mail users may hide their IP addresses, locations and additional data from senders, preventing brands from pulling information like open rates and location. Apple said that “Mail Privacy Protection stops senders from using invisible pixels to collect information about the user.”
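As context for how those invisible pixels work, here is a minimal sketch of the kind of 1x1 tracking image a marketing email typically embeds. The URL scheme, query parameters and `tracking_pixel` helper are illustrative assumptions, not any specific vendor's API.

```python
import uuid

def tracking_pixel(campaign_id: str, recipient_id: str, base_url: str) -> str:
    """Build an invisible 1x1 <img> tag of the kind embedded in marketing email.

    When the recipient's mail client fetches the image, the sender's server
    logs an "open", along with the client's IP address and, from it, a rough
    location.
    """
    token = uuid.uuid4().hex  # hypothetical per-send token
    return (
        f'<img src="{base_url}/open?c={campaign_id}&r={recipient_id}&t={token}" '
        'width="1" height="1" style="display:none" alt="">'
    )
```

Mail Privacy Protection blunts this technique by routing and pre-loading remote images through Apple's proxy servers, so the image request no longer reveals the reader's IP address or whether a human actually opened the message.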
What does this update mean for marketers? The potential loss of a critical phase in the marketing mix and multichannel strategy: the post-send and performance pillars of email marketing. Because Apple pre-loads message content, tracking pixels fire whether or not a recipient actually opens an email, so open-rate-specific data disappears and every brand will appear to have a 100% open rate.
Apple has encountered monumental backlash to a new child sexual abuse material (CSAM) detection technology it announced earlier this month. The system, which Apple calls NeuralHash, has yet to be activated for its billion-plus users, but the technology is already facing heat from security researchers who say the algorithm is producing flawed results.
NeuralHash is designed to identify known CSAM on a user’s device without having to possess the image or know its contents. Because a user’s photos stored in iCloud are end-to-end encrypted so that even Apple can’t access the data, NeuralHash instead scans for known CSAM on the user’s device, which Apple claims is more privacy-friendly, since it limits scanning to photos rather than all of a user’s files, as other companies’ cloud scanning does.
Apple does this by computing a hash — a string of letters and numbers that can uniquely identify an image — for images on a user’s device and comparing it against a database of hashes provided by child protection organizations like NCMEC. If NeuralHash finds 30 or more matching hashes, the images are flagged to Apple for a manual review before the account owner is reported to law enforcement. Apple says the chance of a false positive is about one in one trillion accounts.
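The matching flow described above can be sketched roughly as follows. SHA-256 stands in here for Apple's proprietary perceptual NeuralHash, which matches visually similar images rather than exact bytes; the function names and structure are illustrative assumptions, not Apple's implementation.

```python
import hashlib

MATCH_THRESHOLD = 30  # per Apple, accounts are flagged at 30 or more matches

def image_hash(image_bytes: bytes) -> str:
    """Hash an image. A real perceptual hash would tolerate resizing,
    recompression, etc.; SHA-256 is used only to keep the sketch runnable."""
    return hashlib.sha256(image_bytes).hexdigest()

def count_matches(device_images, known_csam_hashes):
    """Count how many on-device images hash into the known-CSAM database."""
    return sum(1 for img in device_images if image_hash(img) in known_csam_hashes)

def should_flag(device_images, known_csam_hashes) -> bool:
    """Flag the account for manual review once the threshold is crossed."""
    return count_matches(device_images, known_csam_hashes) >= MATCH_THRESHOLD
```

The threshold is the safeguard Apple points to: a single chance match is not enough to trigger a review.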
But security experts and privacy advocates have expressed concern that the system could be abused by highly resourced actors, like governments, to implicate innocent people or to manipulate the system to detect other materials that authoritarian nation-states find objectionable. NCMEC called critics the “screeching voices of the minority,” according to a leaked memo distributed internally to Apple staff.
Last night, Asuhariet Ygvar reverse-engineered Apple’s NeuralHash into a Python script and published the code to GitHub, allowing anyone to examine the technology regardless of whether they have an Apple device. In a Reddit post, Ygvar said NeuralHash “already exists” in iOS 14.3 as obfuscated code, but said he was able to reconstruct the technology to help other security researchers better understand the algorithm before it rolls out to iOS and macOS devices later this year.
It didn’t take long before others tinkered with the published code and soon came the first reported case of a “hash collision,” which in NeuralHash’s case is where two entirely different images produce the same hash. Cory Cornelius, a well-known research scientist at Intel Labs, discovered the hash collision. Ygvar confirmed the collision a short time later.
Hash collisions can be a death knell to systems that rely on hashes to uniquely identify data. Over the years, several well-known cryptographic hash algorithms, like MD5 and SHA-1, were retired after collision attacks rendered them ineffective.
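To see concretely what a collision is, consider a deliberately weak toy hash for which collisions are trivial to find; collision attacks against MD5 and SHA-1 achieve the same effect (two different inputs, one digest) against far stronger functions.

```python
def weak_hash(data: bytes) -> int:
    """Deliberately weak toy hash: the sum of byte values mod 256.

    Any reordering of the same bytes produces the same digest, so
    collisions are everywhere. Real hash functions make finding two
    such inputs computationally hard; a collision attack breaks that.
    """
    return sum(data) % 256

a, b = b"ab", b"ba"                  # two different inputs...
assert weak_hash(a) == weak_hash(b)  # ...with the same digest: a collision
```

For an image-matching system, a collision means an innocuous image can be crafted to hash like a flagged one, which is why researchers treated the NeuralHash collision as significant.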
Kenneth White, a cryptography expert and founder of the Open Crypto Audit Project, said in a tweet: “I think some people aren’t grasping that the time between the iOS NeuralHash code being found and [the] first collision was not months or days, but a couple of hours.”
When reached, an Apple spokesperson declined to comment on the record. But in a background call where reporters were not allowed to quote executives directly or by name, Apple downplayed the hash collision and argued that the protections it puts in place — such as a manual review of photos before they are reported to law enforcement — are designed to prevent abuses. Apple also said that the version of NeuralHash that was reverse-engineered is a generic version, and not the complete version that will roll out later this year.
It’s not just civil liberties groups and security experts that are expressing concern about the technology. A senior lawmaker in the German parliament sent a letter to Apple chief executive Tim Cook this week saying that the company is walking down a “dangerous path” and urged Apple not to implement the system.
A group of senators sent new Amazon CEO Andy Jassy a letter Friday pressing the company for more information about how it scans and stores customer palm prints for use in some of its retail stores.
The company rolled out the palm print scanners through a program it calls Amazon One, encouraging people to make contactless payments in its brick-and-mortar stores without the use of a card. Amazon introduced its Amazon One scanners late last year, and they can now be found in Amazon Go convenience and grocery stores, Amazon Books and Amazon four-star stores across the U.S. The scanners are also installed in eight Whole Foods locations in Washington state.
In the new letter, Senators Amy Klobuchar (D-MN), Bill Cassidy (R-LA) and Jon Ossoff (D-GA) press Jassy for details about how Amazon plans to expand its biometric payment system and if the data collected will help the company target ads.
“Amazon’s expansion of biometric data collection through Amazon One raises serious questions about Amazon’s plans for this data and its respect for user privacy, including about how Amazon may use the data for advertising and tracking purposes,” the senators wrote in the letter, embedded below.
The lawmakers also requested information on how many people have enrolled in Amazon One to date, how Amazon will secure the sensitive data and if the company has ever paired the palm prints with facial recognition data it collects elsewhere.
“In contrast with biometric systems like Apple’s Face ID and Touch ID or Samsung Pass, which store biometric information on a user’s device, Amazon One reportedly uploads biometric information to the cloud, raising unique security risks,” the senators wrote. “… Data security is particularly important when it comes to immutable customer data, like palm prints.”
The company controversially introduced a $10 credit for new users who enroll their palm prints in the program, prompting an outcry from privacy advocates who see it as a cheap tactic to coerce people to hand over sensitive personal data.
There’s plenty of reason to be skeptical. Amazon has faced fierce criticism for its other big biometric data project, the AI facial recognition software known as Rekognition, which the company provided to U.S. law enforcement agencies before eventually backtracking with a moratorium on policing applications for the software last year.