
China’s internet regulator takes aim at forced data collection

By Rita Liao

China is a step closer to cracking down on unscrupulous data collection by app developers. This week, the country’s cybersecurity watchdog began seeking comment on the range of user information that apps from instant messengers to ride-hailing services are allowed to collect.

The move follows in the footsteps of a proposed data protection law that was released in October and is currently under review. The comprehensive data privacy law would be a “milestone” if passed and implemented, according to an editorial in China Daily, the Chinese Communist Party’s official mouthpiece. The law is set to restrict data practices not just by private firms but also among government departments.

“Some leaking of personal information has resulted in economic losses for individuals when the information is used to swindle the targeted individual of his or her money,” said the party paper. “With increasingly advanced technology, the collection of personal information has been extended to biological information such as an individual’s face or even genes, which could result in serious consequences if such information is misused.”

Apps in China often force users into surrendering excessive personal information by denying them access when they refuse to consent. The draft rules released this week take aim at the practice by defining the types of data collection that are “legal, proper and necessary.”

According to the draft, “necessary” data are those that ensure the “normal operation of apps’ basic functions.” As long as users have consented to the collection of necessary data, apps must grant them access to those basic functions.

Here are a few examples of what’s considered “necessary” personal data for different types of apps, as translated by China Law Translate.

  • Navigation: location
  • Ride-hailing: the registered user’s real identity (normally in the form of one’s mobile phone number in China) and location information
  • Messaging: the registered user’s real identity and contact list
  • Payment: the registered user’s real identity, the payer/payee’s bank information
  • Online shopping: the registered user’s real identity, payment details, information about the recipient like their name, address and phone number
  • Games: the registered user’s real identity
  • Dating: the registered user’s real identity, and the age, sex and marital status of the person looking for marriage or dating

There are also categories of apps that are required to grant users access without gathering any personal information upfront: live streaming, short video, video/music streaming, news, browsers, photo editors, and app stores.
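To make the structure of the draft concrete, here is a minimal sketch of how its “necessary data” test could be modeled in code. The category names and data types below are paraphrased from the translated list above; the actual draft is more granular than this illustration.

```python
# Minimal model of the draft's "necessary data" test. Categories and data
# types are paraphrased from the China Law Translate rendering above; the
# actual rules are more granular than this sketch.

NECESSARY_DATA = {
    "navigation": {"location"},
    "ride_hailing": {"real_identity", "location"},
    "messaging": {"real_identity", "contact_list"},
    "payment": {"real_identity", "bank_information"},
    "online_shopping": {"real_identity", "payment_details", "recipient_info"},
    "games": {"real_identity"},
    "dating": {"real_identity", "age", "sex", "marital_status"},
    # Categories that must serve users without any upfront personal data:
    "live_streaming": set(), "short_video": set(), "news": set(),
    "browser": set(), "photo_editor": set(), "app_store": set(),
}

def may_deny_basic_service(category: str, declined: set) -> bool:
    """An app may only refuse basic service when the user declines
    data the rules deem 'necessary' for that category."""
    return bool(NECESSARY_DATA.get(category, set()) & declined)

# A navigation app cannot lock a user out for refusing contact-list access...
assert not may_deny_basic_service("navigation", {"contact_list"})
# ...but may do so if the user refuses location.
assert may_deny_basic_service("navigation", {"location"})
```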

It’s worth noting that while the draft provides clear rules for apps to follow, it gives no details on how they will be enforced or how offenders will be punished. For instance, will app stores incorporate the benchmark into their approval process? Or will internet users be the watchdog? It remains to be seen.

Europe will push to work with the US on tech governance, post-Trump

By Natasha Lomas

The European Union said today that it wants to work with US counterparts on a common approach to tech governance — including pushing to standardize rules for applications of technologies like AI and pushing big tech to be more responsible for what their platforms amplify.

EU lawmakers are anticipating rebooted transatlantic relations under the incoming administration of president-elect Joe Biden.

The Commission has published a new EU-US agenda with the aim of encouraging what it bills as “global cooperation — based on our common values, interests and global influence” in a number of areas, from tackling the coronavirus pandemic to addressing climate change and furthering a Western geopolitical agenda.

Trade and tech policy is another major priority for the hoped-for reboot of transatlantic relations, starting with an EU-US Summit in the first half of 2021.

Relations have of course been strained during the Trump era as the sitting US president has threatened the bloc with trade tariffs, berated European nations for not spending enough on defence to fulfil their Nato commitments and heavily implied he’d be a lot happier if the EU didn’t exist at all (including by loudly supporting Brexit).

The Commission agenda conveys a clear message that the bloc’s lawmakers are hopeful of a lot more joint working — toward common goals and interests — once the Biden administration takes office early next year.

Global AI standards?

On the tech front the Commission’s push is for alignment on governance.

“The EU and the US need to join forces as tech-allies to shape technologies, their use and their regulatory environment,” the Commission writes in the agenda. “Using our combined influence, a transatlantic technology space should form the backbone of a wider coalition of like-minded democracies with a shared vision on tech governance and a shared commitment to defend it.”

Among the proposals it’s floating is a “Transatlantic AI Agreement” — which it envisages as setting “a blueprint for regional and global standards aligned with our values”.

While the EU is working on a pan-EU framework to set rules for the use of “high risk” AIs, some US cities and states have already moved to ban the use of specific applications of artificial intelligence — such as facial recognition. So there’s potential to align on some high level principles or standards.

(Or, as the EU puts it: “We need to start acting together on AI — based on our shared belief in a human-centric approach and dealing with issues such as facial recognition.”)

“Our shared values of human dignity, individual rights and democratic principles make us natural partners to harness rapid technological change and face the challenges of rival systems of digital governance. This gives us an unprecedented window of opportunity to set a joint EU-US tech agenda,” the Commission also writes, suggesting there’s a growing convergence of views on tech governance.

Talks on tackling big tech

Here it also sees opportunity for the EU and the US to align on tackling big tech — saying it wants to open discussions on setting rules to tackle the societal and market impacts of platform giants.

“There is a growing consensus on both sides of the Atlantic that online platforms and Big Tech raise issues which threaten our societies and democracies, notably through harmful market behaviours, illegal content or algorithm-fuelled propagation of hate speech and disinformation,” it writes.

“The need for global cooperation on technology goes beyond the hardware or software. It is also about our values, our societies and our democracies,” the Commission adds. “In this spirit, the EU will propose a new transatlantic dialogue on the responsibility of online platforms, which would set the blueprint for other democracies facing the same challenges. We should also work closer together to further strengthen cooperation between competent authorities for antitrust enforcement in digital markets.”

The Commission is on the cusp of unveiling its own blueprint for regulating big tech — with a Digital Services Act and Digital Markets Act due to be presented later this month.

Commissioners have said the legislative packages will set clear conditions on digital players, such as for the handling and reporting of illegal content, as well as setting binding transparency and fairness requirements.

They will also introduce a new regime of ex ante rules for so-called gatekeeper platforms that wield significant market power (aka big tech) — with such players set to be subject to a list of dos and don’ts, which could include bans on certain types of self-preferencing and limits on their use of third party data, with the aim of ensuring a level playing field in the future.

The bloc has also been considering beefing up antitrust powers for intervening in digital markets.

Given how far ahead EU lawmakers are of US counterparts on proposals to regulate big tech, there’s arguably only a small window of opportunity for the latter to influence the shape of EU rules on (mostly US) big tech.

But the Commission evidently takes the view that rebooted relations, post-Trump, present an opportunity for it to influence US policy — by encouraging European-style platform rules to cross the pond.

It’s fond of claiming the EU’s data protection framework (GDPR) has set a global example that’s influenced lawmakers around the world. So its intent now looks to be to double down — and push to export a European approach to regulating big tech back where most of these giants are based (even as the bloc’s other institutions are still debating and amending the EU proposals).

Next-gen mobile security

Another common challenge the document points to is next-gen mobile connectivity. This has been a particular soapbox of Trump’s in recent years, with the ALL-CAPS loving president frequently taking to Twitter to threaten and bully allies into taking a tough line on allowing Chinese vendors as suppliers for next-gen mobile infrastructure, arguing they pose too great a national security risk.

“We are facing common challenges in managing the digital transition of our economies and societies. These include critical infrastructure, such as 5G, 6G or cybersecurity assets, which are essential for our security, sovereignty and prosperity — but also data, technologies and the role of online platforms,” the Commission writes, easing into the issue.

EU lawmakers go on to say they will put forward proposals “for secure 5G infrastructure across the globe and open a dialogue on 6G” — as part of what they hope will be “wider cooperation on digital supply chain security done through objective risk-based assessments”.

Instead of a blanket ban on Huawei as a 5G supplier the Commission opted to endorse a package of “mitigating measures” — via a 5G toolbox — at the start of this year, which includes requirements for carriers to beef up network security and risk profile assessments of suppliers.

So it looks to be hoping the US can be convinced of the value of a joint approach to standardizing these sorts of security assessments — aka ‘no more nasty surprises’ — as a strategy to reduce the shocks and uncertainty that have hit digital supply chains during Trump’s presidency.

Increased cooperation around cybersecurity is another area where the EU says it will be pressing US counterparts — floating the idea of joint EU-US restrictions against attributed attackers from third countries in the future. (A proposal which, should it be taken up, could see coordinated sanctions against Russia, which US and European intelligence agencies have previously identified as running malware attacks targeting COVID-19 vaccine R&D, for example.)

Easing EU-US data flows

A trickier area for the tech side of the Commission’s plan to reboot transatlantic relations is EU-US data flows.

That’s because Europe’s top court torpedoed the Commission’s US adequacy finding this summer — stripping the country of a privileged status of ‘essential equivalence’ in data protection standards.

Without that there’s huge legal uncertainty and risk for US businesses that want to take EU citizens’ data out of the region for processing. And recent guidance from EU regulators on how to lawfully secure data transfers makes it clear that in some instances there simply won’t be any extra measures or contractual caveats which will fix the risk entirely.

The solution may in fact be data localization in the EU. (Something the Commission’s Data Governance Act proposal, unveiled last week, appeared to confirm by allowing for Member States to set conditions for reuse of the most sensitive types of data — such as prohibiting transfers to third countries.)

“We must also openly discuss diverging views on data governance and see how these can be overcome constructively,” the Commission writes on this thorny issue, adding: “The EU and the US should intensify their cooperation at bilateral and multilateral level to promote regulatory convergence and facilitate free data flow with trust on the basis of high standards and safeguards.”

Commissioners have warned before that there’s no quick fix for the EU-US data transfer issue — but a longer term solution would be a convergence of standards in the areas of privacy and data protection.

And, again, that’s an area where US states have been taking action. But the Commission’s agenda pushing for “regulatory convergence” to ease data flows amounts to trying to convince US counterparts of the economic case for reforming Section 702 of FISA…

Digital tax and tech-trade cooperation

Digital tax reform is also inexorably on the EU agenda, since no agreement has been possible under Trump on this stickiest of tech policy issues.

It writes that both the EU and the US should “strongly commit to the timely conclusion of discussions on a global solution within the context of OECD and G20” — saying this is vital to create “a fair and modern economy, which provides market-based rewards for the best innovative ideas”.

“Fair taxation in the digital economy requires innovative solutions on both sides of the Atlantic,” it adds. 

Another proposal the EU is floating is to establish an EU-US Trade and Technology Council — to “jointly maximise opportunities for market-driven transatlantic collaboration, strengthen our technological and industrial leadership and expand bilateral trade and investment”.

It envisages the body focusing on reducing trade barriers; developing compatible standards and regulatory approaches for new technologies; ensuring critical supply chain security; deepening research collaboration and promoting innovation and fair competition — saying there should also be “a new common focus on protecting critical technologies”.

“We need closer cooperation on issues such as investment screening, Intellectual Property rights, forced transfers of technology, and export controls,” it adds.

The Commission announced its own Intellectual Property Action Plan last week, alongside the Data Governance Act proposal — which included support for SMEs to file patents. It also said it will consider whether to reform the framework for filing standards-essential patents, encouraging industry in the meantime to engage in forums aimed at reducing litigation.

Massachusetts lawmakers vote to pass a statewide police ban on facial recognition

By Taylor Hatmaker

Massachusetts lawmakers have voted to pass a new police reform bill that will ban police departments and public agencies from using facial recognition technology across the state.

The bill was passed by both the state’s House and Senate on Tuesday, a day after senior lawmakers announced an agreement that ended months of deadlock.

The police reform bill also bans the use of chokeholds and rubber bullets, limits the use of chemical agents like tear gas, and allows police officers to intervene to prevent colleagues using excessive and unreasonable force. But following objections from police groups, the bill does not remove qualified immunity, the controversial doctrine that shields serving police from legal action for misconduct.

Lawmakers brought the bill to the state legislature in the wake of the killing of George Floyd, an unarmed Black man killed by a white Minneapolis police officer who has since been charged with his murder.

Critics have for years complained that facial recognition technology is flawed, biased and disproportionately misidentifies people and communities of color. But the bill grants police an exception to run facial recognition searches against the state’s driver license database with a warrant. In granting that exception, the state will have to publish annual transparency figures on the number of searches made by officers.

The Massachusetts Senate voted 28-12 to pass, and the House voted 92-67. The bill will now be sent to Massachusetts governor Charlie Baker for his signature.

In the absence of privacy legislation from the federal government, laws curtailing the use of facial recognition are popping up on a state and city level. The patchwork nature of that legislation means that state and city laws have room to experiment, creating an array of blueprints for future laws that can be replicated elsewhere.

Portland, Oregon, passed a broad ban on facial recognition tech this September. The ban, one of the most aggressive in the nation, blocks city bureaus from using the technology and also prohibits private companies from deploying facial recognition systems in public spaces. Months of clashes between protesters and aggressive law enforcement in that city raised the stakes on Portland’s ban.

Earlier bans in Oakland, San Francisco and Boston focused on forbidding their city governments from using the technology but, like Massachusetts, stopped short of limiting its use by private companies. San Francisco’s ban passed in May of last year, making the international tech hub the first major city to ban the use of facial recognition by city agencies and police departments.

At the same time that cities across the U.S. are acting to limit the creep of biometric surveillance, those same systems are spreading at the federal level. In August, Immigration and Customs Enforcement (ICE) signed a contract for access to a facial recognition database created by Clearview AI, a deeply controversial company that scrapes facial images from online sources, including social media sites.

While most activism against facial recognition only pertains to local issues, at least one state law has proven powerful enough to make waves on a national scale. In Illinois, the Biometric Information Privacy Act (BIPA) has ensnared major tech companies including Amazon, Microsoft and Alphabet for training facial recognition systems on Illinois residents without permission.

No Google-Fitbit merger without human rights remedies, says Amnesty to EU

By Natasha Lomas

Human rights NGO Amnesty International has written to the EU’s competition regulator calling for Google’s acquisition of wearable maker Fitbit to be blocked — unless meaningful safeguards can be baked in.

The tech giant announced its intent to splash $2.1 billion to acquire Fitbit a year ago but has yet to gain regulatory approval for the deal in the European Union.

In a letter addressed to the bloc’s competition chief, Margrethe Vestager, Amnesty writes: “The commission must ensure that the merger does not proceed unless the two business enterprises can demonstrate that they have taken adequate account of the human rights risks and implemented strong and meaningful safeguards that prevent and mitigate these risks in the future.”

The letter urges the commission to take heed of an earlier call by a coalition of civil society groups also raising concerns about the merger for “minimum remedies” that regulators must guarantee before any approval.

In a report last year the NGO attacked the business model of Google and Facebook — arguing that the “surveillance giants” enable human rights harm “at a population scale.”

Amnesty warns now that Google is “incentivized to merge and aggregate data across its different platforms” as a consequence of that surveillance-based business model.

“Google’s business model incentivizes the company to continuously seek more data on more people across the online world and into the physical world. The merger with Fitbit is a clear example of this expansionist approach to data extraction, enabling the company to extend its data collection into the health and wearables sector,” it writes. “The sheer scale of the intrusion of Google’s business model into our private lives is an unprecedented interference with our privacy, and in fact has undermined the very essence of privacy.”

We’ve reached out to the commission and Google for a response to Amnesty’s letter. Update: A commission spokesperson confirmed it’s received the letter and said it will reply in due course.

Google’s plan to gobble Fitbit and its health tracking data has been stalled as EU regulators dig into competition concerns. Vestager elected to open an in-depth probe in August, saying she wanted to make sure the deal wouldn’t distort competition by further entrenching Google’s dominance of the online ad market.

The commission has also voiced concerns about the risk of Google locking other wearable device makers out of its Android mobile ecosystem.

However concerns over Google’s plan to gobble up Fitbit range wider than the risk of it getting more market muscle if the deal gets waved through.

Put simply, letting sensitive health data fall into the hands of an advertising giant is a privacy trash fire.

Amnesty International is just the latest rights watcher to call for the merger to be blocked. Privacy campaign groups and the EU’s own data protection advisor have been warning for months against letting the tech giant gobble up sensitive health data.

The commission’s decision to scrutinize the acquisition rather than waving it through with a cursory look has led Google to make a number of concessions in an attempt to get it cleared — including a pledge not to use Fitbit data for ad targeting and to guarantee support for other wearables makers to operate on Android.

In its letter, Amnesty argues that the “safeguards” Google has offered are not enough.

“The company’s past practice around privacy further heighten the need for strict safeguards,” it warns, pointing to examples such as Google combining data from advertising network DoubleClick after it had acquired that business with personal data collected from its other platforms.

“The European Data Protection Board has recognized the risks of the merger, stating that the “combination and accumulation of sensitive personal data” by Google could entail a “high level of risk” to the rights to privacy and data protection,” it adds.

As well as undermining people’s privacy, Google’s use of algorithms fed with personal data to generate profiles of internet users in order to predict their behavior erodes what Amnesty describes as “the critical principle that all people should enjoy equal access to their human rights.”

“This risk is heightened when profiling is deployed in contexts that touch directly on people’s economic, social and cultural rights, such as the right to health where people may suffer unequal treatment based on predictions about their health, and as such must be taken into account in the context of health and fitness data,” it suggests.

“This power of the platforms has not only exacerbated and magnified their rights impacts but has also created a situation in which it is very difficult to hold the companies to account, or for those affected to access an effective remedy,” Amnesty adds, noting that while big tech companies have faced a number of regulatory actions around the world none has so far been able to derail what it calls “the fundamental drivers of the surveillance-based business model.”

So far the commission has stood firm in taking its time to consider the issue in detail.

A series of extensions mean a decision on whether to allow the Google-Fitbit merger may not come until early 2021. Though we understand the bloc’s national competition authorities are meeting to discuss the merger at the start of December so it’s possible a decision could be issued before the end of the year.

Per EU merger law, the commission college takes the final decision — with a requirement to take “utmost account” of the opinion of the member states’ advisory committee (though it’s not legally binding).

So it’s ultimately up to Brussels to determine whether Google-Fitbit gets the green light.

In recent years, competition chief Vestager, who is also EVP for the commission’s digital strategy, has said she favors tighter regulation as a tool for ensuring businesses comply with the EU’s rules, rather than blocking market access or outright bans on certain practices.

She has also voiced opposition to breaking up tech giants, again preferring to advocate for imposing controls on how they can use data as a way to rebalance digital markets.

To date, the commission has never blocked a tech/digital merger (it has in telecoms, where it stepped in in 2016 to block Hutchison’s proposed acquisition of Telefonica UK) though it has had its fingers burnt by big tech’s misleading filings — so has its own reputation to consider above reaching for the usual rubber stamp.

Simultaneously, EU lawmakers are working on a proposal for an ex ante regulation to address competition concerns in digital markets that would put specific rules and obligations on dominant players like Google — again in areas such as data use and data access.

That plan is due to be presented early next month — so it’s another factor that may be adding to delay the commission’s Google-Fitbit decision.

GDPR enforcement must level up to catch big tech, report warns

By Natasha Lomas

A new report by European consumer protection umbrella group Beuc, reflecting on the barriers to effective cross-border enforcement of the EU’s flagship data protection framework, makes awkward reading for the regional lawmakers and regulators as they seek to shape the next decades of digital oversight across the bloc.

Beuc’s members filed a series of complaints against Google’s use of location data in November 2018 — but some two years on from raising privacy concerns there’s been no resolution of the complaints.

Since 2018, legal cases in 🇪🇺, 🇺🇸 &🇦🇺 have been launched against Google in relation to their collection and use of location data. Since then, nothing happened while Google generated $251billion from advertising revenue. pic.twitter.com/tNkUvXrAan

— The Consumer Voice (@beuc) November 26, 2020

The tech giant continues to make billions in ad revenue, including by processing and monetizing internet users’ location data. Its lead data protection supervisor, under GDPR’s one-stop-shop mechanism for dealing with cross-border complaints, Ireland’s Data Protection Commission (DPC), did finally open an investigation in February this year.

But it could still be years before Google faces any regulatory action in Europe related to its location tracking.

This is because Ireland’s DPC has yet to issue any cross-border GDPR decisions, some 2.5 years after the regulation started being applied. (Although, as we reported recently, a case related to a Twitter data breach is inching toward a result in the coming days.)

By contrast, France’s data watchdog, the CNIL, was able to complete a GDPR investigation into the transparency of Google’s data processing far more quickly last year.

This summer French courts also confirmed the $57 million fine it issued, slapping down Google’s appeal.

But the case predated Google coming under the jurisdiction of the DPC. And Ireland’s data regulator has to deal with a disproportionate number of multinational tech companies, given how many have established their EU base in the country.

The DPC has a major backlog of cross-border cases, with more than 20 GDPR probes involving a number of tech companies including Apple, Facebook/WhatsApp and LinkedIn. (Google has also been under investigation in Ireland over its adtech since 2019.)

This week the EU’s internal market commissioner, Thierry Breton, said regional lawmakers are well aware of enforcement “bottlenecks” in the General Data Protection Regulation (GDPR).

He suggested the commission has learned lessons from this friction — claiming it will ensure similar concerns don’t affect the future working of the data reuse proposal he was publicly introducing.

The commission wants to create standard conditions for rights-respecting reuse of industrial data across the EU via a new Data Governance Act (DGA), which proposes oversight mechanisms similar to those used for personal data — including national agencies monitoring compliance and a centralized EU steering body (to be called the European Data Innovation Board, mirroring the European Data Protection Board).

The commission’s ambitious agenda for updating and expanding the EU’s digital rules framework means criticism of GDPR risks taking the shine off the DGA before the ink has dried on the proposal document — putting pressure on lawmakers to find creative ways to unblock GDPR’s enforcement “bottleneck.” (Creative because national agencies are responsible for day-to-day oversight, and member states are responsible for resourcing DPAs.)

In an initial GDPR review this summer, the commission praised the regulation as a “modern and horizontal piece of legislation” and a “global reference point” — claiming it’s served as a point of inspiration for California’s CCPA and other emerging digital privacy frameworks around the world.

But the commission also conceded GDPR enforcement is lacking.

The best answer to this concern “will be a decision from the Irish data protection authority about important cases,” the EU’s justice commissioner, Didier Reynders, said in June.

Five months later European citizens are still waiting.

Beuc’s report — titled “The long and winding road: Two years of the GDPR: A cross-border data protection case from a consumer perspective” — details the procedural obstacles its member organizations have faced in seeking to obtain a decision on the original complaints, which were filed with a variety of DPAs around the EU.

This includes concerns over the Irish DPC making unnecessary “information and admissibility checks”, as well as rejecting complaints brought by an interested organization on the grounds that it lacks a mandate under Irish law, which does not allow for third-party redress (yet the Dutch consumer organization had filed the complaint under Dutch law, which does …).

The report also queries why the DPC chose to open an own-volition inquiry into Google’s location data activities (rather than a complaint-led inquiry) — which Beuc says risks further delaying a decision on the complaints themselves.

It further points out that the DPC’s probe of Google only looks at activity since February 2020 not November 2018 when the complaints were made — meaning there’s a missing chunk of Google’s location data processing that’s not even being investigated yet.

It notes that three of its member organizations involved in the Google complaints had considered applying for a judicial review of the DPC’s decision (NB: others have resorted to that route) — but they decided not to proceed in part because of the significant legal costs it would have entailed.

The report also points out the inherent imbalance of GDPR’s one-stop-shop mechanism shifting the administration of complaints to the location of companies under investigation — arguing they therefore benefit from “easier access to justice” (versus the ordinary consumer faced with undertaking legal proceedings in a different country and (likely) language).

“If the lead authority is in a country with tradition in ‘common law,’ like Ireland, things can become even more complex and costly,” Beuc’s report further notes.

Another issue it raises is the overarching one of rights complaints having to fight what it dubs “a moving target” — given well-resourced tech companies can leverage regulatory delays to (superficially) tweak practices, greasing continued abuse with misleading PR campaigns. (Something Beuc accuses Google of doing.)

DPAs must “adapt their enforcement approach to intervene more rapidly and directly,” it concludes.

“Over two years have passed since the GDPR became applicable, we have now reached a turning point. The GDPR must finally show its strength and become a catalyst for urgently needed changes in business practices,” Beuc goes on in a summary of its recommendations. “Our members experience and that of other civil society organisations, reveals a series of obstacles that significantly hamper the effective application of the GDPR and the correct functioning of its enforcement system.

BEUC recommends to the relevant EU and national authorities to make a comprehensive and joint effort to ensure the swift enforcement of the rules and improve the position of data subjects and their representing organisations, particularly in the framework of cross-border enforcement cases.”

We reached out to the Commission and the Irish DPC with questions about the report. But at the time of writing neither had responded. We’ve also asked Google for comment.

Update: The DPC’s deputy commissioner, Graham Doyle, told us the reason it chose to open a “forward-looking” inquiry into Google’s location practices in early 2020 was it wanted to be able to investigate “in real time” rather than try to go back and replicate how things were.

Doyle also said the location-related Google complaints had been lodged with different DPAs at different times — meaning some complaints took considerably longer to reach Ireland than November 2018, raising questions about the efficiency of the current procedures for European DPAs to send complaints to a lead supervisor.

“The complaints in question were lodged with different Supervisory Authorities on different dates from November 2018,” he said. “The DPC received these complaints in July 2019, following which we engaged with Beuc. We then opened an own-volition inquiry in February 2020 in a manner that will enable us to undertake real-time testing in order to evidence our findings.”

Beuc earlier sent a list of eight recommendations for “efficient” GDPR enforcement to the commission in May.

Update II: A commission spokesperson pointed back to its earlier evaluation of the GDPR this summer, flagging follow-up actions it committed to at that point — such as continuing bilateral exchanges with member states on proper implementation of the regulation.

It also said that it would “continue to use all the tools at its disposal to foster compliance by member states with their obligations” — including, potentially, instigating infringement procedures if necessary.

Additional follow-up actions related to “implementing and complementing” the legal framework that it detailed in the report included supporting “further exchanges of views and national practices between member states on topics that are subject to further specification at national level so as to reduce the level of fragmentation of the single market, such as processing of personal data relating to health and research, or which are subject to balancing with other rights such as the freedom of expression;” and to push for “a consistent application of the data protection framework in relation to new technologies to support innovation and technological developments.” 

The commission also said it would use the GDPR Member States Expert Group to “facilitate discussions and sharing of experience between member states and with the commission,” with a view to improving the regulation’s operation.

In the area of GDPR’s governance system, EU lawmakers committed to continue to monitor the effectiveness and independence of national DPAs, and said they would work to encourage cooperation between regulators (“in particular in fields such as competition, electronic communications, security of network and information systems and consumer policy”), while also supporting the EDPB to assess how procedures related to cross-border cases could be improved.  

Europe sets out the rules of the road for its data reuse plan

By Natasha Lomas

European Union lawmakers have laid out a major legislative proposal today to encourage the reuse of industrial data across the Single Market by creating a standardized framework of trusted tools and techniques to ensure what they describe as “secure and privacy-compliant conditions” for sharing data.

Enabling a network of trusted and neutral data intermediaries, and an oversight regime comprised of national monitoring authorities and a pan-EU coordinating body, are core components of the plan.

The move follows the European Commission’s data strategy announcement in February, when it said it wanted to boost data reuse to support a new generation of data-driven services powered by data-hungry artificial intelligence, as well as encouraging the notion of using “tech for good” by enabling “more data and good quality data” to fuel innovation with a common public good (like better disease diagnostics) and improve public services.

The wider context is that personal data is already regulated in the bloc (under the General Data Protection Regulation, aka GDPR), which restricts reuse, while commercial considerations can limit how industrial data is shared.

The EU’s executive believes harmonized requirements that set technical and/or legal conditions for data reuse are needed to foster legal certainty and trust — delivered via a framework that promises to maintain rights and protections and thus get more data usefully flowing.

The Commission sees major business benefits flowing from the proposed data governance regime. “Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in costs for acquiring, integrating and processing data, from lower barriers to enter markets, and from a reduction in time-to-market for novel products and services,” it writes in a press release.

It has further data-related proposals incoming in 2021, in addition to a package of digital services legislation it’s due to lay out early next month — as part of a wider reboot of industrial strategy which prioritises digitalization and a green new deal.

All legislative components of the strategy will need to gain the backing of the European Council and parliament so there’s a long road ahead for implementing the plan.

Data Governance Act

EU lawmakers often talk in shorthand about the data strategy being intended to encourage the sharing and reuse of “industrial data” — although the Data Governance Act (DGA) unveiled today has a wider remit.

The Commission envisages the framework enabling the sharing of data that’s subject to data protection legislation — which means personal data; where privacy considerations may (currently) restrain reuse — as well as industrial data subject to intellectual property, or which contains trade secrets or other commercially sensitive information (and is thus not typically shared by its creators primarily for commercial reasons). 

In a press conference on the data governance proposals, internal market commissioner Thierry Breton floated the notion of “data altruism” — saying the Commission wants to provide citizens with an organized way to share their own personal data for a common/public good, such as aiding research into rare diseases or helping cities map mobility for purposes like monitoring urban air quality.

“Through personal data spaces, which are novel personal information management tools and services, Europeans will gain more control over their data and decide on a detailed level who will get access to their data and for what purpose,” the Commission writes in a Q&A on the proposal.

It’s planning a public register where entities will be able to register as a “data altruism organisation” — provided they have a not-for-profit character; meet transparency requirements; and implement certain safeguards to “protect the rights and interests of citizens and companies” — with the aim of providing “maximum trust with minimum administrative burden”, as it puts it.

The DGA envisages different tools, techniques and requirements governing how public sector bodies share data versus private companies.

For public sector bodies there may be technical requirements (such as encryption or anonymization) attached to the data itself or further processing limitations (such as requiring it to take place in “dedicated infrastructures operated and supervised by the public sector”), as well as legally binding confidentiality agreements that must be signed by the reuser.

“Whenever data is being transferred to a reuser, mechanisms will be in place that ensure compliance with the GDPR and preserve the commercial confidentiality of the data,” the Commission’s PR says.
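As a rough illustration of how those layered conditions could combine, here is a hypothetical sketch of a reuse-request check. The field names are invented for the example and do not come from the legal text.

```python
# Hypothetical check of the kinds of conditions the DGA attaches to
# public-sector data reuse. Field names are invented for illustration;
# they do not come from the legal text.

from dataclasses import dataclass

@dataclass
class ReuseRequest:
    anonymized_or_encrypted: bool          # technical requirements on the data
    dedicated_public_infrastructure: bool  # supervised processing environment
    confidentiality_agreement_signed: bool # legally binding on the reuser

def reuse_permitted(req: ReuseRequest, highly_sensitive: bool) -> bool:
    # Every reuser must accept binding confidentiality terms.
    if not req.confidentiality_agreement_signed:
        return False
    # Highly sensitive data can carry extra processing limitations.
    if highly_sensitive:
        return req.anonymized_or_encrypted and req.dedicated_public_infrastructure
    return req.anonymized_or_encrypted
```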

To encourage businesses to get on board with pooling their own data sets — for the promise of a collective economic upside via access to bigger volumes of pooled data — the plan is for regulated data intermediaries/marketplaces to provide “neutral” data-sharing services, acting as the “trusted” go-between/repository so data can flow between businesses.

“To ensure this neutrality, the data-sharing intermediary cannot exchange the data for its own interest (e.g. by selling it to another company or using it to develop their own product based on this data) and will have to comply with strict requirements to ensure this neutrality,” the Commission writes on this.

Under the plan, intermediaries’ compliance with data handling requirements would be monitored by public authorities at a national level.

But the Commission is also proposing the creation of a new pan-EU body, called the European Data Innovation Board, that would try to knit together best practices across Member States — in what looks like a mirror of the steering/coordinating role undertaken by the European Data Protection Board (which links up the EU’s patchwork of data protection supervisory authorities).

“These data brokers or intermediaries that will provide for data sharing will do that in a way that your rights are protected and that you have choices,” said EVP Margrethe Vestager, who heads up the bloc’s digital strategy, also speaking at today’s press conference.

“So that you can also have personal data spaces where your data is managed. Because, initially, when you ask people they say well actually we do want to share but we don’t really know how to do it. And this is not only the technicalities — it’s also the legal certainty that’s missing. And this proposal will provide that,” she added.

Data localization requirements — or not?

The commissioners faced a number of questions over the hot button issue of international data transfers.

Breton was asked whether the DGA will include any data localization requirements. He responded by saying — essentially — that the rules will bake in a series of conditions which, depending on the data itself and the intended destination, may mean that storing and processing the data in the EU is the only viable option.

“On data localization — what we do is to set a GDPR-type of approach, through adequacy decisions and standard contractual clauses for only sensitive data through a cascading of conditions to allow the international transfer under conditions and in full respect of the protected nature of the data. That’s really the philosophy behind it,” Breton said. “And of course for highly sensitive data [such as] in the public health domain it is necessary to be able to set further conditions, depending on the sensitivity, otherwise… Member States will not share them.”

“For instance it could be possible to limit the reuse of this data into public secure infrastructures so that companies will come to use the data but not keep them. It could be also about restricting the number of access in third countries, restricting the possibility to further transfer the data and if necessary also prohibiting the transfer to a third country,” he went on, adding that such conditions would be “in full respect” of the EU’s WTO obligations.

In a section of its Q&A that deals with data localization requirements, the Commission similarly dances around the question, writing: “There is no obligation to store and process data in the EU. Nobody will be prohibited from dealing with the partner of their choice. At the same time, the EU must ensure that any access to EU citizen’s personal data and certain sensitive data is in compliance with its values and legislative framework.”

At the presser, Breton also noted that companies that want to gain access to EU data that’s been made available for reuse will need to have legal representation in the region. “This is important of course to ensure the enforceability of the rules we are setting,” he said. “It is very important for us — maybe not for other continents but for us — to be fully compliant.”

The commissioners also faced questions about how the planned data reuse rules would be enforced — given ongoing criticism over the lack of uniformly vigorous enforcement of Europe’s data protection framework, GDPR.

“No rule is any good if not enforced,” agreed Vestager. “What we are suggesting here is that if you have a data-sharing service provider and they have notified themselves it’s then up to the authority with whom they have notified actually to monitor and to supervise the compliance with the different things that they have to live up to in order to preserve the protection of these legitimate interests — could be business confidentiality, could be intellectual property rights.

“This is a thing that we will keep on working on also in the future proposals that are upcoming — the Digital Services Act and the Digital Markets Act — but here you have sort of a precursor that the ones who receive the notification in Member States they will also have to supervise that things are actually in order.”

Also responding on the enforcement point, Breton suggested enforcement would be baked in up front, such as by careful control of who could become a data reuse broker.

“[Firstly] we are putting forward common rules and harmonized rules… We are creating a large internal market for data. The second thing is that we are asking Member States to create specific authorities to monitor. The third thing is that we will ensure coherence and enforcement through the European Data Innovation Board,” he said. “Just to give you an example… enforcement is embedded. To be a data broker you will need to fulfil a certain number of obligations and if you fulfil these obligations you can be a neutral data broker — if you don’t…”

Alongside the DGA, the Commission also announced an Intellectual Property Action Plan.

Vestager said this aims to build on the EU’s existing IP framework with a number of supportive actions — including financial support for SMEs involved in the Horizon Europe R&D program to file patents.

The Commission is also considering whether to reform the framework for filing standards essential patents. But in the short term Vestager said it would aim to encourage industry to engage in forums aimed at reducing litigation.

“One example could be that the Commission could set up an independent system of third party essentiality checks in view of improving legal certainty and reducing litigation costs,” she added of the potential reform, noting that protecting IP is an important component of the bloc’s industrial strategy.

Decrypted: Apple and Facebook’s privacy feud, Twitter hires Mudge, mysterious zero-days

By Zack Whittaker

Trump’s election denialism saw him retaliate in a way that isn’t just putting the remainder of his presidency in jeopardy; it’s already putting the next administration in harm’s way.

In a stunning display of retaliation, Trump fired CISA director Chris Krebs last week after Krebs declared that there was “no evidence that any voting system deleted or lost votes, changed votes or was in any way compromised,” a direct contradiction of the conspiracy-fueled fever dreams of the president, who repeatedly claimed, without evidence, that the election had been hijacked by the Democrats. CISA is left distracted by disarray, with multiple senior leaders leaving their posts — some walked, some were pushed — only for the next likely chief to stumble before he even starts because of concerns over his security clearance.

Until yesterday, Biden’s presidential transition team was stuck in cybersecurity purgatory because the incumbent administration refused to trigger the law that grants the incoming team access to government resources, including cybersecurity protections. That’s left the incoming president exposed to ongoing cyber threats, all while being shut out from classified briefings that describe those threats in detail.

As Biden builds his team, Silicon Valley is also gearing up for a change in government — and temperament. But don’t expect too much of the backlash to change. Antitrust allegations, privacy violations and net neutrality remain hot-button issues, and tech titans resorting to cheap “charm offensives” are likely to face the music under the Biden administration — whether they like it or not.

Here’s more from the week.


THE BIG PICTURE

Apple and Facebook spar over privacy — again

Apple and Facebook are back in the ring, fighting over which company is a bigger existential threat to privacy. In a letter to a privacy rights group, Apple said its new anti-tracking feature, which will give users the choice of blocking in-app tracking, will launch next year, a move that’s largely expected to wreak havoc on the online advertising industry and data brokers.

Given an explicit choice between being tracked and not, which is exactly what the feature will offer, most users are expected to decline.

Apple’s letter specifically called out Facebook for showing a “disregard for user privacy.” Facebook, which made more than 98% of its global revenue last year from advertising, took its own potshot back at Apple, claiming the iPhone maker was “using their dominant market position to self-preference their own data collection, while making it nearly impossible for their competitors to use the same data.”

Australia’s spy agencies caught collecting COVID-19 app data

By Zack Whittaker

Australia’s intelligence agencies have been caught “incidentally” collecting data from the country’s COVIDSafe contact-tracing app during the first six months of its launch, a government watchdog has found.

The report, published Monday by the Australian government’s inspector general for the intelligence community, which oversees the government’s spy and eavesdropping agencies, said the app data was scooped up “in the course of the lawful collection of other data.”

But the watchdog said that there was “no evidence” that any agency “decrypted, accessed or used any COVID app data.”

Incidental collection is a common term used by spies to describe data that was not deliberately targeted but was swept up as part of a wider collection effort. This kind of collection isn’t accidental; it’s a consequence of spy agencies tapping into fiber optic cables, for example, which carry an enormous firehose of data. An Australian government spokesperson told one outlet, which first reported the news, that incidental collection can also happen as a result of the “execution of warrants.”

The report did not say when the incidental collection stopped, but noted that the agencies were “taking active steps to ensure compliance” with the law, and that the data would be “deleted as soon as practicable,” without setting a firm date.

For some, the fear that a government spy agency could access COVID-19 contact-tracing data was the worst possible outcome.

Since the start of the COVID-19 pandemic, countries — and states in places like the U.S. — have rushed to build contact-tracing apps to help prevent the spread of the virus. But these apps vary wildly in terms of functionality and privacy.

Most have adopted the more privacy-friendly approach of using Bluetooth to trace contact with people who have the virus. Many have chosen to implement the Apple-Google system, which hundreds of academics have backed. But others, like Israel and Pakistan, are using more privacy-invasive techniques, like tracking location data, which governments can also use to monitor a person’s whereabouts. In Israel’s case, the tracking was so controversial that the courts shut it down.
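To see why the Bluetooth approach is considered more privacy-friendly, here is a deliberately simplified sketch of the rotating-identifier idea it relies on. The real Apple-Google protocol uses AES-based key derivation and many more safeguards; this illustrates the concept only, not the specification.

```python
# Deliberately simplified sketch of rotating-identifier exposure notification.
# The real Apple-Google protocol uses AES/HKDF key derivation and many more
# safeguards; this only illustrates the core privacy idea.

import hashlib
import hmac
import secrets

def daily_key() -> bytes:
    # Each phone generates a fresh random key per day. It never leaves the
    # device unless the user chooses to report a positive test.
    return secrets.token_bytes(16)

def rolling_id(key: bytes, interval: int) -> bytes:
    # Short-lived identifier broadcast over Bluetooth, rotated every ~15
    # minutes so a device can't be tracked across intervals.
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Phones remember the rolling IDs they hear nearby. When an infected user
# uploads their daily keys, every phone re-derives the IDs locally and looks
# for matches; no location or identity ever leaves the device.
key = daily_key()
observed_nearby = {rolling_id(key, 42)}  # heard during interval 42
exposed = any(rolling_id(key, i) in observed_nearby for i in range(96))
assert exposed
```

Because only random, short-lived identifiers are exchanged, matching happens on the device itself, and no central authority learns who met whom or where.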

Australia’s intelligence watchdog did not say specifically what data was collected by the spy agencies. The app uses Bluetooth and not location data, but the app requires the user to upload some personal information — like their name, age, postal code and phone number — to allow the government’s health department to contact those who may have come into contact with an infected person.

Australia has seen more than 27,800 confirmed coronavirus cases and more than 900 deaths since the start of the pandemic.

Brexit’s data compliance burden could cost UK firms up to £1.6BN, says think tank

By Natasha Lomas

An analysis of the total cost to UK businesses if the country fails to gain an adequacy agreement from the European Commission once the Brexit transition ends at the end of the year — creating barriers to inbound data flows from the EU — suggests the price in pure compliance terms could be between £1BN and £1.6BN.

The assessment of the economic impacts if the UK is deemed a third country under EU data rules has been carried out by the New Economics Foundation (NEF) think tank and UCL’s European Institute research hub — with the researchers conducting interviews with over 60 legal professionals, data protection officers, business representatives and academics from the UK and EU.

They estimate the average compliance cost for an affected micro business at £3,000; £10,000 for a small business; £19,555 for a medium business; and £162,790 for a large business.

“This extra cost stems from the additional compliance obligations – such as setting up standard contractual clauses (SCCs) – on companies that want to continue transferring data from the EU to the UK,” they write in the report. “We believe our modelling is a relatively conservative estimate as it is underpinned by moderate assumptions about the firm-level cost and number of companies affected.”
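For a sense of how those per-firm figures could roll up to the headline range, here is a back-of-envelope calculation. The per-firm costs are the report’s estimates; the firm counts are purely hypothetical placeholders, since the report’s own counts are not reproduced here.

```python
# Back-of-envelope roll-up of per-firm compliance costs. The costs are the
# NEF/UCL estimates quoted above; the firm counts are hypothetical
# placeholders, NOT figures from the report.

compliance_cost_gbp = {
    "micro": 3_000,
    "small": 10_000,
    "medium": 19_555,
    "large": 162_790,
}

hypothetical_affected_firms = {
    "micro": 150_000,
    "small": 50_000,
    "medium": 10_000,
    "large": 1_000,
}

total = sum(compliance_cost_gbp[size] * hypothetical_affected_firms[size]
            for size in compliance_cost_gbp)
print(f"Aggregate compliance cost: £{total / 1e9:.2f}bn")  # ~£1.31bn
```

With these placeholder counts the total lands around £1.3BN, inside the report’s £1BN to £1.6BN range; the real estimate depends on the actual number of affected firms in each band.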

An adequacy agreement refers to a status that can be conferred on a country outside the European Economic Area (as the UK will be once the Brexit transition is over) — if the EU’s executive deems the levels of data protection in the country are essentially equivalent to what’s provided by European law.

The UK has said it wants to gain an adequacy agreement with the EU as it works on implementing the 2016 referendum vote to leave the bloc. But there are doubts over its chances of obtaining the coveted status — not least because of surveillance powers enshrined in UK law since the 2013 Snowden disclosures (which revealed the extent of Western governments’ snooping on digital data flows).

Broad powers that sanction UK state agencies’ digital surveillance have faced a number of legal challenges under UK and EU law.

The government has also signalled an intention to ‘liberalize’ domestic data laws as it leaves the EU — writing in a national data strategy published in September that it wants to ensure data is not “inappropriately constrained” by regulations “so that it can be used to its full potential”.

But any moves to denude the UK’s data protection standards risk an ‘inadequate’ finding by the Commission.

Europe’s top court, meanwhile, has set a clear line that governments cannot use national security to bypass general principles of EU law, such as proportionality and respect for privacy.

Another major — and highly pertinent — ruling by the CJEU this summer invalidated an adequacy status the Commission had previously conferred on the US, striking down the EU-US Privacy Shield transatlantic data transfer mechanism. It does not bode well for the UK’s chances of adequacy.

The court also made it clear that the most used alternative for international transfers (a legal tool called Standard Contractual Clauses, aka SCCs) must face proactive scrutiny from EU regulators when data is flowing to third countries where citizens’ information could be at risk.

The thousands of companies that had been relying on Privacy Shield to rubberstamp their EU to US data flows are now scrambling for alternatives on a case by case basis — with vastly inflated legal risk, complexity and administration requirements.

The same may be true in very short order for scores of UK-based data controllers that want to continue being able to receive inbound data flows from users in the EU after the end of the Brexit transition.

Earlier this month the European Data Protection Board (EDPB) put out 38 pages of guidance for those trying to navigate new legal uncertainty around SCCs — in which it warned there may be situations where no supplementary measures will suffice to ensure adequate protection for a specific transfer.

The solution in such a case might require relocation of the data processing to a site within the EU, the EDPB said.

“Although the UK has high standards of data protection via the Data Protection Act 2018, which enacted the General Data Protection Regulation (GDPR) in UK law, an EU adequacy decision is not guaranteed,” the NEF/UCL report warns. “Potential EU concerns with UK national security, surveillance and human rights frameworks, as well as a future trade deal with the US, render adequacy uncertain. Furthermore, EU-UK data flows are at the whim of the wider Brexit process and negotiations.”

Per their analysis, if the UK does not get an adequacy decision it will face a heightened risk of GDPR fines, since the extra compliance requirements create more scope for violations.

The General Data Protection Regulation sanctions financial penalties for violations of the framework that can scale up to 4% of an entity’s global annual turnover or €20M, whichever is greater.
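For a concrete sense of the scale involved — a quick illustrative calculation, not a figure from the report — the penalty ceiling can be expressed in a few lines (Python used purely for illustration):

```python
# Illustrative sketch of the GDPR maximum-fine rule: the cap is the
# greater of 4% of global annual turnover or a flat 20 million euros.
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

print(gdpr_max_fine(1_000_000_000))  # 1B euro turnover: cap of 40M euros
print(gdpr_max_fine(100_000_000))    # 100M euro turnover: flat 20M euro cap applies
```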

The report also predicts a reduction in EU-UK trade, especially digital trade; reduced investment (both domestic and international); and the relocation of business functions, infrastructure, and personnel outside the UK.

The researchers argue that more research is needed to support a wider macroeconomic assessment of the value of data flows and adequacy decisions — saying there’s a paucity of research on “the value of data flows and adequacy decisions in general” — before adding: “EU-UK data flows are a crucial enabler for thousands of businesses. These flows underpin core business operations and activities which add significant value. This is not just a digital tech sector issue – the whole economy relies on data flows.”

The report makes a number of recommendations — including urging the UK government to make “relevant data and modelling tools” available to support empirical research on the social and economic impacts of data protection, digital trade, and the value of data flows to help shape better public policy and debate.

It also calls for the government to set aside funds for struggling UK SMEs to help them with the costs of complying with Brexit’s legal data burden.

“Our report concludes that no adequacy decision has the potential to be a contributing factor which undermines the competitiveness of key UK services and digital technology sectors, which have performed extremely strongly in recent years. Although we do not want to exaggerate the impacts — and no adequacy decision is far from economic armageddon — this outcome would not be ideal,” they add.

You can read the full report here.

Digital marketing firms file UK competition complaint against Google’s Privacy Sandbox

By Natasha Lomas

Google’s push to phase out third party tracking cookies — aka its ‘Privacy Sandbox’ initiative — is facing a competition challenge in Europe. A coalition of digital marketing companies announced today that it’s filed a complaint with the UK’s Competition and Markets Authority (CMA), calling for the regulator to block implementation of the Sandbox.

The coalition wants Google’s phasing out of third party tracking cookies to be put on ice — preventing the Sandbox from launching in early 2021 — to give regulators time to devise what it dubs “long term competitive remedies to mitigate [Google’s dominance]”.

“[Our] letter is asking for the introduction of Privacy Sandbox to be delayed until such measures are put in place,” they write in a press release.

The group, which is badging itself as Marketers for an Open Web (MOW), says it’s comprised of “businesses in the online ecosystem who share a concern that Google is threatening the open web model that is vital to the functioning of a free and competitive media and online economy”.

A link on MOW’s website to a list of “members” was not functioning at the time of writing. But, per Companies House, the entity was incorporated on September 18, 2020 — listing James Rosewell, CEO and co-founder of UK mobile marketing company 51 Degrees, as its sole director.

The CMA confirmed to us that it’s received MOW’s complaint, adding that some of the coalition’s concerns reflect issues identified in a detailed review of the online ad market it published this summer.

However it has not yet taken a decision on whether or not to investigate.

“We can confirm we have received a complaint regarding Google raising certain concerns, some of which relate to those we identified in our online platforms and digital advertising market study,” said the CMA spokesperson. “We take the matters raised in the complaint very seriously, and will assess them carefully with a view to deciding whether to open a formal investigation under the Competition Act.

“If the urgency of the concerns requires us to intervene swiftly, we will also assess whether to impose interim measures to order the suspension of any suspected anti-competitive conduct pending the outcome of a full investigation.”

In its final report on the online ad market, the CMA concluded that the market power of Google and Facebook is now so great that a new regulatory approach — and a dedicated oversight body — is needed to address what it summarized as “wide ranging and self reinforcing” concerns.

The regulator chose not to take any enforcement action at that point, however — preferring to wait for the UK government to come forward with pro-competition legislation.

In its statement today, the CMA makes it clear it could still choose to act on related competition concerns if it feels an imperative to do so — including potentially blocking the launch of Privacy Sandbox to allow time for a full investigation — while it waits for legislators to come up with a regulatory framework. Though, again, it has not yet made any decision to do so.

Reached for a response to the MOW complaint, Google sent us this statement — attributed to a spokesperson:

The ad-supported web is at risk if digital advertising practices don’t evolve to reflect people’s changing expectations around how data is collected and used. That’s why Google introduced the Privacy Sandbox, an open initiative built in collaboration with the industry, to provide strong privacy for users while also supporting publishers.

Also commenting in a statement, MOW’s director Rosewell said: “The concept of the open web is based on a decentralised, standards-based environment that is not under the control of any single commercial organisation. This model is vital to the health of a free and independent media, to a competitive digital business environment and to the freedom and choice of all web users. Privacy Sandbox creates new, Google-owned standards and is an irreversible step towards a Google-owned ‘walled garden’ web where they control how businesses and users interact online.”

The group’s complaint follows a similar one filed in France last month (via Reuters) — albeit, in that case, targeting privacy changes incoming to Apple’s smartphone platform that are also set to limit advertisers’ access to an iPhone-specific tracking ID that’s generated for ad-targeting purposes (the IDFA).

Apple has said the incoming changes — which it recently delayed until early next year — will give users “greater control over whether or not they want to allow apps to track them by linking their information with data from third parties for the purpose of advertising, or sharing their information with data brokers”. But the four online ad associations — IAB France, MMAF, SRI and UDECAM — bringing the complaint to France’s competition regulator argue that Apple is abusing its market power to distort competition.

The move by the online ad industry to get European competition regulators to delay Apple’s and Google’s privacy squeeze on third party ad tracking is taking place at the same time as industry players band together to try to accelerate development of their own replacement for tracking cookies — announcing a joint effort called PRAM (Partnership for Responsible Addressable Media) this summer to “advance and protect critical functionalities like customization and analytics for digital media and advertising, while safeguarding privacy and improving consumer experience”, as they put it.

The adtech industry now appears to be coalescing behind a cookie replacement proposal called Unified ID 2.0 (UID2).

A document detailing the proposal which had been posted to the public Internet — but was taken down after a privacy researcher drew attention to it — suggests they want to put in place a centralized system for tracking Internet users that’s based on personal data such as an email address or phone number.

“UID2 is based on authenticated PII (e.g. email, phone) that can be created and managed by constituents across advertising ecosystem, including Advertisers, Publishers, DSPs, SSPs,” runs a short outline of the proposal in a paper authored by two people from a Demand Side Platform called The Trade Desk (which is proposing to build the tech but then hand it off to an “independent and non-partial entity” to manage).

One component of the UID2 proposal consists of a “Unified ID Service” that it says would apply a salt and hash process to the PII to generate the UID2, encrypt that to create a UID2 Token, and provision login requests from publishers to access the token.
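To make the salt-and-hash step concrete, here’s a minimal sketch — the use of SHA-256, the normalization step and the salt handling are all assumptions for illustration, not details taken from The Trade Desk’s paper:

```python
# Illustrative sketch of deriving a UID2-style identifier from PII.
# Hash choice (SHA-256), normalization and salt handling are assumptions.
import hashlib

def derive_uid2(email: str, salt: bytes) -> str:
    normalized = email.strip().lower()  # same input must yield the same ID
    return hashlib.sha256(salt + normalized.encode("utf-8")).hexdigest()

# Hypothetical service-held salt; per the outline, the service would then
# encrypt the raw UID2 to mint a UID2 Token for publishers to use.
salt = b"service-side-salt"
print(derive_uid2("Jane.Doe@example.com", salt))
```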

The other component is a user facing website that’s described as a “transparency & consent service” — to handle requests for data or UID2 logouts etc.

However the proposal by the online ad industry to centralize Internet users’ identity by attaching it to hashed pieces of actual personal data — and with a self-regulating “Trusted Ads Ecosystem” slated to be controlling the mapping of PII to UID2 — seems unlikely to assuage the self-same privacy concerns which are fuelling the demise of tracking cookies in the first place (to put it mildly).

Trusting the mass surveillance industry to self regulate a centralized ID system for Internet users is for the birds.

But adtech players are clearly hoping they can buy themselves enough time to cobble together a self-serving cookie alternative — and sell it to regulators as a competition remedy. (Their parallel bet is they can buy off inactive privacy regulators with bogus claims of ‘transparency and consent’.)

So it will certainly be interesting to see whether the adtech industry succeeds in forcing competition regulators to stand in the way of platform level privacy reforms, while pulling off a major reorg and rebranding exercise of its privacy-hostile tracking operations.

In a counter move this month, European privacy campaign group, noyb, filed two complaints against Apple for not obtaining consent from users to create and store the IDFA on their devices.

So that’s one bit of strategic pushback.

Real-time bidding, meanwhile, remains under regulatory scrutiny in Europe — with huge questions over the lawfulness of its processing of Internet users’ personal data. Privacy campaigners are also now challenging data protection regulators over their failure to act on those long-standing complaints.

A flagship online ad industry tool for gathering web users’ consent to tracking is also under attack and looks to be facing imminent action under the bloc’s General Data Protection Regulation (GDPR).

Last month an investigation by Belgium’s data protection agency found the IAB Europe’s so-called Transparency and Consent Framework (TCF) didn’t offer either — failing to meet the GDPR standard for transparency, fairness and accountability, and the lawfulness of data processing. Enforcement action is expected in early 2021.

A bug meant Twitter Fleets could still be seen after they disappeared

By Zack Whittaker

Twitter is the latest social media site to let users experiment with posting disappearing content. Fleets, as Twitter calls them, let its mobile users post short stories — like photos or videos with overlaid text — that are set to vanish after 24 hours.

But a bug meant that fleets weren’t deleting properly and could still be accessed long after 24 hours had expired. Details of the bug were posted in a series of tweets on Saturday, less than a week after the feature launched.

full disclosure: scraping fleets from public accounts without triggering the read notification

the endpoint is: https://t.co/332FH7TEmN

— cathode gay tube (@donk_enby) November 20, 2020

The bug effectively allowed anyone to access and download a user’s fleets without triggering a notification that the user’s fleet had been read and by whom. The implication is that this bug could be abused to archive a user’s fleets after they expire.

The fleets could be retrieved using an app designed to interact with Twitter’s back-end systems via its developer API, which returned a list of fleets from the server. Each fleet had its own direct URL, which when opened in a browser would load the fleet as an image or a video. But even after the 24 hours had elapsed, the server would still return links to fleets that had already disappeared from view in the Twitter app.
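The upshot is easy to test for: if expired media were genuinely deleted server-side, its direct URL would stop resolving. Here’s a minimal sketch of that kind of check — the URL is a hypothetical placeholder, and the requests package is assumed:

```python
# Minimal sketch: check whether "expired" media is really gone server-side.
# An HTTP 200 after the expiry window means the content was only hidden
# in the client, not deleted from the server.
import requests

def still_accessible(media_url: str) -> bool:
    resp = requests.get(media_url, timeout=10)
    return resp.status_code == 200

print(still_accessible("https://example.com/fleets/12345.jpg"))  # placeholder URL
```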

When reached, a Twitter spokesperson said a fix was on the way. “We’re aware of a bug accessible through a technical workaround where some Fleets media URLs may be accessible after 24 hours. We are working on a fix that should be rolled out shortly.”

Twitter said that while the fix means fleets should now expire properly, it won’t delete expired fleets from its servers for up to 30 days — and that it may hold onto fleets for longer if they violate its rules. We confirmed we could still load fleets from their direct URLs even after they expired.

Fleet with caution.

Ghostery’s Making a Privacy Browser—and Ad-Free Search Engine

By Brian Barrett
The tracker-blocking company will soon launch a privacy-friendly desktop browser as well.

Messaging app Go SMS Pro exposed millions of users’ private photos and files

By Zack Whittaker

Go SMS Pro, one of the most popular messaging apps for Android, is exposing photos, videos and other files sent privately by its users. Worse, the app maker has done nothing to fix the bug.

Security researchers at Trustwave discovered the flaw in August and contacted the app maker with a 90-day deadline to fix the issue, as is standard practice in vulnerability disclosure, to allow enough time for a fix. But after the deadline elapsed without a response, the researchers went public.

Trustwave shared their findings with TechCrunch this week.

When a Go SMS Pro user sends a photo, video or other file to someone who doesn’t have the app installed, the app uploads the file to its servers, and lets the user share a web address by text message so the recipient can see the file without installing the app. But the researchers found that these web addresses were sequential. In fact, any time a file was shared — even between app users — a web address would be generated regardless. That meant anyone who knew about the predictable web address could have cycled through millions of different web addresses to users’ files.
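Sequential addresses are what made the flaw exploitable at scale: once an attacker sees one valid URL, every other file is just a counter increment away. The standard fix is to mint long random tokens instead. A minimal sketch of the difference — the URL scheme here is a hypothetical illustration, not Go SMS Pro’s actual one:

```python
import secrets

# Guessable: each shared file just increments a counter, so an attacker
# can enumerate every valid link by counting up from any known URL.
def sequential_share_url(counter: int) -> str:
    return f"https://example.com/share/{counter}"  # hypothetical scheme

# Unguessable: a cryptographically random token (~256 bits) makes
# enumeration computationally infeasible.
def random_share_url() -> str:
    return f"https://example.com/share/{secrets.token_urlsafe(32)}"

print(sequential_share_url(1001))  # predictable next link
print(random_share_url())          # unpredictable link
```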

Go SMS Pro has more than 100 million installs, according to its listing in Google Play.

TechCrunch verified the researchers’ findings. In viewing just a few dozen links, we found a person’s phone number, a screenshot of a bank transfer, an order confirmation including someone’s home address, an arrest record, and far more explicit photos than we were expecting, to be quite honest.

Karl Sigler, senior security research manager at Trustwave, said while it wasn’t possible to target any specific user, any file sent using the app is vulnerable to public access. “An attacker can create scripts that could throw a wide net across all the media files stored in the cloud instance,” he said.

We had about as much luck getting a response from the app maker as the researchers. TechCrunch emailed two email addresses associated with the app. One email immediately bounced back saying the email couldn’t be delivered due to a full inbox. The other email was opened, according to our email open tracker, but a follow-up email was not.

Since you might now want a messaging app that protects your privacy, we have you covered.

Apple’s IDFA gets targeted in strategic EU privacy complaints

By Natasha Lomas

A unique device identifier that Apple assigns to each iPhone for third parties to track users for ad targeting — aka the IDFA (Identifier for Advertisers) — is itself now the target of two new complaints filed by European privacy campaign not-for-profit, noyb.

The complaints, lodged with German and Spanish data protection authorities, contend that Apple’s setting of the IDFA breaches regional privacy laws on digital tracking because iOS users are not asked for their consent for the initial storage of the identifier.

noyb is also objecting to others’ being able to access the IDFA without prior consent — with one of its complainants writing that they were never asked for consent for third party access yet found several apps had shared their IDFA with Facebook (per their off-Facebook activity page).

We’ve reached out to the data protection agencies in question for comment.

While Apple isn’t the typical target for digital privacy campaigners, given it makes most of its money selling hardware and software instead of profiling users for ad targeting, as adtech giants like Facebook and Google do, its marketing rhetoric around taking special care over user privacy can look awkward when set against the existence of an Identifier for Advertisers baked into its hardware.

In the European Union there’s a specific legal dimension to this awkwardness — as existing laws require explicit consent from users to (non-essential) tracking. noyb’s complaints cite Article 5(3) of the EU’s ePrivacy Directive which mandates that users must be asked for consent to the storage of ad tracking technologies such as cookies. (And noyb argues the IDFA is just like a tracking cookie but for iPhones.)

Europe’s top court further strengthened the requirement last year when it made it clear that consent for non-essential tracking must be obtained prior to storing or accessing the trackers. The CJEU also ruled that such consent cannot be implied or assumed — such as by the use of pre-checked ‘consent’ boxes.

In a press release about the complaints, noyb’s Stefano Rossetti, a privacy lawyer, writes: “EU law protects our devices from external tracking. Tracking is only allowed if users explicitly consent to it. This very simple rule applies regardless of the tracking technology used. While Apple introduced functions in their browser to block cookies, it places similar codes in its phones, without any consent by the user. This is a clear breach of EU privacy laws.”

Apple has long controlled how third parties serving apps on its iOS platform can use the IDFA, wielding the stick of ejection from its App Store to drive their compliance with its rules.

Recently, though, it has gone further — telling advertisers this summer that they will soon have to ask users for permission to track them, in a move billed as increasing privacy controls for iOS users — although Apple delayed implementation of the policy until early next year after facing anger from advertisers over the plan. But the idea is there will be a toggle in iOS 14 that users need to flip on before a third party app gets to access the IDFA to track their in-app activity for ad targeting.

However noyb’s complaints focus on Apple’s setting of the IDFA in the first place — arguing that since the pseudonymised identifier constitutes personal data under EU law, Apple needs to get users’ permission before creating and storing it on their devices.

“The IDFA is like a ‘digital license plate’. Every action of the user can be linked to the ‘license plate’ and used to build a rich profile about the user. Such profile can later be used to target personalised advertisements, in-app purchases, promotions etc. When compared to traditional internet tracking IDs, the IDFA is simply a ‘tracking ID in a mobile phone’ instead of a tracking ID in a browser cookie,” noyb writes in one complaint, noting that Apple’s privacy policy does not specify the legal basis it uses to “place and process” the IDFA.

noyb also argues that Apple’s planned changes to how the IDFA gets accessed — trailed as incoming in early 2021 — don’t go far enough.

“These changes seem to restrict the use of the IDFA for third parties (but not for Apple itself),” it writes. “Just like when an app requests access to the camera or microphone, the plans foresee a new dialog that asks the user if an app should be able to access the IDFA. However, the initial storage of the IDFA and Apple’s use of it will still be done without the users’ consent and therefore in breach of EU law. It is unclear when and if these changes will be implemented by the company.”

We reached out to Apple for comment on noyb’s complaints but at the time of writing an Apple spokesman said it did not have an on-the-record statement. The spokesman did tell us that Apple itself does not use unique customer identifiers for advertising.

In a separate but related recent development, last month publishers and advertisers in France filed an antitrust complaint against the iPhone maker over its plan to require opt-in consent for accessing the IDFA — with the coalition contending the move amounts to an abuse of market power.

Apple responded to the antitrust complaint in a statement that said: “With iOS 14, we’re giving users the choice whether or not they want to allow apps to track them by linking their information with data from third parties for the purpose of advertising, or sharing their information with data brokers.”

“We believe privacy is a fundamental human right and support the European Union’s leadership in protecting privacy with strong laws such as the GDPR (General Data Protection Regulation),” Apple added then.

That antitrust complaint may explain why noyb has decided to file its own strategic complaints against Apple’s IDFA. Simply put, if no tracker ID can be created — because an iOS user refuses to give consent — there’s less surface area for advertisers to try to litigate against privacy by claiming tracking is a competitive right.

“We believe that Apple violated the law before, now and after these changes,” said Rossetti in another statement. “With our complaints we want to enforce a simple principle: trackers are illegal, unless a user freely consents. The IDFA should not only be restricted, but permanently deleted. Smartphones are the most intimate device for most people and they must be tracker-free by default.”

Another interesting component of the noyb complaints is they’re being filed under the ePrivacy Directive, rather than under Europe’s (newer) General Data Protection Regulation. This means noyb is able to target them to specific EU data protection agencies, rather than having complaints funnelled back to Ireland’s DPC — under the GDPR’s one-stop-shop mechanism for handling cross-border cases.

Its hope is this route will result in swifter regulatory action. “These cases are based on the ‘old’ cookie law and do not trigger the cooperation mechanism of the GDPR. In other words, we are trying to avoid endless procedures like the ones we are facing in Ireland,” added Rossetti.

Apple responds to Gatekeeper issue with upcoming fixes

By Romain Dillet

Apple has updated a documentation page detailing the company’s next steps to prevent last week’s Gatekeeper bug from happening again, as Rene Ritchie spotted. The company plans to implement the fixes over the next year.

Apple had a difficult launch day last week. The company released macOS Big Sur, a major update to its desktop operating system, and then suffered from server-side issues.

Third-party apps failed to launch as your Mac couldn't check the developer certificate of the app. That feature, called Gatekeeper, makes sure that you didn't download a malware app that disguises itself as a legit app. If the certificate doesn’t match, macOS prevents the app launch.

Hey Apple users:

If you're now experiencing hangs launching apps on the Mac, I figured out the problem using Little Snitch.

It's trustd connecting to https://t.co/FzIGwbGRan

Denying that connection fixes it, because OCSP is a soft failure.

(Disconnect internet also fixes.) pic.twitter.com/w9YciFltrb

— Jeff Johnson (@lapcatsoftware) November 12, 2020
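The ‘soft failure’ Johnson mentions is the key detail: a soft-fail revocation check only blocks an app if the server positively answers that a certificate is revoked, while any network error counts as a pass. Here’s a rough sketch of that logic — not Apple’s actual implementation; the query_ocsp_server helper is hypothetical:

```python
import socket

def allow_launch(cert_serial: str) -> bool:
    # Soft-fail check: only a positive "revoked" answer blocks the launch.
    try:
        status = query_ocsp_server(cert_serial, timeout=5.0)  # hypothetical helper
    except (socket.timeout, OSError):
        return True  # unreachable server: soft failure, the app launches
    return status != "revoked"
```

That’s why denying the connection outright cured the hangs — the check fails instantly and passes — while a slow-but-reachable server left app launches hanging on the request.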

Many have been concerned about the privacy implications of the security feature. Does Apple log every app you launch on your Mac to gain competitive insights on app usage?

It turns out it's easy to answer that question as the server doesn't mandate encryption. Jacopo Jannone intercepted an unencrypted network request and found out that Apple is not secretly spying on you. Gatekeeper really does what it says it does.

“We have never combined data from these checks with information about Apple users or their devices. We do not use data from these checks to learn what individual users are launching or running on their devices,” the company wrote.

But Apple is going one step further and communicating its next steps. The company stopped logging IP addresses on its servers last week, as it doesn’t need to store this data for Gatekeeper.

“These security checks have never included the user’s Apple ID or the identity of their device. To further protect privacy, we have stopped logging IP addresses associated with Developer ID certificate checks, and we will ensure that any collected IP addresses are removed from logs,” Apple writes.

Finally, Apple is overhauling the design of the network request and adding a user-facing opt-out option.

“In addition, over the next year we will introduce several changes to our security checks:

  • A new encrypted protocol for Developer ID certificate revocation checks
  • Strong protections against server failure
  • A new preference for users to opt out of these security protections”

The iOS Covid App Ecosystem Has Become a Privacy Minefield

By Andy Greenberg
An analysis of nearly 500 Covid-related apps worldwide shows major differences in how much data they expect you to give up.

Europe puts out advice on fixing international data transfers that’s cold comfort for Facebook

By Natasha Lomas

Following the landmark CJEU ‘Schrems II’ ruling in July, which invalidated the four-year-old EU-US Privacy Shield, European data protection regulators have today published 38 pages of guidance for businesses stuck trying to navigate the uncertainty around how to (legally) transfer personal data out of the European Union.

The European Data Protection Board’s (EDPB) recommendations focus on measures data controllers might be able to put in place to supplement the use of another transfer mechanism — so-called Standard Contractual Clauses (SCCs) — to ensure they are complying with the bloc’s General Data Protection Regulation (GDPR).

The Recommendations on measures that supplement transfer tools to ensure compliance with the EU level of protection of personal data are now available here: https://t.co/agY2BHZVku For a quick overview of the different steps data exporters need to take, check out the infographic: pic.twitter.com/sYTMdNgBkn

— EDPB (@EU_EDPB) November 11, 2020

Unlike Privacy Shield, SCCs were not struck down by the court but their use remains clouded with legal uncertainty. The court made it clear SCCs can only be relied upon for international transfers if the safety of EU citizens’ data can be guaranteed. It also said EU regulators have a duty to intervene when they suspect data is flowing to a location where it will not be safe — meaning options for data transfers out of the EU have both reduced in number and increased in complexity.

One company that’s said it’s waiting for the EDPB guidance is Facebook. It’s already faced a preliminary order to stop transferring EU users’ data to the US. It petitioned the Irish courts to obtain a stay as it seeks a judicial review of its data protection regulator’s process. It has also brought out its lobbying big guns — former UK deputy PM and ex-MEP Nick Clegg — to try to pressure EU lawmakers over the issue.

Most likely the tech giant is hoping for a ‘Privacy Shield 2.0‘ to be cobbled together and slapped into place to paper over the gap between EU fundamental rights and US surveillance law.

But the Commission has warned there won’t be a quick fix this time.

Changes to US surveillance law are slated as necessary — which means zero chance of anything happening before the Biden administration takes the reins next year. So the legal uncertainty around EU-US transfers is set to stretch well into next year at a minimum. (Politico suggests a new data deal isn’t likely in the first half of 2021.)

In the meanwhile, legal challenges to ongoing EU-US transfers are stacking up — at the same time as EU regulators know they have a legal duty to intervene when data is at risk.

“Standard contractual clauses and other transfer tools mentioned under Article 46 GDPR do not operate in a vacuum. The Court states that controllers or processors, acting as exporters, are responsible for verifying, on a case-by-case basis and, where appropriate, in collaboration with the importer in the third country, if the law or practice of the third country impinges on the effectiveness of the appropriate safeguards contained in the Article 46 GDPR transfer tools,” the EDPB writes in an executive summary.

“In those cases, the Court still leaves open the possibility for exporters to implement supplementary measures that fill these gaps in the protection and bring it up to the level required by EU law. The Court does not specify which measures these could be. However, the Court underlines that exporters will need to identify them on a case-by-case basis. This is in line with the principle of accountability of Article 5.2 GDPR, which requires controllers to be responsible for, and be able to demonstrate compliance with the GDPR principles relating to processing of personal data.”

The recommendations set out a series of steps for data exporters to take as they go through the complex task of determining whether their particular transfer can play nice with EU data protection law.

Six steps but no one-size-fits-all fix

The basic overview of the process it’s advising is:

  • Step 1: map all intended international transfers;
  • Step 2: verify the transfer tools you want to use;
  • Step 3: assess whether there’s anything in the law/practice of the destination third country which “may impinge on the effectiveness of the appropriate safeguards of the transfer tools you are relying on, in the context of your specific transfer”, as it puts it;
  • Step 4: identify and adopt supplementary measure/s to bring the level of protection up to ‘essential equivalence’ with EU law;
  • Step 5: take any formal procedural steps required to adopt the supplementary measure/s;
  • Step 6: periodically re-evaluate the level of data protection and monitor any relevant developments.

In short, this is going to involve both a lot of work — and ongoing work. tl;dr: Your duty to watch over the safety of European users’ data is never done.

Moreover, the EDPB makes it clear that there very well may not be any supplementary measures to cover a particular transfer in legal glory.

“You may ultimately find that no supplementary measure can ensure an essentially equivalent level of protection for your specific transfer,” it warns. “In those cases where no supplementary measure is suitable, you must avoid, suspend or terminate the transfer to avoid compromising the level of protection of the personal data. You should also conduct this assessment of supplementary measures with due diligence and document it.”

In instances where supplementary measures could suffice, the EDPB says they may have “a contractual, technical or organisational nature” — or, indeed, a combination of some or all of those.

“Combining diverse measures in a way that they support and build on each other may enhance the level of protection and may therefore contribute to reaching EU standards,” it suggests.

However it also goes on to state fairly plainly that technical measures are likely to be the most robust tool against the threat posed by foreign surveillance. But that in turn means there are necessarily limits on the business models that can tap in — anyone wanting to decrypt and process data for themselves in the US, for instance (hi Facebook!), isn’t going to find much comfort here.

The guidance goes on to include some sample scenarios where it suggests supplementary measures might suffice to render an international transfer legal.

Such scenarios include:

  • data storage in a third country where there’s no access to decrypted data at the destination and the keys are held by the data exporter (or by a trusted entity in the EEA or in a third country that’s considered to have an adequate level of protection for data) — see the sketch after this list;
  • the transfer of pseudonymised data, so individuals can no longer be identified (which means ensuring data cannot be reidentified);
  • end-to-end encrypted data transiting third countries via encrypted transfer (again, the data must not be able to be decrypted in a jurisdiction that lacks adequate protection; the EDPB also specifies that the existence of any ‘backdoors’ in hardware or software must have been ruled out, though it’s not clear how that could be done).
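As an illustration of the first scenario — a minimal sketch under stated assumptions, not legal advice — a data exporter could encrypt records before shipping them to third-country storage, keeping the key inside the EEA. This assumes Python’s cryptography package; the upload function is a hypothetical placeholder:

```python
# Sketch of exporter-held-key encryption: data stored in the third country
# is only ever ciphertext; the key never leaves the EU-based exporter.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # generated and retained inside the EEA
cipher = Fernet(key)

record = b"name=Jane Doe;email=jane@example.com"
ciphertext = cipher.encrypt(record)

upload_to_third_country_storage(ciphertext)  # hypothetical transfer function

# Decryption is only possible where the key is held -- inside the EEA.
assert cipher.decrypt(ciphertext) == record
```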

Another section of the document discusses scenarios in which no effective supplementary measures could be found — such as transfers to cloud service providers (or similar) which require access to the data in the clear and where “the power granted to public authorities of the recipient country to access the transferred data goes beyond what is necessary and proportionate in a democratic society”.

Again, this is a bit of the document that looks very bad for Facebook.

“The EDPB is, considering the current state of the art, incapable of envisioning an effective technical measure to prevent that access from infringing on data subject rights,” it writes on that, adding that it “does not rule out that further technological development may offer measures that achieve the intended business purposes, without requiring access in the clear”.

“In the given scenarios, where unencrypted personal data is technically necessary for the provision of the service by the processor, transport encryption and data-at-rest encryption even taken together, do not constitute a supplementary measure that ensures an essentially equivalent level of protection if the data importer is in possession of the cryptographic keys,” the EDPB further notes.

It also makes it clear that supplementary contractual clauses aren’t any kind of get-out on this front — so, no, Facebook can’t stick a clause in its SCCs that defuses FISA 702 — with the EDPB writing: “Contractual measures will not be able to rule out the application of the legislation of a third country which does not meet the EDPB European Essential Guarantees standard in those cases in which the legislation obliges importers to comply with the orders to disclose data they receive from public authorities.”

The EDPB does discuss examples of potential clauses data exporters could use to supplement SCCs, depending on the specifics of their data flow situation, while also specifying “conditions for effectiveness”.

But, again, there’s cold comfort for those wanting to process personal data in the US while it remains at risk from state surveillance.

“The exporter could add annexes to the contract with information that the importer would provide, based on its best efforts, on the access to data by public authorities, including in the field of intelligence provided the legislation complies with the EDPB European Essential Guarantees, in the destination country. This might help the data exporter to meet its obligation to document its assessment of the level of protection in the third country,” the EDPB suggests in a section of the guidance discussing transparency obligations.

However the point of such a clause would be for the data exporter to put up-front conditions on an importer to make it easier for them to avoid getting into a risky contract in the first place — or help with suspending/terminating a contract if a risk is determined — rather than providing any kind of legal sticking plaster for mass surveillance.

“This obligation can however neither justify the importer’s disclosure of personal data nor give rise to the expectation that there will be no further access requests,” the EDPB warns.

Another example the document discusses is the viability of adding clauses to try to get the importer to certify there are no backdoors in their systems which could put the data at risk.

However it warns this may just be useless, writing: “The existence of legislation or government policies preventing importers from disclosing this information may render this clause ineffective.” So it could just be trying to kneecap dubious legal advice that tries to push contract clauses as a panacea for US surveillance overreach.

The full guidance can be found here.

We’ve reached out to Facebook to ask what next steps it’ll be taking over its EU-US data transfers in light of the EDPB guidance and will update this report with any response.

Amazon’s new ‘Care Hub’ lets Alexa owners keep tabs on aging family members

By Sarah Perez

Amazon today announced a set of new features aimed at making its Alexa devices more useful to aging adults. With the launch of “Care Hub,” an added option in the Alexa mobile app, family members can keep an eye on older parents and loved ones, with their permission, in order to receive general information about their activities and to be alerted if the loved one has called out for help.

The idea behind Care Hub, the company explains, is to offer reassurance to family members concerned about an elderly relative’s well-being, while also allowing the older adult to maintain some independence.

This is not a novel use case for Alexa devices. Already, the devices are being used in senior living centers and other care facilities, by way of third-party providers.

Amazon stresses that while family members will be able to keep an eye on their loved ones’ Alexa use, it will respect their privacy by not offering specific information. For example, while a family member may be able to see that their parent had played music, it won’t say which song was played. Instead, all activity is displayed by category.

In addition, users will be able to configure alerts for when there’s no activity, or for when the first interaction of the day with the device occurs.

And if the loved one calls for help, the family member designated as the emergency contact can drop in on them through the Care Hub or contact emergency services.


These new features are double opt-in, meaning that both the family member and their loved one need to first establish a connection between their Alexa accounts through an invitation process. This begins through the new Care Hub feature in the Alexa app, and is then confirmed via text message or email.

That may seem like a reasonable amount of privacy protection, but in reality, many older adults either struggle with or tend to avoid technology. Even things seemingly simple — like using a smartphone, email or texting — can sometimes be a challenge.

That means there are scenarios where a family member could set up the Care Hub system by accessing the other person’s accounts without their knowledge or by inventing an email that becomes “the parent’s email” just for this purpose.

Alternatively, they could just mislead mom or dad by saying they are helping them set up the new Alexa device, and — oh, can I borrow your phone to confirm something for the setup? (Or some other such deception.)

A more appropriate option to protect user privacy would be to have Alexa periodically ask the loved one if they were still okay with the Care Hub monitoring option being enabled, and to alert the loved one via the Alexa mobile app that a monitoring option was still turned on.

Of course, there may certainly be older adults who appreciate the ability to be connected to family in this way, especially if they are located at a distance from their family or are feeling isolated due to the coronavirus pandemic and the social distancing requirements that are keeping family members from being able to visit.

Amazon says Care Hub is rolling out in the U.S. The company notes it will learn from customer feedback to expand the feature set over time.

Data audit of UK political parties finds laundry list of failings

By Natasha Lomas

In a finding that should surprise no one, an audit of how UK political parties are handling voter information has surfaced a damning lack of compliance with data protection rules across the political spectrum — with parties failing to come clean with voters about how individuals are being invisibly profiled and targeted by parties’ digital campaigning machines.

“Political parties may legitimately hold personal data belonging to millions of people to help them campaign effectively. But developments in the use of data analytics and social media by political parties mean that many voters are unaware of how their data is being used,” the Information Commissioner’s Office (ICO) warned today.

“All political parties must be clear and transparent with people about how their personal data is used and there should be improved governance and accountability,” it goes on to say in the report.

“Political parties have always wanted to use data to understand voters’ interests and priorities, and respond by explaining the right policies to the right people. Technology now makes that possible on a much more granular level. This can be positive: engaging people on topics that interest them contributes to greater turnout at elections. But engagement must be lawful, especially where there are risks of significant privacy intrusion – for instance around invisible profiling activities, use of sensitive categories of data and unwanted and intrusive marketing. The risk to democracy if elections are driven by unfair or opaque digital targeting is too great for us to shift our focus from this area.”

Despite flagging risks to democratic trust and engagement, the regulator has chosen not to take enforcement action.

Instead it has issued a series of recommendations — almost a third of which are rated ‘urgent’ — saying it will carry out a further review later this year and could still take action if enough progress isn’t made. 

“Should our follow-up reviews indicate parties have failed to take appropriate steps to comply, we reserve the right to take further regulatory action in line with our Regulatory Action Policy,” it notes in the report, which also includes warm words for how “positively” parties have engaged with it on the issues.

The ICO also says it will update its existing guidance on political campaigning later this year — which it notes will have wider relevance for (non-political) campaigners, pressure groups, data brokers and data analytic companies.

It has previously put out guidance for the direct marketing data broking sector as part of its follow up to the Cambridge Analytica Facebook data misuse scandal.

From Cambridge Analytica to ‘must do better’

The data audit of UK political parties was instigated by the ICO after the Cambridge Analytica scandal drew global attention to the role of social media and big data in digital campaigning.

In an earlier report on the topic, in July 2018, the ICO called for an ‘ethical pause’ around the use of microtargeting ad tools for political campaigning — warning there’s a risk of trust in democracy being undermined by a lack of transparency around the data-fuelled targeting techniques being applied to voters.

But there was no let up in the use of social media targeting before or during the 2019 UK general election, when concerns about how Boris Johnson’s Conservative Party was using Facebook ads to harvest voter data were among the issues raised.

The ICO report is determined to spare parties individual blushes, however — it’s only summarized ‘aggregated’ learnings from its deep dive into wtaf the Conservative Party; the Labour Party; the Liberal Democrats; the Scottish National Party (SNP); the Democratic Unionist Party (DUP); Plaid Cymru; and the United Kingdom Independence Party (UKIP) are doing with people’s data.

Nor is the regulator handing out the marching orders, exactly.

“We recommended the following actions must be taken by the parties” is the ICO’s preferred oxymoronic construction as it seeks to avoid putting any political noses out of joint. (Not least those belonging to people in government.) So it’s opting for a softly, softly ‘recommend and review’ approach to trying to clean up parties’ dubious data habits.

Among its key findings are that political parties:

  • have privacy notices that fall short of required levels of transparency and clarity;
  • don’t have appropriate lawful bases for all the data they’re processing — and, where they’re claiming consent, may not be obtaining it legally;
  • aren’t being up front about how they combine data to profile voters, nor are they carrying out enough checks on data suppliers to ensure those third parties legally obtained people’s data;
  • aren’t putting proper contractual controls in place when using social media platforms to target voters;
  • aren’t staying on top of their obligations so as to be in a position to demonstrate accountability.

So quite the laundry list of data protection failings.

The ICO’s recommendations to political parties are also hilariously basic — saying they must:

  • undertake an information audit or data-mapping exercise to help find out what personal data they hold and where it is;
  • conduct a review to find out why they are using personal data, who they share it with and how long it is kept, by distributing questionnaires to relevant areas, meeting directly with key business functions and reviewing policies, procedures, contracts and agreements;
  • document their findings in writing, in a detailed and meaningful way.

Insert your own face-palm emoji as you imagine the chaotic evil underlying those bullet points.

“We recognise that achieving effective transparency to the UK adult population is challenging,” the ICO notes in a section of the report on transparency requirements, adding that its earlier report recommended “wider, joined-up approaches should be also taken to raising awareness of how data is used in campaigning”.

It adds that it will continue to work with the Electoral Commission on this recommendation.

The explosive growth of digital ads for UK political campaigning is quantified by a line in the report citing Electoral Commission data showing 42.8% of advertising spending by campaigners was on digital advertising in 2017, compared to just 1.7% in 2014.

So the use of social media platforms — which the report notes were used by all parties for political campaigning — is chain-linked to the troubling lack of transparency being called out by the regulator.

“Social media was used by all parties to promote their work to people who may be interested in their values. The majority was delivered via Facebook — including their Instagram platform — and Twitter. Where political parties were using audience choice tools, we had concerns with the lack of transparency of this practice,” the ICO writes. “Privacy information did not make it clear that personal data of voters collected or processed by the party would then be profiled and used to target marketing to them via social media platforms.

“A key recommendation made following our audits was that parties must inform individuals and be transparent about this processing, so that voters fully understand their personal data will be used in this way to comply with Article 13(1)(e) of the GDPR. For example, parties should tell voters that their email addresses will be used to match them on social media for the purposes of showing them political messaging.”

“Due diligence should be undertaken before any campaign begins so that parties can assure themselves that the social media company has: appropriate privacy information and tools in place; and the data processing they will be doing on the party’s behalf is lawful and transparent, and upholds the rights of individuals under data protection law,” it adds.

The report also discusses the need for political parties to fully understand the legal implications of using specific data-fuelled ad-targeting platforms/tools (i.e. before they rush in and upload people’s data to Facebook/Twitter) — so they can properly fulfil their obligations.

To wit:

When parties look to use a platform’s targeting tools, both the party and the platform itself should clearly identify the circumstances where joint controllership exists and put measures in place to fulfil those obligations. They must assess this on a case-by-case basis, irrespective of the content of any controller or processor arrangement. Joint controllership may exist in practice, if the platform exercises a significant degree of control over the tools and techniques they use to target individual users of their service with political messages on behalf of the party.

Article 26 of the GDPR specifies the requirements for joint controller situations. Parties should agree and fully understand who is responsible for what. This means they must work with any social media platform they use to make sure there are no gaps in compliance, and ensure they have appropriate contracts or agreements in place. They should also undertake in-life contract monitoring to ensure that the platforms are adhering to these contracts.

In the report, the ICO describes the data protection implications involved in joint controller situations as “complex”, adding: “We recognise that the solutions to the issues… may take more time to resolve and will require more guidance for all the actors involved.”

“Since our audits, we understand that some steps have been taken by social media companies within their revised terms and conditions of service for digital advertising,” it adds. 

The report also includes a passing mention to ongoing regulatory scrutiny of Facebook’s ad platform in Ireland under EU law — focused on concerns that the use of Facebook’s ‘lookalike audiences’ for targeting voters may not comply with the bloc’s GDPR framework.

Information commissioner, Elizabeth Denham, has previously suggested the tech giant will have to change its business model to maintain user trust. But Ireland’s data protection agency has not yet issued any GDPR decisions related to Facebook’s business.

“In the wider ecosystem, the ICO also recognises that there are still other matters that need to be addressed about the use of personal data in the political context,” the regulator writes now. “These include some of the issues set out in the report it made to the Irish Data Protection Commission (IDPC), as the lead authority under GDPR, about targeted advertising on Facebook and other issuing [sp] including where the platform could be used in political contexts. The ICO will continue to liaise with the technology platforms to consider what, if any, further steps might be required to address the issues raised by our Democracy Disrupted report. This will be of relevance to the parties’ use of social media platforms in future elections.”

Apple will release macOS Big Sur on November 12

By Zack Whittaker

Apple’s upcoming desktop and laptop operating system, macOS Big Sur, will be released on November 12, the company announced today.

macOS Big Sur — which stays with the company’s California-themed naming scheme — will arrive with a new and refreshed user interface, new features, and performance improvements.

Many of the features from iOS 14 are porting over — including improved Messages threading and in-line replies, and a redesigned Maps app. The new Apple software also comes with a new Control Center, with quick access to brightness, volume, Wi-Fi, and Bluetooth.

Safari also gets a much-needed lick of paint. It comes with new privacy and security features, including built-in Intelligent Tracking Prevention that stops trackers from following you across the web, and password monitoring to save you from using previously breached passwords.

If you’re wondering what macOS Big Sur is like to work on, TechCrunch’s Brian Heater took the new software for a spin in August.

macOS Big Sur will be supported on Macs and MacBooks dating back to 2013.
