
Scraping the Web Is a Powerful Tool. Clearview AI Abused It

By Louise Matsakis
The facial recognition startup claims it collected billions of photos from sites like Facebook and Twitter. What does the practice mean for the open web?

An adult sexting site exposed thousands of models’ passports and driver’s licenses

By Zack Whittaker

A popular sexting website has exposed thousands of photo IDs belonging to models and sex workers who earn commissions from the site.

SextPanther, an Arizona-based adult site, stored more than 11,000 identity documents on an exposed Amazon Web Services (AWS) storage bucket, including passports, driver’s licenses and Social Security numbers, without a password. The company says on its website that it uses these documents to verify the ages of models with whom users communicate.
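
For illustration only (and not how TechCrunch or Fidus examined the data), here is a minimal sketch of the kind of check researchers run to see whether a storage bucket permits anonymous listing. The bucket name is a placeholder, not SextPanther's:

```python
# Sketch: does a bucket allow unauthenticated listing? Not the researchers' tooling;
# the bucket name is hypothetical.
import boto3
from botocore import UNSIGNED
from botocore.config import Config
from botocore.exceptions import ClientError

def bucket_is_publicly_listable(bucket_name: str) -> bool:
    """Return True if an anonymous (unsigned) client can list objects in the bucket."""
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
    try:
        s3.list_objects_v2(Bucket=bucket_name, MaxKeys=1)
        return True
    except ClientError:
        # AccessDenied, NoSuchBucket and similar errors mean no anonymous listing.
        return False

print(bucket_is_publicly_listable("example-exposed-bucket"))
```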

Most of the exposed identity documents contain personal information, such as names, home addresses, dates of birth, biometrics and their photos.

Although most of the data came from models in the U.S., some of the documents were supplied by workers in Canada, India and the United Kingdom.

The site allows models and sex workers to earn money by exchanging text messages, photos and videos, including explicit and nude content, with paying users. The exposed storage bucket also contained more than 100,000 photos and videos sent and received by the workers.

It was not immediately clear who owned the storage bucket. TechCrunch asked U.K.-based penetration testing company Fidus Information Security, which has experience in discovering and identifying exposed data, to help.

Researchers at Fidus quickly found evidence suggesting the exposed data could belong to SextPanther.

An hour after we alerted the site’s owner, Alexander Guizzetti, to the exposed data, the storage bucket was pulled offline.

“We have passed this on to our security and legal teams to investigate further. We take accusations like this very seriously,” Guizzetti said in an email. He did not explicitly confirm that the bucket belonged to his company.

Using information from identity documents matched against public records, we contacted several models whose information was exposed by the security lapse.

“I’m sure I sent it to them,” said one model, referring to her driver’s license, which was exposed. (We agreed to withhold her name given the sensitivity of the data.) We passed along a photo of her license found in the exposed bucket. She confirmed it was her license, but said that the information on her license is no longer current.

“I truly feel awful for others whom have signed up with their legit information,” she said.

The security lapse comes a week after researchers found a similar cache of highly sensitive personal information of sex workers on the adult webcam streaming site PussyCash.

More than 850,000 documents were insecurely stored in another unprotected storage bucket.

Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755–8849.

Facebook’s dodgy defaults face more scrutiny in Europe

By Natasha Lomas

Italy’s Competition and Markets Authority has launched proceedings against Facebook for failing to fully inform users about the commercial uses it makes of their data.

At the same time, a German court has today upheld a consumer group’s right to challenge the tech giant over data and privacy issues in the national courts.

Lack of transparency

The Italian authority’s action, which could result in a €5 million fine for Facebook, follows an earlier decision by the regulator in November 2018, when it found the company had not been dealing plainly with users about the underlying value exchange involved in signing up to the “free” service and fined it €5 million for failing to properly inform users how their information would be used commercially.

In a press notice about its latest action, the watchdog notes Facebook has removed a claim from its homepage — which had stated that the service “is free and always will be” — but finds users are still not being informed, “with clarity and immediacy” about how the tech giant monetizes their data.

The Authority had prohibited Facebook from continuing what it dubs “deceptive practice” and ordered it to publish an amending declaration on its homepage in Italy, as well as on the Facebook app and on the personal page of each registered Italian user.

In a statement responding to the watchdog’s latest action, a Facebook spokesperson told us:

We are reviewing the Authority decision. We made changes last year — including to our Terms of Service — to further clarify how Facebook makes money. These changes were part of our ongoing commitment to give people more transparency and control over their information.

Last year Italy’s data protection agency also fined Facebook $1.1 million — in that case for privacy violations attached to the Cambridge Analytica data misuse scandal.

Dodgy defaults

In separate but related news, a ruling by a German court today found that Facebook can continue to use the advertising slogan that its service is “free and always will be” — on the grounds that it does not require users to hand over monetary payments in exchange for using the service.

A local consumer rights group, vzbv, had sought to challenge Facebook’s use of the slogan — arguing it’s misleading, given the platform’s harvesting of user data for targeted ads. But the court disagreed.

However, that was only one of a number of data protection complaints filed by the group — 26 in all. And the Berlin court found in its favor on a number of other fronts.

Significantly, vzbv has won the right to bring data protection-related legal challenges within Germany even with the pan-EU General Data Protection Regulation in force — opening the door to strategic litigation by consumer advocacy bodies and privacy rights groups in what is a very pro-privacy market.

This looks interesting because one of Facebook’s favored legal arguments in a bid to derail privacy challenges at an EU Member State level has been to argue those courts lack jurisdiction — given that its European HQ is sited in Ireland (and GDPR includes provision for a one-stop shop mechanism that pushes cross-border complaints to a lead regulator).

But this ruling looks like it will make it tougher for Facebook to funnel all data and privacy complaints via the heavily backlogged Irish regulator — which has, for example, been sitting on a GDPR complaint over forced consent by adtech giants (including Facebook) since May 2018.

The Berlin court also agreed with vzbv’s argument that Facebook’s privacy settings and T&Cs violate laws around consent — such as a location service being already activated in the Facebook mobile app, and a pre-ticked setting that made users’ profiles indexable by search engines by default.

The court also agreed that certain pre-formulated conditions in Facebook’s T&C do not meet the required legal standard — such as a requirement that users agree to their name and profile picture being used “for commercial, sponsored or related content,” and another stipulation that users agree in advance to all future changes to the policy.

Commenting in a statement, Heiko Dünkel from the law enforcement team at vzbv, said: “It is not the first time that Facebook has been convicted of careless handling of its users’ data. The Chamber of Justice has made it clear that consumer advice centers can take action against violations of the GDPR.”

We’ve reached out to Facebook for a response.

London’s Met Police switches on live facial recognition, flying in face of human rights concerns

By Natasha Lomas

While EU lawmakers are mulling a temporary ban on the use of facial recognition to safeguard individuals’ rights, as part of a risk-focused plan to regulate AI, London’s Met Police has today forged ahead with deploying the privacy-hostile technology — flipping the switch on operational use of live facial recognition in the UK capital.

The deployment comes after a multi-year period of trials by the Met and police in South Wales.

The Met says its use of the controversial technology will be targeted to “specific locations… where intelligence suggests we are most likely to locate serious offenders”.

“Each deployment will have a bespoke ‘watch list’, made up of images of wanted individuals, predominantly those wanted for serious and violent offences,” it adds.

It also claims cameras will be “clearly signposted”, adding that officers “deployed to the operation will hand out leaflets about the activity”.

“At a deployment, cameras will be focused on a small, targeted area to scan passers-by,” it writes. “The technology, which is a standalone system, is not linked to any other imaging system, such as CCTV, body worn video or ANPR.”

The biometric system is being provided to the Met by Japanese IT and electronics giant, NEC.

In a press statement, assistant commissioner Nick Ephgrave claimed the force is taking a balanced approach to using the controversial tech.

“We all want to live and work in a city which is safe: the public rightly expect us to use widely available technology to stop criminals. Equally I have to be sure that we have the right safeguards and transparency in place to ensure that we protect people’s privacy and human rights. I believe our careful and considered deployment of live facial recognition strikes that balance,” he said.

London has seen a rise in violent crime in recent years, with murder rates hitting a ten-year peak last year.

The surge in violent crime has been linked to cuts to policing services — although the new Conservative government has pledged to reverse cuts enacted by earlier Tory administrations.

The Met says its hope is that the AI-powered tech will help it tackle serious crime, including serious violence, gun and knife crime and child sexual exploitation, and “help protect the vulnerable”.

However its phrasing is not a little ironic, given that facial recognition systems can be prone to racial bias, for example, owing to factors such as bias in data-sets used to train AI algorithms.

So in fact there’s a risk that police-use of facial recognition could further harm vulnerable groups who already face a disproportionate risk of inequality and discrimination.

Yet the Met’s PR doesn’t mention the risk of the AI tech automating bias.

Instead it takes pains to couch the technology as an “additional tool” to assist its officers.

“This is not a case of technology taking over from traditional policing; this is a system which simply gives police officers a ‘prompt’, suggesting “that person over there may be the person you’re looking for”, it is always the decision of an officer whether or not to engage with someone,” it adds.
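
To make the “prompt” idea concrete, here is a heavily simplified sketch of how such a system might surface a possible match for an officer to review. The similarity threshold, embedding model and data structures are assumptions for illustration, not details of the Met’s NEC-supplied system:

```python
# Illustrative only: a toy watch-list matcher that flags a candidate for a human
# officer to review. Threshold and embeddings are assumed, not the Met/NEC's.
import numpy as np

MATCH_THRESHOLD = 0.6  # assumed similarity cut-off

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def prompt_officer(face_embedding: np.ndarray, watchlist: dict[str, np.ndarray]):
    """Return the best-matching watch-list ID if it clears the threshold, else None.
    The decision whether to engage the person remains with the officer."""
    best_id, best_score = None, 0.0
    for person_id, reference in watchlist.items():
        score = cosine_similarity(face_embedding, reference)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None
```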

While the use of a new tech tool may start with small deployments, as is being touted here, the history of software development underlines how the potential to scale is readily baked in.

A ‘targeted’ small-scale launch also prepares the ground for London’s police force to push for wider public acceptance of a highly controversial and rights-hostile technology via a gradual building out process. Aka surveillance creep.

On the flip side, the text of the draft of an EU proposal for regulating AI which leaked last week — floating the idea of a temporary ban on facial recognition in public places — noted that a ban would “safeguard the rights of individuals”. Although it’s not yet clear whether the Commission will favor such a blanket measure, even temporarily.

UK rights groups have reacted with alarm to the Met’s decision to ignore concerns about facial recognition.

Liberty accused the force of ignoring the conclusion of a report it commissioned during an earlier trial of the tech — which it says concluded the Met had failed to consider human rights impacts.

It also suggested such use would not meet key legal requirements.

“Human rights law requires that any interference with individuals’ rights be in accordance with the law, pursue a legitimate aim, and be ‘necessary in a democratic society’,” the report notes, suggesting the Met’s earlier trials of facial recognition tech “would be held unlawful if challenged before the courts”.

When the Met trialled #FacialRecognition tech, it commissioned an independent review of its use.

Its conclusions:

❌The Met failed to consider the human rights impact of the tech
❌Its use was unlikely to pass the key legal test of being "necessary in a democratic society"

— Liberty (@libertyhq) January 24, 2020

A petition set up by Liberty to demand a stop to facial recognition in public places has passed 21,000 signatures.

Discussing the legal framework around facial recognition and law enforcement last week, Dr Michael Veale, a lecturer in digital rights and regulation at UCL, told us that in his view the EU’s data protection framework, GDPR, forbids facial recognition by private companies “in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate”.

A UK man who challenged a Welsh police force’s trial of facial recognition has a pending appeal after losing the first round of a human rights challenge. Although in that case the challenge pertains to police use of the tech — rather than, as in the Met’s case, a private company (NEC) providing the service to the police.

UN calls for investigation after Saudis linked to Bezos phone hack

By Zack Whittaker

United Nations experts are calling for an investigation after a forensic report said Saudi officials “most likely” used a hacking tool built by mobile spyware maker NSO Group to break into Amazon founder Jeff Bezos’ phone.

Remarks made by U.N. human rights experts on Wednesday said the Israeli spyware maker’s flagship Pegasus mobile spyware was likely used to exfiltrate gigabytes of data from Bezos’ phone in May 2018, about six months after the Saudi government first obtained the spyware.

It comes a day after news emerged, citing a forensics report commissioned to examine the Amazon founder’s phone, that the malware was delivered from a number belonging to Saudi crown prince Mohammed bin Salman. The forensics report, carried out by FTI Consulting, said it was “highly probable” that the phone hack was triggered by a malicious video sent over WhatsApp to Bezos’ phone. Within hours, large amounts of data on Bezos’ phone had been exfiltrated.

U.N. experts Agnes Callamard and David Kaye, who were given a copy of the forensics report, said the breach of Bezos’ phone was part of “a pattern of targeted surveillance of perceived opponents and those of broader strategic importance to the Saudi authorities.”

But the report left open the possibility that technology developed by another mobile spyware maker may have been used.

The Saudi government has rejected the claims, calling them “absurd.”

NSO Group said in a statement that its technology “was not used in this instance,” saying its technology “cannot be used on U.S. phone numbers.” The company said any suggestion otherwise was “defamatory” and threatened legal action.

Forensics experts are said to have begun looking at Bezos’ phone after he accused the National Enquirer of blackmail last year. In a tell-all Medium post, Bezos described how he was targeted by the tabloid, which obtained and published private text messages and photos from his phone, prompting an investigation into the leak.

The subsequent forensic report, which TechCrunch has not yet seen, claims the initial breach began after Bezos and the Saudi crown prince exchanged phone numbers in April 2018, a month before the hack.

The report said several other prominent figures, including Saudi dissidents and political activists, also had their phones infected with the same mobile malware around the time of the Bezos phone breach. Among those whose phones were infected were people close to Jamal Khashoggi, a prominent Saudi critic and columnist for The Washington Post — which Bezos owns — who was murdered five months later.

“The information we have received suggests the possible involvement of the Crown Prince in surveillance of Mr. Bezos, in an effort to influence, if not silence, The Washington Post’s reporting on Saudi Arabia,” the U.N. experts said.

U.S. intelligence concluded that bin Salman ordered Khashoggi’s death.

The U.N. experts said the Saudis purchased the Pegasus malware, and used WhatsApp as a way to deliver the malware to Bezos’ phone.

WhatsApp, which is owned by Facebook, filed a lawsuit against the NSO Group for creating and using the Pegasus malware, which exploits a since-fixed vulnerability in the messaging platform. Once exploited, sometimes silently and without the target knowing, the operators can download data from the user’s device. Facebook said at the time that the malware was delivered to more than 1,400 targeted devices.

The U.N. experts said they will continue to investigate the “growing role of the surveillance industry” used for targeting journalists, human rights defenders, and owners of media outlets.

Amazon did not immediately comment.

Where top VCs are investing in adtech and martech

By Arman Tabatabai

Lately, the venture community’s relationship with advertising tech has been a rocky one.

Advertising is no longer the venture oasis it was in the past, with the flow of VC dollars in the space dropping dramatically in recent years. According to data from Crunchbase, adtech deal flow has fallen at a roughly 10% compounded annual growth rate over the last five years.

While subsectors like privacy or automation still manage to pull in funding, with an estimated 90%-plus of digital ad spend growth going to incumbent behemoths like Facebook and Google, the field of high-growth opportunities in the adtech space seems to narrow by the week.

Despite these pains, funding for marketing technology has remained much more stable and healthy; over the last five years, deal flow in marketing tech has only dropped at a 3.5% compounded annual growth rate according to Crunchbase, with annual invested capital in the space hovering just under $2 billion.
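
(For readers who want the arithmetic behind those compound annual rates, the short sketch below works one through with invented deal counts; Crunchbase’s underlying totals are not reproduced here.)

```python
# Worked example of the compound annual rate arithmetic; the deal counts are invented.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate; negative when the market is shrinking."""
    return (end_value / start_value) ** (1 / years) - 1

# A hypothetical fall from 1,000 deals to 590 deals over five years:
print(f"{cagr(1000, 590, 5):.1%}")  # ≈ -10.0%, in line with the adtech figure cited
```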

Given the movement in the adtech and martech sectors, we wanted to try to gauge where opportunity still exists in the verticals and which startups may have the best chance at attracting venture funding today. We asked four leading VCs who work at firms spanning early to growth stages to share what’s exciting them most and where they see opportunity in marketing and advertising.

Several of the firms we spoke to (both included and not included in this survey) stated that they are not actively investing in advertising tech at present.

UK watchdog sets out ‘age appropriate’ design code for online services to keep kids’ privacy safe

By Natasha Lomas

The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy and safety of children online.

The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.

UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.

The ICO’s code comprises 15 standards of what it calls “age appropriate design” — which the regulator says reflects a “risk-based approach” — including stipulating that settings should be set by default to ‘high privacy’; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.

Profiling should also be off by default. The code also takes aim at dark pattern UI designs that seek to manipulate user actions against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.

“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.

While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services — with the regulator noting that “the majority of online services that children use are covered” and also stipulating “this code applies if children are likely to use your service” [emphasis ours].

This means it could be applied to anything from games, to social media platforms to fitness apps to educational websites and on-demand streaming services — if they’re available to UK users.

“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.

Here are the 15 standards in full as the regulator describes them:

  1. Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
  2. Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
  3. Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
  4. Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
  5. Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
  6. Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
  7. Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
  8. Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
  9. Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
  10. Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
  11. Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
  12. Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
  13. Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
  14. Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
  15. Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.
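
As a rough illustration of how the defaults-related standards above (7, 10 and 12) might translate into a service’s settings model, consider the sketch below. The field names are invented for the example; the ICO’s code is technology-neutral and does not prescribe any particular implementation:

```python
# Illustrative only: privacy-protective defaults in a settings model. The fields are
# hypothetical and not taken from the ICO's code.
from dataclasses import dataclass

@dataclass
class ChildAccountSettings:
    # Standard 7: 'high privacy' by default
    profile_visibility: str = "private"
    searchable_by_engines: bool = False
    # Standard 10: geolocation off by default; visibility resets every session
    geolocation_enabled: bool = False
    location_visible_to_others: bool = False
    # Standard 12: profiling off by default
    personalised_ads: bool = False
    content_profiling: bool = False

    def end_session(self) -> None:
        """Per standard 10, location visibility defaults back to 'off' each session."""
        self.location_visible_to_others = False
```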

The Age Appropriate Design Code also defines children as under the age of 18 — which offers a higher bar than current UK data protection law, which, for example, sets only a 13-year age limit for children to be legally able to give their consent to being tracked online.

So, assuming (very wildly) that Internet services were to suddenly decide to follow the code to the letter, setting trackers off by default and not nudging users to weaken privacy-protecting defaults by manipulating them to give up more data, the code could, in theory, raise the level of privacy both children and adults typically get online.

However it’s not legally binding — so there’s a pretty fat chance of that.

Although the regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it does regulate and can legally enforce (and which include clear principles like ‘privacy by design and default’) — pointing out it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.

So, in a way, the regulator appears to be saying: ‘Are you feeling lucky data punk?’

The code also still has to be laid before parliament for approval for a period of 40 sitting days — with the ICO saying it will come into force 21 days after that, assuming no objections. Then there’s a further 12 month transition period after it comes into force — to “give online services time to conform”. So there’s a fair bit of slack built in before any action may be taken to tackle flagrant nose-thumbers.

Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.

The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.

“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.

“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”

“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.

“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”

Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.

“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.

Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.

But at the same time it dropped a controversial plan included in a 2017 piece of digital legislation which would have made age checks for accessing online pornography mandatory — saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.

How comprehensive the touted ‘child protections’ will end up being remains to be seen.

Brown suggests age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.

The government has also been consulting with tech companies on possible ways to implement age verification online.

However the difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are now mired in geopolitics.)

Meanwhile, the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress.

Adblock Plus’s Till Faida on the shifting shape of ad blocking

By Natasha Lomas

Publishers hate ad blockers, but millions of internet users embrace them — and many browsers even bake it in as a feature, including Google’s own Chrome. At the same time, growing numbers of publishers are walling off free content for visitors who hard-block ads, even asking users directly to be whitelisted.

It’s a fight for attention from two very different sides.

Some form of ad blocking is here to stay, so long as advertisements are irritating and the adtech industry remains deaf to genuine privacy reform. But the nature of the ad-blocking business is generally closer to filtering than blocking. So where is it headed?

We chatted with Till Faida, co-founder and CEO of eyeo, maker of Adblock Plus (ABP), to take the temperature of an evolving space that’s never been a stranger to controversy — including fresh calls for his company to face antitrust scrutiny.
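
For readers unfamiliar with how filter-based “blocking” works under the hood, the toy sketch below matches requests against rule patterns and lets whitelisted domains through. The rules are invented and far simpler than ABP’s real filter lists:

```python
# Toy filter-list matcher; the patterns are made up and much simpler than real lists.
import re

FILTER_RULES = [r"/ads/", r"doubleclick\.", r"\btracker\b"]

def should_block(url: str, whitelist: frozenset = frozenset()) -> bool:
    """Block a request only if it matches a filter rule and isn't whitelisted."""
    if any(domain in url for domain in whitelist):
        return False
    return any(re.search(rule, url) for rule in FILTER_RULES)

print(should_block("https://example.com/ads/banner.js"))      # True
print(should_block("https://news.example.com/article.html"))  # False
```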

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

By Natasha Lomas

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing that’s better than zero regulation is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot apply.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there’s no actual legal binds there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Some far-sighted regulators have called for laws that contain at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.

And a ban would be far harder for platform giants to simply bend to their will.

So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.

— Jonathan Senchyne (@jsench) January 16, 2020

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

By Natasha Lomas

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euroactiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

Privacy experts slam UK’s ‘disastrous’ failure to tackle unlawful adtech

By Natasha Lomas

The UK’s data protection regulator has been slammed by privacy experts for once again failing to take enforcement action over systematic breaches of the law linked to behaviorally targeted ads — despite warning last summer that the adtech industry is out of control.

The Information Commissioner’s Office (ICO) has also previously admitted it suspects the real-time bidding (RTB) system involved in some programmatic online advertising to be unlawfully processing people’s sensitive information. But rather than take any enforcement action against companies it suspects of law breaches, it has today issued another mildly worded blog post — in which it frames what it admits is a “systemic problem” as fixable via (yet more) industry-led “reform”.

Yet it’s exactly such industry-led self-regulation that’s created the unlawful adtech mess in the first place, data protection experts warn.

The pervasive profiling of Internet users by the adtech ‘data industrial complex’ has been coming under wider scrutiny by lawmakers and civic society in recent years — with sweeping concerns being raised in parliaments around the world that individually targeted ads provide a conduit for discrimination, exploit the vulnerable, accelerate misinformation and undermine democratic processes as a consequence of platform asymmetries and the lack of transparency around how ads are targeted.

In Europe, which has a comprehensive framework of data protection rights, the core privacy complaint is that these creepy individually targeted ads rely on a systemic violation of people’s privacy from what amounts to industry-wide, Internet-enabled mass surveillance — which also risks the security of people’s data at vast scale.

It’s now almost a year and a half since the ICO was the recipient of a major complaint into RTB — filed by Dr Johnny Ryan of private browser Brave; Jim Killock, director of the Open Rights Group; and Dr Michael Veale, a data and policy lecturer at University College London — laying out what the complainants described then as “wide-scale and systemic” breaches of Europe’s data protection regime.

The complaint — which has also been filed with other EU data protection agencies — argues that the systematic broadcasting of people’s personal data to bidders in the adtech chain is inherently insecure and thereby contravenes Europe’s General Data Protection Regulation (GDPR), which stipulates that personal data be processed “in a manner that ensures appropriate security of the personal data”.
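
To give a sense of what gets broadcast, the sketch below shows a pared-down, OpenRTB-style bid request. The field names follow the public OpenRTB spec in spirit, but the structure and values are illustrative rather than a real request captured from the wire:

```python
# Illustrative only: a simplified, OpenRTB-style bid request of the kind the
# complainants say is broadcast to large numbers of bidders on each page load.
bid_request = {
    "id": "example-auction-id",
    "site": {"page": "https://news.example.com/article-about-a-health-condition"},
    "device": {
        "ua": "Mozilla/5.0 (example user agent)",  # browser/device fingerprinting surface
        "ip": "203.0.113.0",                       # IP address, sometimes truncated
        "geo": {"lat": 51.5, "lon": -0.12},        # approximate location
    },
    "user": {
        "id": "pseudonymous-cookie-id",            # stable ID that lets bidders build profiles
        "data": [{"segment": [{"id": "interest-health"}]}],
    },
}
```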

The regulation also requires data processors to have a valid legal basis for processing people’s information in the first place — and RTB fails that test, per privacy experts — either if ‘consent’ is claimed (given the sheer number of entities and volumes of data being passed around, which means it’s not credible to achieve GDPR’s ‘informed, specific and freely given’ threshold for consent to be valid); or ‘legitimate interests’ — which requires data processors carry out a number of balancing assessment tests to demonstrate it does actually apply.

“We have reviewed a number of justifications for the use of legitimate interests as the lawful basis for the processing of personal data in RTB. Our current view is that the justification offered by organisations is insufficient,” writes Simon McDougall, the ICO’s executive director of technology and innovation, delivering a warning over the industry’s rampant misuse of legitimate interests to try to pass off RTB’s unlawful data processing as legit.

The ICO also isn’t exactly happy about what it’s found adtech doing on the Data Protection Impact Assessment front — saying, in so many words, that it’s come across widespread industry failure to actually, er, assess impacts.

“The Data Protection Impact Assessments we have seen have been generally immature, lack appropriate detail, and do not follow the ICO’s recommended steps to assess the risk to the rights and freedoms of the individual,” writes McDougall.

“We have also seen examples of basic data protection controls around security, data retention and data sharing being insufficient,” he adds.

Yet — again — despite fresh admissions of adtech’s lawfulness problem the regulator is choosing more stale inaction.

In the blog post McDougall does not rule out taking “formal” action at some point — but there’s only a vague suggestion of such activity being possible, and zero timeline for “develop[ing] an appropriate regulatory response”, as he puts it. (His preferred ‘E’ word in the blog is ‘engagement’; you’ll only find the word ‘enforcement’ in the footer link on the ICO’s website.)

“We will continue to investigate RTB. While it is too soon to speculate on the outcome of that investigation, given our understanding of the lack of maturity in some parts of this industry we anticipate it may be necessary to take formal regulatory action and will continue to progress our work on that basis,” he adds.

McDougall also trumpets some incremental industry fiddling — such as trade bodies agreeing to update their guidance — as somehow relevant to turning the tanker in a fundamentally broken system.

(Trade body the Internet Advertising Bureau’s UK branch has responded to developments with an upbeat note from its head of policy and regulatory affairs, Christie Dennehy-Neil, who lauds the ICO’s engagement as “a constructive process”, claiming: “We have made good progress” — before going on to urge its members and the wider industry to implement “the actions outlined in our response to the ICO” and “deliver meaningful change”. The statement climaxes with: “We look forward to continuing to engage with the ICO as this process develops.”)

McDougall also points to Google removing content categories from its RTB platform from next month (a move it announced months back, in November) as an important development; and seizes on the tech giant’s recent announcement of a proposal to phase out support for third party cookies within the next two years as ‘encouraging’.

Privacy experts have responded with facepalmed outrage to yet another can-kicking exercise by the UK regulator — warning that cosmetic tweaks to adtech won’t fix a system that’s designed to feast off an unlawful and inherently insecure high velocity background trading of Internet users’ personal data.

“When an industry is premised and profiting from clear and entrenched illegality that breach individuals’ fundamental rights, engagement is not a suitable remedy,” said UCL’s Veale in a statement. “The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.”

ICO believes that cosmetic fixes can do the job when it comes to #adtech. But no matter how secure data flows are and how beautiful cookie notices are, can people really understand the consequences of their consent? I'm convinced that this consent will *never* be informed. 1/2 https://t.co/1avYt6lgV3

— Karolina Iwańska (@ka_iwanska) January 17, 2020

The trio behind the RTB complaints (which includes Veale) have also issued a scathing collective response to more “regulatory ambivalence” — denouncing the lack of any “substantive action to end the largest data breach ever recorded in the UK”.

“The ‘Real-Time Bidding’ data breach at the heart of RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination,” they warn. “Regulatory ambivalence cannot continue. The longer this data breach festers, the deeper the rot sets in and the further our data gets exploited. This must end. We are considering all options to put an end to the systemic breach, including direct challenges to the controllers and judicial oversight of the ICO.”

Wolfie Christl, a privacy researcher who focuses on adtech — including contributing to a recent study looking at how extensively popular apps are sharing user data with advertisers — dubbed the ICO’s response “disastrous”.

“Last summer the ICO stated in their report that millions of people were affected by thousands of companies’ GDPR violations. I was sceptical when they announced they would give the industry six more months without enforcing the law. My impression is they are trying to find a way to impose cosmetic changes and keep the data industry happy rather than acting on their own findings and putting an end to the ubiquitous data misuse in today’s digital marketing, which should have happened years ago. The ICO seems to prioritize appeasing the industry over the rights of data subjects, and this is disastrous,” he told us.

“The way data-driven online marketing currently works is illegal at scale and it needs to be stopped from happening,” Christl added. “Each day EU data protection authorities allow these practices to continue further violates people’s rights and freedoms and perpetuates a toxic digital economy.

“This undermines the GDPR and generally trust in tech, perpetuates legal uncertainty for businesses, and punishes companies who comply and create privacy-respecting services and business models.

“Twenty months after the GDPR came into full force, it is still not enforced in major areas. We still see large-scale misuse of personal information all over the digital world. There is no GDPR enforcement against the tech giants and there is no enforcement against thousands of data companies beyond the large platforms. It seems that data protection authorities across the EU are either not able — or not willing — to stop many kinds of GDPR violations conducted for business purposes. We won’t see any change without massive fines and data processing bans. EU member states and the EU Commission must act.”

Mass surveillance for national security does conflict with EU privacy rights, court advisor suggests

By Natasha Lomas

Mass surveillance regimes in the UK, Belgium and France which require bulk collection of digital data for a national security purpose may be at least partially in breach of fundamental privacy rights of European Union citizens, per the opinion of an influential advisor to Europe’s top court issued today.

Advocate general Campos Sánchez-Bordona’s (non-legally binding) opinion, which pertains to four references to the Court of Justice of the European Union (CJEU), takes the view that EU law covering the privacy of electronic communications applies in principle when providers of digital services are required by national laws to retain subscriber data for national security purposes.

A number of cases related to EU states’ surveillance powers and citizens’ privacy rights are dealt with in the opinion, including legal challenges brought by rights advocacy group Privacy International to bulk collection powers enshrined in the UK’s Investigatory Powers Act; and a La Quadrature du Net (and others’) challenge to a 2015 French decree related to specialized intelligence services.

At stake is a now familiar argument: Privacy groups contend that states’ bulk data collection and retention regimes have overreached the law, becoming so indiscriminately intrusive as to breach fundamental EU privacy rights — while states counter-claim they must collect and retain citizens’ data in bulk in order to fight national security threats such as terrorism.

Hence, in recent years, we’ve seen attempts by certain EU Member States to create national frameworks which effectively rubberstamp swingeing surveillance powers — that then, in turn, invite legal challenge under EU law.

The AG opinion holds with previous case law from the CJEU — specifically the Tele2 Sverige and Watson judgments — that “general and indiscriminate retention of all traffic and location data of all subscribers and registered users is disproportionate”, as the press release puts it.

Instead the recommendation is for “limited and discriminate retention” — with also “limited access to that data”.

“The Advocate General maintains that the fight against terrorism must not be considered solely in terms of practical effectiveness, but in terms of legal effectiveness, so that its means and methods should be compatible with the requirements of the rule of law, under which power and strength are subject to the limits of the law and, in particular, to a legal order that finds in the defence of fundamental rights the reason and purpose of its existence,” runs the PR in a particularly elegant passage summarizing the opinion.

The French legislation is deemed to fail on a number of fronts, including for imposing “general and indiscriminate” data retention obligations, and for failing to include provisions to notify data subjects that their information is being processed by a state authority where such notifications are possible without jeopardizing its action.

Belgian legislation also falls foul of EU law, per the opinion, for imposing a “general and indiscriminate” obligation on digital service providers to retain data — with the AG also flagging that its objectives are problematically broad (“not only the fight against terrorism and serious crime, but also defence of the territory, public security, the investigation, detection and prosecution of less serious offences”).

The UK’s bulk surveillance regime is similarly seen by the AG to fail the core “general and indiscriminate collection” test.

There’s a slight carve-out for national legislation that’s incompatible with EU law being, in Sánchez-Bordona’s view, permitted to maintain its effects “on an exceptional and temporary basis”. But only if such a situation is justified by what is described as “overriding considerations relating to threats to public security or national security that cannot be addressed by other means or other alternatives, but only for as long as is strictly necessary to correct the incompatibility with EU law”.

If the court follows the opinion it’s possible states might seek to interpret such an exceptional provision as a degree of wiggle room to keep unlawful regimes running further past their legal sell-by-date.

Similarly, there could be questions over what exactly constitutes “limited” and “discriminate” data collection and retention — which could encourage states to push a ‘maximal’ interpretation of where the legal line lies.

Nonetheless, privacy advocates are viewing the opinion as a positive sign for the defence of fundamental rights.

In a statement welcoming the opinion, Privacy International dubbed it “a win for privacy”. “We all benefit when robust rights schemes, like the EU Charter of Fundamental Rights, are applied and followed,” said legal director, Caroline Wilson Palow. “If the Court agrees with the AG’s opinion, then unlawful bulk surveillance schemes, including one operated by the UK, will be reined in.”

The CJEU will issue its ruling at a later date — typically between three to six months after an AG opinion.

The opinion comes at a key time given European Commission lawmakers are set to rethink a plan to update the ePrivacy Directive, which deals with the privacy of electronic communications, after Member States failed to reach agreement last year over an earlier proposal for an ePrivacy Regulation — so the AG’s view will likely feed into that process.

This makes the revised e-Privacy Regulation a *huge* national security battleground for the MSes (they will miss the UK fighting for more surveillance) and is v relevant also to the ongoing debates on “bulk”/mass surveillance, and MI5’s latest requests… #ePR

— Ian Brown (@1Br0wn) January 15, 2020

The opinion may also have an impact on other legislative processes — such as the talks on the EU e-evidence package and negotiations on various international agreements on cross-border access to e-evidence — according to Luca Tosoni, a research fellow at the Norwegian Research Center for Computers and Law at the University of Oslo.

“It is worth noting that, under Article 4(2) of the Treaty on the European Union, ‘national security remains the sole responsibility of each Member State’. Yet, the advocate general’s opinion suggests that this provision does not exclude that EU data protection rules may have direct implications for national security,” Tosoni also pointed out.

“Should the Court decide to follow the opinion… ‘metadata’ such as traffic and location data will remain subject to a high level of protection in the European Union, even when they are accessed for national security purposes.  This would require several Member States — including Belgium, France, the UK and others — to amend their domestic legislation.”

A Twitter app bug was used to match 17 million phone numbers to user accounts

By Zack Whittaker

A security researcher said he has matched 17 million phone numbers to Twitter user accounts by exploiting a flaw in Twitter’s Android app.

Ibrahim Balic found that it was possible to upload entire lists of generated phone numbers through Twitter’s contacts upload feature. “If you upload your phone number, it fetches user data in return,” he told TechCrunch.

He said Twitter’s contact upload feature doesn’t accept lists of phone numbers in sequential format — likely as a way to prevent this kind of matching. Instead, he generated more than two billion phone numbers in sequence, randomized their order and uploaded them to Twitter through the Android app. (Balic said the bug did not exist in the web-based upload feature.)
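The approach Balic describes amounts to brute-force enumeration of a phone number space, followed by a shuffle so the uploaded list no longer looks sequential. Below is a minimal sketch of that generate-and-shuffle step; the number prefix, batch size and function names are illustrative assumptions, not Balic’s code or Twitter’s API.

```typescript
// Minimal sketch (not Balic's actual code): build a block of sequential phone
// numbers, then shuffle them so the uploaded list no longer looks sequential.
// The "+9725" prefix and the 10 million count are illustrative assumptions.
function generateNumbers(prefix: string, start: number, count: number): string[] {
  const numbers: string[] = [];
  for (let i = 0; i < count; i++) {
    // Zero-pad so every candidate number has the same length.
    numbers.push(prefix + String(start + i).padStart(8, "0"));
  }
  return numbers;
}

// Fisher-Yates shuffle: randomizes the order of the list in place.
function shuffle<T>(items: T[]): T[] {
  for (let i = items.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [items[i], items[j]] = [items[j], items[i]];
  }
  return items;
}

const candidates = shuffle(generateNumbers("+9725", 0, 10_000_000));
// Each shuffled batch would then be fed through the app's contact-upload
// feature; any number that matches an account returns user data for it.
```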

Over a two-month period, Balic said he matched records from users in Israel, Turkey, Iran, Greece, Armenia, France and Germany, but stopped after Twitter blocked the effort on December 20.

Balic provided TechCrunch with a sample of the phone numbers he matched. Using the site’s password reset feature, we verified his findings by comparing a random selection of usernames with the phone numbers that were provided.

In one case, TechCrunch was able to identify a senior Israeli politician using their matched phone number.

While he did not alert Twitter to the vulnerability, he took many of the phone numbers of high-profile Twitter users — including politicians and officials — to a WhatsApp group in an effort to warn users directly.

It’s not believed Balic’s efforts are related to a Twitter blog post published this week, which confirmed a bug could have allowed “a bad actor to see nonpublic account information or to control your account,” such as tweets, direct messages and location information.

A Twitter spokesperson told TechCrunch the company was working to “ensure this bug cannot be exploited again.”

“Upon learning of this bug, we suspended the accounts used to inappropriately access people’s personal information. Protecting the privacy and safety of the people who use Twitter is our number one priority and we remain focused on rapidly stopping spam and abuse originating from use of Twitter’s APIs,” the spokesperson said.

It’s the latest security lapse involving Twitter data in the past year. In May, Twitter admitted it gave account location data to one of its partners, even if the user had opted-out of having their data shared. In August, the company said it inadvertently gave its ad partners more data than it should have. And just last month, Twitter confirmed it used phone numbers provided by users for two-factor authentication for serving targeted ads.

Balic is previously known for identifying a security flaw that affected Apple’s developer center in 2013.

FBI secretly demands a ton of consumer data from credit agencies. Now lawmakers want answers

By Zack Whittaker

Recently released documents revealed the FBI has for years secretly demanded vast amounts of Americans’ consumer and financial information from the largest U.S. credit agencies.

The FBI has used these secret demands — known as national security letters — to compel credit giants to turn over non-content information, such as records of purchases and locations, that the agency deems necessary in national security investigations. But these letters have no judicial oversight and are typically filed with a gag order, preventing the recipient from disclosing the demand to anyone else — including the target of the letter.

Only a few tech companies, including Facebook, Google, and Microsoft, have disclosed that they have ever received one or more national security letters. Since the law changed in 2015 in the wake of the Edward Snowden disclosures that revealed the scope of the U.S. government’s surveillance operations, recipients have been allowed to petition the FBI to be cut loose from the gag provisions and publish the letters with redactions.

Since the Snowden revelations, tech companies have embraced transparency reports to inform their users of government demands for their data. But other major data collectors, such as smart home makers, have lagged behind. Some, like credit agencies, have failed to step up altogether.

Three lawmakers — Democratic senators Ron Wyden and Elizabeth Warren, and Republican senator Rand Paul — have sent letters to Equifax, Experian, and TransUnion, expressing their “alarm” as to why the credit giants have failed to disclose the number of government demands for consumer data they receive.

“Because your company holds so much potentially sensitive data on so many Americans and collects this information without obtaining consent from these individuals, you have a responsibility to be transparent about how you handle that data,” the letters said. “Unfortunately, your company has not provided information to policymakers or the public about the type or the number of disclosures that you have made to the FBI.”

Spokespeople for Equifax, Experian, and TransUnion did not respond to a request for comment outside business hours.

It’s not known how many national security letters have been issued to the credit agencies since the legal powers were signed into law in 2001. The New York Times said the national security letters to credit agencies were a “small but telling fraction” of the overall half-million FBI-issued demands made to date.

Other banks and financial institutions, as well as universities, cell service and internet providers, were targets of national security letters, the documents revealed.

The senators have given the agencies until December 27 to disclose the number of demands each has received.

Many smart home device makers still won’t say if they give your data to the government

By Zack Whittaker

A year ago, we asked some of the most prominent smart home device makers if they had given customer data to governments. The results were mixed.

The big three smart home device makers — Amazon, Facebook and Google (which includes Nest) — all disclosed in their transparency reports if and when governments demand customer data. Apple said it didn’t need a report, as the data it collects was anonymized.

As for the rest, none had published their government data-demand figures.

In the year that’s passed, the smart home market has grown rapidly, but the remaining device makers have made little to no progress on disclosing their figures. And in some cases, it got worse.

Smart home and other internet-connected devices may be convenient and accessible, but they collect vast amounts of information on you and your home. Smart locks know when someone enters your house, and smart doorbells can capture their face. Smart TVs know which programs you watch and some smart speakers know what you’re interested in. Many smart devices collect data when they’re not in use — and some collect data points you may not even think about, like your wireless network information, for example — and send them back to the manufacturers, ostensibly to make the gadgets — and your home — smarter.

Because the data is stored in the cloud by the device manufacturers, law enforcement and government agencies can demand those companies turn over that data to solve crimes.

But as the amount of data collection increases, companies are not being transparent about the data demands they receive. All we have are anecdotal reports — and there are plenty: Police obtained Amazon Echo data to help solve a murder; Fitbit turned over data that was used to charge a man with murder; Samsung helped catch a sex predator who watched child abuse imagery; Nest gave up surveillance footage to help jail gang members; and recent reporting on Amazon-owned Ring shows close links between the smart home device maker and law enforcement.

Here’s what we found.

Smart lock and doorbell maker August gave the exact same statement as last year, that it “does not currently have a transparency report and we have never received any National Security Letters or orders for user content or non-content information under the Foreign Intelligence Surveillance Act (FISA).” But August spokesperson Stephanie Ng would not comment on the number of non-national security requests — subpoenas, warrants and court orders — that the company has received, only that it complies with “all laws” when it receives a legal demand.

Roomba maker iRobot said, as it did last year, that it has “not received” any government demands for data. “iRobot does not plan to issue a transparency report at this time,” but it may consider publishing a report “should iRobot receive a government request for customer data.”

Arlo, a former Netgear smart home division that spun out in 2018, did not respond to a request for comment. Netgear, which still has some smart home technology, said it does “not publicly disclose a transparency report.”

Amazon-owned Ring, whose cooperation with law enforcement has drawn ire from lawmakers and faced questions over its ability to protect users’ privacy, said last year it planned to release a transparency report in the future, but did not say when. This time around, Ring spokesperson Yassi Shahmiri would not comment and stopped responding to repeated follow-up emails.

Honeywell spokesperson Megan McGovern would not comment and referred questions to Resideo, the smart home division Honeywell spun out a year ago. Resideo’s Bruce Anderson did not comment.

And just as last year, Samsung, a maker of smart devices and internet-connected televisions and other appliances, also did not respond to a request for comment.

On the whole, the companies’ responses were largely the same as last year.

But smart switch and sensor maker Ecobee, which last year promised to publish a transparency report “at the end of 2018,” did not follow through with its promise. When we asked why, Ecobee spokesperson Kristen Johnson did not respond to repeated requests for comment.

Based on the best available data, August, iRobot, Ring and the rest of the smart home device makers have hundreds of millions of users and customers around the world, with the potential to give governments vast troves of data — and users and customers are none the wiser.

Transparency reports may not be perfect, and some are less transparent than others. But if big companies — even after bruising headlines and claims of co-operation with surveillance states — disclose their figures, there’s little excuse for the smaller companies.

This time around, some companies fared better than their rivals. But anyone mindful of their privacy can — and should — expect better.

India proposes new rules to access its citizens’ data

By Manish Singh

India has proposed groundbreaking new rules that would require companies to garner consent from citizens in the country before collecting and processing their personal data. But at the same time, the new rules also state that companies would have to hand over “non-personal” data of their users to the government, and New Delhi would also hold the power to collect any data of its citizens without consent to serve sovereignty and larger public interest.

The new rules, proposed in “Personal Data Protection Bill 2019,” a copy of which leaked on Tuesday, would permit New Delhi to “exempt any agency of government from application of Act in the interest of sovereignty and integrity of India, the security of the state, friendly relations with foreign states, public order.”

If the bill passes — and it is expected to be discussed in the parliament in the coming weeks — select controversial laws drafted more than a decade ago would remain unchanged.

Another proposed rule would grant New Delhi the power to ask any “data fiduciary or data processor” to hand over “anonymized” “non-personal data” for the purposes of better governance, informing its policies and delivering services to citizens.

New Delhi’s new bill — which was passed by the Union Cabinet last week, but has yet to be formally shared with the public — could create new challenges for Google, Facebook, Twitter, ByteDance’s TikTok and other companies that are already facing some regulatory heat in the nation.

India conceptualized this bill two years ago and in the years since, it has undergone significant changes. An earlier draft of the bill that was formally made public last year had stated that the Indian government must not have the ability to collect or process personal data of its citizens, unless a lawful procedure was followed.

Ambiguity over what the Indian government considers an “intermediary,” a “social media” platform, or a “social media intermediary” has yet to be fully resolved, however. In the latest version, the bill appears not to include payment services, internet service providers, search engines, online encyclopedias, email services and online storage services as “social media intermediaries.”

One of the proposed rules, which is directly aimed at Facebook, Twitter and any other social media company that enables “interaction between two or more users,” requires them to give users an option to verify their identity and have that verified status displayed publicly on their profile — similar to the blue tick that Facebook and Twitter reserve for celebrities and other accounts of public interest.

Last week news outlet Reuters reported portions of the bill, citing unnamed sources. The report claimed that India was proposing the voluntary identity-verification requirement to curb the spread of false information.

As social media companies grapple with the spread of false information, which has resulted in at least 30 deaths in India, the Narendra Modi-led government, which itself is a big consumer of social media platforms, has sought to take measures to address several issues.

Over the last two years, the Indian government has asked WhatsApp, which has amassed more than 400 million users in India, to “bring traceability” to its platform in a move that would allow the authority to identify the people who are spreading the information.

WhatsApp has insisted that any such move would require breaking encryption, which would compromise the privacy and security of more than a billion people globally.

The bill does not specifically cite the government’s desire to contain false information as the reason for this proposal, however. Instead, it insists the measure would bring more “transparency and accountability.”

Some critics have expressed concerns over the proposed rules. Udbhav Tiwari, a public policy advisor at Mozilla, said New Delhi’s bill would “represent new, significant threats to Indians’ privacy. If Indians are to be truly protected, it is urgent that parliament reviews and addresses these dangerous provisions before they become law.”

Indian news site MediaNama has outlined several more changes in this Twitter thread.

Another US court says police cannot force suspects to turn over their passwords

By Zack Whittaker

The highest court in Pennsylvania has ruled that the state’s law enforcement cannot force suspects to turn over their passwords that would unlock their devices.

The state’s Supreme Court said compelling a password from a suspect is a violation of the Fifth Amendment, the constitutional protection against self-incrimination.

It’s not a surprising ruling, given other state and federal courts have almost always come to the same conclusion. The Fifth Amendment grants anyone in the U.S. the right to remain silent, which includes the right to not turn over information that could incriminate them in a crime. These days, those protections extend to the passcodes that only a device owner knows.

But the ruling is not expected to affect the ability of police to force suspects to use their biometrics — like their face or fingerprints — to unlock their phone or computer.

Because your passcode is stored in your head and your biometrics are not, prosecutors have long argued that police can compel a suspect into unlocking a device with their biometrics, which they say are not constitutionally protected. The court also did not address biometrics. In a footnote of the ruling, the court said it “need not address” the issue, blaming the U.S. Supreme Court for creating “the dichotomy between physical and mental communication.”

Peter Goldberger, president of the ACLU of Pennsylvania, who presented the arguments before the court, said it was “fundamental” that suspects have the right “to avoid self-incrimination.”

Despite the spate of rulings in recent years, law enforcement have still tried to find their way around compelling passwords from suspects. The now-infamous Apple-FBI case saw the federal agency try to force the tech giant to rewrite its iPhone software in an effort to beat the password on the handset of the terrorist Syed Rizwan Farook, who with his wife killed 14 people in his San Bernardino workplace in 2015. Apple said the FBI’s use of the 200-year-old All Writs Act would be “unduly burdensome” by putting potentially every other iPhone at risk if the rewritten software leaked or was stolen.

The FBI eventually dropped the case without Apple’s help after the agency paid hackers to break into the phone.

Brett Max Kaufman, a senior staff attorney at the ACLU’s Center for Democracy, said the Pennsylvania case ruling sends a message to other courts to follow in its footsteps.

“The court rightly rejects the government’s effort to create a giant, digital-age loophole undermining our time-tested Fifth Amendment right against self-incrimination,” he said. “The government has never been permitted to force a person to assist in their own prosecution, and the courts should not start permitting it to do so now simply because encrypted passwords have replaced the combination lock.”

“We applaud the court’s decision and look forward to more courts to follow in the many pending cases to be decided next,” he added.

Congress extends NSA call records collection powers to March

By Zack Whittaker

In passing a short-term funding bill to avoid a U.S. government shutdown, Congress has also extended the government’s legal powers allowing it to collect millions of Americans’ call records daily.

Buried in a funding bill passed by the House this week was a clause that extended the government’s so-called Section 215 powers, which allow the National Security Agency to compel phone providers to turn over daily logs — known as “metadata” — of their customers’ calls, including their phone numbers, when the call was made and the call’s duration. The program is designed to allow intelligence analysts to sift through vast amounts of data to identify links between suspected terrorists. But the program also collects millions of wholly domestic phone calls between Americans, which courts have ruled unconstitutional.
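For illustration, here is a rough sketch of what a call-detail “metadata” record looks like based on the fields the article names (phone numbers, time of call, duration), plus a naive two-hop “contact chaining” lookup of the kind such bulk records are meant to support. The field and function names are assumptions for the sketch, not any agency’s or carrier’s actual schema or tooling.

```typescript
// Illustrative only: the shape of a call-detail record based on the fields the
// article names, and a naive two-hop "contact chaining" query over those
// records. Field and function names are assumptions, not a real schema.
interface CallRecord {
  caller: string;      // originating phone number
  callee: string;      // receiving phone number
  startedAt: Date;     // when the call was made
  durationSec: number; // how long it lasted; note there is no call content
}

// Return every number within two "hops" of a seed number.
function twoHopContacts(records: CallRecord[], seed: string): Set<string> {
  const neighbors = (n: string): string[] =>
    records
      .filter(r => r.caller === n || r.callee === n)
      .map(r => (r.caller === n ? r.callee : r.caller));

  const firstHop = neighbors(seed);
  const result = new Set<string>(firstHop);
  for (const contact of firstHop) {
    for (const second of neighbors(contact)) {
      result.add(second);
    }
  }
  result.delete(seed); // the seed itself is not a "contact"
  return result;
}
```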

Although it’s believed all the major phone carriers have been told to feed their call logs to the government, a top secret court order leaked by whistleblower Edward Snowden only confirmed Verizon — which owns TechCrunch — as an unwitting participant in the program.

The Senate approved the funding bill on Thursday after a 74-20 vote. The bill will now go to the president’s desk, averting a midnight government shutdown, but also confirming the Section 215 powers will be extended until March 15.


But although the powers are to be extended, the program itself is said to have been shut down.

After the Snowden disclosures in 2013, Congress moved to rein in the NSA’s call collection powers amid public outcry. In 2015, lawmakers passed the Freedom Act, which allowed the continued collection of call records but ostensibly with greater oversight. Since the Freedom Act passed, the number of records collected has rocketed. But during that time the NSA was forced to come clean and admit that it “overcollected” Americans’ call records on two separate occasions, prompting the agency to delete hundreds of millions of call logs.

The second incident led the NSA to shut down the call records collection program. But the Trump administration has renewed efforts to restart the program by pushing for the legal powers to be reauthorized.

The Electronic Frontier Foundation, which first noted the legal extension, said it was “disappointed” that lawmakers “hid an extension of these authorities in a funding bill, without debate and without consideration of meaningful privacy and civil liberties safeguards to include.”

A source in the Senate said the three-month extension came as a surprise, but that the additional time would allow lawmakers more time to properly debate reforms to the program without rushing it through before the end of the year.

Several privacy advocacy and rights groups, including the EFF and the ACLU, have called on the government to end the call records collection program.

Amnesty International latest to slam surveillance giants Facebook and Google as ‘incompatible’ with human rights

By Natasha Lomas

Human rights charity Amnesty International is the latest to call for reform of surveillance capitalism — blasting the business models of “surveillance giants” Facebook and Google in a new report which warns the pair’s market dominating platforms are “enabling human rights harm at a population scale”.

“[D]espite the real value of the services they provide, Google and Facebook’s platforms come at a systemic cost,” Amnesty warns. “The companies’ surveillance-based business model forces people to make a Faustian bargain, whereby they are only able to enjoy their human rights online by submitting to a system predicated on human rights abuse. Firstly, an assault on the right to privacy on an unprecedented scale, and then a series of knock-on effects that pose a serious risk to a range of other rights, from freedom of expression and opinion, to freedom of thought and the right to non-discrimination.”

“This isn’t the internet people signed up for,” it adds.

What’s most striking about the report is the familiarity of the arguments. There is now a huge weight of consensus criticism around surveillance-based decision-making — from Apple’s own Tim Cook through scholars such as Shoshana Zuboff and Zeynep Tufekci to the United Nations — that’s itself been fed by a steady stream of reportage of the individual and societal harms flowing from platforms’ pervasive and consentless capturing and hijacking of people’s information for ad-based manipulation and profit.

This core power asymmetry is maintained and topped off by self-serving policy positions which at best fiddle around the edges of an inherently anti-humanitarian system. Platforms, meanwhile, have become practiced in dark arts PR — offering, at best, a pantomime ear to the latest data-enabled outrage that’s making headlines, without ever actually changing the underlying system. That surveillance capitalism’s abusive modus operandi is now inspiring governments to follow suit — aping the approach by developing their own data-driven control systems to straitjacket citizens — is exceptionally chilling.

But while the arguments against digital surveillance are now very familiar what’s still sorely lacking is an effective regulatory response to force reform of what is at base a moral failure — and one that’s been allowed to scale so big it’s attacking the democratic underpinnings of Western society.

“Google and Facebook have established policies and processes to address their impacts on privacy and freedom of expression – but evidently, given that their surveillance-based business model undermines the very essence of the right to privacy and poses a serious risk to a range of other rights, the companies are not taking a holistic approach, nor are they questioning whether their current business models themselves can be compliant with their responsibility to respect human rights,” Amnesty writes.

“The abuse of privacy that is core to Facebook and Google’s surveillance-based business model is starkly demonstrated by the companies’ long history of privacy scandals. Despite the companies’ assurances over their commitment to privacy, it is difficult not to see these numerous privacy infringements as part of the normal functioning of their business, rather than aberrations.”

Needless to say, Facebook and Google do not agree with Amnesty’s assessment. But, well, they would say that, wouldn’t they?

Amnesty’s report notes there is now a whole surveillance industry feeding this beast — from adtech players to data brokers — while pointing out that the dominance of Facebook and Google, aka the adtech duopoly, over “the primary channels that most of the world relies on to engage with the internet” is itself another harm, as it lends the pair of surveillance giants “unparalleled power over people’s lives online”.

“The power of Google and Facebook over the core platforms of the internet poses unique risks for human rights,” it warns. “For most people it is simply not feasible to use the internet while avoiding all Google and Facebook services. The dominant internet platforms are no longer ‘optional’ in many societies, and using them is a necessary part of participating in modern life.”

Amnesty concludes that it is “now evident that the era of self-regulation in the tech sector is coming to an end” — saying further state-based regulation will be necessary. Its call there is for legislators to follow a human rights-based approach to rein in surveillance giants.

You can read the report in full here (PDF).

Opinion: Websites Ask for Permissions And Attack Forgiveness

By Lukasz Olejnik
Web pages are increasingly powerful—asking for notifications, webcam access, or location—but this great power comes with great vulnerabilities.
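The “powerful” capabilities Olejnik refers to are gated behind standard browser permission prompts. A minimal sketch, using the standard Web APIs for notifications, camera access and geolocation (browser context assumed):

```typescript
// A minimal sketch of the permission prompts the piece refers to, using the
// standard browser APIs for notifications, camera and location.
// Runs in a page context (DOM), not in Node.
async function requestPowerfulFeatures(): Promise<void> {
  // Notifications: resolves to "granted", "denied" or "default".
  const notificationStatus = await Notification.requestPermission();
  console.log("notifications:", notificationStatus);

  // Webcam: triggers the camera permission prompt.
  try {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true });
    stream.getTracks().forEach(track => track.stop()); // release the camera
  } catch (err) {
    console.log("camera denied or unavailable:", err);
  }

  // Location: triggers the geolocation permission prompt.
  navigator.geolocation.getCurrentPosition(
    pos => console.log("location:", pos.coords.latitude, pos.coords.longitude),
    err => console.log("location denied:", err.message)
  );
}
```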