Splunk acquires cloud monitoring service SignalFx for $1.05B

By Frederic Lardinois

Splunk, the publicly traded data processing and analytics company, today announced that it has acquired SignalFx for a total price of about $1.05 billion. Approximately 60% of this will be in cash and 40% in Splunk common stock. The companies expect the acquisition to close in the second half of 2020.
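For scale, the split the companies describe works out to roughly $630 million in cash and about $420 million in Splunk common stock at the stated $1.05 billion price.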

SignalFx, which emerged from stealth in 2015, provides real-time cloud monitoring solutions, predictive analytics and more. Upon close, Splunk argues, this acquisition will allow it to become a leader “in observability and APM for organizations at every stage of their cloud journey, from cloud-native apps to homegrown on-premises applications.”

Indeed, the acquisition will likely make Splunk a far stronger player in the cloud space as it expands its support for cloud-native applications and the modern infrastructures and architectures those rely on.

Ahead of the acquisition, SignalFx had raised a total of $178.5 million, according to Crunchbase, including a recent Series E round. Investors include General Catalyst, Tiger Global Management, Andreessen Horowitz and CRV. Its customers include the likes of AthenaHealth, Change.org, Kayak, NBCUniversal and Yelp.

“Data fuels the modern business, and the acquisition of SignalFx squarely puts Splunk in position as a leader in monitoring and observability at massive scale,” said Doug Merritt, president and CEO, Splunk, in today’s announcement. “SignalFx will support our continued commitment to giving customers one platform that can monitor the entire enterprise application lifecycle. We are also incredibly impressed by the SignalFx team and leadership, whose expertise and professionalism are a strong addition to the Splunk family.”

Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity

By Natasha Lomas

Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy by aggregating and adding noise to data to mask individual identities is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization on high-dimensional data sets is persistent, even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing analysts to run queries on a data-set and gain valuable insights without accessing the data itself. The promise is that it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”
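For readers who want the textbook version of that intuition (this is the standard definition of differential privacy, not anything specific to this paper or to Diffix): a randomized query mechanism M is ε-differentially private if, for every pair of datasets D and D' differing in a single person’s record, and every set of possible outputs S,

```latex
\Pr\big[M(D) \in S\big] \;\le\; e^{\varepsilon}\,\Pr\big[M(D') \in S\big]
```

Crucially, the guarantee composes: answering k queries, each ε-differentially private, can leak up to kε in total, which is the formal counterpart of de Montjoye’s point that an unlimited stream of answers eventually reveals everything in the database.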

The researchers chose to focus on Diffix as they were responding to a bug bounty attack challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”
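To make the mechanics concrete, here is a deliberately crude toy in Python (it is not the researchers’ actual attack, and it is not how Diffix really works) illustrating why noise derived from the data it protects can itself leak that data: if two queries match exactly the same people they return exactly the same “noisy” answer, so any difference at all betrays the one record the attacker excluded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 1,000 users, each with a secret binary attribute.
secrets = rng.integers(0, 2, size=1000)
TARGET = 42  # the user whose secret the attacker wants to learn

def noisy_count(predicate):
    """Toy query engine with 'sticky', data-dependent noise: every matched user
    contributes a fixed pseudo-random noise layer seeded by their id.
    Loosely inspired by per-user noise layers; NOT the real Diffix design."""
    matched = [u for u in range(len(secrets)) if predicate(u)]
    noise = sum(np.random.default_rng(u).normal(0, 1) for u in matched)
    return len(matched) + noise

# Two queries that differ only in whether the target is explicitly excluded.
a = noisy_count(lambda u: secrets[u] == 1)
b = noisy_count(lambda u: secrets[u] == 1 and u != TARGET)

# If the target's secret is 0, both queries match exactly the same users, so the
# data-dependent noise is identical and a == b. If it is 1, the answers differ by
# 1 plus the target's own noise layer. The noise itself gives the secret away.
print("inferred secret:", int(a != b), "| true secret:", int(secrets[TARGET]))
```

The published attack is statistical rather than an exact equality test, and Diffix layers on further defenses, but the lever is the one de Montjoye describes: study how the noise behaves across carefully crafted query variants instead of trying to cancel it out.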

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively small number of queries per user — as few as 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

After disclosing the attack to Aircloak, de Montjoye says it has developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch so it’s not been possible to independently assess its effectiveness. 

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community to really move to something closer to adversarial privacy,” he tells TechCrunch. “We need to start adopting the red team, blue team penetration testing that have become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and how do we really learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s no going to be any perfect system. Vulnerability will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”

WebKit’s new anti-tracking policy puts privacy on a par with security

By Natasha Lomas

WebKit, the open source engine that underpins Internet browsers including Apple’s Safari browser, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies which are used to creep on Internet users as they go about their business online.

Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.

This translates to stuff like tracking pixels; browser and device fingerprinting; and navigational tracking to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resource into ‘innovations’ intended to strip web users of their privacy.

WebKit’s new policy is essentially saying enough: Stop the creeping.

But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.

“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.

“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.

“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”

Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”

It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).

If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.

“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.
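Circling back to the “reducing the available bits of entropy” wording above: a characteristic that can take N roughly equally likely values contributes about log2(N) bits towards singling a user out, and independent characteristics add up, so capping the bits available to a technique directly caps how precisely it can fingerprint someone. A minimal sketch (the surface sizes below are invented for illustration and are not WebKit’s figures):

```python
import math

def bits(n_possible_values: int) -> float:
    # Entropy contributed by one fingerprinting surface with n equally likely values.
    return math.log2(n_possible_values)

# Hypothetical surface sizes, purely for illustration:
surfaces = {"screen size": 50, "timezone": 40, "installed fonts": 10_000}
total_bits = sum(bits(n) for n in surfaces.values())
anonymity_set = math.prod(surfaces.values())  # users who remain indistinguishable
print(f"~{total_bits:.1f} bits of entropy, i.e. roughly 1 in {anonymity_set:,} users")
```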

WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.

Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs, Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

Equating circumvention of anti-tracking with security exploitation is unprecedented. This is exactly what we need to treat privacy as first class citizen. Enough with hand-waving. It's making technology catch up with regulations (not the other way, for once!) #ePrivacy #GDPR https://t.co/G1Dx7F2MXu

— Lukasz Olejnik (@lukOlejnik) August 15, 2019

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells TechCrunch. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”

Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.

“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”

“How this policy will be enforced in practice will be carefully observed,” he adds.

As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.

There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.

Although that’s also a process that takes time.

“The quality of cybersecurity and privacy technology policy, including its communication still leave much to desire, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be about purely the risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.

“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts as well to a minor degree to the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”

For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.

Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau, whose membership comprises every type of tracker-deploying entity on the Internet.

But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.

In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.

Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.

It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.

As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.

Another question thrown up by WebKit’s new policy is which way Chromium will jump, aka the browser engine that underpins Google’s hugely popular Chrome browser.

Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.

Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two developers discussing potential future work to combat tracking techniques designed to override privacy settings in a blog post from nearly five years ago.

There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked in privacy standards, rather than proactively working to prevent Internet users from being creeped on.

Fantasy football startup Sleeper nabs VC funding to take on ESPN

By Lucas Matney

Sleeper is looking to take on fantasy league apps from major players like ESPN and has amassed venture funding from Silicon Valley investors to take them down.

The Bay Area startup is aiming to treat a fantasy football league more like a social platform than a loose jumble of league mechanics, distinguishing itself as a simple, free and ad-free option.

Sleeper has done limited press as it has been ramping up its app over the past two seasons, but the team has been courting the interest of investors to scale the product, raising more than $7 million from VCs to date. The company closed a $5.3 million Series A late last year led by General Catalyst. In early 2017, the startup also closed a $2 million seed led by Birchmere Ventures with participation from Uber co-founder Garrett Camp’s startup studio, Expa.

There isn’t much in terms of monetization options at the moment. CEO Nan Wang tells TechCrunch that the focus right now is “amassing a large base of users and making it the stickiest and highest engagement product in the category.”

Wang says the app’s users spend 50 minutes per day on average during the season, numbers he calls “Instagram-like.” The main contributor to that number seems to be that chat is always a swipe away and that all of the actions that are happening during the season show up inside chats to encourage engagement.

This unifies the experience for users, many of whom have had to piecemeal their experience by using a WhatsApp or GroupMe group in addition to the other fantasy league apps that they’ve been using. Sleeper’s more differentiated UI, along with the up-to-the-minute notifications that deliver league updates, seems to be popular among early vocal users.

Poaching users from other platforms is definitely a priority, but Wang says the team has really been looking at how to nab users who have stayed away from the convoluted confusion of fantasy leagues as well. Taking on the leading apps from ESPN, Yahoo and NFL can be daunting; another stress for the younger startup is just how tight the user acquisition window is, though things compound quickly if you can create one loyal user that brings their entire league to the platform.

“The user acquisition window for fantasy football leagues is strongest from the second week of August until the first week of September. Historically, we’ve seen that about 70% of users create their leagues in that three-week window,” Wang tells me.

The funding has been used to build out its team, which is still just 10 full-time employees, as well as expand its ambitions beyond fantasy football alone into other sports, including basketball and soccer.

Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds

By Natasha Lomas

New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies were being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play in order to paint a picture of current implementations.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs, which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, only 39% mention the specific purpose of the data collection and just 21% say who can access the data.

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Internet Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That rises only slightly, to between 1% and 4%, for those who would enable some cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purpose-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1% of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset used to arrive at the 0.1% figure is biased — the nationality of visitors is not generally representative of public Internet users, and the data comes from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher ‘accept all’ click rate, though still only marginally higher: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading TechCrunch back in January 2018 may also remember this sage plain English advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor in the costs of data protection compliance to their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused by the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subject to a lot of ‘consent theatre’ (ie noisy yet non-compliant cookie notices) — which in turn is causing knock-on problems of consumer mistrust and consent fatigue with all these useless pop-ups, problems that work against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless  something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome are obvious.”

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically — which makes enforcement a key piece that must slot in too. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.

Facebook could face billions in potential damages as court rules facial recognition lawsuit can proceed

By Jonathan Shieber

Facebook is facing exposure to billions of dollars in potential damages as a federal appeals court on Thursday rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.

The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.

Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.

The most significant language from the decision from the circuit court seems to be this:

We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.

The American Civil Liberties Union came out in favor of the court’s ruling.

“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”

As April Glaser noted in Slate, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.

“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”

That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to other people, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.

As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.

Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to Reuters.

Now, the lawsuit will go back to the San Francisco courtroom of U.S. District Judge James Donato, who approved the class action lawsuit last April, for a possible trial.

Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000 and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users that could be included in the lawsuit, those figures could amount to real money.
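To put rough numbers on that (assuming, for the sake of the back-of-the-envelope, one violation per class member): at $1,000 per negligent violation, 7 million users would imply damages in the region of $7 billion, and at the $5,000 ceiling for intentional violations as much as $35 billion — hence the billions in potential exposure.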

“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”

These civil damages could come on top of the fine Facebook has already agreed to pay the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data, which resulted in one of the largest penalties ever levied against a U.S. technology company. Facebook is potentially on the hook for a $5 billion payout to the U.S. government; that penalty is still subject to approval by the Justice Department.

Keith Rabois, BoxGroup back New York-based Brex competitor

By Kate Clark

Considering its unparalleled success, it was only a matter of time before a Brex copycat emerged.

Ramp Financial, a new startup led by Capital One-acquired Paribus founders Eric Glyman and Karim Atiyeh (pictured), has raised $7 million, TechCrunch has learned. The capital came from Keith Rabois of Founders Fund, BoxGroup’s Adam Rothenberg and Coatue Management, a hedge fund that recently launched a $700 million early-stage investment vehicle.

Ramp Financial, Founders Fund, BoxGroup and Coatue Management declined to comment.

Ramp Financial is in the very early stages of product development, though we’re told, “It’s the same as Brex.” Other details available on the new startup, which raised on a pre-money valuation of $25 million, according to sources, are slim. Even its name may be subject to change.

Brex, founded in 2017 by a pair of now 23-year-olds, created a corporate charge card tailored for startups. The Y Combinator graduate doesn’t require cardholders to submit Social Security numbers or credit scores, granting entrepreneurs a new avenue to credit and method of protecting their credit scores. Brex’s software also expedites time-consuming expense management, accounting and budgeting processes for employees. Quickly, it has become essential to the company-building process in Silicon Valley.

It helps that VCs are wild for Brex. The startup has raised more than $300 million in VC funding in only two years. Most recently, it closed a $100 million round led by Kleiner Perkins at a valuation of $2.6 billion.

Given Brex’s rapid growth and the uptick in venture capital investment in challenger banks, or new financial services competing with incumbent financiers, we’re guessing Ramp Financial didn’t have a tough time pitching VCs. Plus, its founders Glyman and Atiyeh have a clear track record of success.

The duo previously built Paribus, a startup acquired by Capital One roughly one year after launching onstage at TechCrunch Disrupt New York 2015. Paribus, which raised just over $2 million from Slow Ventures, General Catalyst, Greylock and others before the M&A transaction, helps online shoppers get money back when prices drop on items they’ve purchased. Terms of Capital One’s acquisition were not disclosed.

Paribus is also a graduate of Y Combinator, completing the startup accelerator in the summer of 2015.

Aside from both completing Y Combinator, the founders of Brex and Ramp Financial share connections to the PayPal mafia. Rabois, a general partner at Founders Fund, was an executive at the business in the early 2000s. PayPal co-founders Peter Thiel and Max Levchin are Brex investors.

Twitter ‘fesses up to more adtech leaks

By Natasha Lomas

Twitter has disclosed more bugs related to how it uses personal data for ad targeting that mean it may have shared users’ data with advertising partners even when a user had expressly told it not to.

Back in May the social network disclosed a bug that in certain conditions resulted in an account’s location data being shared with a Twitter ad partner, during real-time bidding (RTB) auctions.

In a blog post on its Help Center about the latest “issues” Twitter says it “recently” found, it admits to finding two problems with users’ ad settings choices that mean they “may not have worked as intended”.

It claims both problems were fixed on August 5. Though it does not specify when it realized it was processing user data without their consent.

The first bug relates to tracking ad conversions. This meant that if a Twitter user clicked or viewed an ad for a mobile application on the platform and subsequently interacted with the mobile app Twitter says it “may have shared certain data (e.g., country code; if you engaged with the ad and when; information about the ad, etc)” with its ad measurement and advertising partners — regardless of whether the user had agreed their personal data could be shared in this way.

It suggests this leak of data has been happening since May 2018 — which is also the month when Europe’s updated privacy framework, GDPR, came into force. The regulation mandates disclosure of data breaches (which explains why you’re hearing about all these issues from Twitter) — and means that quite a lot is riding on how “recently” Twitter found these latest bugs. Because GDPR also includes a supersized regime of fines for confirmed data protection violations.

Though it remains to be seen whether Twitter’s now repeatedly leaky adtech will attract regulatory attention…

Twitter may have /accidentally/ shared data on users to ads partners even for those who opted out from personalised ads. That would be a violation of user settings and expectations, which #GDPR makes a quasi-contract. https://t.co/s0acfllEhG

— Lukasz Olejnik (@lukOlejnik) August 7, 2019

Twitter specifies that it does not share users’ names, Twitter handles, email or phone number with ad partners. However it does share a user’s mobile device identifier, which GDPR treats as personal data as it acts as a unique identifier. Using this identifier, Twitter and Twitter’s ad partners can work together to link a device identifier to other pieces of identity-linked personal data they collectively hold on the same user to track their use of the wider Internet, thereby allowing user profiling and creepy ad targeting to take place in the background.

The second issue Twitter discloses in the blog post also relates to tracking users’ wider web browsing to serve them targeted ads.

Here Twitter admits that, since September 2018, it may have served targeted ads that used inferences made about the user’s interests based on tracking their wider use of the Internet — even when the user had not given permission to be tracked.

This sounds like another breach of GDPR, given that in cases where the user did not consent to being tracked for ad targeting Twitter would lack a legal basis for processing their personal data. But it’s saying it processed it anyway — albeit, it claims accidentally.

This type of creepy ad targeting — based on so-called ‘inferences’ — is made possible because Twitter associates the devices you use (including mobile and browsers) when you’re logged in to its service with your Twitter account, and then receives information linked to these same device identifiers (IP addresses and potentially browser fingerprinting) back from its ad partners, likely gathered via tracking cookies (including Twitter’s own social plug-ins) which are larded all over the mainstream Internet for the purpose of tracking what you look at online.

These third party ad cookies link individuals’ browsing data (which gets turned into inferred interests) with unique device/browser identifiers (linked to individuals) to enable the adtech industry (platforms, data brokers, ad exchanges and so on) to track web users across the web and serve them “relevant” (aka creepy) ads.
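As a crude illustration of the linkage being described here (the identifiers, field names and URLs below are made up, and this is not Twitter’s actual data model or pipeline), a shared device identifier is all it takes to join otherwise separate datasets into an interest profile attached to an account:

```python
# Purely illustrative toy join; not Twitter's real pipeline or schema.
platform_accounts = {
    "device-abc123": {"handle": "@example_user"},          # platform-side login record
}
partner_events = [                                          # partner-side browsing events
    {"device_id": "device-abc123", "url": "https://shoes.example/sale"},
    {"device_id": "device-abc123", "url": "https://travel.example/rome"},
]

profiles = {
    device_id: {
        **account,
        "inferred_interests": [e["url"] for e in partner_events
                               if e["device_id"] == device_id],
    }
    for device_id, account in platform_accounts.items()
}
print(profiles)  # 'anonymous' browsing events are now attached to a named account
```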

“As part of a process we use to try and serve more relevant advertising on Twitter and other services since September 2018, we may have shown you ads based on inferences we made about the devices you use, even if you did not give us permission to do so,” is how Twitter explains this second ‘issue’.

“The data involved stayed within Twitter and did not contain things like passwords, email accounts, etc.,” it adds. Although the key point here is one of a lack of consent, not where the data ended up.

(Also, the users’ wider Internet browsing activity linked to their devices via cookie tracking did not originate with Twitter — even if it’s claiming the surveillance files it received from its “trusted” partners stayed on its servers. Bits and pieces of that tracked data would, in any case, exist all over the place.)

In an explainer on its website on “personalization based on your inferred identity” Twitter seeks to reassure users that it will not track them without their consent, writing:

We are committed to providing you meaningful privacy choices. You can control whether we operate and personalize your experience based on browsers or devices other than the ones you use to log in to Twitter (or if you’re logged out, browsers or devices other than the one you’re currently using), or email addresses and phone numbers similar to those linked to your Twitter account. You can do this by visiting your Personalization and data settings and adjusting the Personalize based on your inferred identity setting.

The problem in this case is that users’ privacy choices were simply overridden. Twitter says it did not do so intentionally. But either way it’s not consent. Ergo, a breach.

“We know you will want to know if you were personally affected, and how many people in total were involved. We are still conducting our investigation to determine who may have been impacted and if we discover more information that is useful we will share it,” Twitter goes on. “What is there for you to do? Aside from checking your settings, we don’t believe there is anything for you to do.

“You trust us to follow your choices and we failed here. We’re sorry this happened, and are taking steps to make sure we don’t make a mistake like this again. If you have any questions, you may contact Twitter’s Office of Data Protection through this form.”

While the company may “believe” there is nothing Twitter users can do but accept its apology for screwing up, European Twitter users who believe it processed their data without their consent do have a course of action they can take: They can complain to their local data protection watchdog.

Zooming out, there are also major legal question marks hanging over behaviourally targeted ads in Europe.

The UK’s privacy regulator warned in June that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of pan-EU privacy laws — following multiple complaints filed in the region that argue real-time bidding (RTB) is in breach of the GDPR.

While, back in May, Google’s lead regulator in Europe, the Irish Data Protection Commission, confirmed it had opened a formal investigation into use of personal data in the context of its online Ad Exchange.

So the wider point here is that the whole leaky business of creepy ads looks to be operating on borrowed time.

Important statement from Brian O’Kelley, inventor of much of RTB https://t.co/02oHP6ZPVp pic.twitter.com/AS5Bh2zFqh

— Johnny Ryan (@johnnyryan) August 6, 2019

StockX was hacked, exposing millions of customers’ data

By Zack Whittaker

It wasn’t “system updates” as it claimed. StockX was mopping up after a data breach, TechCrunch can confirm.

The fashion and sneaker trading platform pushed out a password reset email to its users on Thursday citing “system updates,” but left users confused and scrambling for answers. StockX told users that the email was legitimate and not a phishing email as some had suspected, but did not say what caused the alleged system update or why there was no prior warning.

A spokesperson eventually told TechCrunch that the company was “alerted to suspicious activity” on its site but declined to comment further.

But that wasn’t the whole truth.

An unnamed data breach seller contacted TechCrunch claiming more than 6.8 million records were stolen from the site in May by a hacker. The seller declined to say how they obtained the data, but promised to soon put the stolen records up for sale on the dark web.

The seller provided TechCrunch a sample of 1,000 records. We contacted customers and provided them information only they would know from their stolen records, such as their real name and username combination and shoe size. Every person who responded confirmed their data as accurate.

The stolen data contained names, email addresses, hashed passwords, and other profile information — such as shoe size and trading currency. The data also included the user’s device type, such as Android or iPhone, and the software version. Several other internal flags were found in each record, such as whether or not the user was banned or if European users had accepted the company’s GDPR message.

Under those GDPR rules, a company can be fined up to four percent of its global annual revenue for violations.

When reached prior to publication, neither spokesperson Katy Cockrel nor StockX founder Josh Luber responded to a request for comment.

StockX was last month valued at over $1 billion after a $110 million fundraise.

Northrop Grumman is among the companies tapped to make the US Army’s drone-killing lasers

By Jonathan Shieber

Northrop Grumman is going to be working on the U.S. Army’s long-planned drone-killing lasers.

The Army wants to mount 50 kilowatt laser systems on top of its General Dynamics-designed Stryker vehicle as part of the U.S. Army Maneuver Short Range Air Defense (M-SHORAD) directed energy prototyping initiative.

Basically, the Army wants to use these lasers to protect frontline combat troops against drone attacks.

The initiative includes integrating a directed energy weapon system on a Stryker vehicle as a pathfinding effort toward the U.S. Army M-SHORAD objective to provide more comprehensive protection of frontline combat units.

“Northrop Grumman is eager to leverage its portfolio of innovative, proven technologies and integration expertise to accelerate delivery of next-generation protection to our maneuver forces,” said Dan Verwiel, vice president and general manager, missile defense and protective systems, Northrop Grumman, in a statement.

The drone, helicopter, rocket, artillery and mortar defense system that the Army is looking to mount on a group of Stryker all-terrain vehicles could come from either Northrop Grumman or Raytheon, which was also tapped to develop tech for the project.

“The time is now to get directed energy weapons to the battlefield,” said Lt. Gen. L. Neil Thurgood, director of hypersonics, directed energy, space and rapid acquisition, in a statement. “The Army recognizes the need for directed energy lasers as part of the Army’s modernization plan. This is no longer a research effort or a demonstration effort. It is a strategic combat capability, and we are on the right path to get it in soldiers’ hands.”

For the Army, lasers hold the promise of reducing the supply chain hurdles associated with conventional kinetic weapons. In May, the Army gave the green light to a strategy for accelerated prototyping and field use of a wide array of lasers for infantry, vehicles, and air support.

While Raytheon and Northrop Grumman have both been tapped by the Army, the military will also entertain pitches from other vendors who want to carry out their own research, according to the Army.

It’s a potential $490 million contract for whoever wins the demonstration, and the Army expects to have the vehicles equipped in Fiscal Year 2022.

“Both the Army and commercial industry have made substantial improvements in the efficiency of high energy lasers — to the point where we can get militarily significant laser power onto a tactically relevant platform,” said Dr. Craig Robin, RCCTO Senior Research Scientist for Directed Energy Applications, in a statement. “Now, we are in position to quickly prototype, compete for the best solution, and deliver to a combat unit.”

 

Don’t miss this epic Twitter fight between the IAB’s CEO and actual publishers

By Natasha Lomas

Grab popcorn. As Internet fights go this one deserves your full attention — because the fight is over your attention. Your eyeballs and the creepy ads that trade data on you to try to swivel ’em.


In the blue corner, the Interactive Advertising Bureau’s (IAB) CEO, Randall Rothenberg, who has been taking to Twitter increasingly loudly in recent days to savage Europe’s privacy framework, the GDPR, and bleat dire warnings about California’s Consumer Privacy Act (CCPA) — including amplifying studies he claims show “the negative impact” on publishers.

Exhibit A, tweeted August 1:

More on the negative impact of #GDPR on publishers (and more reasons @iab is trying to fix #CCPA so publishers, brands, & retailers don’t get killed). https://t.co/BWtGNYGTJq

— Randall Rothenberg (@r2rothenberg) July 31, 2019

NB: The IAB is a mixed membership industry organization which combines advertisers, brands, publishers, data brokers and adtech platform tech giants — including the dominant adtech duopoly, Google and Facebook, who take home ~60% of digital ad spend. The only entity capable of putting a dent in the duopoly, Amazon, is also in the club. Its membership reflects the sprawling interests attached to the online ad industry, and, well, the personal data that currently feeds it (your eyeballs again!), although some members clearly have pots more money to spend on lobbying against digital privacy regs than others.

In what now looks to have been a deleted tweet last month, Rothenberg publicly professed himself proud to have Facebook as a member of his ‘publisher defence’ club. Though, admittedly, per the above tweet, he’s also worried about brands and retailers getting “killed”. He doesn’t need to worry about Google and Facebook’s demise because that would just be ridiculous.

Now, in the — I wish I could call it ‘red top’ corner, except these newspaper guys are anything but tabloid — we find premium publishers biting back at Rothenberg’s attempts to trash-talk online privacy legislation.

Here’s the New York Times‘ data governance & privacy guy, Robin Berjon, demolishing Rothenberg via the exquisite medium of the quote-tweet:

One of the primary reasons we need the #GDPR and #CCPA (and more) today is because the @iab, under @r2rothenberg's leadership, has been given 20 years to self-regulate and has used the time to do [checks notes] nothing whatsoever.https://t.co/hBS9d671LU

— Robin Berjon (@robinberjon) August 1, 2019

I’m going to quote Berjon in full because every single tweet packs a beautifully articulated punch:

  • One of the primary reasons we need the #GDPR and #CCPA (and more) today is because the @iab, under @r2rothenberg’s leadership, has been given 20 years to self-regulate and has used the time to do [checks notes] nothing whatsoever.
  • I have spent much of my adult life working in self-regulatory environments. They are never perfect, but when they work they really deliver.
  • #Adtech had a chance to self-reg when the FTC asked them to — from which we got the joke known as AdChoices.
  • They got a second major chance with DNT. But the notion of a level playing field between #adtech and consumers didn’t work for them so they did everything to prevent it from existing.
  • At some point it became evident that the @iab lacked the vision and leadership to shepherd the industry towards healthy, sustainable behaviour. That’s when regulation became unavoidable. No one has done as much as the @iab has to bring about strong privacy regulation.
  • And to make things funnier the article that @r2rothenberg was citing as supporting his view is… calling for stronger enforcement of the #GDPR.
  • If that’s not a metaphor for where the @iab’s at, I don’t know what is.

Next time Facebook talks about how it can self-regulate its access to data I suggest you cc that entire thread.

Also chipping in on Twitter to champion Berjon’s view about the IAB’s leadership vacuum in cleaning up the creepy online ad complex, is Aram Zucker-Scharff, aka the ad engineering director at — checks notes — The Washington Post.

His punch is more of a jab — but one that’s no less painful for the IAB’s current leadership.

“I say this rarely, but this is a must read,” he writes, in a quote tweet pointing to Berjon’s entire thread.

I say this rarely, but this is a must read, Thread: https://t.co/FxKmT9bp7r

— Aram Zucker-Scharff (@Chronotope) August 2, 2019

Another top tier publisher’s commercial chief also told us in confidence that they “totally agree with Robin” — although they didn’t want to go on the record today.

In an interesting twist to this ‘mixed member online ad industry association vs people who work with ads and data at actual publishers’ slugfest, Rothenberg replied to Berjon’s thread, literally thanking him for the absolute battering.

“Yes, thank you – that’s exactly where we’re at & why these pieces are important!” he tweeted, presumably still dazed and confused from all the body blows he’d just taken. “@iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations, @robinberjon.”

Yes, thank you – that’s exactly where we’re at & why these pieces are important! @iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations, @robinberjon & @Bershidsky https://t.co/WDxrWIyHXd

— Randall Rothenberg (@r2rothenberg) August 2, 2019

Rothenberg also took the time to thank Bloomberg columnist, Leonid Bershidsky, who’d chipped into the thread to point out that the article Rothenberg had furiously retweeted actually says the GDPR “should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong”.

Who is Bershidsky? Er, just the author of the article Rothenberg tried to nega-spin. So… uh… owned.

May I point out that the piece that's cited here (mine) says the GDPR should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong?

— Leonid Bershidsky (@Bershidsky) August 1, 2019

But there’s more! Berjon tweeted a response to Rothenberg’s thanks for what the latter tortuously referred to as “your explorations” — I mean, the mind just boggles as to what he was thinking to come up with that euphemism — thanking him for reversing his position on GDPR, and for finally filling his prior leadership vacuum on supporting robustly enforced online privacy laws.

“It’s great to hear that you’re now supporting strong GDPR enforcement,” he writes. “It’s indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?”

It's great to hear that you're now supporting strong GDPR enforcement. It's indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?

— Robin Berjon (@robinberjon) August 2, 2019

We’ve asked the IAB if, in light of Rothenberg’s tweet, it now wishes to share a public statement in support of transposing the GDPR into US law. We’ll be sure to update this post if it says anything at all.

We’ve also screengrabbed the vinegar strokes of this epic fight — as an insurance policy against any further instances of the IAB hitting the tweet delete button. (Plus, I mean, you might want to print it out and get it framed.)


Some light related reading can be found here:

Google ordered to halt human review of voice AI recordings over privacy risks

By Natasha Lomas

A German privacy watchdog has ordered Google to cease manual reviews of audio snippets generated by its voice AI. 

This follows a leak last month of scores of audio snippets from the Google Assistant service. A contractor working as a Dutch language reviewer handed more than 1,000 recordings to the Belgian news site VRT which was then able to identify some of the people in the clips. It reported being able to hear people’s addresses, discussion of medical conditions, and recordings of a woman in distress.

The Hamburg data protection authority used Article 66 powers of the General Data Protection Regulation (GDPR) to make the order — which allows a DPA to order data processing to stop if it believes there is “an urgent need to act in order to protect the rights and freedoms of data subjects”.

The Article 66 order to Google appears to be the first use of the power since GDPR came into force across the bloc in May last year.

Google says it received the order on July 26 — which requires it to stop manually reviewing audio snippets in Germany for a period of three months. Although the company had already taken the decision to suspend manual audio reviews of Google Assistant across the whole of Europe — doing so on July 10, after learning of the data leak.

Last month it also informed its lead privacy regulator in Europe, the Irish Data Protection Commission (DPC), of the breach — which also told us it is now “examining” the issue that’s been highlighted by Hamburg’s order.

The Irish DPC’s head of communications, Graham Doyle, said Google Ireland filed an Article 33 breach notification for the Google Assistant data “a couple of weeks ago”, adding: “We note that as of 10 July Google Ireland ceased the processing in question and that they have committed to the continued suspension of processing for a period of at least three months starting today (1 August). In the meantime we are currently examining the matter.”

It’s not clear whether Google will be able to reinstate manual reviews in Europe in a way that’s compliant with the bloc’s privacy rules. The Hamburg DPA writes in a statement [in German] on its website that it has “significant doubts” about whether Google Assistant complies with EU data-protection law.

“We are in touch with the Hamburg data protection authority and are assessing how we conduct audio reviews and help our users understand how data is used,” Google’s spokesperson also told us.

In a blog post published last month after the leak, Google product manager for search, David Monsees, claimed manual reviews of Google Assistant queries are “a critical part of the process of building speech technology”, couching them as “necessary” to creating such products.

“These reviews help make voice recognition systems more inclusive of different accents and dialects across languages. We don’t associate audio clips with user accounts during the review process, and only perform reviews for around 0.2% of all clips,” Google’s spokesperson added now.

But it’s far from clear whether human review of audio recordings captured by any of the myriad always-on voice AI products and services now on the market can be made compatible with Europeans’ fundamental privacy rights.

These AIs typically have trigger words for activating the recording function that streams audio data to the cloud, but the technology can easily be accidentally triggered — and leaks have shown they are able to hoover up sensitive and intimate personal data of anyone in their vicinity (which can include people who never got within sniffing distance of any T&Cs).
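
As a rough illustration of why accidental triggering leads straight to cloud uploads, here is a toy sketch of the wake-word gate such devices rely on; the scoring function, threshold and upload step are hypothetical stand-ins rather than any vendor’s implementation. Once anything crosses the local threshold, whatever follows is streamed.

    // Hypothetical wake-word gate; the detector score and upload step are invented for clarity.

    interface AudioFrame {
      samples: number[];      // raw audio samples for a short window
    }

    // Stand-in for an on-device wake-word model returning a confidence in [0, 1].
    function wakeWordScore(frame: AudioFrame): number {
      // A real detector would run a small neural network; here we fake a score from signal energy.
      const energy = frame.samples.reduce((s, x) => s + Math.abs(x), 0) / frame.samples.length;
      return Math.min(1, energy);
    }

    const TRIGGER_THRESHOLD = 0.6;
    let streaming = false;

    function streamToCloud(frame: AudioFrame): void {
      // Placeholder for the network call that sends audio to the assistant backend.
      console.log(`uploading ${frame.samples.length} samples`);
    }

    function onAudioFrame(frame: AudioFrame): void {
      if (!streaming && wakeWordScore(frame) >= TRIGGER_THRESHOLD) {
        // Anything that crosses the threshold, including an accidental trigger,
        // opens the stream; whatever is said next goes to the cloud.
        streaming = true;
      }
      if (streaming) streamToCloud(frame);
    }

    onAudioFrame({ samples: [0.7, 0.8, 0.9] });  // loud noise: trigger fires, streaming starts
    onAudioFrame({ samples: [0.1, 0.2, 0.1] });  // subsequent speech is uploaded too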

On its website the Hamburg DPA says the order against Google is intended to protect the privacy rights of affected users in the immediate term, noting that GDPR allows for concerned authorities in EU Member States to issue orders of up to three months.

In a statement Johannes Caspar, the Hamburg commissioner for data protection, added: “The use of language assistance systems in the EU must comply with the data protection requirements of the GDPR. In the case of the Google Assistant, there are currently significant doubts. The use of language assistance systems must be done in a transparent way, so that an informed consent of the users is possible. In particular, this involves providing sufficient information and transparently informing those concerned about the processing of voice commands, but also about the frequency and risks of mal-activation. Finally, due regard must be given to the need to protect third parties affected by the recordings. First of all, further questions about the functioning of the speech analysis system have to be clarified. The data protection authorities will then have to decide on definitive measures that are necessary for a privacy-compliant operation.”

The DPA also urges other regional privacy watchdogs to prioritize checking on other providers of language assistance systems — and “implement appropriate measures” — name-checking providers of voice AIs, such as Apple and Amazon.

This suggests there could be wider ramifications for other tech giants operating voice AIs in Europe, flowing from this single Article 66 order.

As we’ve said before, the real enforcement punch packed by GDPR is not the headline-grabbing fines, which can scale as high as 4% of a company’s global annual turnover — it’s the power that Europe’s DPAs now have in their regulatory toolbox to order that data stops flowing.

“This is just the beginning,” one expert on European data protection legislation told us, speaking on condition of anonymity. “The Article 66 chest is open and it has a lot on offer.”

In a sign of the potential scale of the looming privacy problems for voice AIs, Apple also said earlier today that it’s suspending a quality control program for its Siri voice assistant.

The move, which does not appear to be linked to any regulatory order, follows a Guardian report last week detailing claims by a whistleblower that contractors working for Apple ‘regularly hear confidential details’ on Siri recordings, such as audio of people having sex and identifiable financial details, regardless of the processes Apple uses to anonymize the records.

Apple’s suspension of manual reviews of Siri snippets applies worldwide.

Europe’s top court sharpens guidance for sites using leaky social plug-ins

By Natasha Lomas

Europe’s top court has made a ruling that could affect scores of websites that embed the Facebook ‘Like’ button and receive visitors from the region.

The ruling by the Court of Justice of the EU states such sites are jointly responsible for the initial data processing — and must either obtain informed consent from site visitors prior to data being transferred to Facebook, or be able to demonstrate a legitimate interest legal basis for processing this data.

The ruling is significant because, as currently seems to be the case, Facebook’s Like buttons transfer personal data automatically, when a webpage loads — without the user even needing to interact with the plug-in — which means if websites are relying on visitors’ ‘consenting’ to their data being shared with Facebook they will likely need to change how the plug-in functions to ensure no data is sent to Facebook prior to visitors being asked if they want their browsing to be tracked by the adtech giant.

The background to the case is a complaint against online clothes retailer, Fashion ID, by a German consumer protection association, Verbraucherzentrale NRW — which took legal action in 2015 seeking an injunction against Fashion ID’s use of the plug-in which it claimed breached European data protection law.

Like ’em or loathe ’em, Facebook’s ‘Like’ buttons are an impossible-to-miss component of the mainstream web. Though most Internet users are likely unaware that the social plug-ins are used by Facebook to track what other websites they’re visiting for ad targeting purposes.

Last year the company told the UK parliament that between April 9 and April 16 the button had appeared on 8.4M websites, while its Share button social plug-in appeared on 931K sites. (Facebook also admitted to 2.2M instances of another tracking tool it uses to harvest non-Facebook browsing activity — called a Facebook Pixel — being invisibly embedded on third party websites.)

The Fashion ID case predates the introduction of the EU’s updated privacy framework, GDPR, which further toughens the rules around obtaining consent — meaning it must be purpose specific, informed and freely given.

Today’s CJEU decision also follows another ruling a year ago, in a case related to Facebook fan pages, when the court took a broad view of privacy responsibilities around platforms — saying both fan page administrators and host platforms could be data controllers. Though it also said joint controllership does not necessarily imply equal responsibility for each party.

In the latest decision the CJEU has sought to draw some limits on the scope of joint responsibility, finding that a website where the Facebook Like button is embedded cannot be considered a data controller for any subsequent processing, i.e. after the data has been transmitted to Facebook Ireland (the data controller for Facebook’s European users).

The joint responsibility specifically covers the collection and transmission of Facebook Like data to Facebook Ireland.

“It seems, at the outset, impossible that Fashion ID determines the purposes and means of those operations,” the court writes in a press release announcing the decision.

“By contrast, Fashion ID can be considered to be a controller jointly with Facebook Ireland in respect of the operations involving the collection and disclosure by transmission to Facebook Ireland of the data at issue, since it can be concluded (subject to the investigations that it is for the Oberlandesgericht Düsseldorf [German regional court] to carry out) that Fashion ID and Facebook Ireland determine jointly the means and purposes of those operations.”

Responding to the judgement in a statement attributed to its associate general counsel, Jack Gilbert, Facebook told us:

Website plugins are common and important features of the modern Internet. We welcome the clarity that today’s decision brings to both websites and providers of plugins and similar tools. We are carefully reviewing the court’s decision and will work closely with our partners to ensure they can continue to benefit from our social plugins and other business tools in full compliance with the law.

The company said it may make changes to the Like button to ensure websites that use it are able to comply with Europe’s GDPR.

Though it’s not clear what specific changes these could be — for example, whether Facebook will change the code of its social plug-ins to ensure no data is transferred at the point a page loads. (We’ve asked Facebook and will update this report with any response.)

Facebook also points out that other tech giants, such as Twitter and LinkedIn, deploy similar social plug-ins — suggesting the CJEU ruling will apply to other social platforms, as well as to thousands of websites across the EU where these sorts of plug-ins crop up.

“Sites with the button should make sure that they are sufficiently transparent to site visitors, and must make sure that they have a lawful basis for the transfer of the user’s personal data (e.g. if just the user’s IP address and other data stored on the user’s device by Facebook cookies) to Facebook,” Neil Brown, a telecoms, tech and internet lawyer at U.K. law firm Decoded Legal, told TechCrunch.

“If their lawful basis is consent, then they’ll need to get consent before deploying the button for it to be valid — otherwise, they’ll have done the transfer before the visitor has consented.

“If relying on legitimate interests — which might scrape by — then they’ll need to have done a legitimate interests assessment, and kept it on file (against the (admittedly unlikely) day that a regulator asks to see it), and they’ll need to have a mechanism by which a site visitor can object to the transfer.”

“Basically, if organisations are taking on board the recent guidance from the ICO and CNIL on cookie compliance, wrapping in Facebook ‘Like’ and other similar things in with that work would be sensible,” Brown added.
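
One way sites could implement that advice is to hold back the plug-in entirely until consent is recorded, rendering a neutral placeholder in the meantime. The following browser-side sketch shows the pattern under assumed names (the script URL, storage key and element id are placeholders); it is not Facebook’s SDK integration, and Facebook has not said how its own plug-in code might change.

    // Hypothetical consent gate for a third-party social plug-in.
    // The script URL and consent storage are illustrative placeholders.

    const PLUGIN_SCRIPT_URL = "https://social-widgets.example.com/like-button.js";

    function hasConsent(): boolean {
      // Whatever consent-management tool the site uses would be checked here.
      return localStorage.getItem("socialPluginConsent") === "granted";
    }

    function loadSocialPlugin(): void {
      const script = document.createElement("script");
      script.src = PLUGIN_SCRIPT_URL;   // no request to the third party happens before this line
      script.async = true;
      document.body.appendChild(script);
    }

    // On page load, render a neutral placeholder instead of the real button,
    // so nothing is transmitted to the plug-in provider before the visitor decides.
    function initLikeButtonPlaceholder(container: HTMLElement): void {
      if (hasConsent()) {
        loadSocialPlugin();
        return;
      }
      const button = document.createElement("button");
      button.textContent = "Enable social features";
      button.addEventListener("click", () => {
        localStorage.setItem("socialPluginConsent", "granted");
        loadSocialPlugin();
      });
      container.appendChild(button);
    }

    initLikeButtonPlaceholder(document.getElementById("like-slot")!);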

Also commenting on the judgement, Michael Veale, a UK-based researcher in tech and privacy law/policy, said it raises questions about how Facebook will comply with Europe’s data protection framework for any further processing it carries out of the social plug-in data.

“The whole judgement to me leaves open the question ‘on what grounds can Facebook justify further processing of data from their web tracking code?'” he told us. “If they have to provide transparency for this further processing, which would take them out of joint controllership into sole controllership, to whom and when is it provided?

“If they have to demonstrate they would win a legitimate interests test, how will that be affected by the difficulty in delivering that transparency to data subjects?’

“Can Facebook do a backflip and say that for users of their service, their terms of service on their platform justifies the further use of data for which individuals must have separately been made aware of by the website where it was collected?

“The question then quite clearly boils down to non-users, or to users who are effectively non-users to Facebook through effective use of technologies such as Mozilla’s browser tab isolation.”

How far a tracking pixel could be considered a ‘similar device’ to a cookie is another question to consider, he said.

The tracking of non-Facebook users via social plug-ins certainly continues to be a hot-button legal issue for Facebook in Europe — where the company has twice lost in court to Belgium’s privacy watchdog on this issue. (Facebook has continued to appeal.)

Facebook founder Mark Zuckerberg also faced questions about tracking non-users last year, from MEPs in the European Parliament — who pressed him on whether Facebook uses data on non-users for any other purposes than the security purposes of “keeping bad content out” that he claimed requires Facebook to track everyone on the mainstream Internet.

MEPs also wanted to know how non-users can stop their data being transferred to Facebook. Zuckerberg gave no answer, likely because there’s currently no way for non-users to stop their data being sucked up by Facebook’s servers — short of staying off the mainstream Internet.

Emergence’s Jason Green joins TC Sessions: Enterprise this September

By Robert Frawley

Picking winners from the herd of early-stage enterprise startups is challenging — so much competition, so many disruptive technologies, including mobile, cloud and AI. One investor who has consistently identified winners is Jason Green, founder and general partner at Emergence, and TechCrunch is very pleased to announce that he will join the investor panel at TC Sessions: Enterprise on September 5 at the Yerba Buena Center in San Francisco. He will join two other highly accomplished VCs, Maha Ibrahim, general partner at Canaan Partners, and Rebecca Lynn, co-founder and general partner at Canvas Ventures. They will join TechCrunch’s Connie Loizos to discuss important trends in early-stage enterprise investments as well as the sectors and companies that have their attention. Green will also join us for the investor Q&A in a separate session.

Jason Green founded Emergence in 2003 with the aim of “looking around the corner, identifying themes and aiming to win big in the long run.” The firm has made 162 investments, led 64 rounds and seen 29 exits to date. Among the firm’s wins are Zoom, Box, Sage Intacct, ServiceMax and SuccessFactors. Emergence has raised $1.4 billion over six funds.

Green is also the founding chairman of the Kauffman Fellow Program and a founding member of Endeavor. He serves on the boards of BetterWorks, Drishti, GroundTruth, Lotame, Replicon and SalesLoft.

Come hear from Green and these other amazing investors at TC Sessions: Enterprise by booking your tickets today — $249 early-bird tickets are still on sale for the next two weeks before prices go up by $100. Book your tickets here.

Startups, get noticed with a demo table at the conference. Demo tables come with four tickets to the show and prime exhibition space for you to showcase your latest enterprise technology to some of the most influential people in the business. Book your $2,000 demo table right here.

Inside the GM factory where Cruise’s autonomous Bolt is made

By Kirsten Korosec

TechCrunch took a field trip to GM’s Orion Assembly plant in Michigan to get an up-close view of how this factory has evolved since the 1980s.

What we found at the plant that employs 1,100 people is an unusual sight: a batch of Cruise autonomous vehicles produced on the same line as — and sandwiched in between — the Bolt electric vehicle and an internal combustion engine compact sedan, the Chevrolet Sonic.

This inside look at how autonomous vehicles are built is just one of the topics coming up at TC Sessions: Mobility, which kicked off July 10 in San Jose. The inaugural event is digging in to the present and future of transportation, from the onslaught of scooters and electric mobility to autonomous vehicle tech and even flying cars.

Fresh tickets to our 14th Annual TechCrunch Summer Party

By Emma Comeau

Our 14th Annual TechCrunch Summer Party is a mere two weeks away, and we’re serving up a fresh new batch of tickets to this popular Silicon Valley tradition. Jump on this opportunity, folks, because our previous releases sold out in a flash — and these babies won’t last long, either. Buy your ticket today.

Our summer soiree takes place on July 25 at Park Chalet, San Francisco’s coastal beer garden. Picture it: A cold brew, an ocean view, tasty food and relaxed conversations with other amazing members of the early-startup tech community.

TechCrunch parties have a reputation as a place where startup magic happens. And there will be plenty of magical opportunity afoot this year as heavy-hitter VCs from Merus Capital, August Capital, Battery Ventures, Cowboy Ventures, Data Collective, General Catalyst and Uncork Capital join the party.

There’s more than one way to make magic at our summer fete. If you’re serious about catching the eye of these major VCs, consider buying a Startup Demo Package, which includes four attendee tickets.

Fun fact: Box founders Aaron Levie and Dylan Smith met one of their first investors, DFJ, at a party hosted by TechCrunch founder Michael Arrington. It’s one of our favorite success stories.

Check out the party details:

  • When: July 25 from 5:30 p.m. – 9:00 p.m.
  • Where: Park Chalet in San Francisco
  • How much: $95
  • Startup Demo Package: $2,000

No TechCrunch party is complete without a chance to win great door prizes, including TechCrunch swag, Amazon Echos and tickets to Disrupt San Francisco 2019.

Buy your ticket today and enjoy a convivial evening of connection and community in a beautiful setting. Opportunity happens, and it’s waiting for you at the TechCrunch Summer Party.

Pro Tip: If you miss out this time, sign up here and we’ll let you know when we release the next group of tickets.

Is your company interested in sponsoring or exhibiting at the TechCrunch 14th Annual Summer Party? Contact our sponsorship sales team by filling out this form.

Marriott to face $123 million fine by UK authorities over data breach

By Zack Whittaker

The U.K. data protection authority said it will serve hotel giant Marriott with a £99 million ($123M) fine for a data breach that exposed up to 383 million guests.

Marriott revealed last year that its acquired Starwood properties had its central reservation database hacked, including five million unencrypted passport numbers and eight million credit card records. The breach dated back to 2014 but was not discovered until November 2018. Marriott later pulled the hacked reservation system from its operations.

The U.K.’s Information Commissioner’s Office (ICO) said its investigation found that Marriott “failed to undertake sufficient due diligence when it bought Starwood and should also have done more to secure its systems.”

The breach affected about 30 million residents of the European Union, according to the ICO, which confirmed the proposed fine in a statement Tuesday.

But Marriott said it “has the right to respond” before a fine is imposed and “intends to respond and vigorously defend” its position.

“We are disappointed with this notice of intent from the ICO, which we will contest,” said Marriott’s chief executive Arne Sorenson, in a statement. “Marriott has been cooperating with the ICO throughout its investigation into the incident, which involved a criminal attack against the Starwood guest reservation database.”

Under the new GDPR regime, the ICO has the right to fine a company up to four percent of its annual turnover. Given Marriott made about $3.6 billion in revenue during 2018, the ICO’s fine represents about 3 percent of the company’s global revenue.

The ICO said Marriott will be given an opportunity to discuss the proposed findings and sanctions.

“The ICO will consider carefully the representations made by the company and the other concerned data protection authorities before it takes its final decision,” said the U.K. data protection authority.

The proposed Marriott fine comes hot on the heels of a record $230 million fine imposed by the ICO on Monday following the British Airways data breach. The airline confirmed about 500,000 customers had their credit cards skimmed over a three-week period between August and September 2018.

Researchers said a credit card stealing group known as Magecart was to blame.

Italy stings Facebook with $1.1M fine for Cambridge Analytica data misuse

By Natasha Lomas

Italy’s data protection watchdog has issued Facebook with a €1 million (~$1.1M) fine for violations of local privacy law attached to the Cambridge Analytica data misuse scandal.

Last year it emerged that up to 87 million Facebook users had had their data siphoned out of the social media giant’s platform by an app developer working for the controversial (and now defunct) political data company, Cambridge Analytica.

The offences in question occurred prior to Europe’s tough new data protection framework, GDPR, coming into force — hence the relatively small size of the fine in this case, which has been calculated under Italy’s prior data protection regime. (Whereas fines under GDPR can scale as high as 4% of a company’s annual global turnover.)

We’ve reached out to Facebook for comment.

Last year the UK’s DPA similarly issued Facebook with a £500k penalty for the Cambridge Analytica breach, although Facebook is appealing.

The Italian regulator says 57 Italian Facebook users downloaded Dr Aleksandr Kogan‘s Thisisyourdigitallife quiz app, which was the app vehicle used to scoop up Facebook user data en masse — with a further 214,077 Italian users also having their personal information processed without their consent as a result of how the app could access data on each user’s Facebook friends.

In an earlier intervention in March, the Italian regulator challenged Facebook over the misuse of the data — and the company opted to pay a reduced amount of €52,000 in the hopes of settling the matter.

However the Italian DPA has decided that the scale of the violation of personal data and consent disqualifies the case for a reduced payment — so it has now issued Facebook with a €1M fine.

“The sum takes into account, in addition to the size of the database, also the economic conditions of Facebook and the number of global and Italian users of the company,” it writes in a press release on its website [translated by Google Translate].

At the time of writing its full decision on the case was not available.

Late last year the Italian regulator fined Facebook €10M for misleading users over its sign in practices.

While, in 2017, it also slapped the company with a €3M penalty for a controversial decision to begin helping itself to WhatsApp users’ data — despite the latter’s prior claims that user data would never be shared with Facebook.

Going forward, where Facebook’s use (and potential misuse) of Europeans’ data is concerned, all eyes are on the Irish Data Protection Commission; aka its lead regulator in the region on account of the location of Facebook’s international HQ.

The Irish DPC has a full suite of open investigations into Facebook and Facebook-owned companies — covering major issues such as security breaches and questions over the legal basis it claims to process people’s data, among a number of other big tech related probes.

The watchdog has suggested decisions on some of this tech giant-related case-load could land this summer.

Europe should ban AI for mass surveillance and social credit scoring, says advisory group

By Natasha Lomas

An independent expert group tasked with advising the European Commission to inform its regulatory response to artificial intelligence — to underpin EU lawmakers’ stated aim of ensuring AI developments are “human centric” — has published its policy and investment recommendations.

This follows earlier ethics guidelines for “trustworthy AI”, put out by the High Level Expert Group (HLEG) for AI back in April, when the Commission also called for participants to test the draft rules.

The AI HLEG’s full policy recommendations comprise a highly detailed 50-page document — which can be downloaded from this web page. The group, which was set up in June 2018, is made up of a mix of industry AI experts, civic society representatives, political advisers and policy wonks, academics and legal experts.

The document includes warnings on the use of AI for mass surveillance and scoring of EU citizens, such as China’s social credit system, with the group calling for an outright ban on “AI-enabled mass scale scoring of individuals”. It also urges governments to commit to not engage in blanket surveillance of populations for national security purposes. (So perhaps it’s just as well the UK has voted to leave the EU, given the swingeing state surveillance powers it passed into law at the end of 2016.) 

“While there may be a strong temptation for governments to ‘secure society’ by building a pervasive surveillance system based on AI systems, this would be extremely dangerous if pushed to extreme levels,” the HLEG writes. “Governments should commit not to engage in mass surveillance of individuals and to deploy and procure only Trustworthy AI systems, designed to be respectful of the law and fundamental rights, aligned with ethical principles and socio-technically robust.”

The group also calls for commercial surveillance of individuals and societies to be “countered” — suggesting the EU’s response to the potency and potential for misuse of AI technologies should include ensuring that online people-tracking is “strictly in line with fundamental rights such as privacy”, including (the group specifies) when it concerns ‘free’ services (albeit with a slight caveat on the need to consider how business models are impacted).

Last week the UK’s data protection watchdog fired an even more specific shot across the bows of the online behavioral ad industry — warning that adtech’s mass-scale processing of web users’ personal data for targeting ads does not comply with EU privacy standards. The industry was told its rights-infringing practices must change, even if the Information Commissioner’s Office isn’t about to bring down the hammer just yet. But the reform warning was clear.

As EU policymakers work on fashioning a rights-respecting regulatory framework for AI, seeking to steer the next decade or more of cutting-edge tech developments in the region, the wider attention and scrutiny that will draw to digital practices and business models looks set to drive a clean-up of problematic digital practices that have been able to proliferate under no or very light touch regulation, prior to now.

The HLEG also calls for support for developing mechanisms for the protection of personal data, and for individuals to “control and be empowered by their data” — which they argue would address “some aspects of the requirements of trustworthy AI”.

“Tools should be developed to provide a technological implementation of the GDPR and develop privacy preserving/privacy by design technical methods to explain criteria, causality in personal data processing of AI systems (such as federated machine learning),” they write.

“Support technological development of anonymisation and encryption techniques and develop standards for secure data exchange based on personal data control. Promote the education of the general public in personal data management, including individuals’ awareness of and empowerment in AI personal data-based decision-making processes. Create technology solutions to provide individuals with information and control over how their data is being used, for example for research, on consent management and transparency across European borders, as well as any improvements and outcomes that have come from this, and develop standards for secure data exchange based on personal data control.”
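
Federated machine learning, which the group name-checks, is the idea that raw personal data stays on each device and only model updates travel to a central server. A toy sketch of federated averaging follows; the one-parameter model, learning rate and client data are invented purely to illustrate the flow and are not the HLEG’s or any vendor’s implementation.

    // Toy federated averaging: each client fits a one-parameter linear model y = w * x
    // on its own data, and only the resulting weight, never the data, is sent to the server.

    type Sample = { x: number; y: number };

    function localTraining(data: Sample[], startWeight: number, steps = 50, lr = 0.01): number {
      let w = startWeight;
      for (let i = 0; i < steps; i++) {
        // One gradient step of mean squared error on the client's private data.
        let grad = 0;
        for (const { x, y } of data) grad += 2 * (w * x - y) * x;
        w -= (lr * grad) / data.length;
      }
      return w;
    }

    function federatedRound(clients: Sample[][], globalWeight: number): number {
      // Each client trains locally; the server only ever sees the returned weights.
      const updates = clients.map((data) => localTraining(data, globalWeight));
      return updates.reduce((a, b) => a + b, 0) / updates.length;
    }

    // Two clients whose private data roughly follows y = 3x.
    const clientData: Sample[][] = [
      [{ x: 1, y: 3.1 }, { x: 2, y: 5.9 }],
      [{ x: 3, y: 9.2 }, { x: 4, y: 11.8 }],
    ];

    let w = 0;
    for (let round = 0; round < 10; round++) w = federatedRound(clientData, w);
    console.log(w.toFixed(2)); // approaches ~3 without the server ever seeing the raw samples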

Other policy suggestions among the many included in the HLEG’s report are that AI systems which interact with humans should include a mandatory self-identification. Which would mean no sneaky Google Duplex human-speech mimicking bots. In such a case the bot would have to introduce itself up front — thereby giving the human caller a chance to disengage.

The HLEG also recommends establishing a “European Strategy for Better and Safer AI for Children”. Concern and queasiness about rampant datafication of children, including via commercial tracking of their use of online services, has been raised in multiple EU member states.

“The integrity and agency of future generations should be ensured by providing Europe’s children with a childhood where they can grow and learn untouched by unsolicited monitoring, profiling and interest invested habitualisation and manipulation,” the group writes. “Children should be ensured a free and unmonitored space of development and upon moving into adulthood should be provided with a “clean slate” of any public or private storage of data related to them. Equally, children’s formal education should be free from commercial and other interests.”

Member states and the Commission should also devise ways to continuously “analyse, measure and score the societal impact of AI”, suggests the HLEG — to keep tabs on positive and negative impacts so that policies can be adapted to take account of shifting effects.

“A variety of indices can be considered to measure and score AI’s societal impact such as the UN Sustainable Development Goals and the Social Scoreboard Indicators of the European Social Pillar. The EU statistical programme of Eurostat, as well as other relevant EU Agencies, should be included in this mechanism to ensure that the information generated is trusted, of high and verifiable quality, sustainable and continuously available,” it suggests. “AI-based solutions can help the monitoring and measuring its societal impact.”

The report is also heavy on pushing for the Commission to bolster investment in AI — calling particularly for more help for startups and SMEs to access funding and advice, including via the InvestEU program.

Another suggestion is the creation of an EU-wide network of AI business incubators to connect academia and industry. “This could be coupled with the creation of EU-wide Open Innovation Labs, which could be built further on the structure of the Digital Innovation Hub network,” it continues. 

There are also calls to encourage public sector uptake of AI, such as by fostering digitalisation by transforming public data into a digital format; providing data literacy education to government agencies; creating European large annotated public non-personal databases for “high quality AI”; and funding and facilitating the development of AI tools that can assist in detecting biases and undue prejudice in governmental decision-making.

Another chunk of the report covers recommendations to try to bolster AI research in Europe — such as strengthening and creating additional Centres of Excellence which address strategic research topics and become “a European level multiplier for a specific AI topic”.

Investment in AI infrastructures, such as distributed clusters and edge computing, large RAM and fast networks, and a network of testing facilities and sandboxes is also urged; along with support for an EU-wide data repository “through common annotation and standardisation” — to work against data siloing, as well as trusted data spaces for specific sectors such as healthcare, automotive and agri-food.

The push by the HLEG to accelerate uptake of AI has drawn some criticism, with digital rights group Access Now’s European policy manager, Fanny Hidvegi, writing that: “What we need now is not more AI uptake across all sectors in Europe, but rather clarity on safeguards, red lines, and enforcement mechanisms to ensure that the automated decision making systems — and AI more broadly — developed and deployed in Europe respect human rights.”

Other ideas in the HLEG’s report include developing and implementing a European curriculum for AI; and monitoring and restricting the development of automated lethal weapons — including technologies such as cyber attack tools which are not “actual weapons” but which the group points out “can have lethal consequences if deployed”.

The HLEG further suggests EU policymakers refrain from giving AI systems or robots legal personhood, writing: “We believe this to be fundamentally inconsistent with the principle of human agency, accountability and responsibility, and to pose a significant moral hazard.”

The report can be downloaded in full here.

How to negotiate term sheets with strategic investors

By Arman Tabatabai
Alex Gold Contributor
Alex Gold is co-founder of Myia, an intelligent health platform employing novel biometric data to predict and prevent costly medical events. Previously, Alex was Venture Partner at BCG Digital Ventures and a co-founder of Traction, a marketplace of digital marketing experts.

Three years ago, I met with a founder who had raised a massive seed round at a valuation that was at least five times the market rate. I asked what firm made the investment.

She said it was not a traditional venture firm, but rather a strategic investor that not only had no ties to her space but also had no prior investment experience. The strategic investor, she said, was looking to “get their hands dirty” and “get in on the ground floor.”

Over the next two years, I kept a close eye on the founder. Although she had enough capital to pivot her business focus multiple times, she seemed caught between serving the needs of her strategic investor and those of her customer base.

Ultimately, when the business needed more capital to survive, the strategic investor didn’t agree with the founder’s focus and opted not to prop it up, and the business had to shut down.

Sadly, this is not an uncommon story, as examples abound of strategic investors influencing startup direction and management decisions to the point of harm for the startup. Corporate strategics, not to be confused with dedicated funds focused on financial returns in the manner of a traditional venture investor such as Google Ventures, often care less about return on investment and more about a startup’s focus and sector specificity. If corporate imperatives change, the strategic may cease to be the right partner or could push the startup in a challenging direction.

And yet, fortunately, as the disruptive power of technology is being unleashed on nearly every major industry, strategic investors are now getting smarter, both in terms of how they invest and how they partner with entrepreneurs.

From making strong acquisitive plays (e.g. GM’s purchase of Cruise Automation or Toyota’s early-stage investment in Uber) to building dedicated funds, to executing commercial agreements in tandem with capital investment, strategics are getting savvier, and by extension, becoming better partners. In some instances, they may be the best partner.

Negotiating a term sheet with a strategic investor necessitates a different set of considerations. Namely: the preference for a strategic to facilitate commercial milestones for the startup, a cautious approach to avoid the “over-valuation” trap, an acute focus on information rights, and the limitation of non-compete provisions.
