
Root Insurance valuation hits $3.65 billion in latest round led by DST Global and Coatue

By Kirsten Korosec

Root Insurance, an Ohio-based car insurance startup that uses smartphone technology to understand individual driver behavior, said Monday it has raised $350 million at a $3.65 billion valuation in a Series E funding round.

The amount of the round was reported last month by Axios, citing anonymous sources. This official announcement fills in the remaining details, including that DST Global and Coatue Management led the funding round. Existing investors Drive Capital, Redpoint Ventures, Ribbit Capital, Scale Venture Partners and Tiger Global Management all participated in this round, along with several new investors, according to the company.

The car insurance company, founded in 2015, has now raised $523 million with an additional $100 million in debt financing. The funding will be used to scale up in the 29 U.S. states where it currently operates and expand into new markets. The additional capital will also be used to develop new product lines, Root said.

The company said last year it planned to be in all 50 states and Washington, D.C. by the end of 2019. 

“Root is transforming auto insurance, the largest property and casualty insurance market in the U.S., by leveraging technology and data to offer consumers lower prices, transparency, and fairness,” Tom Stafford, managing partner of DST Global, said in a statement.

Root provides car insurance to drivers. The company has differentiated itself by using individual driver behavior along with other factors to determine the premium customers pay.

Drivers download the Root mobile app and take a test drive that typically lasts two or three weeks. Root provides a quote that rewards good driving behavior and allows customers to switch their insurance policy. Customers can purchase and manage their policy through the app.

Root has said its approach allows good drivers to save more than 50% on their policies compared to traditional insurance carriers. The company uses AI algorithms to adjust risk and sometimes provide discounts. For example, a vehicle with an advanced driver assistance system that it deems improves safety might receive further discounts.
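
Root hasn't published its scoring model, but the mechanics it describes (score a test drive from sensor data, price the policy off that score, apply discounts for safety tech) can be sketched in miniature. The Python sketch below is entirely hypothetical; the event types, weights and dollar figures are invented for illustration and are not Root's actual model:

```python
# Hypothetical sketch of behavior-based pricing, loosely modeled on
# Root's public description. All weights and figures are invented.

def driving_score(hard_brakes: int, phone_use_minutes: float,
                  night_miles: float, total_miles: float) -> float:
    """Return a 0-100 score; higher means safer driving."""
    if total_miles == 0:
        return 0.0
    penalty = (2.0 * hard_brakes
               + 0.5 * phone_use_minutes
               + 5.0 * (night_miles / total_miles))
    return max(0.0, 100.0 - penalty)

def monthly_premium(score: float, base_premium: float = 150.0,
                    has_adas: bool = False) -> float:
    """Scale a base premium by driving score; discount for ADAS."""
    # A score of 100 halves the base rate; a score of 0 pays it in full.
    premium = base_premium * (1.0 - 0.5 * score / 100.0)
    if has_adas:
        premium *= 0.95  # e.g. 5% off for driver-assistance tech
    return round(premium, 2)

score = driving_score(hard_brakes=3, phone_use_minutes=10,
                      night_miles=40, total_miles=500)
print(score, monthly_premium(score, has_adas=True))
```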

The company’s business model has attracted customers. Root wrote more than $187 million in insurance premiums in the first six months of 2019, 824% growth over the same period in 2018.

The new Disney+ streaming service is oriented around fans and families

By Jonathan Shieber

You can tell a lot about a service by what it prioritizes on its home screen. With the new Disney+ service, the focus is initially organized by fan base, with different silos for the company’s various studios and the fans that follow them.

As the company gets the service off the ground — and casts about for content to stuff it with — curation is increasingly important. Over the course of my conversation with Michael Paull, who’s overseeing Disney’s streaming service, “quality over quantity” was the mantra.

I spent some time reviewing the app and its features at the D23 expo, and it seems the emphasis on quality over quantity in content didn’t necessarily extend to the app itself. The user interface and controls — at least on the Apple TV version used in my demonstration — were a little clunky.

While there’s going to be a rich content library of old and new titles — Disney, Pixar, Marvel and Star Wars classics and a mix of Fox content (chiefly “The Simpsons”) featured prominently on the home screen — other content is going to be a little bit more difficult to find.

You have to navigate over to the sidebar to find the new Disney+ original series (including acquisitions like the “Diary of a Female President” series that Disney ordered earlier in the year). And don’t even bother trying to find any media from Hulu — or Hulu itself. There are no plans to integrate any Hulu content or Fox properties that now fall under the auspices of Disney or its underlying studios (that includes the mutant corner of the Marvel Comics world that now falls under Disney’s purview after the Fox deal).

Family-friendly fare for Disney means that the service (as previously reported) won’t have any media that would warrant a rating above PG-13. There won’t be a whiff of anything remotely as bloody or graphic as “Deadpool” on Disney’s streams.

While there isn’t a robust set of parental controls (since the content is designed to be more family friendly than the average streaming service), there is a kids mode designed for ages seven and below.

In the kids mode, shows are organized by character, because that’s the way children (many of whom are pre-literate) relate to the medium. The screen for kids is also brighter, and, in kids’ accounts, the autoplay feature is turned off (autoplay is on by default for adult profiles).

The service will be available in several languages at launch through subtitles and dubbing, with plans to be as inclusive as possible as the service rolls out in each of the countries in which it will be operating. And eventually Disney wants the streaming service to be available everywhere.

The $7-a-month price tag will get families four simultaneous streams; all the videos will be available in up to 4K HDR video playback and Dolby Atmos audio, and a family can set up seven different user profiles. As CNET noted, this is in sharp contrast to Netflix, which only allows for five profiles and enables multiple simultaneous streams only at a higher price point.

Given the broader functionality, it’d be more apt to compare Disney+ to Netflix’s premium $15.99 per month service, rather than its basic $8.99 price point. Disney+’s content library and family-friendly pitch also make it a compelling offering for families with young children.

Each profile can be designated with the Disney avatar of your choice. The service also won’t be dropping its original episodes all at once, preferring to serialize the entertainment — more like a traditional network.

For Disney, which owns Marvel, Lucasfilm, its own catalog of live-action and animated shows through the now 36-year-old Disney Channel and the film libraries of Pixar and the Walt Disney Co., the successful launch of Disney+ is nothing less than the future of the company.

At D23, the company’s fan service expo, that was incredibly apparent.

Tumblr’s next step forward with Automattic CEO Matt Mullenweg

By Brian Heater

After months of rumors, Verizon finally sold off Tumblr for a reported $3 million — a fraction of what Yahoo paid for the once-mighty blogging service back in 2013.

The media conglomerate (which also owns TechCrunch) was clearly never quite sure what to do with the property after gobbling it up as part of its 2016 Yahoo acquisition. All parties have since come to the conclusion that Tumblr simply wasn’t a good fit under either the Verizon or Yahoo umbrella, amounting to a $1.1 billion mistake.

For Tumblr, however, the story may still have a happy ending. By all accounts, its new home at Automattic is a far better fit. The service joins a portfolio that includes popular blogging service WordPress.com, spam-filtering service Akismet and long-form storytelling platform Longreads.

In an interview this week, Automattic founder and CEO Matt Mullenweg discussed Tumblr’s history and the impact of its poorly received adult content restrictions. He also shed some light on where Tumblr goes from here, including a potential increased focus on multimedia such as podcasting.

Brian Heater: I’m curious how [your meetings with Tumblr staff] went. What’s the feeling on the team right now? What are the concerns? How are people feeling about the transition?

Week in Review: Snapchat beats a dead horse

By Lucas Matney

Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.

Last week, I talked about how Netflix might have some rough times ahead as Disney barrels towards it.



The big story

There is plenty to be said about the potential of smart glasses. I write about them at length for TechCrunch and I’ve talked to a lot of founders doing cool stuff. That being said, I don’t have any idea what Snap is doing with the introduction of a third generation of its Spectacles video sunglasses.

The first generation was a marketing smash hit, but sales proved to be a major failure for the company, which bet big and seemingly walked away with a landfill’s worth of the glasses.

Snap’s latest version of Spectacles was announced in Vogue this week. They are much more expensive at $380, and their main feature is a pair of cameras that capture depth, which can lead to these cute little 3D boomerangs. On one hand, it’s nice to see the company showing perseverance in a tough market; on the other, it’s kind of funny to see them push the same rock up the hill again.

Snap is having an awesome 2019 after a laughably bad 2018; the stock has recovered from record lows and is trading in the wheelhouse of its IPO price. It seems like the company is ripe for something new and exciting, not beautiful-yet-iterative.

The $150 Spectacles 2 are still for sale, though they seem quite dated-looking at this point. Spectacles 3 seem to be geared entirely towards women, and I’m sure Snap made that call after seeing the active users of previous generations. But given the write-down it took on the first generation, something tells me that Snap’s continued experimentation here is borne out of some stubbornness from Spiegel and the higher-ups, who want the Snap brand to live in a high-fashion world and to be at the forefront of an AR industry that seems to have already moved on to different things.

Send me feedback on Twitter @lucasmtny or email lucas@techcrunch.com

On to the rest of the week’s news.


Trends of the week

Here are a few big news items from big companies, with green links to all the sweet, sweet added context:

  • WordPress buys Tumblr for chump change
    Tumblr, a game-changing blogging network that shifted online habits and exited for $1.1 billion, just changed hands after Verizon (which owns TechCrunch) unloaded the property for a reported $3 million. Read more about this nightmarish deal here.
  • Trump gives American hardware a holiday season pass on tariffs 
    The ongoing trade war with China generally seems to be rough news for American companies deeply intertwined with the manufacturing centers there, but Trump is giving U.S. companies a Christmas reprieve from the tariffs, allowing certain types of hardware to be exempt from the recent rate increases through December. Read more here.
  • Facebook loses one last acquisition co-founder
    This week, the final remnant of Facebook’s major acquisitions left the company. Oculus co-founder Nate Mitchell announced he was leaving. Now, Instagram, WhatsApp and Oculus are all helmed by Facebook leadership and not a single co-founder from the three companies remains onboard. Read more here.

GAFA Gaffes

How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:

  1. Facebook’s turn in audio transcription debacle:
    [Facebook transcribed users’ audio messages without permission]
  2. Google’s hate speech detection algorithms get critiqued:
    [Racial bias observed in hate speech detection algorithm from Google]
  3. Amazon has a little email mishap:
    [Amazon customers say they received emails for other people’s orders]

Adam Neumann (WeWork) at TechCrunch Disrupt NY 2017

Extra Crunch

Our premium subscription service had another week of interesting deep dives. My colleague Danny Crichton wrote about the “tech” conundrum that is WeWork and the questions that are still unanswered after the company filed documents this week to go public.

WeWork’s S-1 misses these three key points

…How is margin changing at its older locations? How is margin changing as it opens up in places like India, with very different costs and revenues? How do those margins change over time as a property matures? WeWork spills serious amounts of ink saying that these numbers do get better … without seemingly being willing to actually offer up the numbers themselves…

Here are some of our other top reads this week for premium subscribers. This week, we published a major deep dive into the world’s next music unicorn and we dug deep into marketplace startups.

Sign up for more newsletters in your inbox (including this one) here.

Privacy researchers devise a noise-exploitation attack that defeats dynamic anonymity

By Natasha Lomas

Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability in systems designed to protect privacy (by aggregating data and adding noise to mask individual identities) is no longer just a theory.

The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.

Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.

Yet a growing body of research suggests the risk of de-anonymization of high-dimension data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.

It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.

Academics from Imperial College London and Université Catholique de Louvain are behind the new research.

This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on a query-based database that uses aggregation and noise injection to dynamically mask personal data.

The product they were looking at is a database querying framework, called Diffix — jointly developed by a German startup called Aircloak and the Max Planck Institute for Software Systems.

On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.

What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.

The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.

“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.

“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”

The researchers chose to focus on Diffix as they were responding to a bug bounty challenge put out by Aircloak.

“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.

“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.

“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”

A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.

Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.
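
The paper details far more sophisticated query constructions than the classic difference attack (which Diffix defends against), but the underlying property, noise that is a deterministic function of the data it protects, can be made concrete with a toy simulation. The sketch below is a deliberate caricature of data-dependent ("sticky") noise for illustration, not Diffix's actual mechanism:

```python
import hashlib
import random

random.seed(1)

# Toy dataset: public quasi-identifiers (uid, zip) plus a private flag
# the query engine is supposed to protect.
users = [{"uid": uid,
          "zip": random.choice(["10115", "10117"]),
          "hiv": random.random() < 0.1}
         for uid in range(500)]

def noisy_count(predicate):
    """Count matching rows, plus noise seeded by the matched user set,
    i.e. a caricature of deterministic, data-dependent noise."""
    matched = sorted(r["uid"] for r in users if predicate(r))
    seed = hashlib.sha256(repr(matched).encode()).digest()
    return len(matched) + random.Random(seed).gauss(0, 3)

# The attacker knows the target's public attributes, not the flag.
target = users[42]
a = noisy_count(lambda r: r["zip"] == target["zip"])
b = noisy_count(lambda r: r["zip"] == target["zip"]
                and not (r["uid"] == target["uid"] and r["hiv"]))

# If the flag is False, both queries match the identical user set, so
# the deterministic noise is identical and a == b exactly. If True,
# the sets differ and the answers diverge: the noise leaks the bit.
print("inferred flag:", abs(a - b) > 1e-9, "| true flag:", target["hiv"])
```

Note how the noise magnitude is irrelevant here: the attacker never tries to cancel it out, only to observe how it behaves, which is the inversion the researchers describe.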

This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.

“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”

The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.

“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”

They were also able to optimise the attack method so they could exfiltrate information with a relatively low number of queries per user — as few as 32.

“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”

After disclosing the attack to Aircloak, de Montjoye says it has developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch so it’s not been possible to independently assess its effectiveness. 

“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”

For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.

“As a community to really move to something closer to adversarial privacy,” he tells TechCrunch. “We need to start adopting the red team, blue team penetration testing that have become standard in security.

“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”

“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”

“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.

“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better.  But even these systems are still are likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and really how do we really learn from the security community?

“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.

“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”

The research raises questions about the role of data protection authorities too.

During Diffix’s development, Aircloak writes on its website that it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”

Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”

The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection. 

“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?

“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.

“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”

“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.

“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?

“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”

What will Tumblr become under the ownership of tech’s only Goldilocks founder?

By Darrell Etherington

This week, Automattic revealed it has signed all the paperwork to acquire Tumblr from Verizon, including its full staff of 200. Tumblr has undergone quite a journey since its headline-grabbing acquisition by Marissa Mayer’s Yahoo in 2013 for $1.1 billion, but after six years of neglect, its latest move is its first real start since it stopped being an independent company. Now, it’s in the hands of Matt Mullenweg, the only founder of a major tech company who has repeatedly demonstrated a talent for measured responses, moderation and a willingness to forego reckless explosive growth in favor of getting things ‘just right.’

There’s never been a better acquisition for all parties involved, or at least one in which every party should walk away feeling they got exactly what they needed out of the deal. Yes, that’s in spite of the reported $3 million-ish asking price.

Verizon Media acquired Tumblr through a deal made to buy Yahoo, under a previous media unit strategy and leadership team. Verizon Media has no stake in the company, and so headlines talking about the bath it apparently took relative to the original $1.1 billion acquisition price are either willfully ignorant or just plain dumb.

Six years after another company made that bad deal for a company it clearly didn’t have the right business focus to correctly operate, Verizon made a good one to recoup some money.

Aligned leadership and complementary offerings drive a win-win

WebKit’s new anti-tracking policy puts privacy on a par with security

By Natasha Lomas

WebKit, the open source engine that underpins Internet browsers including Apple’s Safari, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies which are used to creep on Internet users as they go about their business online.

Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.

This translates to stuff like tracking pixels; browser and device fingerprinting; and navigational tracking to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resource into ‘innovations’ intended to strip web users of their privacy.

WebKit’s new policy is essentially saying enough: Stop the creeping.

But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.

“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.

“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.

“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”

Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”

It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability of using the technique” — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).
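
The “bits of entropy” phrasing is plain information theory: each observable attribute narrows down who a visitor could be, and the bits add up across attributes. A quick back-of-the-envelope sketch in Python, with invented frequency figures purely for illustration:

```python
import math

# Rough illustration of how identifying a combination of browser
# attributes is, measured in bits. All frequencies are invented.
attributes = {
    "timezone": 1 / 20,      # ~1 in 20 users share yours
    "screen_size": 1 / 50,
    "font_list": 1 / 5000,
}

# Each attribute worth -log2(p) bits; independent bits add up.
total_bits = sum(-math.log2(p) for p in attributes.values())
print(f"{total_bits:.1f} bits")
print(f"singles you out among ~{2**total_bits:,.0f} users")
```

Around 33 bits is enough to single out one person among the world's population, which is why limiting the entropy available to scripts directly limits fingerprinting.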

If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.

“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.

WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.

Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs at Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.

Equating circumvention of anti-tracking with security exploitation is unprecedented. This is exactly what we need to treat privacy as first class citizen. Enough with hand-waving. It's making technology catch up with regulations (not the other way, for once!) #ePrivacy #GDPR https://t.co/G1Dx7F2MXu

— Lukasz Olejnik (@lukOlejnik) August 15, 2019

“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells TechCrunch. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”

Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.

“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”

“How this policy will be enforced in practice will be carefully observed,” he adds.

As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.

There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.

Although that’s also a process that takes time.

“The quality of cybersecurity and privacy technology policy, including its communication still leave much to desire, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be about the purely risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.

“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts as well to a minor degree to the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”

For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.

Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau, whose membership comprises every type of tracker-deploying entity on the Internet.

But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.

In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.

Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.

It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.

As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.

Another question thrown up by WebKit’s new policy is which way Chromium will jump, aka the browser engine that underpins Google’s hugely popular Chrome browser.

Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.

Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two discussing potential future work to combat tracking techniques designed to override privacy settings in a blog post from nearly five years ago.

There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.

But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked-in privacy standards, rather than proactively working to prevent Internet users from being creeped on.

Most EU cookie ‘consent’ notices are meaningless or manipulative, study finds

By Natasha Lomas

New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.

As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.

But many don’t — even now, more than a year later.

The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)

The researchers conclude that if consent to drop cookies were being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.

The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.

The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play, in order to paint a picture of current implementations.

They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.

Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.

A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.

And while they found that nearly all cookie notices (92%) contained a link to the site’s privacy policy, far fewer mention the specific purpose of the data collection (39%) or who can access the data (21%).

The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.

Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.

“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.

“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”

The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.

This is where consent gets manipulated — to flip visitors’ preference for privacy.

They found that the more choices offered in a cookie notice, the more likely visitors were to decline the use of cookies. (Which is an interesting finding in light of the vendor laundry lists frequently baked into the so-called “transparency and consent framework” which the industry association, the Internet Advertising Bureau (IAB), has pushed as the standard for its members to use to gather GDPR consents.)

“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”

Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:

Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.

The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).

That rises only slightly, to between 1% and 4%, for visitors who would enable some cookie categories in the same privacy-by-default scenario.

“Our results… indicate that the privacy-by-default and purpose-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1% of active consent for the use of third parties,” they write in conclusion.

They do flag some limitations with the study, pointing out that the dataset used to arrive at the 0.1% figure is biased — the nationality of visitors is not generally representative of public Internet users, and the data was generated from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher accept-all click rate, but still only a marginally higher one: just 5.6%.

Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.

This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.

Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.

It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.

You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)

Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.

For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)

While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.

(Those of you reading TechCrunch back in January 2018 may also remember this sage plain english advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)

Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.

Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately after GDPR started being applied in May last year.

While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.

So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.

But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)

The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.

However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.

Here’s what the ICO says on Recital 25 of the ePrivacy Directive:

  • ‘specific website content’ means that you should not make ‘general access’ subject to conditions requiring users to accept non-essential cookies – you can only limit certain content if the user does not consent;
  • the term ‘legitimate purpose’ refers to facilitating the provision of an information society service – ie, a service the user explicitly requests. This does not include third parties such as analytics services or online advertising;

So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.

It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor in the costs of data protection compliance to their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)

Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.

“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused by the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subject to a lot of ‘consent theatre’ (ie noisy yet non-compliant cookie notices) — which in turn is causing knock-on problems of consumer mistrust and consent fatigue for all these useless pop-ups, working against the core aims of the EU’s data protection framework.

“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”

“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies. 

Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.

“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless  something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.

“And the conflict of interest for Google with Chrome are obvious.”

The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.

At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.

They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.

Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).

And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.

Much has been made at the highest levels in Europe of being able to point to US counterparts, caught on the hop by ongoing tech privacy and security scandals, while EU policymakers savor the schadenfreude of seeing their US counterparts being forced to ask publicly whether it’s time for America to have its own GDPR.

With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.

However they are also aware the world is watching closely and critically — which makes enforcement a very key piece. It must slot in too. They need the GDPR to work on paper and be seen to be working in practice.

So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.

A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”

“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added. 

“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”

All of which suggests that the movement, when it comes, must come from a reforming adtech industry.

With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.

GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.

The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.

Preclusio uses machine learning to comply with GDPR, other privacy regulations

By Ron Miller

As privacy regulations like GDPR and the California Consumer Privacy Act proliferate, more startups are looking to help companies comply. Enter Preclusio, a member of the Y Combinator Summer 2019 class, which has developed a machine learning-fueled solution to help companies adhere to these privacy regulations.

“We have a platform that is deployed on prem in our customer’s environment, and helps them identify what data they’re collecting, how they’re using it, where it’s being stored and how it should be protected. We help companies put together this broad view of their data, and then we continuously monitor their data infrastructure to ensure that this data continues to be protected,” company co-founder and CEO Heather Wade told TechCrunch.

She says that the company made a deliberate decision to keep the solution on-prem. “We really believe in giving our clients control over their data. We don’t want to be just another third-party SaaS vendor that you have to ship your data to,” Wade explained.

That said, customers can run it wherever they wish, whether that’s on prem or in the cloud in Azure or AWS. Regardless of where it’s stored, the idea is to give customers direct control over their own data. “We are really trying to alert our customers to threats or to potential privacy exceptions that are occurring in their environment in real time, and being in their environment is really the best way to facilitate this,” she said.

The product works by getting read-only access to the data, then begins to identify sensitive data in an automated fashion using machine learning. “Our product automatically looks at the schema and samples of the data, and uses machine learning to identify common protected data,” she said. Once that process is completed, a privacy compliance team can review the findings and adjust these classifications as needed.
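
Preclusio hasn't published the internals of its classifier, but the first pass over "the schema and samples of the data" is easy to picture. Below is a hedged sketch of such a scan using simple name and value heuristics; the column hints and regexes are invented, and the ML classifier Wade describes would go well beyond this:

```python
import re

# Invented heuristics: flag columns whose name or sampled values look
# like personal data. A real ML classifier would replace/augment this.
NAME_HINTS = re.compile(r"(email|phone|ssn|dob|address|name)", re.I)
VALUE_PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "phone": re.compile(r"^\+?[\d\s().-]{7,15}$"),
}

def classify_column(name, samples):
    """Return the reasons a column looks sensitive, for human review."""
    reasons = []
    if NAME_HINTS.search(name):
        reasons.append(f"column name '{name}' suggests personal data")
    for label, pattern in VALUE_PATTERNS.items():
        hits = sum(bool(pattern.match(str(v))) for v in samples)
        if samples and hits / len(samples) > 0.8:
            reasons.append(f"{hits}/{len(samples)} samples look like {label}")
    return reasons

print(classify_column("contact_email",
                      ["ada@example.com", "bob@example.org"]))
```

The human-in-the-loop step matters: flagged columns go to a compliance team for review, exactly as the article describes.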

Wade, who started the company in March, says the idea formed at previous positions where she was responsible for implementing privacy policies and found there weren’t adequate solutions on the market to help. “I had to face the challenges first-hand of dealing with privacy and compliance and seeing how resources were really taken away from our engineering teams and having to allocate these resources to solving these problems internally, especially early on when GDPR was first passed, and there really were not that many tools available in the market,” she said.

Interestingly, Wade’s co-founder is her husband, John. She says they deal with the intensity of being married and startup founders by sticking to their areas of expertise. He’s the marketing person and she’s the technical one.

She says they applied to Y Combinator because they wanted to grow quickly, and that timing is important with more privacy laws coming online soon. She has been impressed with the generosity of the community in helping them reach their goals. “It’s almost indescribable how generous and helpful other folks who’ve been through the YC program are to the incoming batches, and they really do have that spirit of paying it forward,” she said.

Opsani helps optimize cloud applications with AI

By Ron Miller

Opsani, a Redwood City, Calif. startup, wants to go beyond performance monitoring to continually optimizing cloud applications, using artificial intelligence to help the software learn its optimal state.

“We have come up with a machine learning technique centered around reinforcement learning to tune the performance of applications in the cloud,” company co-founder and CEO Ross Schibler told TechCrunch.

Schibler says each company has its own unique metrics and that’s what they try to optimize around. “We’re modifying these parameters around the resource, and we’re looking at the performance of the application. So in real time, what is the key business metric that the application is producing as a service? So it might be the number of transactions or it might be latency, but if it’s important to the business, then we use that,” he explained.
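As a rough illustration of that idea, here is a minimal, hypothetical bandit-style tuning loop in Python; it stands in for the reinforcement learning Schibler describes but is not Opsani’s actual algorithm, and the simulated application, the vCPU knob and all the numbers are invented.

import random

cpu_options = [0.5, 1.0, 2.0, 4.0]  # the tunable knob: vCPUs per container

def observe_metric(cpu):
    """Stand-in for the live app: transactions/sec peaks at 2 vCPUs."""
    base = {0.5: 80, 1.0: 140, 2.0: 180, 4.0: 175}[cpu]
    return base + random.gauss(0, 5)  # noisy, like real telemetry

# Epsilon-greedy bandit: mostly exploit the best-known setting,
# occasionally explore an alternative.
estimates = {c: 0.0 for c in cpu_options}
counts = {c: 0 for c in cpu_options}
for _ in range(200):
    if random.random() < 0.1:
        cpu = random.choice(cpu_options)          # explore
    else:
        cpu = max(estimates, key=estimates.get)   # exploit
    reward = observe_metric(cpu)
    counts[cpu] += 1
    estimates[cpu] += (reward - estimates[cpu]) / counts[cpu]  # running mean

print("best setting:", max(estimates, key=estimates.get))

The same loop generalizes to any knob (replica counts, memory limits) and to whatever business metric the customer cares about, which is the point Schibler is making.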

He claims that what separates Opsani from monitoring tools like New Relic or AppDynamics is that those tools watch performance and then provide feedback for admins, while Opsani actually changes the parameters to improve application performance in real time, based on what it knows about the application and what the developers want to optimize for.

It is also somewhat similar to a company like Spotinst, which optimizes for the cheapest cloud resources, but instead of simply trying to find the best price, Opsani is actually tuning the application.

The company recently announced a $10 million Series A investment led by Redpoint Ventures. Previous investors Zetta Ventures and Bain Capital also participated.

For now, it’s still early days for the startup. It has a dozen employees and a handful of customers, according to Schibler. With the recent $10 million round of funding, it should be able to hire more employees and continue refining the product.

The headphone jack dies not with a bang, but a Note

By Brian Heater

Next month marks three years since Apple unceremoniously murdered the headphone jack. Courage. The company was roundly mocked for its own hype, and the intervening product cycles have been marked by several companies proudly showcasing their staunch refusal to cave.

None were more vocal about clinging to the 3.5mm jack than Samsung. And the company certainly deserves kudos for turning the once ubiquitous port into a distinguishing feature. Like I said a couple of weeks ago, if nothing else, Samsung ought to get a bit of credit for the continuing high quality of the headphones it bundles in with its flagships. It’s been an Apple blind spot, while Samsung has excelled with comfortable, quality, AKG-branded headphones.

Never forget where you were at 4PM ET on August 7, 2019. That’s when the torch carrier finally extinguished the flame at the tail end of the dongle decade. The Note 10 is here and the headphone jack is gone.

https://techcrunch.com/2019/08/07/this-is-samsungs-galaxy-note-10-and-10/

You already know the whys. Apple discussed them three years ago. So did Google after quickly reversing its own foot dragging on the Pixel line. But Samsung has had well over three years to prepare for this inevitable moment. The company knew there would be a little egg on its face after a few years of talking up the port. But when you’ve been through a Galaxy Fold relaunch and two Note recalls, this is a veritable cakewalk.

Samsung’s primary driver here is the same as everyone else’s: space. The Note 10 and Note 10+ are big phones with big batteries (3,500mAh and 4,300mAh, respectively). For reasons that are clear to anyone who’s been following the line for some time, the company hit pause on the battery race for a while there, focusing instead on safety issues.

With that particular crisis well in the past now, however, battery life is once again central — as it should be. In order to make more room for mAhs, the company dropped the port and picked up the dongle. The tipping point, it says, came when its internal metrics showed that a majority of users on its flagship devices (the S and Note lines) moved to bluetooth streaming. The company says the number is now in excess of 70 percent of users.

I’ll be honest, that surprises me a bit, even now that bluetooth headphones are far cheaper and more plentiful than just three years ago. And no doubt the number changes fairly dramatically when you start talking about entry- and mid-tier devices. The company wouldn’t come out and say it, but it seems this dramatic shift also marks the end of the jack for S series devices, when the S11 starts shipping next year.

As for the dongle, turns out it won’t ship in box. That’ll cost you extra. But the good news is that the Note will ship with a USB-C version of its excellent (by free in-box standards) AKG headphones. Also, Samsung is one of eight million or so companies currently making bluetooth headphones.

And theirs are actually pretty good, turns out.

Twitter ‘fesses up to more adtech leaks

By Natasha Lomas

Twitter has disclosed more bugs related to how it uses personal data for ad targeting, which mean it may have shared users’ data with advertising partners even when a user had expressly told it not to.

Back in May the social network disclosed a bug that in certain conditions resulted in an account’s location data being shared with a Twitter ad partner, during real-time bidding (RTB) auctions.

In a blog post on its Help Center about the latest “issues” Twitter says it “recently” found, it admits to finding two problems with users’ ad settings choices that mean they “may not have worked as intended”.

It claims both problems were fixed on August 5, though it does not specify when it realized it was processing users’ data without their consent.

The first bug relates to tracking ad conversions. This meant that if a Twitter user clicked or viewed an ad for a mobile application on the platform and subsequently interacted with the mobile app, Twitter says it “may have shared certain data (e.g., country code; if you engaged with the ad and when; information about the ad, etc)” with its ad measurement and advertising partners — regardless of whether the user had agreed their personal data could be shared in this way.

It suggests this leak of data has been happening since May 2018 — which is also the month when Europe’s updated privacy framework, GDPR, came into force. The regulation mandates disclosure of data breaches (which explains why you’re hearing about all these issues from Twitter) — and means that quite a lot is riding on how “recently” Twitter found these latest bugs. Because GDPR also includes a supersized regime of fines for confirmed data protection violations.

Though it remains to be seen whether Twitter’s now repeatedly leaky adtech will attract regulatory attention…

Twitter may have /accidentally/ shared data on users to ads partners even for those who opted out from personalised ads. That would be a violation of user settings and expectations, which #GDPR makes a quasi-contract. https://t.co/s0acfllEhG

— Lukasz Olejnik (@lukOlejnik) August 7, 2019

Twitter specifies that it does not share users’ names, Twitter handles, email or phone number with ad partners. However it does share a user’s mobile device identifier, which GDPR treats as personal data as it acts as a unique identifier. Using this identifier, Twitter and Twitter’s ad partners can work together to link a device identifier to other pieces of identity-linked personal data they collectively hold on the same user to track their use of the wider Internet, thereby allowing user profiling and creepy ad targeting to take place in the background.

The second issue Twitter discloses in the blog post also relates to tracking users’ wider web browsing to serve them targeted ads.

Here Twitter admits that, since September 2018, it may have served targeted ads that used inferences made about the user’s interests based on tracking their wider use of the Internet — even when the user had not given permission to be tracked.

This sounds like another breach of GDPR, given that in cases where the user did not consent to being tracked for ad targeting Twitter would lack a legal basis for processing their personal data. But it’s saying it processed it anyway — albeit, it claims accidentally.

This type of creepy ad targeting — based on so-called ‘inferences’ — is made possible because Twitter associates the devices you use (including mobile and browsers) when you’re logged in to its service with your Twitter account, and then receives information linked to these same device identifiers (IP addresses and potentially browser fingerprinting) back from its ad partners, likely gathered via tracking cookies (including Twitter’s own social plug-ins) which are larded all over the mainstream Internet for the purpose of tracking what you look at online.

These third party ad cookies link individuals’ browsing data (which gets turned into inferred interests) with unique device/browser identifiers (linked to individuals) to enable the adtech industry (platforms, data brokers, ad exchanges and so on) to track web users across the web and serve them “relevant” (aka creepy) ads.
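For illustration only, here is a hypothetical Python sketch of the kind of join a stable device identifier enables; the identifiers, sites and interest categories are invented, and this is not a description of Twitter’s or any ad partner’s actual systems.

from collections import Counter

# What the platform knows: device identifier <-> logged-in account.
accounts = {"device-abc123": "user@example.com"}

# What partners send back: the same identifier seen on other sites.
partner_events = [
    {"device_id": "device-abc123", "site": "runningshoes.example"},
    {"device_id": "device-abc123", "site": "marathon-training.example"},
    {"device_id": "device-abc123", "site": "flights.example"},
]

site_categories = {
    "runningshoes.example": "fitness",
    "marathon-training.example": "fitness",
    "flights.example": "travel",
}

# Join on the device identifier to build inferred interests per account.
profiles = {}
for event in partner_events:
    account = accounts.get(event["device_id"])
    if account:
        category = site_categories[event["site"]]
        profiles.setdefault(account, Counter())[category] += 1

print(profiles)  # {'user@example.com': Counter({'fitness': 2, 'travel': 1})}

The join works only because the device identifier is stable across contexts, which is exactly why GDPR treats it as personal data.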

“As part of a process we use to try and serve more relevant advertising on Twitter and other services since September 2018, we may have shown you ads based on inferences we made about the devices you use, even if you did not give us permission to do so,” is how Twitter explains this second ‘issue’.

“The data involved stayed within Twitter and did not contain things like passwords, email accounts, etc.,” it adds. Although the key point here is one of a lack of consent, not where the data ended up.

(Also, the users’ wider Internet browsing activity linked to their devices via cookie tracking did not originate with Twitter — even if it’s claiming the surveillance files it received from its “trusted” partners stayed on its servers. Bits and pieces of that tracked data would, in any case, exist all over the place.)

In an explainer on its website on “personalization based on your inferred identity” Twitter seeks to reassure users that it will not track them without their consent, writing:

We are committed to providing you meaningful privacy choices. You can control whether we operate and personalize your experience based on browsers or devices other than the ones you use to log in to Twitter (or if you’re logged out, browsers or devices other than the one you’re currently using), or email addresses and phone numbers similar to those linked to your Twitter account. You can do this by visiting your Personalization and data settings and adjusting the Personalize based on your inferred identity setting.

The problem in this case is that users’ privacy choices were simply overridden. Twitter says it did not do so intentionally. But either way it’s not consent. Ergo, a breach.

“We know you will want to know if you were personally affected, and how many people in total were involved. We are still conducting our investigation to determine who may have been impacted and if we discover more information that is useful we will share it,” Twitter goes on. “What is there for you to do? Aside from checking your settings, we don’t believe there is anything for you to do.

“You trust us to follow your choices and we failed here. We’re sorry this happened, and are taking steps to make sure we don’t make a mistake like this again. If you have any questions, you may contact Twitter’s Office of Data Protection through this form.”

While the company may “believe” there is nothing Twitter users can do but accept its apology for screwing up, European Twitter users who believe it processed their data without their consent do have a course of action they can take: They can complain to their local data protection watchdog.

Zooming out, there are also major legal question marks hanging over behaviourally targeted ads in Europe.

The UK’s privacy regulator warned in June that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of pan-EU privacy laws — following multiple complaints filed in the region that argue RTB is in breach of the GDPR.

Meanwhile, back in May, Google’s lead regulator in Europe, the Irish Data Protection Commission, confirmed it had opened a formal investigation into the use of personal data in the context of its online Ad Exchange.

So the wider point here is that the whole leaky business of creepy ads looks to be operating on borrowed time.

Important statement from Brian O’Kelley, inventor of much of RTB https://t.co/02oHP6ZPVp pic.twitter.com/AS5Bh2zFqh

— Johnny Ryan (@johnnyryan) August 6, 2019

E3’s organizer apologizes after revealing information for thousands of journalists

By Brian Heater

The Entertainment Software Association issued an apology of sorts after making available the contact information for more than 2,000 journalists and analysts who attended this year’s E3.

“ESA was made aware of a website vulnerability that led to the contact list of registered journalists attending E3 being made public,” the organization said via statement. “Once notified, we immediately took steps to protect that data and shut down the site, which is no longer available. We regret this occurrence and have put measures in place to ensure it will not occur again.”

It’s not clear whether the organization attempted to reach out to those impacted by the breach.

In a kind of bungle that utterly boggles the mind in 2019, the ESA had made available on its site a full spreadsheet of contact information for thousands of attendees, including email addresses, phone numbers and physical addresses. While many or most of the addresses appear to be businesses, journalists often work remotely, and the availability of a home address online can present a real safety concern.

After all, many gaming journalists are routinely targets of harassment and threats of physical violence for the simple act of writing about video games on the internet. That’s the reality of the world we currently live in. And while the information leaked could have been worse, there’s a real potential human consequence here.

That, in turn, presents a pretty compelling case that the ESA is going to have a big headache on its hands under GDPR. Per the rules,

In the case of a personal data breach, the controller shall without undue delay and, where feasible, not later than 72 hours after having become aware of it, notify the personal data breach to the supervisory authority competent in accordance with Article 55, unless the personal data breach is unlikely to result in a risk to the rights and freedoms of natural persons. Where the notification to the supervisory authority is not made within 72 hours, it shall be accompanied by reasons for the delay.

There is, indeed, a pretty strong argument to be made that said breach could “result in a risk to the rights and freedoms of natural persons.” Failure to notify individuals in the allotted time period could, in turn, result in some hefty fines.

It’s hard to say how long the ESA knew about the information, though YouTuber Sophia Narwitz, who first brought this information to light publicly, may have also been the first to alert the organization. The ESA appears to have been reasonably responsive in pulling the spreadsheet down, but the internet is always faster, and that information is still floating around online and fairly easily found.

VentureBeat notes rightfully that spreadsheets like these are incredibly valuable to convention organizations, representing contact information for some of the top journalists in any given industry. Many will no doubt think twice before sharing this kind of information again, of course.

Notably (and, yes, ironically), the Black Hat security conference experienced a similar breach this time last year. It chalked the issue up to a “legacy system.”

Natasha Lomas contributed to this report

Don’t miss this epic Twitter fight between the IAB’s CEO and actual publishers

By Natasha Lomas

Grab popcorn. As Internet fights go this one deserves your full attention — because the fight is over your attention. Your eyeballs and the creepy ads that trade data on you to try to swivel ’em.


In the blue corner, the Interactive Advertising Bureau’s CEO, Randall Rothenberg, who has been taking to Twitter increasingly loudly in recent days to savage Europe’s privacy framework, the GDPR, and bleat dire warnings about California’s Consumer Privacy Act (CCPA) — including amplifying studies he claims show “the negative impact” on publishers.

Exhibit A, tweeted August 1:

More on the negative impact of #GDPR on publishers (and more reasons @iab is trying to fix #CCPA so publishers, brands, & retailers don’t get killed). https://t.co/BWtGNYGTJq

— Randall Rothenberg (@r2rothenberg) July 31, 2019

NB: The IAB is a mixed membership industry organization which combines advertisers, brands, publishers, data brokers and adtech platform tech giants — including the dominant adtech duopoly, Google and Facebook, who take home ~60% of digital ad spend. The only entity capable of putting a dent in the duopoly, Amazon, is also in the club. Its membership reflects the sprawling interests attached to the online ad industry, and, well, the personal data that currently feeds it (your eyeballs again!), although some members clearly have pots more money to spend on lobbying against digital privacy regs than others.

In what now looks to have been a deleted tweet last month, Rothenberg publicly professed himself proud to have Facebook as a member of his ‘publisher defence’ club. Though, admittedly, per the above tweet, he’s also worried about brands and retailers getting “killed”. He doesn’t need to worry about Google and Facebook’s demise because that would just be ridiculous.

Now, in the — I wish I could call it ‘red top’ corner, except these newspaper guys are anything but tabloid — we find premium publishers biting back at Rothenberg’s attempts to trash-talk online privacy legislation.

Here’s the New York Times’ data governance & privacy guy, Robin Berjon, demolishing Rothenberg via the exquisite medium of the quote-tweet:

One of the primary reasons we need the #GDPR and #CCPA (and more) today is because the @iab, under @r2rothenberg's leadership, has been given 20 years to self-regulate and has used the time to do [checks notes] nothing whatsoever.https://t.co/hBS9d671LU

— Robin Berjon (@robinberjon) August 1, 2019

I’m going to quote Berjon in full because every single tweet packs a beautifully articulated punch:

  • One of the primary reasons we need the #GDPR and #CCPA (and more) today is because the @iab, under @r2rothenberg’s leadership, has been given 20 years to self-regulate and has used the time to do [checks notes] nothing whatsoever.
  • I have spent much of my adult life working in self-regulatory environments. They are never perfect, but when they work they really deliver.
  • #Adtech had a chance to self-reg when the FTC asked them to — from which we got the joke known as AdChoices.
  • They got a second major chance with DNT. But the notion of a level playing field between #adtech and consumers didn’t work for them so they did everything to prevent it from existing.
  • At some point it became evident that the @iab lacked the vision and leadership to shepherd the industry towards healthy, sustainable behaviour. That’s when regulation became unavoidable. No one has done as much as the @iab has to bring about strong privacy regulation.
  • And to make things funnier the article that @r2rothenberg was citing as supporting his view is… calling for stronger enforcement of the #GDPR.
  • If that’s not a metaphor for where the @iab’s at, I don’t know what is.

Next time Facebook talks about how it can self-regulate its access to data I suggest you cc that entire thread.

Also chipping in on Twitter to champion Berjon’s view about the IAB’s leadership vacuum in cleaning up the creepy online ad complex, is Aram Zucker-Scharff, aka the ad engineering director at — checks notes — The Washington Post.

His punch is more of a jab — but one that’s no less painful for the IAB’s current leadership.

“I say this rarely, but this is a must read,” he writes, in a quote tweet pointing to Berjon’s entire thread.

I say this rarely, but this is a must read, Thread: https://t.co/FxKmT9bp7r

— Aram Zucker-Scharff (@Chronotope) August 2, 2019

Another top tier publisher’s commercial chief also told us in confidence that they “totally agree with Robin” — although they didn’t want to go on the record today.

In an interesting twist to this ‘mixed member online ad industry association vs people who work with ads and data at actual publishers’ slugfest, Rothenberg replied to Berjon’s thread, literally thanking him for the absolute battering.

“Yes, thank you – that’s exactly where we’re at & why these pieces are important!” he tweeted, presumably still dazed and confused from all the body blows he’d just taken. “@iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations, @robinberjon.”

Yes, thank you – that’s exactly where we’re at & why these pieces are important! @iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations, @robinberjon & @Bershidsky https://t.co/WDxrWIyHXd

— Randall Rothenberg (@r2rothenberg) August 2, 2019

Rothenberg also took the time to thank Bloomberg columnist, Leonid Bershidsky, who’d chipped into the thread to point out that the article Rothenberg had furiously retweeted actually says the GDPR “should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong”.

Who is Bershidsky? Er, just the author of the article Rothenberg tried to nega-spin. So… uh… owned.

May I point out that the piece that's cited here (mine) says the GDPR should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong?

— Leonid Bershidsky (@Bershidsky) August 1, 2019

But there’s more! Berjon tweeted a response to Rothenberg’s thanks for what the latter tortuously referred to as “your explorations” — I mean, the mind just boggles as to what he was thinking to come up with that euphemism — thanking him for reversing his position on GDPR, and for filling his prior leadership vacuum by supporting robustly enforced online privacy laws.

“It’s great to hear that you’re now supporting strong GDPR enforcement,” he writes. “It’s indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?”

It's great to hear that you're now supporting strong GDPR enforcement. It's indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?

— Robin Berjon (@robinberjon) August 2, 2019

We’ve asked the IAB if, in light of Rothenberg’s tweet, it now wishes to share a public statement in support of transposing the GDPR into US law. We’ll be sure to update this post if it says anything at all.

We’ve also screengrabbed the vinegar strokes of this epic fight — as an insurance policy against any further instances of the IAB hitting the tweet delete button. (Plus, I mean, you might want to print it out and get it framed.)



Google ordered to halt human review of voice AI recordings over privacy risks

By Natasha Lomas

A German privacy watchdog has ordered Google to cease manual reviews of audio snippets generated by its voice AI. 

This follows a leak last month of scores of audio snippets from the Google Assistant service. A contractor working as a Dutch language reviewer handed more than 1,000 recordings to the Belgian news site VRT which was then able to identify some of the people in the clips. It reported being able to hear people’s addresses, discussion of medical conditions, and recordings of a woman in distress.

The Hamburg data protection authority used Article 66 powers of the General Data Protection Regulation (GDPR) to make the order — which allows a DPA to order data processing to stop if it believes there is “an urgent need to act in order to protect the rights and freedoms of data subjects”.

The Article 66 order to Google appears to be the first use of the power since GDPR came into force across the bloc in May last year.

Google says it received the order on July 26 — which requires it to stop manually reviewing audio snippets in Germany for a period of three months. Although the company had already taken the decision to suspend manual audio reviews of Google Assistant across the whole of Europe — doing so on July 10, after learning of the data leak.

Last month it also informed its lead privacy regulator in Europe, the Irish Data Protection Commission (DPC), of the breach. The DPC told us it is now “examining” the issue that’s been highlighted by Hamburg’s order.

The Irish DPC’s head of communications, Graham Doyle, said Google Ireland filed an Article 33 breach notification for the Google Assistant data “a couple of weeks ago”, adding: “We note that as of 10 July Google Ireland ceased the processing in question and that they have committed to the continued suspension of processing for a period of at least three months starting today (1 August). In the meantime we are currently examining the matter.”

It’s not clear whether Google will be able to reinstate manual reviews in Europe in a way that’s compliant with the bloc’s privacy rules. The Hamburg DPA writes in a statement [in German] on its website that it has “significant doubts” about whether Google Assistant complies with EU data-protection law.

“We are in touch with the Hamburg data protection authority and are assessing how we conduct audio reviews and help our users understand how data is used,” Google’s spokesperson also told us.

In a blog post published last month after the leak, Google product manager for search, David Monsees, claimed manual reviews of Google Assistant queries are “a critical part of the process of building speech technology”, couching them as “necessary” to creating such products.

“These reviews help make voice recognition systems more inclusive of different accents and dialects across languages. We don’t associate audio clips with user accounts during the review process, and only perform reviews for around 0.2% of all clips,” Google’s spokesperson added now.

But it’s far from clear whether human review of audio recordings captured by any of the myriad always-on voice AI products and services now on the market can be made compatible with Europeans’ fundamental privacy rights.

These AIs typically have trigger words for activating the recording function that streams audio data to the cloud but the technology can easily be accidentally triggered — and leaks have shown they are able to hoover up sensitive and intimate personal data of anyone in their vicinity (which can include people who never got within sniffing distance of any T&Cs).

On its website the Hamburg DPA says the order against Google is intended to protect the privacy rights of affected users in the immediate term, noting that GDPR allows for concerned authorities in EU Member States to issue orders of up to three months.

In a statement Johannes Caspar, the Hamburg commissioner for data protection, added: “The use of language assistance systems in the EU must comply with the data protection requirements of the GDPR. In the case of the Google Assistant, there are currently significant doubts. The use of language assistance systems must be done in a transparent way, so that an informed consent of the users is possible. In particular, this involves providing sufficient information and transparently informing those concerned about the processing of voice commands, but also about the frequency and risks of mal-activation. Finally, due regard must be given to the need to protect third parties affected by the recordings. First of all, further questions about the functioning of the speech analysis system have to be clarified. The data protection authorities will then have to decide on definitive measures that are necessary for a privacy-compliant operation.”

The DPA also urges other regional privacy watchdogs to prioritize checking on other providers of language assistance systems — and “implement appropriate measures” — name-checking providers of voice AIs, such as Apple and Amazon.

This suggests there could be wider ramifications for other tech giants operating voice AIs in Europe, flowing from this single Article 66 order.

As we’ve said before, the real enforcement punch packed by GDPR is not the headline-grabbing fines, which can scale as high as 4% of a company’s global annual turnover — it’s the power that Europe’s DPAs now have in their regulatory toolbox to order that data stops flowing.

“This is just the beginning,” one expert on European data protection legislation told us, speaking on condition of anonymity. “The Article 66 chest is open and it has a lot on offer.”

In a sign of the potential scale of the looming privacy problems for voice AIs Apple also said earlier today that it’s suspending a quality control program for its Siri voice assistant.

The move, which does not appear to be linked to any regulatory order, follows a Guardian report last week detailing claims by a whistleblower that contractors working for Apple ‘regularly hear confidential details’ on Siri recordings, such as audio of people having sex and identifiable financial details, regardless of the processes Apple uses to anonymize the records.

Apple’s suspension of manual reviews of Siri snippets applies worldwide.

India has labeled hyperloop a public infrastructure project — here’s why that matters

By Kirsten Korosec

Hyperloop, the futuristic and still theoretical transportation system that could someday propel people and packages at speeds of more than 600 miles per hour, has been designated a “public infrastructure project” by Indian lawmakers in the state of Maharashtra.

Wrapped in that government jargon is a valuable and notable outcome. The upshot: hyperloop is being treated like any other public infrastructure project such as bridges, roads and railways. In other words, hyperloop has been plucked out of niche, futuristic obscurity and given a government stamp of approval.

That’s remarkable, considering that the idea for hyperloop was first proposed by Tesla and SpaceX CEO Elon Musk in a nearly 60-page public white paper just six years ago.

It also kicks off a process that could bring hyperloop to a 93-mile stretch of India between the cities of Mumbai and Pune. The Pune Metropolitan Regional Development Authority will begin the procurement process in mid-August when it starts accepting proposals from companies hoping to land the hyperloop contract.

The frontrunner is likely Virgin Hyperloop One-DP World, a consortium between the hyperloop company and its biggest backer that pitched the original project to India. The MahaIDEA Committee earlier approved Virgin Hyperloop One-DP World Consortium as the Original Project Proponent.

Under the VHO-DPW proposal, a hyperloop capable of transporting 200 million people every year would be built between Pune and Mumbai. That stretch of road now takes more than three hours by car; VHO says its hyperloop would reduce it to a 35-minute trip.

“This is history in the making. The race is on to host the first hyperloop transportation system in the world, and today’s announcement puts India firmly in the lead. This is a significant milestone and the first of many important steps toward bringing hyperloop to the masses,” Virgin Hyperloop One CEO Jay Walder said in a statement Wednesday.

The hope is that India’s government will award the contract by the end of 2019, a VHO executive told TechCrunch. If that occurs, Phase 1 of the project — an 11.8 kilometer (or 7.3 mile) section — would begin in 2020.

The cost of building Phase 1 will be covered by DP World, which has committed $500 million to this section. The government is covering the cost and logistics of acquiring the land for the hyperloop.

Phase 1 will initially act as a certification track, which will be used to certify the hyperloop technology for passenger operations. VHO wants this certification track built and operating by 2024. If this section meets safety standards it will become part of the larger hyperloop line between Pune and Mumbai.

There is a lot of work to do, and technical milestones to meet, before hyperloop is whisking people in pods through a tunnel. But if it works and is built, the region’s economy could be transformed, supporters insist.

Once commercialized, the hyperloop will transform the Pune-Mumbai corridor into a mega-economic region, according to Harj Dhaliwal, managing director of India and Middle East at Virgin Hyperloop One.

Today, some 75 million people travel between Pune and Mumbai each year, and forecasts suggest that number could rise to 130 million annually by 2026. The VHO-DPW consortium says its hyperloop will have the capacity to handle 16,000 passengers per hour, or about 200 million people annually.

NakedPoppy launches curated beauty marketplace for wellness junkies

By Kate Clark

NakedPoppy co-founders Jaleh Bisharat and Kimberly Shenk are an impressive duo. Bisharat, the startup’s chief executive officer, is a commanding presence and a bona fide marketing savant. The perfect complement to Shenk, a reticent and data-focused chief product officer.

Together they’re building a cosmetics startup, NakedPoppy, where people can purchase high-quality “clean” makeup, or sustainable, ethically-made and cruelty-free products produced without harmful chemicals. It launches today with $4 million in venture capital backing from top investors, including Cowboy Ventures (the seed-stage fund led by Aileen Lee), Felicis Ventures, Khosla Ventures, Maveron, Polaris Ventures and Slow Ventures.

“Conventional makeup is considered hazardous waste by the EPA,” Bisharat tells TechCrunch. “You can look better and go clean.”

But NakedPoppy isn’t just another website for buying makeup. Like all companies today, it’s a tech company. NakedPoppy’s patent-pending personalization algorithm helps customers quickly find makeup that matches or complements their skin tone. To do this, customers are asked to complete a three-minute assessment and submit a photo of their wrist, which is used to pinpoint their base skin color.


“I’m not the person that is up to trends or is keeping up with the YouTube stars,” NakedPoppy’s product chief Shenk tells TechCrunch. “When I walk into Sephora my stomach drops … I am the kind of woman that wants to set it and forget it. Just give me the right thing and let’s move on.”

Bisharat adds that NakedPoppy targets the busy woman: “The one for whom it’s not entertainment to go shopping for makeup.”

The NakedPoppy team hopes its algorithm expedites the makeup shopping process for those who view the task as a chore, not a hobby. Accounting for skin type, skin color, skin undertone, age, eye color, hair color, allergies, sensitivities and more, the startup presents each customer with a filtered and tailored list of the 400 items it carries, ranging from lipsticks to foundation to blush and more. Cosmetic chemists screen all NakedPoppy products to ensure they were made with only clean ingredients.
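To give a flavor of how such matching might work, here is a minimal, hypothetical Python sketch that ranks products by shade distance from a color sampled from a wrist photo and hard-filters allergens; it is not NakedPoppy’s patent-pending algorithm, and all shades, undertones and products are invented.

user = {
    "skin_tone": (224, 172, 105),   # RGB sampled from a wrist photo
    "undertone": "warm",
    "allergies": {"lanolin"},
}

catalog = [
    {"name": "Foundation A", "shade": (230, 180, 110),
     "undertone": "warm", "ingredients": {"mica", "squalane"}},
    {"name": "Foundation B", "shade": (150, 100, 60),
     "undertone": "cool", "ingredients": {"lanolin", "mica"}},
]

def shade_distance(a, b):
    # Simple Euclidean distance in RGB; a real system would likely
    # use a perceptual color space such as CIELAB.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def matches(user, item):
    if item["ingredients"] & user["allergies"]:
        return False  # hard exclusion: allergens filter items out
    return item["undertone"] == user["undertone"]

ranked = sorted(
    (item for item in catalog if matches(user, item)),
    key=lambda item: shade_distance(user["skin_tone"], item["shade"]),
)
print([item["name"] for item in ranked])  # ['Foundation A']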

Alongside its official launch, NakedPoppy is announcing its debut original product: liquid eyeliner. The product was screened and tested by a number of clean beauty experts and even a VC: “This is a hero product, no doubt about it,” BBG Ventures’ managing partner Susan Lyne said in a statement. Lyne, of course, is a NakedPoppy angel investor. “Most eyeliners start drying out after a few weeks and get harder to apply. This one is still as supple as the day I got it. It looks natural, lasts all day and washes off easily with soap. It’s pretty perfect.”

For the record, I tried out the NakedPoppy eyeliner too and can attest to its greatness.


NakedPoppy co-founders Jaleh Bisharat (CEO, left) and Kimberly Shenk (CPO, right).

The women behind NakedPoppy, as I alluded to earlier, know what they’re doing. In fact, I’d go as far as to say they could’ve paired their marketing and data science expertise to build just about anything. Makeup, however, was their shared passion.

“For us, it’s a personal passion and an area of information asymmetry, like most people know that with the food you eat, you should try to eat organic or as healthy as you can, but you’d be surprised how few women — they just assume the FDA protects them,” Bisharat said. “The idea is to educate the world and help women move toward new solutions.”

Bisharat got her start in marketing two decades ago. Shortly after the e-commerce giant went public, she served as the vice president of marketing at Amazon. A career peak for many, Bisharat went on to lead marketing efforts at OpenTable, Jawbone, UpWork and, most recently, Eventbrite, where she met Shenk.

Before moving into the private sector, Shenk got her start as a data scientist in the U.S. Air Force, ultimately ending up as the director of data science at the now-public ticketing and events business, Eventbrite.


Bisharat and Shenk remained mum on what marketing tactics they’ll deploy to capture the attention of potential customers. Will they partner with social media influencers to spread the word? Double down on Instagram ads? Open brick-and-mortar shops? They wouldn’t say. Additional original products are definitely in the works, though, as is a foray into skincare and ultimately, a full-fledged dive into all self-care products.

The hope is to make buying clean makeup easy. Historically, the big makeup brands have been owned and operated by one of a dozen or so large companies dominating the space. Increasingly, however, direct-to-consumer brands and startups, most notably Glossier, have attracted customers that prioritize ease-of-access.

As the beauty industry adjusts, an influx of digital-first upstarts, NakedPoppy included, will be poised to steal market share from the long-reigning giants. Perhaps NakedPoppy’s push toward transparency in ingredients and production will encourage the big brands to do the same.

Ethyca raises $4.2M to simplify GDPR compliance

By Ron Miller

GDPR, the European data privacy regulation, has been in effect for over a year, but it’s still a challenge for companies to comply. Ethyca, a New York City startup, has created a solution from the ground up to help customers adhere to the regulations, and today it announced a $4.2 million investment led by IA Ventures and Founder Collective.

Table Management, Sinai Ventures, Cheddar founder Jon Steinberg and Moat co-founder Jonah Goodhart also participated.

At its heart Ethyca is a data platform that helps companies discover sensitive data, then provides a mechanism for customers to see, edit or delete their data from the system. Finally, the solution enables companies to define who can see particular types of data across the organization to control access. All of these components are designed to help companies comply with GDPR regulations.
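As a rough sketch of the see/edit/delete and access-control mechanics described above, here is some hypothetical Python; it is not Ethyca’s actual API, and the roles, fields and records are invented.

records = {
    "cust-42": {"email": "jane@example.com", "city": "Dublin"},
}

# Field-level access policy: which internal roles may read which fields.
policy = {"support": {"city"}, "privacy_team": {"email", "city"}}

def read(role, customer_id):
    allowed = policy.get(role, set())
    return {k: v for k, v in records[customer_id].items() if k in allowed}

def erase(customer_id):
    # The "delete" path of a subject request (GDPR's right to erasure).
    records.pop(customer_id, None)

print(read("support", "cust-42"))       # {'city': 'Dublin'}
print(read("privacy_team", "cust-42"))  # full record
erase("cust-42")
print(records)                          # {}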


Ethyca enterprise transaction log. Screenshot: Ethyca

Company co-founder Cillian Kieran says that the automation component is key and should greatly reduce the complexity and cost associated with complying with GDPR rules. From his perspective, current solutions which involve either expensive consultants or solutions that require some manual intervention, don’t get companies all the way there.

“These solutions don’t actually solve the issue from an infrastructure point of view. I think that’s the distinction. You can go and use the consultants, or you can use a control panel that tells you what you need to do. But ultimately, at some point you’re either going to have to build or deploy code that fixes some issues, or indeed manually manage or remediate those [issues]. Ethyca is designed for that and takes away those risks because it is managing privacy by design at the infrastructure level,” Kieran explained.

If you’re worried about the privacy of providing information like this to a third-party vendor, Kieran says that his company never actually sees the raw data. “We are a suite of tools that sits between business processes. We don’t capture raw data. We don’t see personal information. We find information based on unique identifiers,” he said.

The company has been around for over a year, but has spent that first year developing the solution. He sees this investment as validation of the problem his startup is trying to solve. “I think the investment represents the growing awareness, fundamentally, both within the investor community and in the tech world, that data privacy as a regulatory constraint is real and will compound itself,” he said.

He also points out that GDPR is really just the tip of the privacy regulation iceberg, with laws in Australia, Brazil and Japan, as well as in California and other US states, due to come online next year. He says his solution has been designed to deal with a variety of privacy frameworks beyond GDPR. If that’s so, his company could be in a good position moving forward.

Sony WF-1000XM3 Review: The Perfect Travel Companion

By Adrienne So
The new premium earbuds from Sony have multiple noise-canceling modes, and they switch between them automatically depending on your environment.

Anvyl, looking to help D2C brands manage their supply chain, raises $9.3M

By Jordan Crook

Growing D2C brands face an interesting challenge. While they’ve eliminated much of the hassle of a physical storefront, they must still deal with all the complications involved in managing inventory and manufacturing and shipping a physical product to suppliers.

Anvyl, with a fresh $9.3 million in Series A funding, is looking to jump in and make a difference for those brands. The company, co-founded by chief executive Rodney Manzo, is today announcing the raise, led by Redpoint Ventures, with participation from existing investors First Round Capital and Company Ventures. Angel investors Kevin Ryan (MongoDB and DoubleClick), Ben Kaufman (Quirky and Camp) and Dan Rose (Facebook) also participated in the round.

Manzo hails from Apple, where, with $300 million in spend to manage logistics and supply chain, he was still operating in an Excel spreadsheet. He then went to Harry’s, where he shaved $10 million in cash burn in his first month. He says himself that sourcing, procurement and logistics are in his DNA.

Which brings us to Anvyl. Anvyl looks at every step in the logistics process, from manufacture to arrival at the supplier, and visualizes that migration in an easy-to-understand UI.

The difference between Anvyl and other supply chain logistics companies, such as Flexport, is that Anvyl goes all the way to the very beginning of the supply chain: the factories. The company partners with factories to set up cameras and sensors that let brands see their product actually being built.

“When I was at Apple, I traveled for two years at least once a month to China and Japan just to oversee production,” said Manzo. “To oversee production, you essentially have to be boots on the ground and eyes in the factory. None of our brands have traveled to a factory.”

On the other end of the supply chain, Anvyl lets brands manage suppliers, find new suppliers, submit RFQs, see cost breakdowns and accept quotes.

The company also looks at each step in between, including trucks, trains, boats and planes, so that brands can watch, in real time, as their products go from manufacturing to delivery.
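A tiny, hypothetical Python sketch of that kind of end-to-end milestone tracking (not Anvyl’s actual data model; the stages and events are invented):

STAGES = ["manufacturing", "factory_qa", "ocean_freight",
          "customs", "truck", "delivered"]

# Events reported by factory sensors, freight carriers and so on.
shipment_events = [
    ("manufacturing", "2019-07-01"),
    ("factory_qa", "2019-07-10"),
    ("ocean_freight", "2019-07-14"),
]

def current_stage(events):
    # The furthest stage reached, by order in the canonical pipeline.
    reached = {stage for stage, _ in events}
    return max((s for s in STAGES if s in reached), key=STAGES.index)

print(current_stage(shipment_events))          # 'ocean_freight'
print(f"{len({s for s, _ in shipment_events})}/{len(STAGES)} stages done")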

Anvyl charges brands a monthly fee using a typical SaaS model. On the other end, Anvyl takes a “tiny percentage” of goods being produced within the Anvyl marketplace. The company declined to share actual numbers around pricing.

This latest round brings Anvyl’s total funding to $11.8 million. The company plans to use the funding to hire in engineering and marketing, and to grow its consumer goods customer base.
