SpotQA, a new automated software testing platform that claims to be significantly faster than either manual testing or existing automated QA solutions, has raised $3.25 million in seed funding.
Leading the round is Crane Venture Partners, the recently launched London venture capital firm focused on “intelligent” enterprise startups. Also participating are Forward Ventures, Downing Ventures and Acequia Capital.
SpotQA was founded in 2016 by CEO Adil Mohammed, who sold his previous company to apparel platform Teespring. Its flagship product is dubbed Virtuoso. Described as an “Intelligent Quality Assistance Platform” that uses machine learning and robotic process automation, it claims to speed up the testing of web and mobile apps by up to 25x and to make QA accessible to an entire company, not just software or QA engineers.
“Over the years working closely with engineering teams, I learned how the QA and testing process, when done inefficiently, can be a big barrier for company growth and productivity,” Mohammed tells me. “The way testing is done today is not fit for purpose. Even automated testing methods are not keeping pace with agile development practices”.
This results in software testing creating a bottleneck that prevents companies deploying as fast as they’d like to, says the SpotQA CEO, which is a pain point for all involved, from developers to testers, all the way through to DevOps and production. “It has a real impact on the company’s bottom line,” adds Mohammed.
The incumbent options are either manual testing or traditional automation. Mohammed says manual testing is slow and makes continuous development difficult as there is a constant “disconnect” between QA and other teams. In turn, traditional automation is not very smart and hasn’t seen much innovation in the last decade. “It’s still very code based, relies on expensive automation engineers and it is difficult to set up and maintain,” he argues.
In contrast, SpotQA claims to have designed Virtuoso so that software quality can be ensured across the entire software development lifecycle, something the company has branded “Quality Assistance”.
“By using machine learning and robotic process automation, Virtuoso is by far the most efficient and effective way to ensure bugs, inconsistencies and errors can be identified and fixed in a fraction of the time taken using manual and traditional automated testing,” says Mohammed.
Meanwhile, the London-based company will use the new injection of capital to scale engineering, sales and marketing, and to expand internationally. Existing Virtuoso customers include Experian, Chemistry, Optionis and DXC Technology.
WeTransfer, the Amsterdam-headquartered company best known for its file-sharing service, is disclosing a €35 million secondary funding round.
The investment is led by European growth equity firm, HPE Growth, with “significant” participation from existing investor Highland Europe. Being secondary funding — meaning that a number of shareholders have sold all or a portion of their holding — no new money has entered WeTransfer’s balance sheet.
We are also told that Jonne de Leeuw, of HPE, will replace WeTransfer co-founder Nalden on the company’s Supervisory Board. He joins Bas Beerens (founder of WeTransfer), Irena Goldenberg (Highland Europe) and Tony Zappalà (Highland Europe).
The exact financial terms of the secondary funding, including valuation, aren’t being disclosed. However, it is noteworthy that WeTransfer says it has been profitable for six years.
“The valuation of the company is not public, but what I can tell you is that it’s definitely up significantly since the Series A in 2015,” WeTransfer CEO Gordon Willoughby tells me. “WeTransfer has become a trusted brand in its space with significant scale. Our transfer service has 50 million users a month across 195 countries, sharing over 1.5 billion files each month”.
In addition to the wildly popular WeTransfer file-sharing service, the company operates a number of other apps and services, some it built in-house and others it has acquired. They include content sharing app Collect (claiming 4 million monthly users), sketching tool Paper (which has had 25 million downloads) and collaborative presentation tool Paste (which claims 40,000 active teams).
“We want to help people work more effectively and deliver more impactful results, with tools that collectively remove friction from every stage of the creative process — from sparking ideas, capturing content, developing and aligning, to delivery,” says Willoughby.
“Over the past two years, we’ve been investing heavily in our product development and have grown tremendously following the acquisition of the apps Paper and Paste. This strengthened our product set. Our overarching mission is to become the go-to source for beautiful, intuitive tools that facilitate creativity, rather than distract from it. Of course, our transfer service is still a big piece of that — it’s a brilliantly simple tool that more than 50 million people a month love to use”.
Meanwhile, Willoughby describes WeTransfer’s dual revenue model as “pretty unique”. The company offers a premium subscription service called WeTransfer Plus, and sells advertising in the form of “beautiful” full-screen ads called wallpapers on Wetransfer.com.
“Each piece of creative is fully produced in-house by our creative studio with an uncompromising focus on design and user experience,” explains the WeTransfer CEO. “With full-screen advertising, we find that our users don’t feel they’re simply being sold to. This approach to advertising has been incredibly effective, and our ad performance has far outpaced IAB standards. Our advertising inventory is sought out by brands like Apple, Nike, Balenciaga, Adobe, Squarespace, and Saint Laurent”.
Alongside this, WeTransfer says it allocates up to 30% of its advertising inventory and “billions of impressions” to support and spotlight up-and-coming creatives, and causes, such as spearheading campaigns for social issues.
The company has 185 employees in total, with about 150 in Amsterdam and the rest across its U.S. offices in L.A. and New York.
Privacy researchers in Europe believe they have the first proof that a long-theorised vulnerability is no longer just a theory: systems designed to protect privacy by aggregating data and adding noise to mask individual identities can be attacked in practice.
The research has implications for the immediate field of differential privacy and beyond — raising wide-ranging questions about how privacy is regulated if anonymization only works until a determined attacker figures out how to reverse the method that’s being used to dynamically fuzz the data.
Current EU law doesn’t recognise anonymous data as personal data, although it does treat pseudonymized data as personal data because of the risk of re-identification.
Yet a growing body of research suggests the risk of de-anonymization of high-dimensional data sets is persistent. Even — per this latest research — when a database system has been very carefully designed with privacy protection in mind.
It suggests the entire business of protecting privacy needs to get a whole lot more dynamic to respond to the risk of perpetually evolving attacks.
Academics from Imperial College London and Université Catholique de Louvain are behind the new research.
This week, at the 28th USENIX Security Symposium, they presented a paper detailing a new class of noise-exploitation attacks on Diffix, Aircloak’s query-based database system, which uses aggregation and noise injection to dynamically mask personal data.
On its website Aircloak bills the technology as “the first GDPR-grade anonymization” — aka Europe’s General Data Protection Regulation, which began being applied last year, raising the bar for privacy compliance by introducing a data protection regime that includes fines that can scale up to 4% of a data processor’s global annual turnover.
What Aircloak is essentially offering is to manage GDPR risk by providing anonymity as a commercial service — allowing queries to be run on a data-set that let analysts gain valuable insights without accessing the data itself. The promise being it’s privacy (and GDPR) ‘safe’ because it’s designed to mask individual identities by returning anonymized results.
The problem is personal data that’s re-identifiable isn’t anonymous data. And the researchers were able to craft attacks that undo Diffix’s dynamic anonymity.
“What we did here is we studied the system and we showed that actually there is a vulnerability that exists in their system that allows us to use their system and to send carefully created queries that allow us to extract — to exfiltrate — information from the data-set that the system is supposed to protect,” explains Imperial College’s Yves-Alexandre de Montjoye, one of five co-authors of the paper.
“Differential privacy really shows that every time you answer one of my questions you’re giving me information and at some point — to the extreme — if you keep answering every single one of my questions I will ask you so many questions that at some point I will have figured out every single thing that exists in the database because every time you give me a bit more information,” he says of the premise behind the attack. “Something didn’t feel right… It was a bit too good to be true. That’s where we started.”
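The premise he describes can be sketched with a toy example. The query interface, counts, and noise scale here are invented for illustration (no real product works exactly this way): when noise is fresh and data-independent, simply repeating a question lets an attacker average it away.

```python
import random
import statistics

# Toy illustration of the premise above: a hypothetical query interface
# (not any real product) masks a count with fresh Gaussian noise, but
# repeating the same question lets an attacker average the noise away.

SECRET_TOTAL = 327  # true answer to some sensitive counting query

def noisy_answer(true_value, sigma=20.0):
    """Return the true value plus fresh, data-independent Gaussian noise."""
    return true_value + random.gauss(0, sigma)

# One answer on its own is well masked...
single = noisy_answer(SECRET_TOTAL)

# ...but the average of many answers converges on the hidden total.
answers = [noisy_answer(SECRET_TOTAL) for _ in range(10_000)]
estimate = statistics.mean(answers)
```

With 10,000 queries the standard error of the estimate is sigma divided by the square root of the query count, here about 0.2, so the “hidden” total is recovered almost exactly. This is why practical systems must limit or complicate repeated querying rather than rely on noise alone.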
The researchers chose to focus on Diffix as they were responding to a bug bounty challenge put out by Aircloak.
“We start from one query and then we do a variation of it and by studying the differences between the queries we know that some of the noise will disappear, some of the noise will not disappear and by studying noise that does not disappear basically we figure out the sensitive information,” he explains.
“What a lot of people will do is try to cancel out the noise and recover the piece of information. What we’re doing with this attack is we’re taking it the other way round and we’re studying the noise… and by studying the noise we manage to infer the information that the noise was meant to protect.
“So instead of removing the noise we study statistically the noise sent back that we receive when we send carefully crafted queries — that’s how we attack the system.”
A vulnerability exists because the dynamically injected noise is data-dependent. Meaning it remains linked to the underlying information — and the researchers were able to show that carefully crafted queries can be devised to cross-reference responses that enable an attacker to reveal information the noise is intended to protect.
Or, to put it another way, a well designed attack can accurately infer personal data from fuzzy (‘anonymized’) responses.
This despite the system in question being “quite good,” as de Montjoye puts it of Diffix. “It’s well designed — they really put a lot of thought into this and what they do is they add quite a bit of noise to every answer that they send back to you to prevent attacks”.
“It’s what’s supposed to be protecting the system but it does leak information because the noise depends on the data that they’re trying to protect. And that’s really the property that we use to attack the system.”
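A deliberately simplified toy model can illustrate the property being exploited here: noise that is a deterministic function of the rows it protects. Everything below (the user table, the salary threshold, the hash-seeded noise) is an invented stand-in rather than Diffix’s actual mechanism; the point is only that when two different queries happen to match the same rows, data-dependent noise repeats exactly, and that repetition leaks membership.

```python
import hashlib
import random

# Toy model of a query engine whose noise is seeded by the data itself.
# All names and parameters are invented for illustration; this is NOT
# Diffix's actual design, only the data-dependent-noise property.

USERS = {f"u{i}": {"salary": random.Random(i).randint(20_000, 90_000)}
         for i in range(100)}

def noisy_count(predicate):
    """Count matching users, adding noise seeded by the matching row set."""
    matching = sorted(uid for uid, row in USERS.items() if predicate(uid, row))
    seed = int.from_bytes(
        hashlib.sha256(",".join(matching).encode()).digest()[:8], "big")
    noise = random.Random(seed).gauss(0, 3)  # same rows -> identical noise
    return len(matching) + noise

def target_earns_over(target_id, threshold):
    """Attacker's difference query: if excluding the target changes nothing,
    the row sets (and therefore the noise) are identical, so the two noisy
    answers match exactly and the target is revealed as not matching."""
    a = noisy_count(lambda uid, row: row["salary"] > threshold)
    b = noisy_count(lambda uid, row: row["salary"] > threshold
                    and uid != target_id)
    return a != b
```

With fresh, data-independent noise the two answers would almost always differ regardless of the target, so the comparison would reveal nothing; it is precisely the determinism of data-dependent noise that turns it into a side channel.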
The researchers were able to demonstrate the attack working with very high accuracy across four real-world data-sets. “We tried US census data, we tried credit card data, we tried location,” he says. “What we showed for different data-sets is that this attack works very well.
“What we showed is our attack identified 93% of the people in the data-set to be at risk. And I think more importantly the method actually is very high accuracy — between 93% and 97% accuracy on a binary variable. So if it’s a true or false we would guess correctly between 93-97% of the time.”
They were also able to optimise the attack method so they could exfiltrate information with a relatively low number of queries per user: as few as 32.
“Our goal was how low can we get that number so it would not look like abnormal behaviour,” he says. “We managed to decrease it in some cases up to 32 queries — which is very very little compared to what an analyst would do.”
After disclosing the attack to Aircloak, de Montjoye says the company developed a patch — and is describing the vulnerability as very low risk — but he points out it has yet to publish details of the patch, so its effectiveness cannot be independently assessed.
“It’s a bit unfortunate,” he adds. “Basically they acknowledge the vulnerability [but] they don’t say it’s an issue. On the website they classify it as low risk. It’s a bit disappointing on that front. I think they felt attacked and that was really not our goal.”
For the researchers the key takeaway from the work is that a change of mindset is needed around privacy protection akin to the shift the security industry underwent in moving from sitting behind a firewall waiting to be attacked to adopting a pro-active, adversarial approach that’s intended to out-smart hackers.
“As a community [we need] to really move to something closer to adversarial privacy,” he tells TechCrunch. “We need to start adopting the red team, blue team penetration testing that has become standard in security.
“At this point it’s unlikely that we’ll ever find like a perfect system so I think what we need to do is how do we find ways to see those vulnerabilities, patch those systems and really try to test those systems that are being deployed — and how do we ensure that those systems are truly secure?”
“What we take from this is really — it’s on the one hand we need the security, what can we learn from security including open systems, verification mechanism, we need a lot of pen testing that happens in security — how do we bring some of that to privacy?”
“If your system releases aggregated data and you added some noise this is not sufficient to make it anonymous and attacks probably exist,” he adds.
“This is much better than what people are doing when you take the dataset and you try to add noise directly to the data. You can see why intuitively it’s already much better. But even these systems are still likely to have vulnerabilities. So the question is how do we find a balance, what is the role of the regulator, how do we move forward, and really how do we really learn from the security community?
“We need more than some ad hoc solutions and only limiting queries. Again limiting queries would be what differential privacy would do — but then in a practical setting it’s quite difficult.
“The last bit — again in security — is defence in depth. It’s basically a layered approach — it’s like we know the system is not perfect so on top of this we will add other protection.”
The research raises questions about the role of data protection authorities too.
Aircloak writes on its website that, during Diffix’s development, it worked with France’s DPA, the CNIL, and a private company that certifies data protection products and services — saying: “In both cases we were successful in so far as we received essentially the strongest endorsement that each organization offers.”
Although it also says that experience “convinced us that no certification organization or DPA is really in a position to assert with high confidence that Diffix, or for that matter any complex anonymization technology, is anonymous”, adding: “These organizations either don’t have the expertise, or they don’t have the time and resources to devote to the problem.”
The researchers’ noise exploitation attack demonstrates how even a level of regulatory “endorsement” can look problematic. Even well designed, complex privacy systems can contain vulnerabilities and cannot offer perfect protection.
“It raises a tonne of questions,” says de Montjoye. “It is difficult. It fundamentally asks even the question of what is the role of the regulator here?
“When you look at security my feeling is it’s kind of the regulator is setting standards and then really the role of the company is to ensure that you meet those standards. That’s kind of what happens in data breaches.
“At some point it’s really a question of — when something [bad] happens — whether or not this was sufficient or not as a [privacy] defence, what is the industry standard? It is a very difficult one.”
“Anonymization is baked in the law — it is not personal data anymore so there are really a lot of implications,” he adds. “Again from security we learn a lot of things on transparency. Good security and good encryption relies on open protocol and mechanisms that everyone can go and look and try to attack so there’s really a lot at this moment we need to learn from security.
“There’s not going to be any perfect system. Vulnerabilities will keep being discovered so the question is how do we make sure things are still ok moving forward and really learning from security — how do we quickly patch them, how do we make sure there is a lot of research around the system to limit the risk, to make sure vulnerabilities are discovered by the good guys, these are patched and really [what is] the role of the regulator?
“Data can have bad applications and a lot of really good applications so I think to me it’s really about how to try to get as much of the good while limiting as much as possible the privacy risk.”
US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.
Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.
The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.
Exactly who the witnesses in front of the grand committee will be is yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.
Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.
“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”
“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”
The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.
Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.
A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.
Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand ins.
Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.
Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.
While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.
In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world. As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.
“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”
Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.
Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook-owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.
Tesla CEO Elon Musk tweeted late Wednesday night that Spotify premium integration is “coming.” Musk, who has talked about bringing Spotify to owners in North America before, did not provide a timeline. In other words, the music streaming service could be integrated next week or six months from now.
But still, it’s a moment of celebration for many Tesla owners who have complained about Slacker Radio, the streaming music service integrated into all vehicles in the U.S. and Canada. Owners in Europe, Australia and Hong Kong have had Spotify Premium in their vehicles since late 2015.
Slacker Radio, which launched in 2007, has customizable radio stations based on the listener’s personal music tastes. The free and subscription-based service also tried to differentiate itself from the likes of Spotify and Pandora by using DJs to curate programs and, at one time, even sold a portable music player. Despite its efforts, Slacker has been overshadowed by Spotify, which had 232 million monthly active users and 108 million paying subscribers at the end of June 2019.
Slacker was acquired in 2017 for $50 million in cash and stock by LiveXLive, an entertainment and streaming service that focused on live music performances.
Last year, LiveXLive announced a partnership with Dash Radio, a digital radio broadcasting platform with more than 80 original live stations. Under the deal, Dash channels will be available across Slacker Radio, a move meant to bring more live radio to the streaming service.
A section in Microsoft’s privacy policy on how the company uses personal data now reads (emphasis ours):
Our processing of personal data for these purposes includes both automated and manual (human) methods of processing. Our automated methods often are related to and supported by our manual methods. For example, our automated methods include artificial intelligence (AI), which we think of as a set of technologies that enable computers to perceive, learn, reason, and assist in decision-making to solve problems in ways that are similar to what people do. To build, train, and improve the accuracy of our automated methods of processing (including AI), we manually review some of the predictions and inferences produced by the automated methods against the underlying data from which the predictions and inferences were made. For example, we manually review short snippets of a small sampling of voice data we have taken steps to de-identify to improve our speech services, such as recognition and translation.
Multiple tech giants’ use of human workers to review users’ audio across a number of products involving AI has grabbed headlines in recent weeks after journalists exposed a practice that had not been clearly conveyed to users in terms and conditions — despite European privacy law requiring clarity about how people’s data is used.
Such workers are typically employed to improve the performance of AI systems by verifying translations and speech in different accents. But, again, this human review component within AI systems has generally been buried rather than transparently disclosed.
Earlier this month a German privacy watchdog told Google it intended to use EU privacy law to order it to halt human reviews of audio captured by its Google Assistant AI in Europe — after the press obtained leaked audio snippets and was able to re-identify some of the people in the recordings.
On learning of the regulator’s planned intervention Google suspended reviews.
Apple also announced it was suspending human reviews of Siri snippets globally, again after a newspaper reported that its contractors could access audio and routinely heard sensitive stuff.
Facebook also said it was pausing human reviews of a speech-to-text AI feature offered in its Messenger app — again after concerns had been raised by journalists.
So far Apple, Google and Facebook have suspended or partially suspended human reviews in response to media disclosures and/or regulatory attention.
Meanwhile, the lead privacy regulator for all three, Ireland’s DPC, has started asking questions.
Microsoft told Motherboard it is not suspending human reviews at this stage.
Users of Microsoft’s voice assistant can delete recordings — but such deletions require action from the user and would be needed on a rolling basis for as long as the product remains in use. So it’s not the same as having a full and blanket opt-out.
We’ve asked Microsoft whether it intends to offer Skype or Cortana users an opt out of their recordings being reviewed by humans.
The company told Motherboard it will “continue to examine further steps we might be able to take”.
It’s generally agreed that higher education in the United States has gradually become more and more unaffordable. Students are dependent on external financial resources for which many of them do not even qualify. Students who are able to secure a loan often have to take on debt they can’t really afford. And if they don’t eventually land a job with enough income, they are saddled with that debt for a very long time.
Much of the problem is that most student loan companies are not concerned with the overall financial well-being of their students, who often feel stuck trying to repay a loan they cannot afford, without an organization behind them to help figure it all out. We can see that in the figures. Student loan debt in the US has just reached $1.6 trillion, having more than quadrupled in the last 15 years.
With the student debt crisis getting out of hand, the topic has become a semi-permanent issue in the news.
Launching next week is Blair, a new startup out of the Y Combinator accelerator, which aims to address this seemingly intractable problem.
Blair finances college students through what are called “Income Share Agreements” (ISAs). Students receive funding for their tuition or cost of living and in turn pay back a percentage of their income for a fixed period of time after they graduate. Repayments adjust to individual income circumstances, and payments are deferred in times of low income, protecting students on the downside.
It thus provides students with an alternative to debt that is tailored to their individual circumstances to ensure affordability. Blair’s underwriting process is based on a student’s future potential rather than their credit score or co-signer, which can be a deal-breaker in traditional settings. Blair’s competitors are traditional student lenders: Sallie Mae, SoFi, Earnest, Wells Fargo, Citizens Bank and other banks. ISA companies include Vemo Education, Leif, Almapact, Lumni and Defynance.
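The repayment mechanics described above can be sketched in a few lines. The share rate, income floor, and term used here are illustrative assumptions, not Blair’s actual terms.

```python
# Minimal sketch of income-share-agreement repayment mechanics.
# The share rate, income floor, and term are illustrative assumptions,
# not Blair's actual terms.

def isa_payments(monthly_incomes, share=0.08, income_floor=3_000,
                 term_months=24):
    """Collect `share` of income for up to `term_months` qualifying months.
    Months under `income_floor` are deferred: nothing is paid and the
    month does not count against the term."""
    payments = []
    months_paid = 0
    for income in monthly_incomes:
        if months_paid >= term_months:
            break
        if income < income_floor:
            payments.append(0.0)  # low-income month: payment deferred
        else:
            payments.append(share * income)
            months_paid += 1
    return payments

# Example: two months of job hunting, then a $4,000/month salary.
schedule = isa_payments([0, 0] + [4_000] * 30)
```

In this example the graduate pays nothing during the two low-income months and then roughly $320 per month for 24 qualifying months; deferral stretches the calendar, not the obligation.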
In contrast to traditional student loan companies, Blair’s financial incentives are more closely aligned with those of its students: the idea is that it supports students in improving their employability by placing them in internships early, giving them access to industry mentors and coaching them individually on their career prospects.
The founders came up with the idea from personal experience. Constantin, one of the co-founders, is on an ISA himself, as are many of his friends. The founders ran into the problem of student debt over and over again while studying in the US and noticed a stark difference between their friends in the US and their friends in Germany. A key reason: 40% of the students at their alma maters in Germany use Income Share Agreements to finance their studies. They plan to use that European experience to make ISAs more widespread in the US.
Students apply for funding on the website in minutes and get a personal quote shortly after. If they accept the quote, they receive their funding within a couple of days, which they can use to pay for their tuition or cost of living. Once Blair issues the funding, it crafts a holistic career plan for each individual student and starts supporting them in landing the internships and jobs they want. This includes, for example, optimizing their application documents, preparing them for interviews or connecting them to mentors in their target industry. For context, Blair batches students together into funds and lets external investors invest in those funds.
Blair receives a cut of the student repayments, plus carried interest if a student fund performs better than its target return. Additionally, it partners with companies that hire talent through the platform.
Blair has raised the first fund for 50 students and disbursed money for the first ten. The rest of the students will receive their money in the coming weeks. After YC’s Demo Day, the company will deploy a larger fund that will support 200 additional students.
“Our underwriting model is unique since we have based it on data from concluded ISA funds in European countries,” says cofounder Mike Mahlkow.
“In the last two weeks, we received applications for funding totaling over 4 million dollars. Many of our students come from underprivileged backgrounds, often without any support network. Our goal is to build a human capital platform where individuals can access capital based on their future potential instead of their past, and investors can participate in the upside potential of individuals in an ethical way,” he adds.
WebKit, the open source engine that underpins browsers including Apple’s Safari, has announced a new tracking prevention policy that takes the strictest line yet on the background and cross-site tracking practices and technologies used to creep on Internet users as they go about their business online.
Trackers are technologies that are invisible to the average web user, yet which are designed to keep tabs on where they go and what they look at online — typically for ad targeting but web user profiling can have much broader implications than just creepy ads, potentially impacting the services people can access or the prices they see, and so on. Trackers can also be a conduit for hackers to inject actual malware, not just adtech.
This translates to stuff like tracking pixels; browser and device fingerprinting; and navigational tracking, to name just a few of the myriad methods that have sprouted like weeds from an unregulated digital adtech industry that’s poured vast resources into ‘innovations’ intended to strip web users of their privacy.
WebKit’s new policy is essentially saying enough: Stop the creeping.
But — and here’s the shift — it’s also saying it’s going to treat attempts to circumvent its policy as akin to malicious hack attacks to be responded to in kind; i.e. with privacy patches and fresh technical measures to prevent tracking.
“WebKit will do its best to prevent all covert tracking, and all cross-site tracking (even when it’s not covert),” the organization writes (emphasis its own), adding that these goals will apply to all types of tracking listed in the policy — as well as “tracking techniques currently unknown to us”.
“If we discover additional tracking techniques, we may expand this policy to include the new techniques and we may implement technical measures to prevent those techniques,” it adds.
“We will review WebKit patches in accordance with this policy. We will review new and existing web standards in light of this policy. And we will create new web technologies to re-enable specific non-harmful practices without reintroducing tracking capabilities.”
Spelling out its approach to circumvention, it states in no uncertain terms: “We treat circumvention of shipping anti-tracking measures with the same seriousness as exploitation of security vulnerabilities,” adding: “If a party attempts to circumvent our tracking prevention methods, we may add additional restrictions without prior notice. These restrictions may apply universally; to algorithmically classified targets; or to specific parties engaging in circumvention.”
It also says that if a certain tracking technique cannot be completely prevented without causing knock-on effects with webpage functions the user does intend to interact with, it will “limit the capability” of using the technique — giving examples such as “limiting the time window for tracking” and “reducing the available bits of entropy” (i.e. limiting how many unique data points are available to be used to identify a user or their behavior).
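To see why “bits of entropy” matter for fingerprinting, consider how seemingly harmless browser attributes combine. The sketch below is illustrative only — the attribute names and value counts are invented, not taken from WebKit’s policy — but it shows why capping the bits each API exposes blunts fingerprinting:

```python
import math

def entropy_bits(num_possible_values: int) -> float:
    """Bits of identifying information revealed by one attribute,
    assuming values are uniformly distributed (a simplification)."""
    return math.log2(num_possible_values)

# Hypothetical fingerprinting surface: each attribute alone seems harmless.
attributes = {
    "timezone": 24,           # ~24 plausible UTC offsets
    "screen_resolution": 50,  # ~50 common resolutions
    "installed_fonts": 200,   # distinguishable font sets via probing
    "language": 40,           # common locale strings
}

# Independent attributes combine: their entropy simply adds up.
total_bits = sum(entropy_bits(n) for n in attributes.values())
print(f"combined entropy: {total_bits:.1f} bits")  # → combined entropy: 23.2 bits

# ~23 bits already narrows a visitor to roughly one in 2**23 (~8 million)
# users — hence "reducing the available bits of entropy" per API directly
# reduces how uniquely a browser can be identified.
```

The design point: because independent entropy sources are additive, trimming even a few bits from each API compounds into a much larger anonymity set.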
If even that’s not possible “without undue user harm” it says it will “ask for the user’s informed consent to potential tracking”.
“We consider certain user actions, such as logging in to multiple first party websites or apps using the same account, to be implied consent to identifying the user as having the same identity in these multiple places. However, such logins should require a user action and be noticeable by the user, not be invisible or hidden,” it further warns.
WebKit credits Mozilla’s anti-tracking policy as inspiring and underpinning its new approach.
Commenting on the new policy, Dr Lukasz Olejnik, an independent cybersecurity advisor and research associate at the Center for Technology and Global Affairs at Oxford University, says it marks a milestone in the evolution of how user privacy is treated in the browser — setting it on the same footing as security.
Equating circumvention of anti-tracking with security exploitation is unprecedented. This is exactly what we need to treat privacy as first class citizen. Enough with hand-waving. It's making technology catch up with regulations (not the other way, for once!) #ePrivacy #GDPR https://t.co/G1Dx7F2MXu
— Lukasz Olejnik (@lukOlejnik) August 15, 2019
“Treating privacy protection circumventions on par with security exploitation is a first of its kind and unprecedented move,” he tells TechCrunch. “This sends a clear warning to the potential abusers but also to the users… This is much more valuable than the still typical approach of ‘we treat the privacy of our users very seriously’ that some still think is enough when it comes to user expectation.”
Asked how he sees the policy impacting pervasive tracking, Olejnik does not predict an instant, overnight purge of unethical tracking of users of WebKit-based browsers but argues there will be less room for consent-less data-grabbers to manoeuvre.
“Some level of tracking, including with unethical technologies, will probably remain in use for the time being. But covert tracking is less and less tolerated,” he says. “It’s also interesting if any decisions will follow, such as for example the expansion of bug bounties to reported privacy vulnerabilities.”
“How this policy will be enforced in practice will be carefully observed,” he adds.
As you’d expect, he credits not just regulation but the role played by active privacy researchers in helping to draw attention and change attitudes towards privacy protection — and thus to drive change in the industry.
There’s certainly no doubt that privacy research is a vital ingredient for regulation to function in such a complex area — feeding complaints that trigger scrutiny that can in turn unlock enforcement and force a change of practice.
Although that’s also a process that takes time.
“The quality of cybersecurity and privacy technology policy, including its communication still leave much to desire, at least at most organisations. This will not change fast,” says Olejnik. “Even if privacy is treated at the ‘C-level’, this then still tends to be about the purely risk of compliance. Fortunately, some important industry players with good understanding of both technology policy and the actual technology, even the emerging ones still under active research, treat it increasingly seriously.
“We owe it to the natural flow of the privacy research output, the talent inflows, and the slowly moving strategic shifts as well to a minor degree to the regulatory pressure and public heat. This process is naturally slow and we are far from the end.”
For its part, WebKit has been taking aim at trackers for several years now, adding features intended to reduce pervasive tracking — such as, back in 2017, Intelligent Tracking Prevention (ITP), which uses machine learning to squeeze cross-site tracking by putting more limits on cookies and other website data.
Apple immediately applied ITP to its desktop Safari browser — drawing predictable fast-fire from the Internet Advertising Bureau whose membership is comprised of every type of tracker deploying entity on the Internet.
But it’s the creepy trackers that are looking increasingly out of step with public opinion. And, indeed, with the direction of travel of the industry.
In Europe, regulation can be credited with actively steering developments too — following last year’s application of a major update to the region’s comprehensive privacy framework (which finally brought the threat of enforcement that actually bites). The General Data Protection Regulation (GDPR) has also increased transparency around security breaches and data practices. And, as always, sunlight disinfects.
Although there remains the issue of abuse of consent for EU regulators to tackle — with research suggesting many regional cookie consent pop-ups currently offer users no meaningful privacy choices despite GDPR requiring consent to be specific, informed and freely given.
It also remains to be seen how the adtech industry will respond to background tracking being squeezed at the browser level. Continued aggressive lobbying to try to water down privacy protections seems inevitable — if ultimately futile. And perhaps, in Europe in the short term, there will be attempts by the adtech industry to funnel more tracking via cookie ‘consent’ notices that nudge or force users to accept.
As the security space underlines, humans are always the weakest link. So privacy-hostile social engineering might be the easiest way for adtech interests to keep overriding user agency and grabbing their data anyway. Stopping that will likely need regulators to step in and intervene.
Another question thrown up by WebKit’s new policy is which way Chromium will jump, aka the browser engine that underpins Google’s hugely popular Chrome browser.
Of course Google is an ad giant, and parent company Alphabet still makes the vast majority of its revenue from digital advertising — so it maintains a massive interest in tracking Internet users to serve targeted ads.
Yet Chromium developers did pay early attention to the problem of unethical tracking. Here, for example, are two of them discussing potential future work to combat tracking techniques designed to override privacy settings, in a blog post from nearly five years ago.
There have also been much more recent signs of Google paying attention to Chrome users’ privacy, such as changes to how it handles cookies, which it announced earlier this year.
But with WebKit now raising the stakes — by treating privacy as seriously as security — that puts pressure on Google to respond in kind. Or risk being seen as using its grip on browser marketshare to foot-drag on baked in privacy standards, rather than proactively working to prevent Internet users from being creeped on.
Flatfair, a London-based fintech that lets landlords offer “deposit-free” renting to tenants, has raised $11 million in funding.
The Series A round is led by Index Ventures, with participation from Revolt Ventures, Adevinta, Greg Marsh (founder of Onefinestay), Jeremy Helbsy (former Savills CEO) and Taavet Hinrikus (TransferWise co-founder).
With the new capital, Flatfair says it plans to hire a “significant” number of product engineers, data scientists and business development specialists.
The startup will also invest in building out new features as it looks to expand its platform with “a focus on making renting fairer and more transparent for landlords and tenants.”
“With the average deposit of £1,110 across England and Wales being just shy of the national living wage, tenants struggle to pay expensive deposits when moving into their new home, often paying double deposits in between tenancies,” Flatfair co-founder and CEO Franz Doerr tells me when asked to frame the problem the startup has set out to solve.
“This creates cash flow issues for tenants, in particular for those with families. Some tenants end up financing the deposit through friends and family or even accrue expensive credit card debt. The latter can have a negative impact on the tenant’s credit rating, further restricting important access to credit for things that really matter in a tenant’s life.”
To remedy this, Flatfair’s “insurance-backed” payment technology provides tenants with the option to pay a per-tenancy membership fee instead of a full deposit. They do this by authorising their bank account via debit card with Flatfair, and when it is time to move out, any end-of-tenancy charges are handled via the Flatfair portal, including dispute resolution.
So, for example, rather than having to find a rental deposit equivalent to a month’s rent, which in theory you would get back once you move out sans any end-of-tenancy charges, with Flatfair you pay about a quarter of that as a non-refundable fee.
Of course, there are pros and cons to both, but for tenants that are cashflow restricted, the startup’s model at least offers an alternative financing option.
In addition, tenants registered with Flatfair are given a “trust score” that can go up over time, helping them move tenancy more easily in the future. The company is also trialing the use of Open Banking to help with credit checks, by analysing transaction history to verify that a tenant has paid rent regularly and on time in the past.
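A rent-regularity check of this kind could, in principle, be as simple as scanning bank transactions for recurring outgoing payments. The sketch below is a hypothetical illustration, not Flatfair’s actual logic — the transaction format, keyword matching, and threshold are all assumptions:

```python
from datetime import date

# Hypothetical transaction records, loosely shaped like what an Open
# Banking API might return (illustrative only).
transactions = [
    {"date": date(2019, m, 1), "amount": -950.0, "reference": "RENT ACME LETTINGS"}
    for m in range(1, 7)  # six consecutive months of rent payments
]

def paid_rent_regularly(txns, keyword="RENT", months_required=6):
    """Count distinct months containing an outgoing payment whose
    reference mentions the keyword; require a minimum number of them."""
    rent_months = {
        (t["date"].year, t["date"].month)
        for t in txns
        if t["amount"] < 0 and keyword in t["reference"].upper()
    }
    return len(rent_months) >= months_required

print(paid_rent_regularly(transactions))  # → True
```

A real implementation would need fuzzier matching (references vary by landlord and bank) and amount consistency checks, but the principle — deriving creditworthiness signals from verified payment history rather than a credit file — is the same.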
Landlords are said to like the model. Current Flatfair clients include major property owners and agents, such as Greystar, Places for People and CBRE. “Before Flatfair, deposits were the only form of tenancy security that landlords trusted,” claims Doerr.
In the event of a dispute over end-of-tenancy charges, both landlords and tenants are asked to upload evidence to the Flatfair platform and to try to settle the disagreement amicably. If they can’t, the case is referred by Flatfair to an independent adjudicator via mydeposits, a U.K. government-backed deposit scheme with which the company is partnering.
“In such a case, all the evidence is submitted to mydeposits and they come back with a decision within 24 hours,” explains Doerr. “[If] the adjudicator says that the tenant owes money, we invoice the tenant who then has five days to pay. If the tenant doesn’t pay, we charge their bank account… What’s key here is having the evidence. People are generally happy to pay if the costs are fair and where clear evidence exists, there’s less to argue about.”
More broadly, Doerr says there’s significant scope for digitisation across the buy-to-let sector and that the big vision for Flatfair is to create an “operating system” for rentals.
“The fundamental idea is to streamline processes around the tenancy to create revenue and savings opportunities for landlords and agents, whilst promoting a better customer experience, affordability and fairness for tenants,” he says.
“We’re working on a host of exciting new features that we’ll be able to talk about in the coming months, but we see opportunities to automate more functions within the life cycle of a tenancy and think there are a number of big efficiency savings to be made by unifying old systems, dumping old paper systems and streamlining cumbersome admin. Offering a scoring system for tenants is a great way of encouraging better behaviour and, given housing represents most people’s biggest expense, it’s only right renters should be able to build up their credit score and benefit from paying on time.”
U.S. stock markets plummeted today as recession fears continue to grow.
Yesterday’s good news about a reprieve on tariffs for U.S. consumer imports was undone by increasing concerns over economic indicators pointing to a potential global recession coming within the next year.
The Dow Jones Industrial Average dropped more than 800 points on Wednesday — its largest decline of the year — while the S&P 500 fell by 85 points and the tech-heavy Nasdaq dropped 240 points.
The downturn in the markets came a day after the Dow closed up 373 points after the U.S. Trade Representative announced a delay in many of the import taxes the Trump administration planned to impose on Chinese goods.
In the U.S., the trigger was news that the yield on 10-year U.S. Treasury notes had dipped below the yield on two-year notes — an indicator that investors think the short-term prospects for the country’s economy are worse than the long-term outlook, which is why yields climb higher for short-term investments.
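The inversion signal itself is just a sign check on the spread between the two maturities. The yields below are hypothetical placeholders, not the actual market figures from that day:

```python
# Illustrative only: hypothetical yields, expressed in percent.
ten_year_yield = 1.58
two_year_yield = 1.60

# The "10y-2y spread" is the classic recession-watch indicator:
# a negative value means the curve is inverted.
spread = ten_year_yield - two_year_yield
inverted = spread < 0

print(f"10y-2y spread: {spread:+.2f} pp, inverted: {inverted}")
# → 10y-2y spread: -0.02 pp, inverted: True
```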
China’s industrial and retail sectors both slowed significantly in July. Industrial production, including manufacturing, mining and utilities, grew by 4.8% in July (a steep decline from 6.3% growth in June). Meanwhile, retail sales in the country slowed to 7.6%, down from 9.8% in June.
Germany also posted declines over the summer months, indicating that its economy had contracted by 0.1% in the three months leading to June.
Globally, the protracted trade war between the U.S. and China is weighing on economies — as are concerns about what a hard Brexit would mean for the economies of the European Union.
The stocks of Alphabet, Amazon, Apple, Facebook, Microsoft, Netflix and Salesforce were all off by somewhere between 2.5% and 4.5% in today’s trading.
New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.
As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.
But many don’t — even now, more than a year later.
The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)
The researchers conclude that if consent to drop cookies was being collected in a way that’s compliant with the EU’s existing privacy laws only a tiny fraction of consumers would agree to be tracked.
The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.
The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play in order to paint a picture of current implementations.
They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs, which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.
Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.
A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.
The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.
Even so, since May last year there has been an outgrowth of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.
“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.
“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”
The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.
This is where consent gets manipulated — to flip visitors’ preference for privacy.
“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”
Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:
Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.
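The “CV=0.50” effect size the paper reports is most likely Cramér’s V, a standard measure of association between two categorical variables derived from a chi-squared statistic. The sketch below shows how such a figure is computed from a contingency table — the counts here are invented for illustration and do not reproduce the paper’s data:

```python
import math

def cramers_v(table):
    """Cramér's V effect size for a 2D contingency table (list of rows)."""
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    # Pearson chi-squared: sum of (observed - expected)^2 / expected.
    chi2 = sum(
        (table[i][j] - row_totals[i] * col_totals[j] / n) ** 2
        / (row_totals[i] * col_totals[j] / n)
        for i in range(len(table))
        for j in range(len(table[0]))
    )
    k = min(len(table), len(table[0]))  # smaller table dimension
    return math.sqrt(chi2 / (n * (k - 1)))

# Made-up counts: rows = (nudged, not nudged), cols = (accepted, declined).
table = [[508, 492],
         [211, 789]]
print(f"V = {cramers_v(table):.2f}")  # → V = 0.31
```

V ranges from 0 (no association) to 1 (perfect association), so the 0.50 reported in the study indicates that whether the interface nudged the user was strongly associated with what they chose.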
The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).
That rises only fractionally, to between 1% and 4%, for users who would enable some cookie categories in the same privacy-by-default scenario.
“Our results… indicate that the privacy-by-default and purpose-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1 % of active consent for the use of third parties,” they write in conclusion.
They do flag some limitations with the study, pointing out that the dataset that arrived at the 0.1% figure is biased — given the nationality of visitors is not generally representative of public Internet users, as well as the data being generated from a single retail site. But they supplemented their findings with data from a company (Cookiebot) which provides cookie notices as a SaaS — saying its data indicated a higher ‘accept all’ click rate, though still only marginally higher: just 5.6%.
Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.
This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.
Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.
It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.
You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)
Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.
For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)
While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.
(Those of you reading TechCrunch back in January 2018 may also remember this sage plain English advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)
Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.
Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately after GDPR started being applied in May last year.
While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.
So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.
But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)
The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.
However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.
Here’s what the ICO says on Recital 25 of the ePrivacy Directive:
So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.
It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor in the costs of data protection compliance to their calculations — as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)
Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.
“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused by the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subject to a lot of ‘consent theatre’ (i.e. noisy yet non-compliant cookie notices) — which in turn is causing knock-on problems of consumer mistrust and consent fatigue for all these useless pop-ups. Which work against the core aims of the EU’s data protection framework.
“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”
“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies.
Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at a browser level — rather than the current mess and chaos of perpetual, confusing and often non-compliant per site pop-ups. Although he points out some caveats.
“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.
“And the conflict of interest for Google with Chrome are obvious.”
The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.
At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.
They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.
Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).
And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.
Much has been made at the highest levels in Europe of being able to point across the Atlantic, while EU policymakers savor the schadenfreude of seeing their US counterparts, caught on the hop by ongoing tech privacy and security scandals, forced to ask publicly whether it’s time for America to have its own GDPR.
With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.
However they are also aware the world is watching closely and critically — which makes enforcement a very key piece. It must slot in too. They need the GDPR to work on paper and be seen to be working in practice.
So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.
A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”
“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added.
“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”
All of which suggests that the movement, when it comes, must come from a reforming adtech industry.
With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.
GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.
The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.
You may not have heard of Kobalt before, but you probably engage with the music it oversees every day, if not almost every hour. Combining a technology platform to better track ownership rights and royalties of songs with a new approach to representing musicians in their careers, Kobalt has risen from the ashes of the 2000 dot-com bubble to become a major player in the streaming music era. It is the leading alternative to incumbent music publishers (who represent songwriters) and is building a new model record label for the growing “middle class” of musicians around the world who are stars within niche audiences.
Having predicted music’s digital upheaval early, Kobalt has taken off as streaming music has gone mainstream across the US, Europe, and East Asia. In the final quarter of last year, it represented the artists behind 38 of the top 100 songs on U.S. radio.
Along the way, it has secured more than $200 million in venture funding from investors like GV, Balderton, and Michael Dell, and its valuation was last pegged at $800 million. It confirmed in April that it is raising another $100 million to boot. Kobalt Music Group now employs over 700 people in 14 offices, and GV partner Avid Larizadeh Duggan even left her firm to become Kobalt’s COO.
How did a Swedish saxophonist from the 1980s transform into a leading entrepreneur in music’s digital transformation? Why are top technology VCs pouring money into a company that represents a roster of musicians? And how has the rise of music streaming created an opening for Kobalt to architect a new approach to the way the industry works?
Gaining an understanding of Kobalt and its future prospects is a vehicle for understanding the massive change underway across the global music industry right now and the opportunities that it is and isn’t creating for entrepreneurs.
This article is Part 1 of the Kobalt EC-1, focused on the company’s origin story and growth. Part 2 will look at the company’s journey to create a new model for representing songwriters and tracking their ownership interests through the complex world of music royalties. Part 3 will look at Kobalt’s thesis about the rise of a massive new middle class of popular musicians and the record label alternative it is scaling to serve them.
It’s tough to imagine a worse year to launch a music company than 2000. Willard Ahdritz, a Swede living in London, left his corporate consulting job and sold his home for £200,000 to fully commit to his idea of a startup collecting royalties for musicians. In hindsight, his timing was less than impeccable: he launched Kobalt just as Napster and music piracy exploded onto the mainstream and mere months before the dot-com crash would wipe out much of the technology industry.
The situation was dire, and even his main seed investor told him he was doomed once the market crashed. “Eating an egg and ham sandwich…have you heard this saying? The chicken is contributing but the pig is committed,” Ahdritz said when we first spoke this past April (he has an endless supply of sayings). “I believe in that — to lose is not an option.”
Entrepreneurial hardship, though, is something that Ahdritz had early experience with. Born in Örebro, a city of 100,000 people in the middle of Sweden, Ahdritz spent a lot of time as a kid playing in the woods, while also nurturing dual interests in music and engineering. Those two interests converged in the synthesizer revolution of early electronic music, and he was fascinated by bands like Kraftwerk.
Amazon’s lead data regulator in Europe, Luxembourg’s National Commission for Data Protection, has raised privacy concerns about its use of manual human reviews of Alexa AI voice assistant recordings.
A spokesman for the regulator confirmed in an email to TechCrunch it is discussing the matter with Amazon, adding: “At this stage, we cannot comment further about this case as we are bound by the obligation of professional secrecy.” The development was reported earlier by Reuters.
We’ve reached out to Amazon for comment.
Amazon’s Alexa voice AI, which is embedded in a wide array of hardware — from the company’s own brand Echo smart speaker line to an assortment of third party devices (such as this talkative refrigerator or this oddball table lamp) — listens pervasively for a trigger word which activates a recording function, enabling it to stream audio data to the cloud for processing and storage.
However, trigger-word activated voice AIs have been shown to be prone to accidental activation, and a device may be being used in a multi-person household. So there’s always a risk of these devices recording any audio in their vicinity, not just intentional voice queries.
In a nutshell, the AIs’ inability to distinguish between intentional interactions and stuff they overhear means they are natively prone to eavesdropping — hence the major privacy concerns.
These concerns have been dialled up by recent revelations that tech giants — including Amazon, Apple and Google — use human workers to manually review a proportion of audio snippets captured by their voice AIs, typically for quality purposes, such as trying to improve the performance of voice recognition across different accents or environments. But that means actual humans are listening to what might be highly sensitive personal data.
Earlier this week Amazon quietly added an option to the settings of the Alexa smartphone app to allow users to opt out of their audio snippets being added to a pool that may be manually reviewed by people doing quality control work for Amazon — having not previously informed Alexa users of its human review program.
The policy shift followed rising attention on the privacy of voice AI users — especially in Europe.
Last month thousands of recordings of users of Google’s AI assistant were leaked to the Belgian media which was able to identify some of the people in the clips.
A data protection watchdog in Germany subsequently ordered Google to halt manual reviews of audio snippets.
Google responded by suspending human reviews across Europe, while its lead data watchdog in the region, the Irish DPC, told us it’s “examining” the issue.
Separately, in recent days, Apple has also suspended human reviews of Siri snippets — doing so globally, in its case — after a contractor raised privacy concerns in the UK press over what Apple contractors are privy to when reviewing Siri audio.
The Hamburg data protection agency which intervened to halt human reviews of Google Assistant snippets urged its fellow EU privacy watchdogs to prioritize checks on other providers of language assistance systems — and “implement appropriate measures” — naming both Apple and Amazon.
In the case of Amazon, scrutiny from European watchdogs looks to be fast dialling up.
At the time of writing it is the only one of the three tech giants not to have suspended human reviews of voice AI snippets, either regionally or globally.
In a statement provided to the press at the time it changed Alexa settings to offer users an opt-out from the chance of their audio being manually reviewed, Amazon said:
We take customer privacy seriously and continuously review our practices and procedures. For Alexa, we already offer customers the ability to opt-out of having their voice recordings used to help develop new Alexa features. The voice recordings from customers who use this opt-out are also excluded from our supervised learning workflows that involve manual review of an extremely small sample of Alexa requests. We’ll also be updating information we provide to customers to make our practices more clear.
Make way for another antitrust investigation into big tech. Step forward Russia’s Federal Antimonopoly Service (FAS), which has opened an official probe of Apple — following a complaint lodged in March by security company Kaspersky Labs.
Kaspersky’s complaint to FAS followed a change in Apple’s policy towards a parental control app it offers, called Kaspersky Safe Kids. Discussing the complaint in a blog post the security firm says Apple contacted it in 2017 to inform it that the use of configuration profiles is against App Store policy, even though the app had been on Apple’s store for nearly three years without it raising any objections.
Apple told Kaspersky to remove configuration profiles from the app — which it says would require it to remove two key features that make it useful to parents: namely, app control and Safari browser blocking.
It also points out that the timing of Apple’s objection followed Apple announcing its Screen Time feature, in iOS 12 — which allows iOS users to monitor the amount of time they spend using certain apps or on certain websites and set time restrictions. Kaspersky argues Screen Time is “essentially Apple’s own app for parental control” — hence raising concerns about the potential for Apple to exert unfair market power over the store it also operates by restricting competition.
We’ve reached out to Apple for comment on the FAS investigation. The company referred Reuters to a statement it made in April about its policy towards parental control apps, following other complaints.
In the statement Apple says it removed several such apps from the App Store because they “put users’ privacy and security at risk” — calling out the use of what it described as “a highly invasive technology called Mobile Device Management” (MDM).
But Kaspersky claims its app does not, and never did, use MDM.
Following complaints and some press attention to Apple’s parental control apps crackdown, the company appears to have softened its stance on MDM for this specific use-case — updating the App Store Review Guidelines to allow using MDM for parental controls in limited cases.
Kaspersky also says that the Apple Developer Enterprise Program License Agreement “clarifies that the use of MDM-profiles and configuration profiles in applications for home users is only possible with the explicit written consent of Apple”.
However it argues that Apple’s updated rules and restrictions still “do not provide clear criteria allowing the usage of these profiles, as well as information on meeting the criteria, which is needed for obtaining written consent from Apple to use them”. Hence it’s not willing to drop its complaint yet.
It says it’s also continuing to prepare to file an antitrust complaint over the same issue in Europe — where a separate competition-related complaint was recently filed against Apple by the music service Spotify.
Kaspersky says now that only official written confirmation from Apple — of “the applicability of the new p.5.5. “App Store Review Guidelines” for Kaspersky Safe Kids for iOS” — will stay its complaint.
Russia’s FAS has shown itself to be relatively alacritous at handling big tech antitrust complaints — slapping Google with an order against bundling its services with Android back in 2015, a few months after local search giant Yandex had filed a complaint.
It took the European Union’s competition regulator several more years before arriving at a similar conclusion vis-a-vis Google’s competition-blocking Android bundling.
The UK government has announced it’s rerouting £250M (~$300M) in public funds for the country’s National Health Service (NHS) to set up an artificial intelligence lab that will work to expand the use of AI technologies within the service.
The Lab, which will sit within a new NHS unit tasked with overseeing the digitisation of the health and care system (aka: NHSX), will act as an interface for academic and industry experts, including potentially startups, encouraging research and collaboration with NHS entities (and data) — to drive health-related AI innovation and the uptake of AI-driven healthcare within the NHS.
Last fall the then newly appointed health secretary, Matt Hancock, set out a tech-first vision of future healthcare provision — saying he wanted to transform NHS IT so it can accommodate “healthtech” to support “preventative, predictive and personalised care”.
In a press release announcing the AI lab, the Department of Health and Social Care suggested it would seek to tackle “some of the biggest challenges in health and care, including earlier cancer detection, new dementia treatments and more personalised care”.
Other areas of focus were also suggested.
Google-owned UK AI specialist DeepMind has been an early mover in some of these areas — inking a partnership with a London-based NHS trust in 2015 to develop a clinical task management app called Streams that’s been rolled out to a number of NHS hospitals.
UK startup, Babylon Health, is another early mover in AI and app-based healthcare, developing a chatbot-style app for triaging primary care which it sells to the NHS. (Hancock himself is a user.)
In the case of DeepMind, the company also hoped to use the same cache of NHS data it obtained for Streams to develop an AI algorithm for earlier detection of a condition called acute kidney injury (AKI).
However the data-sharing partnership ran into trouble when concerns were raised about the legal basis for reusing patient data to develop AI. And in 2017 the UK’s data watchdog found DeepMind’s partner NHS trust had failed to obtain proper consents for the use of patients’ data.
DeepMind subsequently announced its own AI model for predicting AKI — trained on heavily skewed US patient data. It has also inked some AI research partnerships involving NHS patient data — such as this one with Moorfields Eye Hospital, aiming to build AIs to speed up predictions of degenerative eye conditions.
But an independent panel of reviewers engaged to interrogate DeepMind’s health app business raised early concerns about monopoly risks attached to NHS contracts that lock trusts to using its infrastructure for delivering digital healthcare.
Where healthcare AIs are concerned, representative clinical data is the real goldmine — and it’s the NHS that owns that.
So, provided NHSX properly manages the delivery infrastructure for future digital healthcare — to ensure systems adhere to open standards, and no single platform giant is allowed to lock others out — Hancock’s plan to open up NHS IT to the next wave of health-tech could deliver a transformative and healthy market for AI innovation that benefits startups and patients alike.
Commenting on the launch of the AI lab in a statement, Hancock said: “We are on the cusp of a huge health tech revolution that could transform patient experience by making the NHS a truly predictive, preventive and personalised health and care service.
“I am determined to bring the benefits of technology to patients and staff, so the impact of our NHS Long Term Plan and this immediate, multimillion pound cash injection are felt by all. It’s part of our mission to make the NHS the best it can be.
“The experts tell us that because of our NHS and our tech talent, the UK could be the world leader in these advances in healthcare, so I’m determined to give the NHS the chance to be the world leader in saving lives through artificial intelligence and genomics.”
Simon Stevens, CEO of NHS England, added: “Carefully targeted AI is now ready for practical application in health services, and the investment announced today is another step in the right direction to help the NHS become a world leader in using these important technologies.
“In the first instance it should help personalise NHS screening and treatments for cancer, eye disease and a range of other conditions, as well as freeing up staff time, and our new NHS AI Lab will ensure the benefits of NHS data and innovation are fully harnessed for patients in this country.”
Most entrepreneurs who have tried to compete with Netflix have failed. But Efe Cakarel isn’t one of them. As the founder and CEO of Mubi, he has created a beloved movie streaming service. That’s why I’m excited to announce that Mubi founder Efe Cakarel is joining us at TechCrunch Disrupt Berlin.
Mubi has been around for more than a decade. When it launched, Netflix was just starting its on-demand streaming service and was still mostly a DVD rental company.
Instead of focusing on quantity and mainstream content, Mubi went the opposite direction with a subscription tailored for cinephiles. Every day, Mubi adds a new movie to its catalog. It remains available for 30 days before it disappears from the service.
With this rolling window of 30 movies, there’s always something new, something interesting. The limited selection has become an asset as you can take time to read about each movie and watch things you would have never considered watching on a service with thousands of titles.
More recently, the company started purchasing exclusive distribution rights and even producing its own original content. The service is available in most countries around the world.
And yet, Mubi is still around after all those years. I’m personally impressed by Cakarel’s resilience and I can’t wait to see what’s next for the company.
Buy your ticket to Disrupt Berlin to listen to this discussion and many others. The conference will take place on December 11-12.
In addition to panels and fireside chats, like this one, new startups will participate in the Startup Battlefield to compete for the highly coveted Battlefield Cup.
Robinhood, the Silicon Valley-based stock trading app that was recently valued by investors at $7.6 billion, has received regulatory approval in the U.K., breaking cover on its plans to set up shop in London (as reported exclusively by TechCrunch 7 months ago).
Specifically, Robinhood International Ltd., a Robinhood subsidiary, has been authorised to operate as a broker (with some restrictions) in the U.K. by the Financial Conduct Authority, which regulates U.K. financial services. This gears Robinhood up for a U.K. launch, although the company is staying tightlipped on when exactly that will be.
In addition, Robinhood is disclosing that it has appointed Wander Rutgers as President of Robinhood International. He joins from London fintech Plum, where he headed up the startup’s investing and savings product, and prior to that is said to have led product, compliance and operations teams at TransferWise.
At Robinhood, Rutgers will lead the U.K. business and oversee the company’s new London office, which has already begun staffing up. Sources told me in April that Robinhood was busy hiring for multiple U.K. positions, including recruitment, operations, marketing/PR, customer support, compliance and product.
The company tells me it is also building out a London-based user research team so it can better find product-market fit here. Crudely building a localised version of Robinhood obviously won’t cut it.
Meanwhile, news that Robinhood is ramping up its planned U.K. launch is interesting in the context of local fintech startups that have launched their own fee-free trading offerings.
First out of the gate was London-based Freetrade, which chose very early on to build a bona-fide “challenger broker,” including obtaining the required license from the FCA, rather than simply partnering with an established broker. The app lets you invest in stocks and ETFs. Trades are “fee-free” if you are happy for your buy or sell trades to execute at the close of business each day. If you want to execute immediately, the startup charges a low £1 per trade.
And just last week, Revolut finally launched its fee-free stock trading feature, albeit tentatively. For now, the feature is limited to some Revolut customers with a premium Metal card (which itself entails a monthly subscription fee) and covers 300 U.S.-listed stocks. The company says that it plans to expand to U.K. and European stocks as well as Exchange Traded Funds in the future. Notably, my understanding is that Revolut doesn’t have its own broker license but is partnering with US broker DriveWealth for part of its tech and the required regulatory authorisation (it also explains why, for now, Revolut is offering access to U.S. stocks only).
In contrast, Freetrade has long argued that to innovate within trading, you need to build and own the full brokerage stack. It was the first mover in this regard amongst the new crop of “fee-free” trading apps in the U.K., though others, including Netherlands-based Bux and now Robinhood, have since taken the same path. Only time will tell if Revolut will be forced to do the same.
Another tidbit is that Revolut and Robinhood share investors, namely Index and DST. That makes for an interesting subplot as the two unicorns encroach on each other’s lawn. No conflict, no interest.
Twitter has disclosed more bugs related to how it uses personal data for ad targeting that mean it may have shared users’ data with advertising partners even when a user had expressly told it not to.
In a blog post on its Help Center about the latest “issues” Twitter says it “recently” found, it admits to finding two problems with users’ ad settings choices that mean they “may not have worked as intended”.
It claims both problems were fixed on August 5. Though it does not specify when it realized it was processing user data without their consent.
The first bug relates to tracking ad conversions. This meant that if a Twitter user clicked or viewed an ad for a mobile application on the platform and subsequently interacted with the mobile app, Twitter says it “may have shared certain data (e.g., country code; if you engaged with the ad and when; information about the ad, etc)” with its ad measurement and advertising partners — regardless of whether the user had agreed their personal data could be shared in this way.
It suggests this leak of data has been happening since May 2018 — which is also the month when Europe’s updated privacy framework, GDPR, came into force. The regulation mandates disclosure of data breaches (which explains why you’re hearing about all these issues from Twitter) — and means that quite a lot is riding on how “recently” Twitter found these latest bugs. Because GDPR also includes a supersized regime of fines for confirmed data protection violations.
Though it remains to be seen whether Twitter’s now repeatedly leaky adtech will attract regulatory attention…
Twitter may have /accidentally/ shared data on users to ads partners even for those who opted out from personalised ads. That would be a violation of user settings and expectations, which #GDPR makes a quasi-contract. https://t.co/s0acfllEhG
— Lukasz Olejnik (@lukOlejnik) August 7, 2019
Twitter specifies that it does not share users’ names, Twitter handles, email or phone number with ad partners. However it does share a user’s mobile device identifier, which GDPR treats as personal data as it acts as a unique identifier. Using this identifier, Twitter and Twitter’s ad partners can work together to link a device identifier to other pieces of identity-linked personal data they collectively hold on the same user to track their use of the wider Internet, thereby allowing user profiling and creepy ad targeting to take place in the background.
The second issue Twitter discloses in the blog post also relates to tracking users’ wider web browsing to serve them targeted ads.
Here Twitter admits that, since September 2018, it may have served targeted ads that used inferences made about the user’s interests based on tracking their wider use of the Internet — even when the user had not given permission to be tracked.
This sounds like another breach of GDPR, given that in cases where the user did not consent to being tracked for ad targeting Twitter would lack a legal basis for processing their personal data. But it’s saying it processed it anyway — albeit, it claims accidentally.
This type of creepy ad targeting — based on so-called ‘inferences’ — is made possible because Twitter associates the devices you use (including mobile and browsers) when you’re logged in to its service with your Twitter account, and then receives information linked to these same device identifiers (IP addresses and potentially browser fingerprinting) back from its ad partners, likely gathered via tracking cookies (including Twitter’s own social plug-ins) which are larded all over the mainstream Internet for the purpose of tracking what you look at online.
These third party ad cookies link individuals’ browsing data (which gets turned into inferred interests) with unique device/browser identifiers (linked to individuals) to enable the adtech industry (platforms, data brokers, ad exchanges and so on) to track web users across the web and serve them “relevant” (aka creepy) ads.
“As part of a process we use to try and serve more relevant advertising on Twitter and other services since September 2018, we may have shown you ads based on inferences we made about the devices you use, even if you did not give us permission to do so,” is how Twitter explains this second ‘issue’.
“The data involved stayed within Twitter and did not contain things like passwords, email accounts, etc.,” it adds. Although the key point here is one of a lack of consent, not where the data ended up.
(Also, the users’ wider Internet browsing activity linked to their devices via cookie tracking did not originate with Twitter — even if it’s claiming the surveillance files it received from its “trusted” partners stayed on its servers. Bits and pieces of that tracked data would, in any case, exist all over the place.)
In an explainer on its website on “personalization based on your inferred identity” Twitter seeks to reassure users that it will not track them without their consent, writing:
We are committed to providing you meaningful privacy choices. You can control whether we operate and personalize your experience based on browsers or devices other than the ones you use to log in to Twitter (or if you’re logged out, browsers or devices other than the one you’re currently using), or email addresses and phone numbers similar to those linked to your Twitter account. You can do this by visiting your Personalization and data settings and adjusting the Personalize based on your inferred identity setting.
The problem in this case is that users’ privacy choices were simply overridden. Twitter says it did not do so intentionally. But either way it’s not consent. Ergo, a breach.
“We know you will want to know if you were personally affected, and how many people in total were involved. We are still conducting our investigation to determine who may have been impacted, and if we discover more information that is useful we will share it,” Twitter goes on. “What is there for you to do? Aside from checking your settings, we don’t believe there is anything for you to do.
“You trust us to follow your choices and we failed here. We’re sorry this happened, and are taking steps to make sure we don’t make a mistake like this again. If you have any questions, you may contact Twitter’s Office of Data Protection through this form.”
While the company may “believe” there is nothing Twitter users can do but accept its apology for screwing up, European Twitter users who believe it processed their data without their consent do have a course of action they can take: They can complain to their local data protection watchdog.
Zooming out, there are also major legal question marks hanging over behaviourally targeted ads in Europe.
The UK’s privacy regulator warned in June that systematic profiling of web users via invasive tracking technologies such as cookies is in breach of pan-EU privacy laws — following multiple complaints filed in the region that argue real-time bidding (RTB) is in breach of the GDPR.
While, back in May Google’s lead regulator in Europe, the Irish Data Protection Commission, confirmed it has opened a formal investigation into use of personal data in the context of its online Ad Exchange.
So the wider point here is that the whole leaky business of creepy ads looks to be operating on borrowed time.
Brolly, the U.K. insurance app that lets you keep track of your various policies so you are correctly and competitively covered, is launching a new product to plug what it sees as a gap in home contents insurance.
Dubbed “Brolly Contents,” the new offering promises “flexible” monthly cover for all or a subset of the items you own, transparently priced and delivered in a more convenient way via Brolly’s mobile app. Features of Brolly Contents include the ability to insure up to £40,000 worth of belongings, suitable for renters or property owners, and no fees for updates to your cover.
In addition, there’s a promised loyalty discount of up to 25% that increases each month you stay with Brolly and haven’t made a claim. That’s the antithesis of incumbent providers, which offer large discounts to new customers and then claw them back in the following years on the premise that you are too lazy or time-poor to bother switching.
Brolly founder and CEO Phoebe Hugh tells me her aim is to rid customers of what she calls the “loyalty tax,” while simultaneously upgrading contents insurance for the digital age.
“For the majority of consumers, contents insurance is the first voluntary insurance product they will come across,” says Hugh. “A digital native generation are approaching this for the first time and are confused and unhappy with what is currently available. Nine out of 10 households headed by someone between 65-75 have contents insurance, versus just 4 out of 10 of under 30s. This newer customer has become accustomed to digital delivery of everything, from banking to food delivery, and cannot find an insurance product that suits them. Brolly Contents is the first Brolly product to address these problems head on.”
Developed in partnership with specialist insurer Hiscox, Brolly Contents promises to be more flexible than similar products after Hugh and her team concluded that the current market wasn’t meeting existing Brolly customers’ needs, let alone expanding the market for contents insurance as a whole.
Contents insurance is typically sold as blanket cover with lots of caveats; it often requires tedious form-filling and remains opaque at best. This leaves many not bothering to take out cover at all, or discovering that the cover they have falls short when it’s time to make a claim.
In contrast, Brolly Contents claims to be more transparent, with a product that is much simpler to understand and an on-boarding experience delivered via in-app chat that walks you through how much cover you require and the amount of excess you wish to pay should you make a claim.
“With Brolly Contents, you can choose how much you want to insure and it doesn’t need to be everything in your home,” says Hugh. “You can get insured from as little as £4.50 a month, if you only want to protect a few things. There are no add-ons, and you can add valuables for no additional cost. Many businesses in this space, particularly some of the newer ones, are offering a branded product to customers which, in the background, consists of multiple underwriters with policies stitched together. As soon as you add some valuables and accidental damage, the price skyrockets. It’s pretty tricky to keep pricing competitive if this is how you operate.”
Meanwhile, Hugh — who before starting Brolly was an underwriter at Aviva — says that despite the insurtech hype, the insurance industry remains a “pre-disrupted market.” Incumbents are focused on where the profit currently is, and therefore the uninsured or beginner insurance customers aren’t well served. In the meantime, insurtech startups typically have to work with those same incumbents.
“A new business gaining traction in insurance is challenging; it’s unlikely you can underwrite yourself at the outset so you have to take a patient approach,” she says. “We found a world-class underwriting partner in Hiscox who shared our vision to simplify insurance, and who wanted to challenge the status quo, but are also trusted to pay out on claims. We’ve been working on Brolly Contents for over a year to deliver something genuinely new.”
Adds Matt Churchill, head of Hiscox Futures: “Consumer expectations of insurance are changing. We identified early on that Brolly were leading the charge in exploring new ways of engaging customers. Together, we’ve designed a simple insurance product and brought it to life on Brolly’s proven technology-driven platform. We hope it brings positive benefits to consumers looking for simplicity and flexibility from a home contents policy.”
For nearly 15 years LanzaTech has been developing a carbon capture technology that can turn waste streams into ethanol that can be used for chemicals and fuel.
Now, with $72 million in fresh funding at a nearly $1 billion valuation and a newly inked partnership with biotechnology giant Novo Holdings, the company is looking to expand its suite of products beyond ethanol manufacturing, thanks, in part, to the intellectual property held by Novozymes (a Novo Holdings subsidiary).
“We are learning how to modify our organisms so they can make things other than ethanol directly,” said LanzaTech chief executive officer, Jennifer Holmgren.
From its headquarters in Skokie, Ill., where LanzaTech relocated in 2014 from New Zealand, the biotechnology company has been plotting ways to reduce carbon emissions and create a more circular manufacturing system. That’s one where waste gases and solid waste sources that were previously considered to be un-recyclable are converted into chemicals by LanzaTech’s genetically modified microbes.
The company already has a commercial manufacturing facility in China, attached to a steel plant operated by the Shougang Group, which produces 16 million gallons of ethanol per year. LanzaTech’s technology pipes the waste gas into a fermenter, which is filled with genetically modified yeast that uses the carbon dioxide to produce ethanol. Another plant, using a similar technology, is under construction in Europe.
Through a partnership with Indian Oil, LanzaTech is working on a third waste-gas-to-ethanol plant, this one using a different waste gas taken from a hydrogen plant.
The company has also inked early deals with airlines like Virgin in the UK and ANA in Japan to make an ethanol-based jet fuel for commercial flight. And a third application of the technology is being explored in Japan, which takes previously un-recyclable waste streams from consumer products and converts them into ethanol and polyethylene that can be used to make bio-plastics or bio-based nylon fabrics.
Through the partnership with Novo Holdings, LanzaTech will be able to use the company’s technology to expand its work into other chemicals, according to chief executive Jennifer Holmgren. “We are making product to sell into that [chemicals market] right now. We are taking ethanol and making products out of it. Taking ethylene and we will make polyethylene and we will make PET to substitute for fiber.”
Holmgren said that LanzaTech’s operations were currently reducing carbon dioxide emissions by the equivalent of taking 70,000 cars off the road.
“LanzaTech is addressing our collective need for sustainable fuels and materials, enabling industrial players to be part of building a truly circular economy,” said Anders Bendsen Spohr, Senior Director at Novo Holdings, in a statement. “Novo Holdings’ investment underlines our commitment to supporting the bio-industrials sector and, in particular, companies that are developing cutting-edge technology platforms. We are excited to work with the LanzaTech team and look forward to supporting the company in its next phase of growth.”
Holmgren said that the push into new chemicals by LanzaTech is symbolic of a resurgence of industrial biotechnology as one of the critical pathways to reducing carbon emissions and setting industry on a more sustainable production pathway.
“Industrial biotechnology can unlock the utility of a lot of waste carbon emissions,” said Holmgren. “[Municipal solid waste] is an urban oil field. And we are working to find new sources of sustainable carbon.”
LanzaTech isn’t alone in its quest to create sustainable pathways for chemical manufacturing. Solugen, an upstart biotechnology company out of Houston, is looking to commercialize the bio-production of hydrogen peroxide. It’s another chemical that’s at the heart of modern industrial processes — and is incredibly hazardous to make using traditional methods.
As the world warms, and carbon emissions continue to rise, it’s important that both companies find pathways to commercial success, according to Holmgren.
“It’s going to get much much worse if we don’t do anything,” she said.