
UK watchdog sets out “age appropriate” design code for online services to keep kids’ privacy safe

By Natasha Lomas

The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy and safety of children online.

The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.

UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.

The ICO’s code comprises 15 standards of what it calls “age appropriate design” — which the regulator says reflects a “risk-based approach”, including stipulating that settings should be set to ‘high privacy’ by default; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.

Profiling should also be off by default. The code also takes aim at dark pattern UI designs that seek to manipulate user actions against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.

“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.

While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services — with the regulator noting that “the majority of online services that children use are covered” and also stipulating “this code applies if children are likely to use your service” [emphasis ours].

This means it could be applied to anything from games and social media platforms to fitness apps, educational websites and on-demand streaming services — if they’re available to UK users.

“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.

Here are the 15 standards in full as the regulator describes them, with an illustrative sketch of the default-settings standards after the list:

  1. Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
  2. Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
  3. Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
  4. Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
  5. Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
  6. Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
  7. Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
  8. Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
  9. Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
  10. Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
  11. Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
  12. Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
  13. Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
  14. Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
  15. Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.
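
To make the default-settings standards concrete, here is a minimal sketch (in Python, and purely hypothetical — the ICO’s code prescribes outcomes, not implementations) of what a conforming settings object for a child-accessible service might look like, with every privacy-relevant option starting in its most protective state:

    from dataclasses import dataclass, field

    @dataclass
    class ChildDefaultSettings:
        # Standard 7: settings must be 'high privacy' by default
        profiling_enabled: bool = False            # Standard 12: profiling off by default
        geolocation_enabled: bool = False          # Standard 10: geolocation off by default
        location_visible_to_others: bool = False   # Standard 10: visibility defaults to 'off'
        data_sharing_enabled: bool = False         # Standard 9: no disclosure without compelling reason
        # Standard 8: separate, opt-in choices per element of the service
        activated_elements: set = field(default_factory=set)

        def end_session(self) -> None:
            # Standard 10: location visibility must revert to 'off' each session
            self.location_visible_to_others = False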

The Age Appropriate Design Code also defines children as under the age of 18 — which offers a higher bar than current UK data protection law, which, for example, sets 13 as the age at which children can legally consent to being tracked online.

So — assuming (very wildly) that Internet services were to suddenly decide to follow the code to the letter, setting trackers off by default and not nudging users to weaken privacy-protecting defaults by manipulating them to give up more data — the code could, in theory, raise the level of privacy both children and adults typically get online.

However it’s not legally binding — so there’s a pretty fat chance of that.

Although the regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it does regulate and can legally enforce — pointing out that it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.

So, in a way, the regulator appears to be saying: ‘Are you feeling lucky data punk?’

Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.

The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.

“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.

“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”

“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.

“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”

Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.

“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.

Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.

But at the same time it dropped a controversial plan included in a 2017 piece of digital legislation which would have made age checks for accessing online pornography mandatory — saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.

How comprehensive the touted ‘child protections’ will end up being remains to be seen.

Brown suggested age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.

It has also been consulting with tech companies on possible ways to implement age verification online.

The difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are mired in geopolitics.)

Meanwhile, the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress.


AppsFlyer raises $210M for ad attribution and more

By Anthony Ha

AppsFlyer has raised a massive Series D of $210 million led by General Atlantic.

Founded in 2011, the company is best known for mobile ad attribution — allowing advertisers to see which campaigns are driving results. At the same time, AppsFlyer has expanded into other areas like fraud prevention.

And in the funding announcement, General Atlantic managing director Alex Crisses suggested that there’s a broader opportunity here.

“Attribution is becoming the core of the marketing tech stack, and AppsFlyer has established itself as a leader in this fast-growing category,” Crisses said. “AppsFlyer’s commitment to being independent, unbiased, and representing the marketer’s interests has garnered the trust of many of the world’s leading brands, and we see significant potential to capture additional opportunity in the market.”

Crisses and General Atlantic’s co-president and global head of technology Anton Levy are both joining AppsFlyer’s board of directors. Previous investors Qumra Capital, Goldman Sachs Growth, DTCP (Deutsche Telekom Capital Partners), Pitango Venture Capital and Magma Venture Partners also participated in the round, which brings the company’s total funding to $294 million.

AppsFlyer said it works with more than 12,000 customers including eBay, HBO, Tencent, NBC Universal, Minecraft, US Bank, Macy’s and Nike. It also says it saw more than $150 million in annual recurring revenue in 2019, up 5x from its Series C in 2017.

Co-founder and CEO Oren Kaniel said that as attribution becomes more important, marketers need a partner they can trust. And with AppsFlyer driving $28 billion in ad spend last year, he argued, “There’s a lot of trust there.”

Kaniel added, “It doesn’t really matter how sophisticated your marketing stack is, or whether you have AI or machine learning — if the data feed is wrong … everything else will be wrong. I think companies realize how sensitive and critical this data platform is for them. I think that in the past couple of years, they’re investing more in selecting the right platform.”

In order to ensure that trust, he said that AppsFlyer has avoided any conflicts of interest in its business model — a position that extends to fundraising, where Kaniel made sure not to raise money from any of the big players in digital advertising.

And moving forward, he said, “We will never go into media business, never go into media services. We want to maintain our independence, we want to maintain our previous unbiased positions.”

Kaniel also argued that while he doesn’t see regulations like Europe’s GDPR and California’s CCPA hindering ad attribution directly, the regulatory environment has justified AppsFlyer’s investment in privacy and security.

“Even more than just being in compliance, [with AppsFlyer], marketers all of a sudden have full control of their data,” he said. “Let’s say on the web, probably your website is sending data and information to partners who don’t need to have access to this information. The reason is, there’s no logic, there’s a lot of pixels going everywhere, the publishers don’t have control. If you use our platform, you have full control, you can configure the exact data points that you’d like to share.”
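
As a toy illustration of the kind of control Kaniel describes (a hypothetical sketch, not AppsFlyer’s actual API), the difference is between a page firing pixels indiscriminately and a platform that forwards each partner only an explicitly configured subset of fields:

    # Hypothetical per-partner allowlist: each partner receives only the
    # event fields explicitly configured for it, nothing else.
    ALLOWED_FIELDS = {
        "ad_network_a": {"campaign_id", "install_time"},
        "analytics_b": {"campaign_id", "country"},
    }

    def forward_event(event: dict, partner: str) -> dict:
        allowed = ALLOWED_FIELDS.get(partner, set())
        return {k: v for k, v in event.items() if k in allowed}

    event = {"campaign_id": "c42", "install_time": "2020-01-20",
             "country": "DE", "device_id": "hypothetical-id"}
    print(forward_event(event, "analytics_b"))  # {'campaign_id': 'c42', 'country': 'DE'}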


EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

By Natasha Lomas

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectorial requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI”.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of such highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

LocalGlobe partner Julia Hawkins discusses femtech’s risks and rewards

By Natasha Lomas

London-based seed fund LocalGlobe is incredibly active at the early-stage end of the startup pipeline with a broad focus across multiple sectors and areas, including health.

We interviewed partner Julia Hawkins about the opportunities and risks related to femtech investing in light of the fund’s early backing for Ferly, a female-founded startup with a subscription app that describes itself as an audio guide to “mindful sex.”

The startup says its mission is to open up conversations around female sexual pleasure and create a place for self-discovery and empowering community — touting “sex-positive” content that it says is “backed by research, written by experts, and personalized to you.”

The interview has been edited for length and clarity.

Privacy experts slam UK’s ‘disastrous’ failure to tackle unlawful adtech

By Natasha Lomas

The UK’s data protection regulator has been slammed by privacy experts for once again failing to take enforcement action over systematic breaches of the law linked to behaviorally targeted ads — despite warning last summer that the adtech industry is out of control.

The Information Commissioner’s Office (ICO) has also previously admitted it suspects the real-time bidding (RTB) system involved in some programmatic online advertising to be unlawfully processing people’s sensitive information. But rather than take any enforcement action against companies it suspects of law breaches, it has today issued another mildly worded blog post — in which it frames what it admits is a “systemic problem” as fixable via (yet more) industry-led “reform”.

Yet it’s exactly such industry-led self-regulation that’s created the unlawful adtech mess in the first place, data protection experts warn.

The pervasive profiling of Internet users by the adtech ‘data industrial complex’ has been coming under wider scrutiny by lawmakers and civic society in recent years — with sweeping concerns being raised in parliaments around the world that individually targeted ads provide a conduit for discrimination, exploit the vulnerable, accelerate misinformation and undermine democratic processes as a consequence of platform asymmetries and the lack of transparency around how ads are targeted.

In Europe, which has a comprehensive framework of data protection rights, the core privacy complaint is that these creepy individually targeted ads rely on a systemic violation of people’s privacy from what amounts to industry-wide, Internet-enabled mass surveillance — which also risks the security of people’s data at vast scale.

It’s now almost a year and a half since the ICO was the recipient of a major complaint into RTB — filed by Dr Johnny Ryan of private browser Brave; Jim Killock, director of the Open Rights Group; and Dr Michael Veale, a data and policy lecturer at University College London — laying out what the complainants described then as “wide-scale and systemic” breaches of Europe’s data protection regime.

The complaint — which has also been filed with other EU data protection agencies — argues that the systematic broadcasting of people’s personal data to bidders in the adtech chain is inherently insecure and thereby contravenes Europe’s General Data Protection Regulation (GDPR), which stipulates that personal data be processed “in a manner that ensures appropriate security of the personal data”.
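
For a sense of what is being broadcast, here is a simplified, illustrative bid request loosely modelled on the OpenRTB protocol used in much programmatic advertising (field values invented, many fields omitted). In an RTB auction, a payload of roughly this shape can be sent to dozens or hundreds of bidders at once, which is the insecurity the complaint targets:

    # Simplified, illustrative bid request (loosely based on OpenRTB 2.x fields).
    # Real payloads carry more detail; the point is that identifiers, location
    # and browsing context are broadcast to every candidate bidder simultaneously.
    bid_request = {
        "id": "auction-0001",
        "site": {"domain": "example-health-site.com",
                 "cat": ["IAB7"]},                 # IAB content category: Health & Fitness
        "device": {"ip": "203.0.113.7",            # invented, documentation-range IP
                   "ifa": "0000-HYPOTHETICAL",     # resettable advertising identifier
                   "geo": {"lat": 51.5074, "lon": -0.1278}},
        "user": {"id": "cookie-synced-user-id", "yob": 1988, "gender": "F"},
    }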

The regulation also requires data processors to have a valid legal basis for processing people’s information in the first place — and RTB fails that test, per privacy experts — either if ‘consent’ is claimed (given the sheer number of entities and volumes of data being passed around, which means it’s not credible to achieve GDPR’s ‘informed, specific and freely given’ threshold for consent to be valid); or ‘legitimate interests’ — which requires data processors carry out a number of balancing assessment tests to demonstrate it does actually apply.

“We have reviewed a number of justifications for the use of legitimate interests as the lawful basis for the processing of personal data in RTB. Our current view is that the justification offered by organisations is insufficient,” writes Simon McDougall, the ICO’s executive director of technology and innovation, delivering a warning over the industry’s rampant misuse of legitimate interests to try to pass off RTB’s unlawful data processing as legit.

The ICO also isn’t exactly happy about what it’s found adtech doing on the Data Protection Impact Assessment front — saying, in so many words, that it’s come across widespread industry failure to actually, er, assess impacts.

“The Data Protection Impact Assessments we have seen have been generally immature, lack appropriate detail, and do not follow the ICO’s recommended steps to assess the risk to the rights and freedoms of the individual,” writes McDougall.

“We have also seen examples of basic data protection controls around security, data retention and data sharing being insufficient,” he adds.

Yet — again — despite fresh admissions of adtech’s lawfulness problem, the regulator is choosing more stale inaction.

In the blog post McDougall does not rule out taking “formal” action at some point — but there’s only a vague suggestion of such activity being possible, and zero timeline for “develop[ing] an appropriate regulatory response”, as he puts it. (His preferred ‘E’ word in the blog is ‘engagement’; you’ll only find the word ‘enforcement’ in the footer link on the ICO’s website.)

“We will continue to investigate RTB. While it is too soon to speculate on the outcome of that investigation, given our understanding of the lack of maturity in some parts of this industry we anticipate it may be necessary to take formal regulatory action and will continue to progress our work on that basis,” he adds.

McDougall also trumpets some incremental industry fiddling — such as trade bodies agreeing to update their guidance — as somehow relevant to turning the tanker in a fundamentally broken system.

(Trade body the Internet Advertising Bureau’s UK branch has responded to developments with an upbeat note from its head of policy and regulatory affairs, Christie Dennehy-Neil, who lauds the ICO’s engagement as “a constructive process”, claiming: “We have made good progress” — before going on to urge its members and the wider industry to implement “the actions outlined in our response to the ICO” and “deliver meaningful change”. The statement climaxes with: “We look forward to continuing to engage with the ICO as this process develops.”)

McDougall also points to Google removing content categories from its RTB platform from next month (a move it announced months back, in November) as an important development; and seizes on the tech giant’s recent announcement of a proposal to phase out support for third party cookies within the next two years as ‘encouraging’.

Privacy experts have responded with facepalmed outrage to yet another can-kicking exercise by the UK regulator — warning that cosmetic tweaks to adtech won’t fix a system that’s designed to feast off an unlawful and inherently insecure high velocity background trading of Internet users’ personal data.

“When an industry is premised and profiting from clear and entrenched illegality that breach individuals’ fundamental rights, engagement is not a suitable remedy,” said UCL’s Veale in a statement. “The ICO cannot continue to look back at its past precedents for enforcement action, because it is exactly that timid approach that has led us to where we are now.”

ICO believes that cosmetic fixes can do the job when it comes to #adtech. But no matter how secure data flows are and how beautiful cookie notices are, can people really understand the consequences of their consent? I'm convinced that this consent will *never* be informed. 1/2 https://t.co/1avYt6lgV3

— Karolina Iwańska (@ka_iwanska) January 17, 2020

The trio behind the RTB complaints (which includes Veale) have also issued a scathing collective response to more “regulatory ambivalence” — denouncing the lack of any “substantive action to end the largest data breach ever recorded in the UK”.

“The ‘Real-Time Bidding’ data breach at the heart of RTB market exposes every person in the UK to mass profiling, and the attendant risks of manipulation and discrimination,” they warn. “Regulatory ambivalence cannot continue. The longer this data breach festers, the deeper the rot sets in and the further our data gets exploited. This must end. We are considering all options to put an end to the systemic breach, including direct challenges to the controllers and judicial oversight of the ICO.”

Wolfie Christl, a privacy researcher who focuses on adtech — including contributing to a recent study looking at how extensively popular apps are sharing user data with advertisers — dubbed the ICO’s response “disastrous”.

“Last summer the ICO stated in their report that millions of people were affected by thousands of companies’ GDPR violations. I was sceptical when they announced they would give the industry six more months without enforcing the law. My impression is they are trying to find a way to impose cosmetic changes and keep the data industry happy rather than acting on their own findings and putting an end to the ubiquitous data misuse in today’s digital marketing, which should have happened years ago. The ICO seems to prioritize appeasing the industry over the rights of data subjects, and this is disastrous,” he told us.

“The way data-driven online marketing currently works is illegal at scale and it needs to be stopped from happening,” Christl added. “Each day EU data protection authorities allow these practices to continue further violates people’s rights and freedoms and perpetuates a toxic digital economy.

“This undermines the GDPR and generally trust in tech, perpetuates legal uncertainty for businesses, and punishes companies who comply and create privacy-respecting services and business models.

“Twenty months after the GDPR came into full force, it is still not enforced in major areas. We still see large-scale misuse of personal information all over the digital world. There is no GDPR enforcement against the tech giants and there is no enforcement against thousands of data companies beyond the large platforms. It seems that data protection authorities across the EU are either not able — or not willing — to stop many kinds of GDPR violations conducted for business purposes. We won’t see any change without massive fines and data processing bans. EU member states and the EU Commission must act.”

Trucks VC general partner Reilly Brennan is coming to TC Sessions: Mobility

By Kirsten Korosec

The future of the transportation industry is bursting at the seams with startups aiming to bring everything from flying cars and autonomous vehicles to delivery bots and even more efficient freight to roads.

One investor who is right at the center of this is Reilly Brennan, founding general partner of Trucks VC, a seed-stage venture capital fund for entrepreneurs changing the future of transportation.

TechCrunch is excited to announce that Brennan will join us on stage for TC Sessions: Mobility.

In case you missed last year’s event, TC Sessions: Mobility is a one-day conference that brings together the best and brightest engineers, investors, founders and technologists to talk about transportation and what is coming on the horizon. The event will be held May 14, 2020 at the California Theater in San Jose, Calif.

Brennan is known as much for his popular FoT newsletter as his investments, which include May Mobility, Nauto, nuTonomy, Joby Aviation, Skip and Roadster.

Stay tuned to see who we’ll announce next.

And … $250 Early-Bird tickets are now on sale — save $100 on tickets before prices go up on April 9; book today.

Students, you can grab your tickets for just $50 here.

Indian tech startups raised a record $14.5B in 2019

By Manish Singh

Indian tech startups have never had it so good.

Local tech startups in the nation raised $14.5 billion in 2019, beating their previous best of $10.6 billion in 2018, according to research firm Tracxn.

Tech startups in India this year participated in 1,185 financing rounds — 459 of those were Series A or later rounds — from 817 investors.

Early-stage startups — those participating in angel or pre-Series A financing rounds — raised $6.9 billion this year, easily surpassing last year’s $3.3 billion figure, according to a report by venture debt firm InnoVen Capital.

According to InnoVen’s report, early-stage startups that have typically struggled to attract investors saw a 22% year-over-year increase in the number of financing deals they took part in this year. At $2.6 million, their average deal value also increased by 15% from last year.

Overall, there were 81 financing deals of size between $25 million and $100 million, up from 56 last year and 36 the year before, and 27 rounds above $100 million, up from 17 in 2018 and nine in 2017, Tracxn told TechCrunch.

Also in 2019, 128 startups in India got acquired, four got publicly listed and nine became unicorns. This year, Indian tech startups also attracted a record number of international investors, according to Tracxn.

This year’s fundraise further moves the nation’s burgeoning startup space on a path of steady growth.

Since 2016, when tech startups accumulated just $4.3 billion — down from $7.9 billion the year before — flow of capital has increased significantly in the ecosystem. In 2017, Indian startups raised $10.4 billion, per Tracxn.

“The decade has seen an impressive 25x growth from a tiny $550 million in 2010 to $14.5 billion in 2019 in terms of the total funding raised by the startups,” said Tracxn.

What’s equally promising about Indian startups is the challenges they are beginning to tackle today, said Dev Khare, a partner at VC fund Lightspeed Venture Partners, in a recent interview with TechCrunch.

In 2014 and 2015, startups were largely focused on building e-commerce solutions and replicating ideas that worked in Western markets. But today, they are tackling a wide range of categories and opportunities and building some solutions that have not been attempted in any other market, he said.

Tracxn’s analysis found that lodging startups raised about $1.7 billion this year — thanks to Oyo alone bagging $1.5 billion — followed by logistics startups such as Elastic Run, Delhivery and Ecom Express, which together secured $641 million.

Also, 176 horizontal marketplaces, more than 150 education learning apps, over 160 fintech startups, over 120 trucking marketplaces, 82 ride-hailing services, 42 insurance platforms, 33 used car listing providers and 13 startups that are helping businesses and individuals access working capital secured funding this year. Fintech startups alone raised $3.2 billion this year, more than startups operating in any other category, said Tracxn.

The investors

Sequoia Capital, with more than 50 investments — or co-investments — was the most active venture capital fund for Indian tech startups this year. (Rajan Anandan, former executive in charge of Google’s business in India and Southeast Asia, joined Sequoia Capital India as a managing director in April.) Accel, Tiger Global Management, Blume Ventures and Chiratae Ventures were the other top four VCs.

Steadview Capital, with nine investments in startups, including ride-hailing service Ola, education app Unacademy and fintech startup BharatPe, led the way among private equity funds. General Atlantic, which invested in NoBroker and the recently profitable edtech startup Byju’s, backed four startups. FMO, Sabre Partners India and CDC Group each invested in three startups.

Venture Catalysts, with more than 40 investments, including in HomeCapital and Blowhorn, was the top accelerator or incubator in India this year. Y Combinator, with over 25 investments, Sequoia Capital’s Surge, Axilor Ventures and Techstars were also very active this year.

Indian tech startups also attracted a number of direct investments from top corporates and banks this year. Goldman Sachs, which earlier this month invested in fintech startup ZestMoney, overall made eight investments this year. Among others, Facebook made its first investment in an Indian startup — social-commerce firm Meesho — and Twitter led a $100 million financing round in local social networking app ShareChat.

Atom Finance’s free Bloomberg Terminal rival raises $12M

By Josh Constine

If you want to win on Wall Street, Yahoo Finance is insufficient but Bloomberg Terminal costs a whopping $24,000 per year. That’s why Atom Finance built a free tool designed to democratize access to professional investor research. If Robinhood made it cost $0 to trade stocks, Atom Finance makes it cost $0 to know which to buy.

Today Atom launches its mobile app with access to its financial modeling, portfolio tracking, news analysis, benchmarking and discussion tools. It’s the consumerization of finance, similar to what we’ve seen in enterprise SaaS. “Investment research tools are too important to the financial well-being of consumers to lack the same cycles of product innovation and accessibility that we have experienced in other verticals,” CEO Eric Shoykhet tells me.

In its first press interview, Atom Finance today revealed to TechCrunch that it has raised a $10.6 million Series A led by General Catalyst to build on its quiet $1.9 million seed round. The cash will help the startup eventually monetize by launching premium tiers with even more hardcore research tools.

Atom Finance already has 100,000 users and $400 million in assets it’s helping steer since soft-launching in June. “Atom fundamentally changes the game for how financial news media and reporting is consumed. I could not live without it,” says The Twenty Minute VC podcast founder and Atom investor Harry Stebbings.

Individual investors are already at a disadvantage compared to big firms equipped with artificial intelligence, the priciest research and legions of traders glued to the markets. Yet it’s becoming increasingly clear that investing is critical to long-term financial mobility, especially in an age of rampant student debt and automation threatening employment.

“Our mission is two-fold,” Shoykhet says. “To modernize investment research tools through an intuitive platform that’s easily accessible across all devices, while democratizing access to institutional-quality investing tools that were once only available to Wall Street professionals.”

Leveling the trading floor

Shoykhet saw the gap between amateur and expert research platforms firsthand as an investor at Blackstone and Governors Lane. Yet even the supposedly best-in-class software was lacking the usability we’ve come to expect from consumer mobile apps. Atom Finance claims that “for example, Bloomberg hasn’t made a significant change to its central product offering since 1982.”

The Atom Finance team

So a year ago, Shoykhet founded Atom Finance in Brooklyn to fill the void. Its web, iOS and Android apps offer five products that combine to guide users’ investing decisions without drowning them in complexity:

  • Sandbox – Instant financial modeling with pre-populated consensus projections that automatically update and are recalculated over time
  • Portfolio – Track your linked investment accounts to monitor overarching stats, real-time profit and loss statements and diversification
  • X-Ray – A financial research search engine for compiling news, SEC filings, transcripts and analysis
  • Compare – Benchmarking tables for comparing companies and sectors
  • Collaborate – Discussion boards and group chat for sharing insights with fellow investors

“Our Sandbox feature allows users to create simple financial models directly within our platform, without having to export data to a spreadsheet,” Shoykhet says. “This saves our users time and prevents them from having to manually refresh the inputs to their model when there is new information.”
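
As a generic sketch of what such auto-recalculation means (illustrative only, not Atom’s actual Sandbox code): if model outputs are derived properties of named inputs, a revised consensus estimate flows straight through to the output with no manual spreadsheet refresh:

    # Generic sketch of an auto-recalculating model: outputs are computed from
    # named inputs on demand, so updating an input re-derives everything downstream.
    class Model:
        def __init__(self, **inputs):
            self.inputs = inputs

        def update(self, **new_inputs):
            # e.g. a revised consensus estimate arrives
            self.inputs.update(new_inputs)

        @property
        def implied_value(self):
            # Toy valuation: next-year earnings estimate times an assumed multiple.
            return self.inputs["eps_estimate"] * self.inputs["pe_multiple"]

    m = Model(eps_estimate=4.0, pe_multiple=18)
    print(m.implied_value)        # 72.0
    m.update(eps_estimate=4.5)    # consensus revision flows straight through
    print(m.implied_value)        # 81.0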

Shoykhet positions Atom Finance in the middle of the market, saying, “Existing solutions are either too rudimentary for rigorous analysis (Yahoo Finance, Google Finance) or too expensive for individual investors (Bloomberg, CapIQ, Factset).”

With both its free and forthcoming paid tiers, Atom hopes to undercut Sentieo, a more AI-focused financial research platform that charges $500 to $1,000 per month and raised $19 million a year ago. Cheaper tools like BamSEC and WallMine are often limited to just pulling in earnings transcripts and filings. Robinhood has its own in-app research tools, which could make it a looming competitor or a potential acquirer for Atom Finance.

Shoykhet admits his startup will face stiff competition from well-entrenched tools like Bloomberg. “Incumbent solutions have significant brand equity with our target market, and especially with professional investors. We will have to continue iterating and deliver an unmatched user experience to gain the trust/loyalty of these users,” he says. Additionally, Atom Finance’s access to users’ sensitive data means flawless privacy, security, and accuracy will be essential.

The $12.5 million from General Catalyst, Greenoaks, Global Founders Capital, Untitled Investments, Day One Ventures and a slew of angels gives Atom runway to rev up its freemium model. Robinhood has found great success converting unpaid users to its subscription tier where they can borrow money to trade. By similarly starting out free, Atom’s eight-person team hailing from SoFi, Silver Lake, Blackstone and Citi could build a giant funnel to feed its premium tiers.

Fintech can feel dry and ruthlessly capitalistic at times. But Shoykhet insists he’s in it to equip a new generation with methods of wealth creation. “I think we’ve gone long enough without seeing real innovation in this space. We can’t be complacent with something so important. It’s crucial that we democratize access to these tools and educate consumers . . . to improve their investment well-being.”

GM will bring an electric truck to market in 2021

By Kirsten Korosec

GM CEO Mary Barra said Thursday that the automaker will bring its first electric truck to market in the fall of 2021.

The comments were made Thursday during GM’s investor day. Later this evening, Tesla, which also plans to start selling an electric truck in 2021, will reveal its “cybertruck” at an event in Hawthorne, Calif. Reuters first reported the news.

“General Motors understands truck buyers and… people who are new coming into the truck market,” Barra said during the investor conference, explaining the company’s rationale for the move.

GM’s foray into electric trucks has been public before. Last month, the Detroit Free Press reported that GM’s Detroit-Hamtramck Assembly Plant would remain open to produce an electric pickup under a deal between the UAW and the automaker.

This is the first time the company has provided a timeline.

Several other companies are expected to bring electric trucks to the marketplace in the next several years, including newcomer Rivian, Tesla and Ford.

A 10-point plan to reboot the data industrial complex for the common good

By Natasha Lomas

A posthumous manifesto by Giovanni Buttarelli, who until his death this summer was Europe’s chief data protection regulator, seeks to join the dots of surveillance capitalism’s rapacious colonization of human spaces, via increasingly pervasive and intrusive mapping and modelling of our data, with the existential threat posed to life on earth by manmade climate change.

In a dense document rich with insights and ideas around the notion that “data means power” — and therefore that the unequally distributed data-capture capabilities currently enjoyed by a handful of tech platforms sums to power asymmetries and drastic social inequalities — Buttarelli argues there is potential for AI and machine learning to “help monitor degradation and pollution, reduce waste and develop new low-carbon materials”. But only with the right regulatory steerage in place.

“Big data, AI and the internet of things should focus on enabling sustainable development, not on an endless quest to decode and recode the human mind,” he warns. “These technologies should — in a way that can be verified — pursue goals that have a democratic mandate. European champions can be supported to help the EU achieve digital strategic autonomy.”

“The EU’s core values are solidarity, democracy and freedom,” he goes on. “Its conception of data protection has always been the promotion of responsible technological development for the common good. With the growing realisation of the environmental and climatic emergency facing humanity, it is time to focus data processing on pressing social needs. Europe must be at the forefront of this endeavour, just as it has been with regard to individual rights.”

One of his key calls is for regulators to enforce transparency of dominant tech companies — so that “production processes and data flows are traceable and visible for independent scrutiny”.

“Use enforcement powers to prohibit harmful practices, including profiling and behavioural targeting of children and young people and for political purposes,” he also suggests.

Another point in the manifesto urges a moratorium on “dangerous technologies”, citing facial recognition and killer drones as examples, and calling generally for a pivot away from technologies designed for “human manipulation” and toward “European digital champions for sustainable development and the promotion of human rights”.

In an afterword penned by Shoshana Zuboff, the US author and scholar writes in support of the manifesto’s central tenet, warning pithily that: “Global warming is to the planet what surveillance capitalism is to society.”

There’s plenty of overlap between Buttarelli’s ideas and Zuboff’s — who has literally written the book on surveillance capitalism. Data concentration by powerful technology platforms is also resulting in algorithmic control structures that give rise to “a digital underclass… comprising low-wage workers, the unemployed, children, the sick, migrants and refugees who are required to follow the instructions of the machines”, he warns.

“This new instrumentarian power deprives us not only of the right to consent, but also of the right to combat, building a world of no exit in which ignorance is our only alternative to resigned helplessness, rebellion or madness,” she agrees.

There are no fewer than six afterwords attached to the manifesto — a testament to the esteem in which Buttarelli’s ideas are held among privacy, digital and human rights campaigners.

The manifesto “goes far beyond data protection”, says writer Maria Farrell in another contribution. “It connects the dots to show how data maximisation exploits power asymmetries to drive global inequality. It spells out how relentless data-processing actually drives climate change. Giovanni’s manifesto calls for us to connect the dots in how we respond, to start from the understanding that sociopathic data-extraction and mindless computation are the acts of a machine that needs to be radically reprogrammed.”

At the core of the document is a 10-point plan for what’s described as “sustainable privacy”, which includes the call for a dovetailing of the EU’s digital priorities with a Green New Deal — to “support a programme for green digital transformation, with explicit common objectives of reducing inequality and safeguarding human rights for all, especially displaced persons in an era of climate emergency”.

Buttarelli also suggests creating a forum for civil liberties advocates, environmental scientists and machine learning experts who can advise on EU funding for R&D to put the focus on technology that “empowers individuals and safeguards the environment”.

Another call is to build a “European digital commons” to support “open-source tools and interoperability between platforms, a right to one’s own identity or identities, unlimited use of digital infrastructure in the EU, encrypted communications, and prohibition of behaviour tracking and censorship by dominant platforms”.

“Digital technology and privacy regulation must become part of a coherent solution for both combating and adapting to climate change,” he suggests in a section dedicated to a digital Green New Deal — even while warning that current applications of powerful AI technologies appear to be contributing to the problem.

“AI’s carbon footprint is growing,” he points out, underlining the environmental wastage of surveillance capitalism. “Industry is investing based on the (flawed) assumption that AI models must be based on mass computation.

“Carbon released into the atmosphere by the accelerating increase in data processing and fossil fuel burning makes climatic events more likely. This will lead to further displacement of peoples and intensification of calls for ‘technological solutions’ of surveillance and border controls, through biometrics and AI systems, thus generating yet more data. Instead, we need to ‘greenjacket’ digital technologies and integrate them into the circular economy.”

Another key call — and one Buttarelli had been making presciently in recent years — is for more joint working between EU regulators towards common sustainable goals.

“All regulators will need to converge in their policy goals — for instance, collusion in safeguarding the environment should be viewed more as an ethical necessity than as a technical breach of cartel rules. In a crisis, we need to double down on our values, not compromise on them,” he argues, going on to voice support for antitrust and privacy regulators to co-operate to effectively tackle data-based power asymmetries.

“Antitrust, democracies’ tool for restraining excessive market power, therefore is becoming again critical. Competition and data protection authorities are realising the need to share information about their investigations and even cooperate in anticipating harmful behaviour and addressing ‘imbalances of power rather than efficiency and consent’.”

On the General Data Protection Regulation (GDPR) specifically — Europe’s current framework for data protection — Buttarelli gives a measured assessment, saying “first impressions indicate big investments in legal compliance but little visible change to data practices”.

He says Europe’s data protection authorities will need to use all the tools at their disposal — and find the necessary courage — to take on the dominant tracking and targeting digital business models fuelling so much exploitation and inequality.

He also warns that GDPR alone “will not change the structure of concentrated markets or in itself provide market incentives that will disrupt or overhaul the standard business model”.

“True privacy by design will not happen spontaneously without incentives in the market,” he adds. “The EU still has the chance to entrench the right to confidentiality of communications in the ePrivacy Regulation under negotiation, but more action will be necessary to prevent further concentration of control of the infrastructure of manipulation.”

Looking ahead, the manifesto paints a bleak picture of where market forces could be headed without regulatory intervention focused on defending human rights. “The next frontier is biometric data, DNA and brainwaves — our thoughts,” he suggests. “Data is routinely gathered in excess of what is needed to provide the service; standard tropes, like ‘improving our service’ and ‘enhancing your user experience’ serve as decoys for the extraction of monopoly rents.”

There is optimism too, though — that technology in service of society can be part of the solution to existential crises like climate change; and that data, lawfully collected, can support public good and individual self-realization.

“Interference with the right to privacy and personal data can be lawful if it serves ‘pressing social needs’,” he suggests. “These objectives should have a clear basis in law, not in the marketing literature of large companies. There is no more pressing social need than combating environmental degradation” — adding that: “The EU should promote existing and future trusted institutions, professional bodies and ethical codes to govern this exercise.”

In instances where platforms are found to have systematically gathered personal data unlawfully, Buttarelli trails the interesting idea of an amnesty for those responsible “to hand over their optimisation assets” — as a means of not only resetting power asymmetries and rebalancing the competitive playing field, but enabling societies to reclaim these stolen assets and reapply them for a common good.

His hope for Europe’s Data Protection Board — the body which offers guidance and coordinates interactions between EU Member States’ data watchdogs — is that it will be “the driving force supporting the Global Privacy Assembly in developing a common vision and agenda for sustainable privacy”.

The manifesto also calls for European regulators to better reflect the diversity of people whose rights they’re being tasked with safeguarding.

The document, which is entitled Privacy 2030: A vision for Europe, has been published on the website of the International Association of Privacy Professionals ahead of its annual conference this week.

Buttarelli had intended — but was ultimately unable — to publish his thoughts on the future of privacy this year, hoping to inspire discussion in Europe and beyond. In the event, the manifesto has been compiled posthumously by Christian D’Cunha, head of his private office, who writes that he has drawn on discussions with the data protection supervisor in his final months — with the aim of plotting “a plausible trajectory of his most passionate convictions”.

New York State Attorney General reportedly investigating WeWork

By Catherine Shu

WeWork is reportedly being investigated by the New York State Attorney General. According to Reuters, the NYAG’s questions include whether WeWork founder and former CEO Adam Neumann engaged in self-dealing.

A WeWork spokesperson said in an email that “we have received an inquiry from the office of the New York State Attorney General and are cooperating in the matter.” TechCrunch also contacted the New York State Attorney General’s office for comment. WeWork is headquartered in New York City.

This comes less than a week after Bloomberg reported WeWork is the subject of a U.S. Securities and Exchange Commission inquiry into potential rule violations related to its cancelled IPO.

WeWork’s parent company, The We Company, announced on Sept. 30 that it was withdrawing its S-1 filing for an initial public offering, shortly after Neumann stepped down as CEO. In addition to questions about the company’s financial state, red flags for investors included that Neumann had borrowed against his WeWork shares and leased properties he owned back to the company.

An entity Neumann controlled also sold the company the right to use the word “We” for $5.9 million, though he later asked the company to unwind the agreement and returned the money after public criticism.

After receiving a lifeline from investor SoftBank worth up to $8 billion, WeWork is now engaging in major cost-cutting measures, including layoffs at Meetup, which it acquired for $200 million in 2017.

Microsoft announces changes to cloud contract terms following EU privacy probe

By Natasha Lomas

Chalk up another win for European data protection: Microsoft has announced changes to commercial cloud contracts following privacy concerns raised by European Union data protection authorities.

The changes to contractual terms will apply globally and to all its commercial customers — whether public or private sector entities, or businesses large or small — it said today.

The new contractual provisions will be offered to all public sector and enterprise customers at the beginning of 2020, it adds.

In October Europe’s data protection supervisor warned that preliminary results of an investigation into contractual terms for Microsoft’s cloud services had raised serious concerns about compliance with EU data protection rules and the role of the tech giant as a data processor for EU institutions.

Writing on its EU Policy blog, Julie Brill, Microsoft’s corporate VP for global privacy and regulatory affairs and chief privacy officer, announces the update to privacy provisions in the Online Services Terms (OST) of its commercial cloud contracts — saying it’s making the changes as a result of “feedback we’ve heard from our customers”.

“The changes we are making will provide more transparency for our customers over data processing in the Microsoft cloud,” she writes.

She also says the changes reflect those Microsoft developed in consultation with the Dutch Ministry of Justice and Security — which comprised both amended contractual terms and technical safeguards and settings — after the latter carried out risk assessments of Microsoft’s OST earlier this year and also raised concerns.

Specifically, Microsoft is accepting greater data protection responsibilities for additional processing involved in providing enterprise services, such as account management and financial reporting, per Brill:

Through the OST update we are announcing today we will increase our data protection responsibilities for a subset of processing that Microsoft engages in when we provide enterprise services. In the OST update, we will clarify that Microsoft assumes the role of data controller when we process data for specified administrative and operational purposes incident to providing the cloud services covered by this contractual framework, such as Azure, Office 365, Dynamics and Intune. This subset of data processing serves administrative or operational purposes such as account management; financial reporting; combatting cyberattacks on any Microsoft product or service; and complying with our legal obligations.

Microsoft currently designates itself as a data processor, rather than data controller, for these administrative and operational functions that are linked to the provision of its commercial cloud services, such as its Azure platform.

But under Europe’s General Data Protection Regulation a data controller has the widest obligations around handling personal data — with responsibility under Article 5 of the GDPR for the lawfulness, fairness and security of the data being processed — and therefore also carries greater legal risk should it fail to meet that standard.

So, from a regulatory point of view, Microsoft’s current commercial contract structure poses the risk that EU institutions’ user data ends up being processed under a lower standard of legal protection than is merited.

The announced switch from data processor to controller for this subset of administrative and operational purposes should raise the bar of legal protection around them.

For the provision of the core cloud services themselves, Microsoft says it will remain the data processor, as well as for improving and addressing bugs or other issues related to the service, ensuring security of the services, and keeping the services up to date.

In August a conference organized jointly by the EU’s data protection supervisor and the Dutch Ministry brought together EU customers of cloud giants to work on a joint response to regulatory risks related to cloud software provision.

Earlier this year the Dutch Ministry obtained contractual changes and technical safeguards and settings in the amended contracts it agreed with Microsoft.

“The only substantive differences in the updated terms [that will roll out globally for all commercial cloud customers] relate to customer-specific changes requested by the Dutch MOJ, which had to be adapted for the broader global customer base,” Brill writes now.

Microsoft’s blog post also points to other global privacy-related changes it says were made following feedback from the Dutch MOJ and others — including a roll out of new privacy tools across major services; specific changes to Office 365 ProPlus; and increased transparency regarding use of diagnostic data.

‘Magic: The Gathering’ game maker exposed 452,000 players’ account data

By Zack Whittaker

The maker of Magic: The Gathering has confirmed that a security lapse exposed the data of hundreds of thousands of the game’s players.

The game’s developer, the Washington-based Wizards of the Coast, left a database backup file in a public Amazon Web Services storage bucket. The database file contained user account information for the game’s online arena. But there was no password on the storage bucket, allowing anyone with the bucket’s name to access the files inside.
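For context, lapses like this are typically preventable with S3’s bucket-level “Block Public Access” controls. Below is a minimal sketch, using Python and boto3, of how an operator might lock down and verify a bucket; the bucket name is hypothetical, and Wizards’ actual AWS configuration has not been disclosed.

```python
# Minimal sketch: locking down an S3 bucket with boto3.
# "example-backups" is a hypothetical bucket name.
import boto3

s3 = boto3.client("s3")

# Block all four categories of public access at the bucket level.
s3.put_public_access_block(
    Bucket="example-backups",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Read the settings back to confirm they took effect.
resp = s3.get_public_access_block(Bucket="example-backups")
print(resp["PublicAccessBlockConfiguration"])
```

With these settings enabled, knowing a bucket’s name is no longer enough to list or fetch its contents anonymously.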

The bucket is not believed to have been exposed for long — only since around early September — but it was long enough for U.K. cybersecurity firm Fidus Information Security to find the database.

A review of the database file showed it contained the information of 452,634 players, including about 470 email addresses associated with Wizards’ staff. The database included player names and usernames, email addresses, and the date and time of each account’s creation. The database also had user passwords, which were hashed and salted, making them difficult but not impossible to unscramble.
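For readers unfamiliar with the terms, “hashed and salted” means each password is stored as the output of a one-way function mixed with a random per-user value. The sketch below illustrates the general idea using only Python’s standard library; Wizards has not said which algorithm it actually used.

```python
# Illustrative salted password hashing with a deliberately slow
# key-derivation function (PBKDF2). Not Wizards' actual scheme.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user; stored alongside the hash
    # Many iterations make each brute-force guess expensive.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

The salt defeats precomputed lookup tables, and the slow derivation makes each guess costly — which is why such passwords are difficult, though not impossible, to recover: weak passwords can still fall to offline guessing.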

None of the data was encrypted. The accounts date back to at least 2012, according to our review of the data.

Fidus reached out to Wizards of the Coast but did not hear back. It was only after TechCrunch reached out that the game maker pulled the storage bucket offline.

Bruce Dugan, a spokesperson for the game developer, told TechCrunch in a statement: “We learned that a database file from a decommissioned website had inadvertently been made accessible outside the company.”

“We removed the database file from our server and commenced an investigation to determine the scope of the incident,” he said. “We believe that this was an isolated incident and we have no reason to believe that any malicious use has been made of the data,” but the spokesperson did not provide any evidence for this claim.

“However, in an abundance of caution, we are notifying players whose information was contained in the database and requiring them to reset their passwords on our current system,” he said.

Harriet Lester, Fidus’ director of research and development, said it was “surprising in this day and age that misconfigurations and lack of basic security hygiene still exist on this scale, especially when referring to such large companies with a userbase of over 450,000 accounts.”

“Our research team work continuously, looking for misconfigurations such as this to alert companies as soon as possible to avoid the data falling into the wrong hands. It’s our small way of helping make the internet a safer place,” she told TechCrunch.

The game maker said it informed the U.K. data protection authorities about the exposure, in line with breach notification rules under Europe’s GDPR. The U.K.’s Information Commissioner’s Office did not immediately return an email to confirm the disclosure.

Companies can be fined up to 4% of their annual turnover for GDPR violations.

California’s new data privacy law brings U.S. closer to GDPR

By Walter Thompson
Dimitri Sirota Contributor
Dimitri Sirota is CEO and cofounder of data protection and privacy software company BigID. Sirota is an established serial entrepreneur, investor, mentor, and strategist in the technology and cyber security space.

Data privacy has become one of the defining business and cultural issues of our time.

Companies around the world are scrambling to properly protect their customers’ personal information (PI). However, new regulations have actually shifted the definition of the term, making everything more complicated. With the California Consumer Privacy Act (CCPA) taking effect in January 2020, companies have limited time to get a handle on the customer information they hold and how they need to care for it. If they don’t, they risk not only fines, but also losses of brand reputation and consumer trust — which are immeasurable.

California was one of the first states to provide an express right of privacy in its constitution and the first to pass a data breach notification law, so it was not surprising when state lawmakers in June 2018 passed the CCPA, the nation’s first statewide data privacy law. The CCPA isn’t just a state law — it will become the de facto national standard for the foreseeable future, because the sheer number of Californians means most businesses in the country will have to comply. The requirements aren’t insignificant. Companies will have to disclose to California customers what data of theirs has been collected, and delete it and stop selling it if the customer requests. The fines could easily add up — $7,500 per violation if intentional, $2,500 for those lacking intent, and $750 per affected user in civil damages.
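To make that scale concrete, here is a purely illustrative back-of-the-envelope calculation; the incident size and violation count are hypothetical, and actual penalties are determined case by case.

```python
# Hypothetical CCPA exposure for an incident affecting 10,000 consumers.
FINE_INTENTIONAL = 7_500    # per intentional violation
FINE_UNINTENTIONAL = 2_500  # per violation lacking intent
CIVIL_DAMAGES = 750         # per affected user, in civil actions

affected_users = 10_000     # hypothetical
violations = 100            # hypothetical count, treated as unintentional

regulatory_fines = violations * FINE_UNINTENTIONAL   # $250,000
civil_exposure = affected_users * CIVIL_DAMAGES      # $7,500,000
print(f"Regulatory fines: ${regulatory_fines:,}")
print(f"Civil exposure:   ${civil_exposure:,}")
```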

Evolution of personal information

It used to be that the meaning of personally identifiable information (PII) from a legal standpoint was clear — data that can distinguish the identity of an individual. By contrast, the standard for mere PI was lower because there was so much more of it; if PI is a galaxy, PII is the solar system. However, the CCPA, and the EU’s General Data Protection Regulation (GDPR), which went into effect in 2018, have shifted the definition to include additional types of data that were once fairly benign. The CCPA enshrines personal data rights for consumers, a concept that the GDPR first brought into play.

The GDPR states: “Personal data should be as broadly interpreted as possible,” which includes all data associated with an individual, which we call “contextual” information. This includes any information that can “directly or indirectly” identify a person, including real names and screen names, identification numbers, birth date, location data, network addresses, device IDs, and even characteristics that describe the “physical, physiological, genetic, mental, commercial, cultural, or social identity of a person.” This conceivably could include any piece of information about a person that isn’t anonymized.
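One practical consequence is that routine server logs can themselves contain personal data, since network addresses can “indirectly identify” a person. A common mitigation is to pseudonymize such identifiers before storage. The sketch below shows one widely used approach, truncating the host portion of an IP address; whether this alone amounts to anonymization under the GDPR depends on what other data is retained alongside it.

```python
# Truncate IP addresses so stored logs carry less identifying detail.
import ipaddress

def anonymize_ip(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    if addr.version == 4:
        # Zero the final octet: 203.0.113.42 -> 203.0.113.0
        return str(ipaddress.ip_network(f"{ip}/24", strict=False).network_address)
    # For IPv6, keep only the /48 routing prefix.
    return str(ipaddress.ip_network(f"{ip}/48", strict=False).network_address)

print(anonymize_ip("203.0.113.42"))  # 203.0.113.0
```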

With the CCPA, the United States is playing catch-up to the GDPR and similarly expanding the scope of the definition of personal data. Under the CCPA, personal information is “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” This includes a host of information that typically doesn’t raise red flags on its own but which, when combined with other data, can triangulate to a specific individual — biometric data, browsing history, employment and education data, as well as inferences drawn from any of the relevant information to create a profile “reflecting the consumer’s preferences, characteristics, psychological trends, preferences, predispositions, behavior, attitudes, intelligence, abilities and aptitudes.”

Know the rules, know the data

These regulations aren’t checklist rules; they require big changes to technology and processes, and a rethinking of what data is and how it should be treated. Businesses need to understand what rules apply to them and how to manage their data. Information management has become a business imperative, but most companies lack a clear road map to do it properly. Here are some tips companies can follow to ensure they are meeting the letter and the spirit of the new regulations.

  • Figure out which regulations apply to you

The regulatory landscape is constantly changing, with new rules being adopted at a rapid rate. Every organization needs to know which regulations it must comply with and understand the distinctions between them. Some core aspects the CCPA and GDPR share include data subject rights fulfillment and automated deletion. But there will be differences, so having a platform that allows you to handle a heterogeneous environment at scale is important.

  • Create a privacy compliance team that works well with others

Where top VCs are investing in fintech

By Arman Tabatabai

Over the past several years, ‘fintech’ has quietly become the unsung darling of venture.

A rapidly swelling pool of new startups is taking aim at the large incumbent institutions, complex processes and outdated, unfriendly interfaces that mar billion-dollar financial services verticals such as insurtech, consumer lending and personal finance.

This past summer alone, the startup community saw a multitude of hundred-million-dollar fintech fundraises. In 2018, fintech companies were the source of close to 1,300 venture deals worth over $15 billion in North America and Europe alone, according to data from PitchBook. Over the same period, KPMG estimates that over $52 billion in investment poured into fintech initiatives globally.

With the non-stop stream of venture capital flowing into the never-ending list of spaces that fall under the ‘fintech’ umbrella, we asked 12 leading fintech VCs who work at firms that span early to growth stages to share where they see the most opportunity and how they see the market evolving over the long-term.

The participants touched on a number of key trends in the space, including rapid innovation in fintech infrastructure, fintech companies embedding themselves in specific verticals and platforms, rebundling and unbundling of financial services offerings, the rise of challenger banks and the state of fintech valuations into 2020.

Charles Birnbaum, Partner, Bessemer Venture Partners

The great ‘rebundling’ of fintech innovation is in full swing. The emerging consumer leaders in fintech — Chime, SoFi, Robinhood, Credit Karma, and Bessemer portfolio company Betterment — are moving quickly to increase their share of wallet with their valuable customers and become a one-stop-shop for people’s financial lives.

In 2020, we anticipate continued entrepreneurial activity and investor enthusiasm around the infrastructure and middleware layers within the fintech ecosystem that are enabling further rebundling and a rapid convergence of product themes and business models across the consumer fintech landscape.

Many players now look like potential challenger bank models more akin to what we have seen unfold in Europe the past few years. Within consumer fintech, we at Bessemer are more focused on demographically-specific product offerings that tap into underserved themes, whether that be the financial problems facing the aging population in the US or new models to serve the underbanked or underserved population of consumers and small businesses.

Ian Sigalow, Co-founder & Partner, Greycroft

What trends are you most excited about in fintech from an investing perspective?

I suspect that many enterprise software companies will become fintech companies over time — collecting payments on behalf of customers and growing revenues as those customers grow. We have seen this trend in many industries over the past few years. Business owners generally prefer a model that moves IT expenditures from Operating Expenses into Cost of Goods Sold, because they can increase prices and pass their entire budget on to the customer.

On the consumer side, we have already made investments in branchless banking, insurance (auto, home, health, workers comp), cross-border payments, alternative investments, loyalty cards/services, and roboadvisor services. The companies we funded are already a few years old, and I think we will have some interesting follow-on activity there over the next few years. We have been picking spots where we think we have an unfair competitive advantage.

Our fintech portfolio is also more global than other sectors we invest in. This is because there are opportunities to achieve billion dollar outcomes in fintech, even in countries that are much smaller than the United States. That is not true in many other sectors.

We have also seen trends emerge in the US and move abroad. As an example, we seeded Flutterwave, which is similar to Stripe, and they have expanded across Africa. We were also the lead investor in Yeahka, which is similar to Square in China. These products are heavily localized — for instance, Yeahka is the largest processor of QR code payments in the world, but QR code payments are not yet popular in the US.

How much time are you spending on fintech right now? Is the market under-heated, over-heated, or just right?

Fintech is about a quarter of my time right now. We continue to see interesting new ideas, and valuations have been more or less consistent over time. The broader market doesn’t impact us very much because we tend to have a 10-year holding period.

Are there startups that you wish you would see in the industry but don’t?

A network of ‘camgirl’ sites exposed millions of users and sex workers

By Zack Whittaker

A number of popular “camgirl” sites have exposed millions of sex workers and users after the company running the sites left the back-end database unprotected.

The sites, run by Barcelona-based VTS Media, include amateur.tv, webcampornoxxx.net, and placercams.com. Most of the sites’ users are based in Spain and Europe, but we found evidence of users across the world, including the United States.

According to Alexa traffic rankings, amateur.tv is one of the most popular sites in Spain.

The database, containing months’ worth of daily logs of site activity, was left without a password for weeks. Those logs included detailed records of when users logged in — including usernames and sometimes their user agents and IP addresses, which can be used to identify users. The logs also included users’ private chat messages with other users, as well as promotional emails they were receiving from the various sites. The logs even included failed login attempts, storing usernames and passwords in plaintext. We did not test the credentials, as doing so would be unlawful.
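Credentials should never reach log files in the first place. As a rough sketch of the kind of safeguard that was missing, a redaction filter can scrub sensitive fields before log lines are written; the field names here are hypothetical, and nothing is publicly known about VTS Media’s actual logging stack.

```python
# Scrub password-like fields from log messages before they hit disk.
import logging
import re

SENSITIVE = re.compile(r"(password|passwd|secret)=\S+", re.IGNORECASE)

class RedactingFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SENSITIVE.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record, just scrubbed

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("auth")
logger.addFilter(RedactingFilter())

logger.info("failed login user=alice password=hunter2")
# Logged as: failed login user=alice password=[REDACTED]
```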

The exposed data also revealed which videos users were watching and renting, exposing kinks and private sexual preferences.

In all, the logs were detailed enough to see which users were logging in, from where, and often their email addresses or other identifiable information — which in some cases we could match to real-world identities.

Not only were users affected, the “camgirls” — who broadcast sexual content to viewers — also had some of their account information exposed.

The database was shut off last week, allowing us to publish our findings.

The “camgirl” site, which exposed millions of users’ and sex workers’ account data by failing to protect a backend database with a password. (Image: TechCrunch)

Researchers at Condition:Black, a cybersecurity and internet freedom firm, discovered the exposed database.

“This was a serious failure from a technical and compliance perspective,” said John Wethington, founder of Condition:Black. “After reviewing the sites’ data privacy policy and terms and conditions, it’s clear that users likely had no idea that their activities were being monitored to this level of detail.”

“Users should always take into consideration the implications of their data leaking but especially where the implications could be life altering,” he said.

Data exposures — where companies inadvertently leave their own systems open for anyone to access — have become increasingly common in recent years. Dating sites are among those with the most sensitive data. Earlier this year, group dating site 3Fun exposed over a million users’ data, allowing researchers to view users’ real-time locations without permission. These security lapses can be extremely damaging to users, exposing private sexual encounters and preferences known only to the users themselves. The fallout from the 2015 hack of affair-focused site Ashley Madison resulted in families breaking up and several reports of suicides connected to the breach.

An email to VTS Media bounced over the weekend, and the company could not be reached for comment.

Given both the company and its servers are located in Europe, the exposure of sexual preferences would fall under the “special categories” of GDPR rules, which require more protections. Companies can be fined up to 4% of their annual turnover for GDPR violations.

A spokesperson for the Spanish data protection authority (AEPD) did not respond to a request for comment outside business hours.


Got a tip? You can send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send PGP email with the fingerprint: 4D0E 92F2 E36A EC51 DAAE 5D97 CB8C 15FA EB6C EEA5.
