Last month, American tech companies were dealt two of the most consequential legal decisions they have ever faced. Both of these decisions came from thousands of miles away, in Europe. While companies are spending time and money scrambling to understand how to comply with a single decision, they shouldn’t miss the broader ramification: Europe has different operating principles from the U.S., and is no longer passively accepting American rules of engagement on tech.
In the first decision, Apple objected to and was spared a $15 billion tax bill the EU said was due to Ireland, while the European Commission’s most vocal anti-tech crusader Margrethe Vestager was dealt a stinging defeat. In the second, and much more far-reaching decision, Europe’s courts struck a blow at a central tenet of American tech’s business model: data storage and flows.
American companies have spent decades bundling stores of user data and convincing investors of its worth as an asset. In Schrems, Europe’s highest court ruled that masses of free-flowing user data are, instead, an enormous liability, sowing doubt about the future of the main method companies use to transfer data across the Atlantic.
On the surface, this decision appears to be about data protection. But there is a choppier undertow of sentiment swirling in legislative and regulatory circles across Europe. Namely that American companies have amassed significant fortunes from Europeans and their data, and governments want their share of the revenue.
What’s more, the fact that European courts handed victory to an individual citizen while also handing defeat to one of the commission’s senior leaders shows European institutions are even more interested in protecting individual rights than they are in propping up commission positions. This particular dynamic bodes poorly for the lobbying and influence strategies that many American companies have pursued in their European expansion.
After the Schrems ruling, companies will scramble to build legal teams and data centers that can comply with the court’s decision. They will spend large sums of money on pre-built solutions or cloud providers that can deliver a quick and seamless transition to the new legal reality. What companies should be doing, however, is building a comprehensive understanding of the political, judicial and social realities of the European countries where they do business — because this is just the tip of the iceberg.
American companies need to show Europeans — regularly and seriously — that they do not take their business for granted.
For many years, American tech companies have treated Europe as a market that required minimal, if any, meaningful adaptations for success. If an early-stage company wanted to gain market share in Germany, it would translate its website, add a notice about cookies and find a convenient way to transact in euros. Larger companies wouldn’t add many more layers of complexity to this strategy; perhaps they would establish a local sales office staffed with a European from HQ, hire a German with experience in U.S. companies or sign a local partnership that could help them distribute or deliver their product. Europe, for many small and medium-sized tech firms, was little more than a bigger Canada in a tougher time zone.
Only the largest companies would go to the effort of setting up public policy offices in Brussels, or meaningfully try to understand the noncommercial issues that could affect their license to operate in Europe. The Schrems ruling shows how this strategy isn’t feasible anymore.
American tech must invest in understanding European political realities the same way it does in emerging markets like India, Russia or China, where U.S. tech companies go to great lengths to adapt products to local laws or pull out where they cannot comply. Europe is not just the European Commission, but rather 27 different countries that vote and act on different interests at home and in Brussels.
Governments in Beijing or Moscow refused to accept a reality of U.S. companies setting conditions for them from the outset. After underestimating Europe for years, American companies now need to dedicate headspace to considering how business is materially affected by Europe’s different views on data protection, commerce, taxation and other issues.
This is not to say that American and European values on the internet differ as dramatically as, say, American and Chinese values do. But Europe, from national governments to the EU to the courts, is making it clear that it will not accept a reality where U.S. companies assume they have license to operate the same way they do at home. Where U.S. companies expect light taxation, European governments expect revenue for economic activity. Where U.S. companies expect a clear line between state and federal legislation, Europe offers a messy patchwork of national and international regulation. Where U.S. companies expect that their popularity alone is proof that consumers consent to looser privacy or data protection, Europe reminds them that, across the pond, the state has the last word on the matter.
Many American tech companies understand their commercial risks inside and out but are not prepared to manage the risks that are out of their control. From reputation risk to regulatory risk, they can no longer treat Europe as a like-for-like market with the U.S., and the winners will be those companies that can navigate the legal and political changes afoot. Having a Brussels strategy isn’t enough. Instead, American companies will need to build deeper influence in the member states where they operate. Specifically, they will need to communicate their side of the argument early and often to a wider range of potential allies, from local and national governments in the markets where they operate to civil society activists like Max Schrems.
The world’s offline differences are obvious, and the time when we could pretend that the internet erased them rather than magnified them is quickly ending.
No doubt about it, home fitness is hot. The category had already been gaining considerable traction in recent years and months, but the ongoing pandemic has undoubtedly accelerated interest by orders of magnitude. And understandably so. After all, while some businesses have begun reopening in some locations, gyms are still a big red flag, with one of the highest potential transmission risks of any communal space.
This morning Tempo announced a healthy $60 million Series B, led by Norwest Venture Partners and General Catalyst, along with repeat investors Founders Fund, SignalFire, DCM, Y Combinator and Bling Capital.
The news comes almost exactly a month after Mirror, one of the San Francisco-based company’s chief competitors, was acquired by fitness brand Lululemon for $500 million. Also worth noting here is the continued success of Peloton, whose streaming fitness classes have continued to propel the home fitness equipment maker’s growth. A number of other startups have announced raises in recent weeks, while stalwarts like Technogym have introduced their own home streaming services.
Image Credits: Tempo
The Tempo device runs ~$2,000, plus a $39 monthly membership to its content, which includes strength, cardio and various other exercises as either live streams or on-demand content. Notably, the company says it’s on track to hit a $100 million run rate by year’s end, owing in part to sales that have jumped 500% since the company opened up pre-orders this February (without disclosing actual unit sales).
That’s due, no doubt, to word of mouth, but the company certainly isn’t discounting the role of COVID-19 in its fast success. “With tens of millions unable to go to the gym or attend classes in person, consumers’ fitness needs have evolved,” the company notes in a press release. “App-based services lack the necessary equipment to be effective for most people, while previous smart devices often do little more than stream videos without two-way guidance.”
ClimaCell, the weather forecasting and intelligence service that is using a number of interesting new techniques to gather weather data, today announced that it has raised a $23 million Series C round co-led by new investor Pitango Growth and existing investor Square Peg Capital. With this new round, the Boston- and Tel Aviv-based company’s total funding now exceeds $100 million.
As ClimaCell co-founder and CEO Shimon Elkabetz told me, the round came together well after the worldwide COVID-19 lockdowns had started, and the team never met with its new investors in person. Because the pandemic affected many of ClimaCell’s customers in the travel industry, the company did take some steps in recent months to reduce costs and extend its overall runway, but Elkabetz stressed that the company didn’t need to raise this new round and that the investors approached the company.
“We took some aggressive but respectful actions around reducing our expenses and created a significant runway,” Elkabetz explained. “We didn’t really need to raise money now, but this opportunity came to us and we decided to take it, because it gives us a significant opportunity to invest in strategic things.”
Given the changing business climate, the company did double down on its efforts to brand its service as an intelligence platform that helps businesses make smart decisions about their operations, even if they are not meteorologists. In practice, this means a stronger focus on its Insights service, which helps operators in various industries make smart decisions based on the company’s forecasts. With this, ClimaCell can help a construction company ensure that a worksite is safe when a storm is coming and tell it when it should shut down its crane operations because of wind, for example, or when a logistics company should expect slowdowns because of heavy rains. Instead of just giving its users a weather forecast, the company’s tools provide actionable suggestions.
“65% of the world’s GDP is being impacted by weather events. ClimaCell is the only SaaS company that enables actionable items ahead of weather events rather than reacting to them and their implications and ramifications,” said Aaron Mankovski, Managing General Partner at Pitango Growth, in today’s announcement. “The opportunities coming to ClimaCell across industries including supply chain and logistics, railroads, trucking, shipping, on-demand, energy, insurance, and more represent a complete upending of the existing competitive landscape and is a testament to being laser-focused on customer value.”
Elkabetz noted that the company plans to use the new funding to expand both its go-to-market efforts and to focus on the fundamental R&D that makes its platform work. He wasn’t quite ready to share what those R&D efforts will look like, but he expects to be able to announce these new capabilities “soon.”
The company also expects to launch some updates to its consumer mobile app soon. While the consumer app may not be ClimaCell’s main focus, it uses the same technology on the backend, including a version of Insights for leisure activities, for example. For Elkabetz, the consumer app helps spread the ClimaCell brand, but he also expects that it can become a real business in its own right.
In three years, Zachariah Reitano’s startup, Ro, has managed to hit a reported $1.5 billion valuation on the strength of its transformation from a company focused on treating erectile dysfunction into a telemedicine service for a range of elective and urgent care-focused treatments.
Through Rory for women’s health, Roman for men’s health, and Zero for smoking cessation, Reitano’s company now treats 20 conditions including sexual health, weight loss, dermatology, allergies and more, according to a statement from the company.
Image Credit: Zero
Ro also has a new pharmacy business, Ro Pharmacy, which is an online cash pay pharmacy offering over 500 generic medications for just $5 per month per drug. And the company is getting into the weight loss business through a partnership with the private equity-backed health care company, Gelesis.
Ro is also becoming a gateway into patient acquisition for primary care providers through Ribbon Health, and a test case for the use of Pfizer’s Greenstone service, which provides certification that a generic drug is validated by one of the major pharmaceutical companies.
The company’s $1.5 billion valuation is courtesy of a new $200 million investment from existing investors led by General Catalyst and including FirstMark Capital, Torch, SignalFire, TQ Ventures, Initialized Capital, 3L, and BoxGroup. First-time investor The Chernin Group also participated. In all, Ro has raised $376 million since it launched in 2017.
“This new investment will further our mission to become every patient’s first call. We’ll continue to invest in our vertically-integrated healthcare ecosystem, from our Collaborative Care Center to our national pharmacy operating system. This is just the beginning of Ro’s patient-centered healthcare platform.”
It’s all part of the company’s mission to provide a point of entry into the healthcare system independent of insurance qualifications.
“Telehealth companies like Ro are using technology to address long-standing healthcare disparities that have been exacerbated by Covid-19,” said Dr. Joycelyn Elders, MD, Ro Medical Advisor and Former US Surgeon General. “By empowering providers to leverage their skills as efficiently and effectively as possible, Ro delivers affordable, high-quality care regardless of a patient’s location, insurance status, or physical access to physicians and pharmacies.”
Ro’s new financing is one of several forays by tech investors into reshaping the healthcare system at a time when patient care has been severely disrupted by attempts to mitigate the spread of COVID-19.
Digital medicine is assuming a central position in the healthcare world with most consultations now occurring online. Reimbursement schemes for telemedicine have changed dramatically and investors see an opportunity to capitalize on these changes by aggressively backing the expansion plans of companies looking to bring digital healthcare directly to consumers.
That’s one of the reasons why Ro’s major competitor, Hims, is reported to be seeking access to public markets through its sale to a Special Purpose Acquisition Company for roughly $1 billion, according to Reuters.
In 2019, UnitedHealthcare’s health-services arm, Optum, rolled out a machine learning algorithm to 50 healthcare organizations. With the aid of the software, doctors and nurses were able to monitor patients with diabetes, heart disease and other chronic ailments, as well as help them manage their prescriptions and arrange doctor visits. Optum is now under investigation after research revealed that the algorithm allegedly recommends paying more attention to white patients than to sicker Black patients.
Today’s data and analytics leaders are charged with creating value with data. Given their skill set and purview, they are also in the organizationally unique position to be responsible for spearheading ethical data practices. Lacking an operationalizable, scalable and sustainable data ethics framework raises the risk of bad business practices, violations of stakeholder trust, damage to a brand’s reputation, regulatory investigation and lawsuits.
Here are four key practices that chief data officers/scientists and chief analytics officers (CDAOs) should employ when creating their own ethical data and business practice framework.
The CDAO must identify and execute on the economic opportunity for analytics, and with opportunity comes risk. Whether the use of data is internal — for instance, increasing customer retention or supply chain efficiencies — or built into customer-facing products and services, these leaders need to explicitly identify and mitigate risk of harm associated with the use of data.
A great way to begin building ethical data practices is to look to an existing group, such as a data governance board, that already tackles questions of privacy, compliance and cyber-risk. Dovetailing an ethics framework with existing infrastructure increases the probability of successful and efficient adoption. Alternatively, if no such body exists, a new one should be created with relevant experts from within the organization. The data ethics governing body should be responsible for formalizing data ethics principles and operationalizing those principles for products or processes in development or already deployed.
All analytics and AI projects require a data collection and analysis strategy. Ethical data collection must, at a minimum, include: securing informed consent when collecting data from people; ensuring legal compliance, such as adhering to GDPR; anonymizing personally identifiable information so that it cannot reasonably be reverse-engineered to reveal identities; and protecting privacy.
Some of these standards, like privacy protection, do not necessarily have a hard and fast level that must be met. CDAOs need to assess the right balance between what is ethically wise and how their choices affect business outcomes. These standards must then be translated to the responsibilities of product managers who, in turn, must ensure that the front-line data collectors act according to those standards.
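The anonymization standard above can be made concrete with a small sketch. This is illustrative only, not any particular company's pipeline: the key, field names and token length are all assumptions, and a keyed hash is pseudonymization rather than full anonymization, so it would be one layer among several.

```python
import hmac
import hashlib

# Hypothetical key for illustration only; in practice it would live in
# a secrets manager and be rotated, never hardcoded.
SECRET_KEY = b"example-key-stored-in-a-vault"

def pseudonymize(record, pii_fields=("name", "email")):
    """Replace direct identifiers with keyed hashes.

    A keyed HMAC, rather than a plain hash, prevents dictionary attacks
    against guessable values such as email addresses. Note this is
    pseudonymization, not full anonymization: quasi-identifiers like
    birth date or zip code need separate treatment, such as
    generalization or suppression.
    """
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hmac.new(SECRET_KEY, out[field].encode("utf-8"), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]  # opaque, stable token
    return out

record = {"name": "Ada Lovelace", "email": "ada@example.com", "plan": "pro"}
cleaned = pseudonymize(record)
# Non-PII fields pass through unchanged; identifiers become opaque tokens.
```

Because the tokens are deterministic, records from different sources can still be joined for analysis without exposing raw identities.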
CDAOs also must take a stance on algorithmic ethics and transparency. For instance, should an AI-driven search function or recommender system strive for maximum predictive accuracy, providing a best guess as to what the user really wants? Is it ethical to micro-segment, limiting the results or recommendations to what other “similar people” have clicked on in the past? And is it ethical to include results or recommendations that are not, in fact, predictive, but profit-maximizing to some third party? How much algorithmic transparency is appropriate, and how much do users care? A strong ethical blueprint requires tackling these issues systematically and deliberately, rather than pushing these decisions down to individual data scientists and tech developers who lack the training and experience to make them.
Division and product managers need guidance on how to anticipate inequitable and biased outcomes. Inequalities and biases can arise due simply to data collection imbalances — for instance, a facial recognition tool that has been trained on 100,000 male faces and 5,000 female faces will likely perform differently across genders. CDAOs must help ensure balanced and representative data sets.
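An imbalance like the facial recognition example above can be caught with a simple audit before training. The sketch below is a minimal illustration, assuming group labels are available; the function name and the 5:1 policy threshold are invented for the example, and the right threshold is exactly the kind of judgment the data ethics governing body should make.

```python
from collections import Counter

def audit_balance(group_labels, max_ratio=5.0):
    """Compare demographic group sizes in a training set to a policy limit.

    group_labels: iterable with one group label per training example.
    max_ratio: assumed policy that the largest group may be at most
    max_ratio times the size of the smallest.
    """
    counts = Counter(group_labels)
    ratio = max(counts.values()) / min(counts.values())
    return counts, ratio, ratio <= max_ratio

# The example from the text: 100,000 male faces versus 5,000 female
# faces is a 20:1 imbalance, well past a 5:1 policy limit.
counts, ratio, balanced = audit_balance(["male"] * 100_000 + ["female"] * 5_000)
```

Running this before model training turns "ensure balanced data sets" from a slogan into a gate a product manager can actually enforce.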
Other biases are less obvious, but just as important. In 2019, Apple Card and Goldman Sachs were accused of gender bias when extending higher credit lines to men than women. Though Goldman Sachs maintained that creditworthiness — not gender — was the driving factor in credit decisions, the fact that women have historically had fewer opportunities to build credit likely meant that the algorithm favored men.
To mitigate inequities, CDAOs must help tech developers and product managers alike navigate what it means to be fair. While computer science literature offers myriad metrics and definitions of fairness, developers cannot reasonably choose one in the absence of collaborations with the business managers and external experts who can offer deep contextual understanding of how data will eventually be used. Once standards for fairness are chosen, they must also be effectively communicated to data collectors to ensure adherence.
CDAOs often build analytics capacity in one of two ways: via a center of excellence, in service to an entire organization, or a more distributed model, with data scientists and analytics investments committed to specific functional areas, such as marketing, finance or operations. Regardless of organizational structure, the processes and rubrics for identifying ethical risk must be clearly communicated and appropriately incentivized.
Key steps include:
CDAOs are charged with the strategic use and deployment of data to drive revenue with new products and to create greater internal consistencies. Too many business and data leaders today attempt to “be ethical” by simply weighing the pros and cons of decisions as they arise. This short-sighted view creates unnecessary reputational, financial and organizational risk. Just as a strategic approach to data requires a data governance program, good data governance requires an ethics program. Simply put, good data governance is ethical data governance.
Speed sells. When Tesla launches a new vehicle or updates an existing one, the car company often leads with the 0-60 mph time. These numbers often outclass those of gasoline cars thanks to how electric motors deliver power. The figures are largely irrelevant to daily driving, and yet Tesla, like most automakers, treats them as a critical marketing statistic.
Ford today unveiled a special edition of its forthcoming four-door electric Mustang. It’s fast because, as mentioned above, speed sells. Seven electric motors produce a total of 1,400 HP, which, to put it in layman’s terms, is a shit-ton of power.
Ford doesn’t intend to sell this example. The car company says this vehicle was built to explore the limits of electric vehicle technology — and, clearly, to show it off to the public.
The upcoming Ford Mustang Mach-E will come in a performance trim called the GT. While it will only have two electric motors instead of seven, it won’t be a slouch. The two motors will produce 459 HP, which is plenty of power to thrill.
This is Ford’s second special edition Mustang Mach-E. The Mustang Cobra Jet, unveiled earlier this year, also sports 1,400 HP, but in a different configuration that’s primarily designed to go fast in a straight line.
These concept Mustangs build excitement among key demographics, much like Tesla’s Insane and Ludicrous modes build excitement around its vehicles. Ford is in a tight spot with the Mustang Mach-E: it needs to show buyers that this four-door electric vehicle is worthy of the Mustang nameplate. And what are Mustangs known for? Affordable excitement.
The Mustang Mach-E is set to be Ford’s first modern electric vehicle, and so far, Ford is following a different path than General Motors did when it launched its first electric vehicle, the Chevy Bolt. By all accounts, the Chevy Bolt is an excellent electric vehicle with a low price tag, decent range and quick acceleration. But Chevy positioned it as a boring people mover. The Mustang Mach-E has similar people-moving capacity, but Ford upped the excitement with the Mustang name and by marketing its performance.
There’s an old automotive adage that winning races produces sales. “Win on Sunday, sell on Monday” spoke of a time when NASCAR vehicles were similar to their road-worthy counterparts. That’s no longer the case. NASCAR vehicles rarely share any parts with what’s available on a dealer’s lot, but the adage is still relevant. Instead of NASCAR, automakers are looking at winning in the world of YouTube, where views are as critical as a checkered flag.
Tesla’s first vehicle was a reworked Lotus coupe. At the time, most electric cars were designed for moving people and goods. They were utilitarian. The Tesla Roadster had little utility but had a lot of excitement. From there, Tesla moved on to the Model S and quickly built out its performance capability by adding dual motors and tuning them to beat a Porsche to 60 miles per hour. When launching the Model X SUV, the automaker often showed it beating supercars in drag races because, once again, speed sells even if owners rarely use the power.
The UK lacks a comprehensive and cohesive high-level strategy to respond to the cyber threat posed by Russia and other hostile states using online disinformation and influence ops to target democratic institutions and values, a parliamentary committee has warned in a long-delayed report that was finally published today.
“The UK is clearly a target for Russia’s disinformation campaigns and political influence operations and must therefore equip itself to counter such efforts,” the committee warns, calling for legislation to tackle the multi-pronged threat posed by hostile foreign influence operations in the digital era.
The report also urges the government to do the legwork of attributing state-backed cyber attacks — recommending a tactic of ‘naming and shaming’ perpetrators, while recognizing that UK agencies have, since the WannaCry attack, been more willing to publicly attribute a cyber attack to a state actor like Russia than they were in decades past. (Last week the government did just that in relation to COVID-19 vaccine R&D efforts — accusing Russia of targeting the work with custom malware, as UK ministers sought to get out ahead of the committee’s recommendations.)
“Russia’s cyber capability, when combined with its willingness to deploy it in a malicious capacity, is a matter of grave concern, and poses an immediate and urgent threat to our national security,” the committee warns.
On the threat posed to democracy by state-backed online disinformation and influence campaigns, the committee also points a finger of blame at social media giants for “failing to play their part”.
“It is the social media companies which hold the key and yet are failing to play their part,” the committee writes, urging the government to establish “a protocol” with platform giants to ensure they “take covert hostile state use of their platforms seriously, and have clear timescales within which they commit to removing such material”.
“Government should ‘name and shame’ those which fail to act,” the committee adds, suggesting such a protocol could be “usefully expanded” to other areas where the government is seeking action from platforms giants.
The Intelligence and Security Committee (ISC) prepared the dossier for publication last year, after conducting a lengthy enquiry into Russian state influence in the UK — including examining how money from Russian oligarchs flows into the country, and especially into London, via wealthy ex-pats and their establishment links; as well as looking at Russia’s use of hostile cyber operations to attempt to influence UK elections.
UK prime minister Boris Johnson blocked publication ahead of last year’s general election — meaning it’s taken a full nine months for the report to make it into the public domain, despite the then committee chair urging publication ahead of polling day. The UK’s next election, meanwhile, is likely around half a decade away. (Related: Johnson was able to capitalize on unregulated social media ads during his own election campaign last year, so, er… )
The DCMS committee, which was one of the bodies that submitted evidence to the ISC’s inquiry, has similarly been warning for years about the threats posed to democracy by online disinformation and political targeting — as have the national data watchdog and others. Yet successive Conservative-led governments have failed to act on urgent recommendations in this area.
Last year ministers set out a proposal to regulate a broad swathe of ‘online harms’, although the focus is not specifically on political disinformation — and draft legislation still hasn’t been laid before parliament.
“The clearest requirement for immediate action is for new legislation,” the ISC committee writes of the threat posed by Russia. “The Intelligence Community must be given the tools it needs and be put in the best possible position if it is to tackle this very capable adversary, and this means a new statutory framework to tackle espionage, the illicit financial dealings of the Russian elite and the ‘enablers’ who support this activity.”
The report labels foreign disinformation operations and online influence campaigns something of a “hot potato” no UK agency wants to handle. A key gap the report highlights is this lack of ministerial responsibility for combating the democratic threat posed by hostile foreign states, leveraging connectivity to spread propaganda or deploy malware.
“Protecting our democratic discourse and processes from hostile foreign interference is a central responsibility of Government, and should be a ministerial priority,” the committee writes, flagging both the lack of central, ministerial responsibility and a reluctance by the UK’s intelligence and security agencies to involve themselves in actively defending democratic processes.
“Whilst we understand the nervousness around any suggestion that the intelligence and security Agencies might be involved in democratic processes – certainly a fear that is writ large in other countries – that cannot apply when it comes to the protection of those processes. And without seeking in any way to imply that DCMS [the Department for Digital, Culture, Media and Sport] is not capable, or that the Electoral Commission is not a staunch defender of democracy, it is a question of scale and access. DCMS is a small Whitehall policy department and the Electoral Commission is an arm’s length body; neither is in the central position required to tackle a major hostile state threat to our democracy.”
Last July the government did announce what it called its Defending Democracy programme, which — per the ISC committee report — is intended to “co-ordinate work on protecting democratic discourse and processes from interference under the leadership of the Cabinet Office, with the Chancellor of the Duchy of Lancaster and the Deputy National Security Adviser holding overall responsibility at ministerial and official level respectively”.
However, the committee points out that this structure is “still rather fragmented”, noting that at least ten separate teams are involved across government.
It also questions the level of priority being attached to the issue, writing that: “It seems to have been afforded a rather low priority: it was signed off by the National Security Council only in February 2019, almost three years after the EU referendum campaign and the US presidential election which brought these issues to the fore.”
“In the Committee’s view, a foreign power seeking to interfere in our democratic processes – whether it is successful or not – cannot be taken lightly; our democracy is intrinsic to our country’s success and well-being and any threat to it must be treated as a serious national security issue by those tasked with defending us,” it adds.
The lack of an overarching ministerial body invested with central responsibility to tackle online threats to democracy goes a long way to explaining the damp squib of a response around breaches of UK election law which relate to the Brexit vote — when social media platforms were used to funnel in dark money to fund digital ads aimed at influencing the outcome of what should have been a UK-only vote.
(A redacted footnote in the report touches on the £8M donation by Arron Banks to the Leave.EU campaign — “the biggest donor in British political history”; noting how the Electoral Commission, which had been investigating the source of the donation, referred the case to the National Crime Agency — “which investigated it ***” [redacting any committee commentary on what was or was not found by the NCA]; before adding: “In September 2019, the National Crime Agency announced that it had concluded the investigation, having found no evidence that any criminal offences had been committed under the Political Parties, Elections and Referendums Act 2000 or company law by any of the individuals or organisations referred to it by the Electoral Commission.”)
“The regulation of political advertising falls outside this Committee’s remit,” the ISC report adds, under a brief section on ‘Political advertising on social media’. “We agree, however, with the DCMS Select Committee’s conclusion that the regulatory framework needs urgent review if it is to be fit for purpose in the age of widespread social media.
“In particular, we note and affirm the Select Committee’s recommendation that all online political adverts should include an imprint stating who is paying for it. We would add to that a requirement for social media companies to co-operate with MI5 where it is suspected that a hostile foreign state may be covertly running a campaign.”
On Brexit itself, and the heavily polarizing question of how much influence Russia was able to exert over the UK’s vote to leave the European Union, the committee suggests this would be “difficult” or even “impossible” to assess. But it emphasizes: “it is important to establish whether a hostile state took deliberate action with the aim of influencing a UK democratic process, irrespective of whether it was successful or not.”
The report then goes on to query the lack of evidence of an attempt by the UK government or security agencies to do just that.
In one interesting — and heavily redacted — paragraph, the committee notes it sought to ascertain whether UK intelligence agencies hold “secret intelligence” that might support or supplement open source studies that have pointed to attempts by Russia to influence the Brexit vote — but was sent only a very brief response.
Here the committee writes:
In response to our request for written evidence at the outset of the Inquiry, MI5 initially provided just six lines of text. It stated that ***, before referring to academic studies. This was noteworthy in terms of the way it was couched (***) and the reference to open source studies ***. The brevity was also, to us, again, indicative of the extreme caution amongst the intelligence and security Agencies at the thought that they might have any role in relation to the UK’s democratic processes, and particularly one as contentious as the EU referendum. We repeat that this attitude is illogical; this is about the protection of the process and mechanism from hostile state interference, which should fall to our intelligence and security Agencies.
The report also records a gap in the government’s response on this issue — with the committee being told of no active attempt by government to understand whether or not UK elections have been targeted by Russia.
“The written evidence provided to us appeared to suggest that HMG had not seen or sought evidence of successful interference in UK democratic processes or any activity that has had a material impact on an election, for example influencing results,” it writes.
A later redacted paragraph indicates an assessment by the committee that the government failed to fully take into account open source material which had indicated attempts to influence Brexit (such as the studies of attempts to influence the referendum using Russian state mouthpieces RT and Sputnik; or via social media campaigns).
“Given that the Committee has previously been informed that open source material is now fully represented in the Government’s understanding of the threat picture, it was surprising to us that in this instance it was not,” the committee adds.
The committee also raises an eyebrow at the lack of any post-referendum analysis of Russian attempts to influence the vote by UK intelligence agencies — which it describes as in “stark contrast” to the US agency response following the revelations of Russian disinformation operations targeted at the 2016 US presidential election.
“Whilst the issues at stake in the EU referendum campaign are less clear-cut, it is nonetheless the Committee’s view that the UK Intelligence Community should produce an analogous assessment of potential Russian interference in the EU referendum and that an unclassified summary of it be published,” it suggests.
In other recommendations related to Russia’s “offensive cyber” capabilities, the committee reiterates that there’s a need for “a common international approach” to tackling the threat.
“It is clear there is now a pressing requirement for the introduction of a doctrine, or set of protocols, to ensure that there is a common approach to Offensive Cyber. While the UN has agreed that international law, and in particular the UN Charter, applies in cyberspace, there is still a need for a greater global understanding of how this should work in practice,” it writes, noting that it made the same recommendation in its 2016-17 annual report.
“It is imperative that there are now tangible developments in this area in light of the increasing threat from Russia (and others, including China, Iran and the Democratic People’s Republic of Korea). Achieving a consensus on this common approach will be a challenging process, but as a leading proponent of the Rules Based International Order it is essential that the UK helps to promote and shape Rules of Engagement, working with our allies.”
The security-cleared committee notes that the public report is a redacted summary of a more detailed dossier it felt unable to publish, on account of classified information and the risk that Russia could use it to glean too much about the extent of UK intelligence on its activities. It therefore opted for a more truncated (and redacted) document than it would usually publish — which again raises questions over why Johnson sought repeatedly to delay publication.
Plenty of sections of the report contain a string of asterisks at a crucial point, eliding strategic specifics (e.g. this paragraph on exactly how Russia is targeting critical UK infrastructure: “Russia has also undertaken cyber pre-positioning activity on other nations’ Critical National Infrastructure (CNI). The National Cyber Security Centre (NCSC) has advised that there is *** Russian cyber intrusion into the UK’s CNI – particularly marked in the *** sectors.”)
Most recently, Number 10 tried to parachute a preferred candidate into the ISC chairmanship — which could have further delayed publication of the report. However the attempt at stacking the committee was thwarted when the new chair, Conservative MP Julian Lewis, sided with opposition MPs to vote for himself. After which the newly elected committee voted unanimously to release the Russia report before the summer recess of parliament, avoiding another multi-month delay.
Another major chunk of the report tackles the topic of Russian expatriate oligarchs and their money: how they’ve been welcomed into UK society with “open arms”, enabling their illicit finance to be recycled through “the London ‘laundromat’” and to find its way inexorably into political party coffers. This may explain the government’s reluctance for the report to be made public.
“It is widely recognised that the key to London’s appeal was the exploitation of the UK’s investor visa scheme, introduced in 1994, followed by the promotion of a light and limited touch to regulation, with London’s strong capital and housing markets offering sound investment opportunities,” the committee writes, further noting that Russian money was also invested in “extending patronage and building influence across a wide sphere of the British establishment – PR firms, charities, political interests, academia and cultural institutions were all willing beneficiaries of Russian money, contributing to a ‘reputation laundering’ process”.
“In brief, Russian influence in the UK is ‘the new normal’, and there are a lot of Russians with very close links to Putin who are well integrated into the UK business and social scene, and accepted because of their wealth,” it adds.
You can read the full report here.
In the wake of yesterday’s landmark ruling by Europe’s top court — striking down a flagship transatlantic data transfer framework called Privacy Shield, and cranking up the legal uncertainty around processing EU citizens’ data in the US in the process — Europe’s lead data protection regulator has fired its own warning shot at the region’s data protection authorities (DPAs), essentially telling them to get on and do the job of intervening to stop people’s data flowing to third countries where it’s at risk.
Countries like the U.S.
The original complaint that led to the Court of Justice of the EU (CJEU) ruling focused on Facebook’s use of a data transfer mechanism called Standard Contractual Clauses (SCCs) to authorize moving EU users’ data to the US for processing.
Complainant Max Schrems asked the Irish Data Protection Commission (DPC) to suspend Facebook’s SCC data transfers in light of US government mass surveillance programs. Instead the regulator went to court to raise wider concerns about the legality of the transfer mechanism.
That in turn led Europe’s top judges to nuke the Commission’s adequacy decision which underpinned the EU-US Privacy Shield — meaning the US no longer has a special arrangement greasing the flow of personal data from the EU. Yet, at the time of writing, Facebook is still using SCCs to process EU users’ data in the US. Much has changed but the data hasn’t stopped flowing — yet.
Yesterday the tech giant said it would “carefully consider” the findings and implications of the CJEU decision on Privacy Shield, adding that it looked forward to “regulatory guidance”. It certainly didn’t offer to proactively flip a kill switch and stop the processing itself.
Ireland’s DPA, meanwhile, which is Facebook’s lead data regulator in the region, sidestepped questions over what action it would be taking in the wake of yesterday’s ruling — saying it (also) needed (more) time to study the legal nuances.
The DPC’s statement also only went so far as to say the use of SCCs for taking data to the US for processing is “questionable” — adding that case by case analysis would be key.
The regulator remains the focus of sustained criticism in Europe over its enforcement record for major cross-border data protection complaints — with still zero decisions issued more than two years after the EU’s General Data Protection Regulation (GDPR) came into force, and an ever growing backlog of open investigations into the data processing activities of platform giants.
In May, the DPC finally submitted its first draft decision on a cross-border case (an investigation into a Twitter security breach) to other DPAs for review, saying it hoped the decision would be finalized in July. At the time of writing we’re still waiting for the bloc’s regulators to reach consensus on that.
The painstaking pace of enforcement around Europe’s flagship data protection framework remains a problem for EU lawmakers — whose two-year review last month called for uniformly “vigorous” enforcement by regulators.
The European Data Protection Supervisor (EDPS) made a similar call today, in the wake of the Schrems II ruling — which only looks set to further complicate the process of regulating data flows by piling yet more work on the desks of underfunded DPAs.
“European supervisory authorities have the duty to diligently enforce the applicable data protection legislation and, where appropriate, to suspend or prohibit transfers of data to a third country,” writes EDPS, Wojciech Wiewiórowski, in a statement which warns against further dithering or can-kicking on the intervention front.
“The EDPS will continue to strive, as a member of the European Data Protection Board (EDPB), to achieve the necessary coherent approach among the European supervisory authorities in the implementation of the EU framework for international transfers of personal data,” he goes on, calling for more joint working by the bloc’s DPAs.
Wiewiórowski’s statement also highlights what he dubs “welcome clarifications” regarding the responsibilities of data controllers and European DPAs — to “take into account the risks linked to the access to personal data by the public authorities of third countries”.
“As the supervisory authority of the EU institutions, bodies, offices and agencies, the EDPS is carefully analysing the consequences of the judgment on the contracts concluded by EU institutions, bodies, offices and agencies. The example of the recent EDPS’ own-initiative investigation into European institutions’ use of Microsoft products and services confirms the importance of this challenge,” he adds.
Part of the complexity of enforcement of Europe’s data protection rules is the lack of a single authority; a varied patchwork of supervisory authorities responsible for investigating complaints and issuing decisions.
Now, with a CJEU ruling that calls for regulators to assess third countries themselves — to determine whether the use of SCCs is valid in a particular use-case and country — there’s a risk of further fragmentation should different DPAs jump to different conclusions.
Yesterday, in its response to the CJEU decision, Hamburg’s DPA criticized the judges for not also striking down SCCs, saying it was “inconsistent” for them to invalidate Privacy Shield yet allow this other mechanism for international transfers. Supervisory authorities in Germany and Europe must now quickly agree how to deal with companies that continue to rely illegally on the Privacy Shield, the DPA warned.
In the statement Hamburg’s data commissioner, Johannes Caspar, added: “Difficult times are looming for international data traffic.”
He also shot off a blunt warning that: “Data transmission to countries without an adequate level of data protection will… no longer be permitted in the future.”
Compare and contrast that with the Irish DPC talking about use of SCCs being “questionable”, case by case. (Or the UK’s ICO offering this bare minimum.)
Caspar also emphasized the challenge facing the bloc’s patchwork of DPAs to develop and implement a “common strategy” towards dealing with SCCs in the wake of the CJEU ruling.
In a press note today, Berlin’s DPA also took a tough line, warning that data transfers to third countries would only be permitted if they have a level of data protection essentially equivalent to that offered within the EU.
In the case of the US — home to the largest and most used cloud services — Europe’s top judges yesterday reiterated very clearly that that is not in fact the case.
“The CJEU has made it clear that the export of data is not just about the economy but people’s fundamental rights must be paramount,” Berlin data commissioner Maja Smoltczyk said in a statement [which we’ve translated using Google Translate].
“The times when personal data could be transferred to the US for convenience or cost savings are over after this judgment,” she added.
Both DPAs warned the ruling has implications for the use of cloud services in which data is processed in other third countries where the protection of EU citizens’ data cannot be guaranteed either — i.e. not just the US.
On this front, Smoltczyk name-checked China, Russia and India as countries EU DPAs will have to assess for similar problems.
“Now is the time for Europe’s digital independence,” she added.
Some commentators (including Schrems himself) have also suggested the ruling could see companies switching to local processing of EU users’ data. Though it’s also interesting to note the judges chose not to invalidate SCCs — thereby offering a path to legal international data transfers, but only provided the necessary protections are in place in the given third country.
Also issuing a response to the CJEU ruling today was the European Data Protection Board (EDPB). Aka the body made up of representatives from DPAs across the bloc. Chair Andrea Jelinek put out an emollient statement, writing that: “The EDPB intends to continue playing a constructive part in securing a transatlantic transfer of personal data that benefits EEA citizens and organisations and stands ready to provide the European Commission with assistance and guidance to help it build, together with the U.S., a new framework that fully complies with EU data protection law.”
Short of radical changes to US surveillance law it’s tough to see how any new framework could be made to legally stick, though. Privacy Shield’s predecessor arrangement, Safe Harbour, stood for around 15 years. Its shiny ‘new and improved’ replacement didn’t even last five.
In the wake of the CJEU ruling, data exporters and importers are required to carry out an assessment of a country’s data regime to assess adequacy with EU legal standards before using SCCs to transfer data there.
“When performing such prior assessment, the exporter (if necessary, with the assistance of the importer) shall take into consideration the content of the SCCs, the specific circumstances of the transfer, as well as the legal regime applicable in the importer’s country. The examination of the latter shall be done in light of the non-exhaustive factors set out under Art 45(2) GDPR,” Jelinek writes.
“If the result of this assessment is that the country of the importer does not provide an essentially equivalent level of protection, the exporter may have to consider putting in place additional measures to those included in the SCCs. The EDPB is looking further into what these additional measures could consist of.”
Again, it’s not clear what “additional measures” a platform could plausibly deploy to ‘fix’ the gaping lack of redress afforded to foreigners by US surveillance law. Major legal surgery does seem to be required to square this circle.
Jelinek said the EDPB would be studying the judgement with the aim of putting out more granular guidance in future. But her statement warns data exporters they have an obligation to suspend data transfers or terminate SCCs if contractual obligations are not or cannot be complied with, or else to notify a relevant supervisory authority if it intends to continue transferring data.
In her roundabout way, she also warns that DPAs now have a clear obligation to terminate SCCs where the safety of data cannot be guaranteed in a third country.
“The EDPB takes note of the duties for the competent supervisory authorities (SAs) to suspend or prohibit a transfer of data to a third country pursuant to SCCs, if, in the view of the competent SA and in the light of all the circumstances of that transfer, those clauses are not or cannot be complied with in that third country, and the protection of the data transferred cannot be ensured by other means, in particular where the controller or a processor has not already itself suspended or put an end to the transfer,” Jelinek writes.
One thing is crystal clear: Any sense of legal certainty US cloud services were deriving from the existence of the EU-US Privacy Shield — with its flawed claim of data protection adequacy — has vanished like summer rain.
In its place, a sense of déjà vu and a lot more work for lawyers.
Western intelligence agencies say they’ve found evidence that Russian cyber espionage is targeting efforts to develop a coronavirus vaccine in a number of countries.
In an advisory report, the UK’s National Cyber Security Centre (NCSC) said the Russia-linked cyber espionage group commonly known as ‘APT29’ — which is also sometimes referred to as ‘the Dukes’ or ‘Cozy Bear’ — has targeted various organisations involved in medical R&D and COVID-19 vaccine development in Canada, the US and the UK throughout 2020.
Per the report, APT29 is using custom malware known as ‘WellMess’ and ‘WellMail’ to target a number of organisations globally, including those involved with COVID-19 vaccine development.
WellMess and WellMail have not previously been publicly associated with APT29, it notes.
The NCSC, which is a public facing branch of the UK’s GCHQ intelligence agency, said it considers it “highly likely” that the intention of the malware attacks is to steal information and IP related to the development and testing of COVID-19 vaccines.
The findings in the report are also endorsed by Canada’s Communications Security Establishment (CSE) and the US National Security Agency (NSA).
“In recent attacks targeting COVID-19 vaccine research and development, the group conducted basic vulnerability scanning against specific external IP addresses owned by the organisations. The group then deployed public exploits against the vulnerable services identified,” the advisory adds.
It concludes by assessing APT29 is “likely” to continue to target organisations involved in COVID-19 vaccine R&D — as “they seek to answer additional intelligence questions relating to the pandemic”.
“It is strongly recommended that organisations use the rules and IOCs [indicators of compromise] in the [report] appendix in order to detect the activity detailed in this advisory,” it adds, flagging compromise indicators and detection and mitigation advice contained in the document.
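In practice, consuming an IOC list like the one in the advisory's appendix means matching known-bad indicators against network and host logs. A minimal sketch of that kind of check — using invented indicator values, not the actual IOCs the NCSC published:

```python
# Hypothetical indicators of compromise (IOCs) -- illustrative values only,
# not the real indicators from the NCSC advisory appendix.
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.22"}
MALICIOUS_DOMAINS = {"wellmess-c2.example.com"}

def find_ioc_hits(log_lines):
    """Return the log lines that mention a known-bad IP or domain."""
    hits = []
    for line in log_lines:
        ip_match = any(ip in line for ip in MALICIOUS_IPS)
        domain_match = any(dom in line for dom in MALICIOUS_DOMAINS)
        if ip_match or domain_match:
            hits.append(line)
    return hits

# Invented firewall/DNS log lines for illustration.
logs = [
    "2020-07-16 10:01:02 ALLOW tcp 10.0.0.5 -> 203.0.113.7:443",
    "2020-07-16 10:01:03 DNS query wellmess-c2.example.com",
    "2020-07-16 10:01:04 ALLOW tcp 10.0.0.5 -> 192.0.2.1:80",
]
print(find_ioc_hits(logs))  # the first two lines match
```

Real deployments would feed the published Yara/Snort rules and indicators into a SIEM rather than a script like this, but the matching logic is the same idea.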
Responding to the advisory, the UK government condemned what it called Russia’s “irresponsible” cyber attacks against COVID-19 vaccine development.
“It is completely unacceptable that the Russian Intelligence Services are targeting those working to combat the coronavirus pandemic,” said foreign secretary, Dominic Raab, in a statement. “While others pursue their selfish interests with reckless behaviour, the UK and its allies are getting on with the hard work of finding a vaccine and protecting global health.”
“The UK will continue to counter those conducting such cyber attacks, and work with our allies to hold perpetrators to account,” he added.
Last month EU lawmakers named Russia and China as states behind major disinformation campaigns related to the coronavirus which they said had targeted Internet users in the region.
The NCSC advisory follows hard on the heels of an assertion by Raab that Russia attempted to influence the 2019 UK election via the online amplification of leaked documents.
“On the basis of extensive analysis, the government has concluded that it is almost certain that Russian actors sought to interfere in the 2019 general election through the online amplification of illicitly acquired and leaked government documents,” Raab said in a statement yesterday.
The Guardian reports that UK intelligence agencies have spent months investigating how a 451-page dossier of official emails ended up with the opposition Labour party during the election campaign — providing an opportunity for then leader Jeremy Corbyn to make political capital out of details related to UK-US trade talks.
Back in 2017 the former Conservative prime minister, Theresa May, also warned publicly that Russia was trying to meddle in Western elections. However she failed to act on a series of recommendations from a parliamentary committee that scrutinized the democratic threats posed by online disinformation.
The timing of this latest flurry of Russian cyberops warnings from UK state sources is especially interesting in light of a much delayed report by the UK parliament’s Intelligence & Security Committee (ISC) into Russia’s role in election interference.
Publication of this report was blocked last year on orders of prime minister, Boris Johnson. But, this week, an attempt by Number 10 to install Chris Grayling, a former secretary of state for transport, as chair of the ISC was thwarted after Conservative MP Julian Lewis sided with opposition MPs to vote for himself as new committee chair instead. Publication of the long delayed Russia report is now imminent, after the committee voted unanimously for it to be released next week before parliament breaks for the summer.
Last November The Guardian newspaper reported that the dossier examines allegations Russian money has flowed into British politics in general and to the Conservative party in particular; as well as looking into claims Russia launched a major influence operation in 2016 in support of Brexit.
In 2017, under pressure from the DCMS committee, Facebook admitted Russian agents had used its platform to try to interfere in the UK’s referendum on EU membership — though it claimed not to have found “significant coordination” of ad buys or political misinformation targeting the Brexit vote.
Last year, former ISC chair, Dominic Grieve, called for the Russia report to be published before election day — saying it contained knowledge “germane” to voters. Instead, Johnson blocked publication — going on to be elected with a huge Conservative majority.
In this pandemic world, in-person meetings are a thing of the past. Most meetings these days are done via video conference, and no company has capitalized on the shift quite like Zoom.
Macro, a new FirstMark-backed company, is looking to capitalize on the capitalization. To Capitalism!
Sorry. Let’s get back on track. Macro is a native app that employs the Zoom SDK to add depth and analysis to your daily work meetings.
There are two modes. The first is essentially focused on collaboration, which turns the usual Zoom meeting into a light overlay, where folks are shown in small, circular bubbles at the top of the screen. This mode is to be used when folks are working on the same project, such as a wireframe or a collaborative document. The UI is meant to kind of fade into the background, allowing users to click on tabs or objects behind other attendees’ bubbles.
The other mode is an Arena or Stadium mode, which is meant for hands-on meetings and presentations. It has two distinct features. The first is an Airtime feature, which shows how long different participants have ‘had the floor’ — over the past five minutes, the past thirty minutes, or in total during the meeting. The second is a text-input system on the right side of the UI that lets people enter Questions, Takeaways, Action Items and Insights from the call.
Macro automatically adds that text to a Google Doc, and formats it into something instantly shareable.
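Macro hasn't published how that export works; as a rough illustration of the idea, here is a sketch that groups tagged meeting inputs into a shareable plain-text summary (the tags, entries and output layout are assumptions, not Macro's actual format):

```python
from collections import defaultdict

# Hypothetical meeting entries tagged by category, mimicking (not reproducing)
# Macro's Questions / Takeaways / Action Items / Insights inputs.
entries = [
    ("Action Items", "Ship wireframe revisions by Friday"),
    ("Questions", "Do we support SSO at launch?"),
    ("Takeaways", "Design review moves to Tuesdays"),
    ("Action Items", "Book user interviews"),
]

def format_notes(entries):
    """Group entries by category and render a plain-text summary."""
    grouped = defaultdict(list)
    for category, text in entries:
        grouped[category].append(text)
    sections = []
    for category in ("Questions", "Takeaways", "Action Items", "Insights"):
        if grouped[category]:
            items = "\n".join(f"- {t}" for t in grouped[category])
            sections.append(f"{category}\n{items}")
    return "\n\n".join(sections)

print(format_notes(entries))
```

The real product writes into a Google Doc via Google's APIs; the grouping-and-rendering step sketched here is the shareable-formatting part of that pipeline.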
There is no extra hassle involved in getting Macro up and running. When a user installs Macro on their computer, they’re instantly loaded into Macro each time they click a Zoom link, whether it’s in an email, a calendar invite, or in Slack.
Macro cofounders Ankith Harathi and John Keck explained to TechCrunch that this isn’t your usual enterprise play. The product is free to use and, with the Google Doc export, is still useful even as a single-player product. The Google Doc is auto-formatted with Macro messaging, explaining that it was compiled by the company with a link to the product.
In other words, Harathi and Keck want to see individuals within organizations get Macro for themselves and let the product grow organically within an organization, rather than trying to sell to large teams right off the bat.
“A lot of collaborative productivity SaaS applications need your whole team to switch over to get any value out of them,” said Harathi. “That’s a pretty big barrier, especially since so many new products are coming out and teams are constantly switching and that creates a lot of noise. So our plan was to ensure one person can use this and get value out of it, and nobody else is affected. They get the better interface and other team members will want to switch over without any requirement to do so.”
This is possible in large part because the Zoom SDK costs nothing to use. The heavy lifting of audio and video is handled by Zoom, as is the high compute cost. This means Macro can offer its product for free at relatively little cost to the company as it tries to grow.
Of course, there is some risk involved with building on an existing platform. Namely, one Zoom platform change could wreak havoc on Macro’s product or model. However, the team has plans to expand beyond Zoom to other video conferencing platforms like Google, BlueJeans, WebEx, etc. Roelof Botha told TechCrunch back in May that businesses built on other platforms have a much greater chance of success when there is more than one platform in that sector, as there certainly is here.
And there seems to be some competition for Macro in particular — for one, Microsoft Teams just added some new features to its video conferencing UI to relieve brain fatigue and Hello is looking to offer app-free video chat via browser.
Macro is also looking to add additional functionality to the platform, such as the ability to integrate an agenda into the meeting and break up the accompanying Google doc by agenda item.
The company has raised a total of $4.8 million since launch, including a new $4.3 million seed round from FirstMark Capital, General Catalyst and Underscore VC. Other investors include NextView Ventures, Jason Warner (GitHub CTO), Julie Zhuo (former Facebook VP of design), Harry Stebbings (founder/host of 20minVC), Adam Nash (Dropbox, Wealthfront, LinkedIn) and Clark Valberg (InVision CEO), among others.
Macro has more than 25,000 users and has been a part of 50,000 meetings to date.
As expected, BigCommerce has filed to go public. The Austin, Texas-based e-commerce company raised over $200 million while private. The company’s IPO filing lists a $100 million placeholder figure for its IPO raise, giving us directional indication that this IPO will be in the lower, and not upper, nine-figure range.
BigCommerce, similar to public market darling Shopify, provides e-commerce services to merchants. Given how enamored public investors are with its Canadian rival, the timing of BigCommerce’s debut is utterly unsurprising and is prima facie intelligent.
Of course, we’ll know more when it prices. Today, however, the timing appears fortuitous.
BigCommerce is a SaaS business, meaning that it sells a digital service for a recurring payment. For more on how it derives revenue from customers, head here. For our purposes what matters is that public investors will classify it along with a very popular — today’s trading notwithstanding — market segment.
Starting with broad strokes, here’s how the company performed in 2019 compared to 2018, and Q1 2020 in contrast to Q1 2019:
BigCommerce didn’t grow too quickly in 2019, but its Q1 2020 expansion pace is much better. BigCommerce will file an S-1/A covering Q2 2020, we expect; it can’t go public without sharing more about its recent financial performance.
If the company’s revenue growth acceleration continues in the most recent period — bearing in mind that e-commerce as a segment has proven attractive to many businesses during the COVID-19 pandemic — BigCommerce’s IPO timing would appear even more intelligent than it did at first blush. Investors love growth acceleration.
Moving from revenue growth to revenue quality, BigCommerce’s Q1 2020 gross margins came in at 77.5%, a solid SaaS result. In Q1 2019 its gross margin was 76.8%, a slightly worse figure. Still, improving gross margins are popular as they indicate that future cash flows will grow at a faster clip than revenues, all else held equal.
In 2018 BigCommerce lost $38.9 million on a GAAP basis. Its net loss expanded modestly to $42.6 million in 2019 — a larger dollar figure in gross terms, but a slimmer percentage of its yearly top line. You can read those results however you’d like. In Q1 2020, however, things got better, as the company’s GAAP net loss fell to $4 million from its year-ago Q1 result of $10.5 million.
The BigCommerce business is growing more slowly than I had anticipated, but its overall operational health is better than I expected.
A few other notes, before we tear deeper into its S-1 filing tomorrow morning. BigCommerce’s adjusted EBITDA, a metric that gives a distorted, partial view of a company’s profitability, improved along similar lines to its net income, falling from -$9.2 million in Q1 2019 to -$5.7 million in Q1 2020.
The company’s cash flow is, akin to its adjusted EBITDA, worse than its net loss figures would have you guess. BigCommerce’s operating activities consumed $10 million in Q1 2020, an improvement from its Q1 2019 operating cash burn of $11.1 million.
The company is further in debt than many SaaS companies, but not so far as to be a problem. BigCommerce’s long-term debt, net of its current portion, was just over $69 million at the end of Q1 2020. It’s not a nice figure, per se, but it is one small enough that a good IPO haul could sharply reduce it while still providing a good amount of working capital for the business.
Investors listed in its IPO document include Revolution, General Catalyst, GGV Capital, and SoftBank.
Everywhere you go, you are being followed. Not by some creep in a raincoat, but by the advertisers wanting to sell you things.
The more advertisers know about you — where you go, which shops you visit, and what purchases you make — the more they can profile you, understand your tastes, your hobbies and interests, and use that information to target you with ads. You can thank the phone in your pocket — the apps on it, to be more accurate — that invisibly spits out gobs of data about you as you go about your day.
Your location, chief among the data, is by far the most revealing.
Apps, just like websites, are filled with trackers that send your real-time location to data brokers. In return, these data brokers sell on that data to advertisers, while the app maker gets a cut of the money. If you let your weather app know your location to serve you the forecast, you’re also giving your location to data brokers.
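To make the mechanics concrete, here’s a minimal, purely hypothetical sketch of the kind of record a third-party tracking SDK embedded in an app might assemble and ship to a broker. The field names, the advertising ID and the app identifier are all invented for illustration; real SDKs and their endpoints vary.

```python
import json
import time

def build_tracking_payload(ad_id, lat, lon, app_id):
    """Bundle a resettable advertising ID with a timestamped location fix —
    the shape of record a data broker can join across many apps."""
    return {
        "advertising_id": ad_id,        # device-level ad identifier
        "lat": round(lat, 4),           # ~11m precision, enough to pinpoint a home
        "lon": round(lon, 4),
        "timestamp": int(time.time()),  # when the location fix was taken
        "app": app_id,                  # which app carried the tracker
    }

payload = build_tracking_payload(
    "38400000-8cf0-11bd-b23e-10b96e40000d", 30.2672, -97.7431,
    "com.example.weather")
print(json.dumps(payload))
```

Because the advertising ID stays stable across apps until the user resets it, a broker receiving records like this from a weather app, a game and a shopping app can stitch them into one movement history for the same device.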
By collecting your location data, these data brokers have access to intensely personal aspects of your life and can easily build a map of everywhere you go. This data isn’t just for advertising. Immigration authorities have bought access to users’ location data to help catch the undocumented. In one case, a marketing firm used location data harvested from phones to predict the race, age, and gender of Black Lives Matter protesters. It’s an enormous industry, said to be worth at least $200 billion.
It’s only in recent years that it has become possible to learn what these data brokers know about us. But the law is slowly catching up. Anyone in Europe can ask to obtain or delete their data under the GDPR rules. California’s new consumer privacy law grants California residents access to their data.
But because so many data brokers collect and resell that data, the data marketplace is a fragmented mess, making it impossible to know which companies have your data. That can make requesting it a nightmare.
Jordan Wright, a senior security architect at Duo Security, requested his data from some of the biggest data brokers in the industry, citing California’s new consumer privacy law. Not all went to plan. Because he is an out-of-state resident, only one of the 14 data brokers approved his request and sent him his data.
What came back was a year’s worth of location data.
Wright works in cybersecurity and knows better than most how much data spills out of his phone. But he takes precautions, and is careful about the apps he puts on his phone. Yet the data he got back knew where he lives, where he works, and where he took his family on holiday before the pandemic hit.
“It’s frustrating not fully knowing what data has been collected or shared and by whom,” he wrote in a blog post. “The reality is that dozens of companies are monitoring the location of hundreds of millions of unsuspecting people every single day.”
Avoiding this invasive tracking is nearly impossible. Just like with web ad tracking, you have little choice but to accept the app’s terms. Allow the tracking, or don’t use the app.
But the winds are changing and there is an increasing appetite to rein in the data brokers and advertising giants by kneecapping their data collection efforts. As privacy became a more prominent selling point for phone buyers, Apple and Google, makers of the two dominant smartphone operating systems, began in recent years to curb the growing power of data brokers.
Both iPhones and Android devices now let you opt out of ad tracking, a move that doesn’t reduce the number of ads you see but prevents advertisers from tracking you across the web or between apps.
Apple threw down the gauntlet last month when it said its next software update, iOS 14, would let users opt out of app tracking altogether, dealing a severe blow to data brokers and advertisers by reducing the amount of data that these ad giants collect on millions of people without their explicit and direct consent. That prompted an angry letter from the Interactive Advertising Bureau, an industry trade group representing online advertisers, which expressed its “strong concerns” and effectively asked Apple to back down from the plans.
Google also plans to roll out new app controls for location data in its next Android release.
It’s not the only effort taking on data brokers but it’s been the most effective — so far. Lawmakers are scrambling to find bipartisan support for a proposed federal data protection agency before the end of the year, when Congress resets and enters a new legislative session.
Shy of an unlikely fix by Washington, it’s up to the tech giants to send a message.
LA-based bioscience startup Kernel has raised $53 million from investors including General Catalyst, Khosla Ventures, Eldridge, Manta Ray Ventures, Tiny Blue Dot and more. The funding is the first outside money that Kernel has taken in, though it’s labeled a Series C round because founder and CEO Bryan Johnson has provided $54 million in investment to Kernel to date. Johnson also participated in this latest round alongside the external investors.
The funding will go towards further scaling “on-demand” access to its non-invasive technology for recording brain activity, which consists of two main approaches. Kernel has distinguished these as two separate products: Flux, which detects magnetic fields created by the collective activity of neurons in the brain; and Flow, which measures blood flow through the brain. These are both key signals that researchers and medical practitioners monitor when working with the brain, but typically they require use of invasive, expensive hardware – or even brain surgery.
Kernel’s goal is to make this much more broadly available, offering access via a ‘Neuroscience as a Service’ (NaaS) model that can provide paying clients access to its brain imaging devices even remotely. Earlier this year, Kernel announced that this platform was available generally to commercial customers.
The technology sounds like sci-fi – but it’s really an attempt to take a technology that has been relatively closed, prohibitively costly, dependent on expert operators and potentially dangerous to its subjects, and make it available as an on-demand capability – in much the same way that many human genome companies have emerged to take advantage of advances in the speed and availability of human genome sequencing to do the same for the business and research community.
Johnson’s ambitious long-term goal with the company is to ultimately develop a much deeper understanding in the field of neuroscience.
“If we can quantify thoughts and emotions, conscious and subconscious, a new era of understanding, wellness, and human improvement will emerge,” Johnson writes in a press release.
It’s true that the brain’s inner workings are still largely a mystery to most researchers, especially in terms of how they translate to our cognition, feelings and actions. Kernel’s platform could mean significantly more people studying the brain.
The direct-to-consumer health insurer Oscar has raised another $225 million in its latest, late-stage round of funding as its vision of tech-enabled healthcare services to drive down consumer costs becomes more and more of a reality.
In an effort to prevent patients’ potential exposure to the novel coronavirus, COVID-19, most healthcare practices are seeing patients remotely via virtual consultations, and more patients are embracing digital health services voluntarily, which reduces costs for insurers and potentially provides better access to basic healthcare needs. Indeed, Oscar now has a $2 billion revenue base to point to and a fresh pile of cash from which to draw.
“Transforming the health insurance experience requires the creation of personalized, affordable experiences at scale,” said Mario Schlosser, the co-founder and chief executive of Oscar.
Oscar’s insurance customers are among the most active users of telemedicine of any insurance provider in the U.S., according to the company. Around 30% of patients with insurance plans from the company have used telemedical services, versus only 10% of the country as a whole.
The new late-stage funding for Oscar includes new investors Baillie Gifford and Coatue, two late-stage investors that typically come in before a public offering. Previous investors, including Alphabet, General Catalyst, Khosla Ventures, Lakestar and Thrive Capital, also participated in the round.
With the new funding, Oscar was able to shrug off the latest criticisms and controversies that swirled around the company and its relationship with White House official Jared Kushner as the president prepared his response to the COVID-19 epidemic.
As the Atlantic reported, engineers at Oscar spent days building a standalone website that would ask Americans to self report their symptoms and, if at risk, direct them to a COVID-19 test location. The project was scrapped within days of its creation, according to the same report.
The company now offers its services in 15 states and 29 U.S. cities, with more than 420,000 members in individual, Medicare Advantage and small group products, the company said.
As Oscar gets more ballast on its balance sheet, it may be readying itself for a public offering. The insurer wouldn’t be the first new startup to test public investor appetite for new listings. Lemonade, which provides personal and home insurance, has already filed to go public.
Oscar’s investors and executives may be watching closely to see how that listing performs. Despite Lemonade’s anemic target, the public market response could signal that more startups in the insurance space could make lemonade from frothy market conditions — even as employment numbers and the broader national economy continue to suffer from pandemic-induced economic shocks.
It’s more than two years since a flagship update to the European Union’s data protection regime moved into the application phase. Yet the General Data Protection Regulation (GDPR) has been dogged by criticism of a failure of enforcement related to major cross-border complaints — lending weight to critics who claim the legislation has created a moat for dominant multinationals, at the expense of smaller entities.
While EU lawmakers’ top-line message is the clear claim: ‘GDPR is working’ — with commissioners lauding what they couched as the many positives of this “modern and horizontal piece of legislation”; which they also said has become a “global reference point” — they conceded there is a “very serious to-do list”, calling for uniformly “vigorous” enforcement of the regulation across the bloc.
So, in other words, GDPR decisions need to flow more smoothly than they have so far.
Speaking at a Commission briefing today, Věra Jourová, Commission VP for values and transparency, said: “The European Data Protection Board and the data protection authorities have to step up their work to create a truly common European culture — providing more coherent and more practical guidance, and work on vigorous but uniform enforcement.
“We have to work together, as the Board and the Member States, to address concerns — in particular those of the small and medium enterprises.”
Justice commissioner, Didier Reynders, also speaking at the briefing, added: “We have to ensure that [GDPR] is applied harmoniously — or at least with the same vigour across the European territory. There may be some nuanced differences but it has to be applied with the same vigour.
“In order for that to happen data protection authorities have to be sufficiently equipped — they have to have the relevant number of staff, the relevant budgets, and there is a clear will to move in that direction.”
Front and center for GDPR enforcement is the issue of resourcing for national data protection authorities (DPAs), who are tasked with providing oversight and issuing enforcement decisions.
Jourová noted today that EU DPAs — taken as a whole — have increased headcount by 42% and budget by 49% between 2016 and 2019.
However, that’s an aggregate that conceals major differences in resourcing. A recent report by pro-privacy browser Brave found that half of all national DPAs receive just €5M or less in annual budget from their governments, for example. Brave also found that budget increases peaked around the GDPR’s date of application — and that, two years in, governments are now slowing the increases.
It’s also true that DPA case load isn’t uniform across the bloc, with certain Member States (notably Ireland and Luxembourg) handling many more and/or more complex complaints than others as a result of how many multinationals locate their regional HQs there.
One key issue for GDPR thus relates to how the regulation handles cross border cases.
A one-stop-shop mechanism was supposed to simplify this process by having a single regulator (typically in the country where the business has its main establishment) take the lead on complaints that affect users in multiple Member States, while other interested DPAs do not deal directly with the data processor. But they do remain involved — and, once there’s a draft decision, play an important role, as they can raise objections to whatever the lead regulator has decided.
However, a lot of friction seems to be creeping into current processes, from technical issues related to sharing data between DPAs to the opportunity for additional legal delays.
In the case of big tech, GDPR’s one-stop-shop has resulted in a major backlog around enforcement, with multiple complaints being re-routed via Ireland’s Data Protection Commission (DPC) — which is yet to issue a single decision on a cross border case. And has more than 20 such investigations ongoing.
Last month Ireland’s DPC trailed looming decisions on Twitter and Facebook — saying it had submitted a draft decision in the Twitter case to fellow DPAs and expressing hope that the case could be finalized in July.
Its data protection commissioner, Helen Dixon, had previously suggested the first cross-border decisions would be coming in “early” 2020. In the event, we’re past halfway through the year, still with no enforcement on show.
This looks especially problematic as there is a counter example elsewhere in the EU: France’s CNIL managed to issue a decision in a major GDPR case against Google all the way back in January 2019. Last week the country’s top court for administrative law cemented the regulator’s findings — dismissing Google’s appeal. Its $57M fine against Google remains the largest yet levied against big tech under GDPR.
Asked directly whether the Commission believes Ireland’s DPC is sufficiently resourced — with the questioner noting it has multiple ongoing investigations into Facebook, in particular, with still no decisions taken on the company — Jourová emphasized DPAs are “fully independent”, before adding: “The Commission has no tools to push them to speed up but the cases you mention, especially the cases that relate to big tech, are always complex and they require thorough investigation — and it simply requires more time.”
However CNIL’s example shows effective enforcement against major tech platforms is possible — at least, where there’s a will to take on corporate power. Though France’s relative agility may also have something to do with not having to deal simultaneously with such a massive load of complex cross-border cases.
At the same time, critics point to Ireland’s cosy political relationship with the corporate giants it attracts via low tax rates — which in turn raises plenty of questions when set against the oversized role its DPA has in overseeing most of big tech. The stench of forum shopping is unmistakable.
Criticism of national regulators extends beyond Ireland, though. In the UK, privacy experts have slammed the ICO’s repeated failure to enforce the law against the adtech industry — despite its own assessments finding systemic flouting of the law. The country remains an EU Member State until the end of the year — and the ICO is the best-resourced DPA in the bloc, in terms of budget and headcount (and likely tech expertise too). Which hardly reflects well on the functional state of the regulation.
Despite all this, the Commission continues to present GDPR as a major geopolitical success, claiming — as it did again today — that it’s ahead of the digital regulatory curve globally at a time when lawmakers almost everywhere are considering putting harder limits on Internet players.
But there’s only so long it can sell a success on paper. Without consistently “vigorous” enforcement, the whole framework crumbles — so the EU’s executive has serious skin in the game when it comes to GDPR actually doing what it says on the tin.
Pressure is coming from commercial quarters too — not only privacy and consumer rights groups.
Earlier this year, Brave lodged a complaint with the Commission against 27 EU Member States — accusing them of under resourcing their national data protection watchdogs. It called on the EU executive to launch an infringement procedure against national governments, and refer them to the bloc’s top court if necessary. So startups are banging the drum for enforcement too.
If decision wheels don’t turn on their own, courts may eventually be needed to force Europe’s DPAs to get a move on — albeit, the Commission is still hoping it won’t have to come to that.
“We saw a considerable increase of capacities both in Ireland and Luxembourg,” said Jourová, discussing the DPA resourcing issue. “We saw a sufficient increase in at least half of other Member States DPAs so we have to let them do very responsible and good work — and of course wait for the results.”
Reynders suggested that while there has been an increase in resource for DPAs the Commission may need to conduct a “deeper” analysis — to see if more resource is needed in some Member States, “due to the size of the companies at work in the jurisdiction of such a national authority”.
“We have huge differences between the Member States about the need to react to the requests from the companies. And of course we need to reinforce the cooperation and the co-ordination on cross border issues. We need to be sure that it’s possible for all the national authorities to work together. And in the network of national authorities it’s the case — and with the Board [EDPB] it’s possible to organize that. So we’ll continue to work on it,” he said.
“So it’s not only a question to have the same kind of approach in all the Member States. It’s to be fit to all the demands coming in your jurisdiction and it’s true that in some jurisdictions we have more multinationals and more members of high tech companies than in others.”
“The best answer will be a decision from the Irish data protection authority about important cases,” he added.
We’ve reached out to the Irish DPC and the EDPB for comment on the Commission’s GDPR assessment. The former put out a report yesterday summarizing its regulatory activity over the past two years — in which it notes that since May 2018, when GDPR began being applied, it has opened 24 cross-border inquiries and 53 national inquiries. It further notes: “Through Supervision action, the DPC has brought about the postponement or revision of six planned big tech projects with implications for the rights and freedoms of individuals.”
Update: The DPC’s deputy commissioner, Graham Doyle, told us: “The Irish DPC has grown considerably in terms of staffing in recent years, from 30 staff in 2014 to 140 staff today. This growth will continue in 2020 and we expect to have 175 staff by year end. However, we must continue to increase these resources beyond 2020, including further expansion of specialist resources, e.g. technologists and legal specialists, to ensure that we can continue to fulfil the important role that we have.”
Asked whether the Commission has a list of Member States that it might instigate infringement proceedings against related to the terms of GDPR — which, for example, require governments to provide adequate resourcing to their national DPA in order that they can properly oversee the regulation — Reynders said it doesn’t currently have such a list.
“We have a list of countries where we try to see if it’s possible to reinforce the possibilities for the national authorities to have enough resources — human resources, financial resources, to organize better cross border activities — if at the end we see there’s a real problem about the enforcement of the GDPR in one Member State we will propose to go maybe to the court with an infringement proceeding — but we don’t have, for the moment, a list of countries to organize such a kind of process,” he said.
The commissioners were a lot more comfortable talking up the positives of GDPR, with Jourová noting, with a sphinx-like smile, how three years ago there was “literal panic” and an army of lobbyists warning of a “doomsday” for business and innovation should the legislation pass. “I have good news today — no doomsday was here,” she said.
“Our approach to the GDPR was the right one,” she went on. “It created the more harmonized rules across the Single Market and more and more companies are using GDPR concepts, such as privacy by design and by default, as a competitive differentiation.
“I can say that the philosophy of one continent, one law is very advantageous for European small and medium enterprises who want to operate on the European Single Market.
“In general GDPR has become a truly European trade mark,” she added. “It puts people and their rights at the center. It does not leave everything to the market like in the US. And it does not see data as a means for state supervision, as in China. Our truly European approach to data is the first answer to difficult questions we face as a society.”
“It makes us pause before facial recognition technology, for instance, will be fully developed or implemented. And I dare to say that it makes Europe fit for the digital age. On the international side the GDPR has become a reference point — with a truly global convergence movement. In this context we are happy to support trade and safe digital data flows and work against digital protectionism.”
Another success the commissioners credited to the GDPR framework is the region’s relatively swift digital response to the coronavirus — with the regulation helping DPAs to more quickly assess the privacy implications of COVID-19 contacts tracing apps and tools.
Reynders lauded “a certain degree of flexibility in the GDPR” which he said had been able to come into play during the crisis, feeding into discussions around tracing apps — on “how to ensure protection of personal data in the context of such tracing apps linked to public and individual health”.
Under its to-do list, other areas of work the Commission cited today included ensuring DPAs provide more such support related to the application of the regulation by coming out with guidelines related to other new technologies. “In various new areas we will have to be able to provide guidance quickly, just as we did on the tracing apps recently,” noted Reynders.
Further increasing public awareness of GDPR and the rights it affords is another Commission focus — though it said more than two-thirds of EU citizens above the age of 16 have at least heard of the GDPR. But it wants citizens to be able to make what Reynders called “best use” of their rights, perhaps via new applications.
“So the GDPR provides support to innovation in this respect,” he said. “And there’s a lot of work that still needs to be done in order to strengthen innovation.”
“We also have to convince those who may still be reticent about the GDPR. Certain companies, for instance, who have complained about how difficult it is to implement it. I think we need to explain to them what the requirements of the GDPR are and how they can implement them,” he added.
France’s top court for administrative law has dismissed Google’s appeal against a $57M fine issued by the data watchdog last year for not making it clear enough to Android users how it processes their personal information.
The State Council issued the decision today, affirming the data watchdog CNIL’s earlier finding that Google did not provide “sufficiently clear” information to Android users — which in turn meant it had not legally obtained their consent to use their data for targeted ads.
“Google’s request has been rejected,” a spokesperson for the Conseil D’Etat confirmed to TechCrunch via email.
“The Council of State confirms the CNIL’s assessment that information relating to targeting advertising is not presented in a sufficiently clear and distinct manner for the consent of the user to be validly collected,” the court also writes in a press release [translated with Google Translate] on its website.
It found the size of the fine to be proportionate — given the severity and ongoing nature of the violations.
Importantly, the court also affirmed the jurisdiction of France’s national watchdog to regulate Google — at least on the date when this penalty was issued (January 2019).
The CNIL’s multimillion dollar fine against Google remains the largest to date against a tech giant under Europe’s flagship General Data Protection Regulation (GDPR) — lending the case a certain symbolic value, for those concerned about whether the regulation is functioning as intended vs platform power.
While the size of the fine is still relative peanuts vs Google’s parent entity Alphabet’s global revenue, changes the tech giant may have to make to how it harvests user data could be far more impactful to its ad-targeting bottom line.
Under European law, for consent to be a valid legal basis for processing personal data it must be informed, specific and freely given. Or, to put it another way, consent cannot be strained.
In this case French judges concluded Google had not provided clear enough information for consent to be lawfully obtained.
It also objected to a pre-ticked checkbox — which it said does not meet the requirements of the GDPR.
So, tl;dr, the CNIL’s decision has been entirely vindicated.
Reached for comment on the court’s dismissal of its appeal, a Google spokeswoman sent us this statement:
People expect to understand and control how their data is used, and we’ve invested in industry-leading tools that help them do both. This case was not about whether consent is needed for personalised advertising, but about how exactly it should be obtained. In light of this decision, we will now review what changes we need to make.
GDPR came into force in 2018, updating long standing European data protection rules and opening up the possibility of supersized fines of up to 4% of global annual turnover.
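That 4% ceiling is easy to put in numbers. Here’s a rough sketch of the Article 83(5) fine cap; the turnover figure is illustrative only (in the ballpark of Alphabet’s 2019 revenue), since the actual cap depends on the preceding year’s audited worldwide turnover:

```python
def gdpr_max_fine(global_annual_turnover_eur):
    """GDPR Article 83(5): administrative fines of up to EUR 20 million or
    4% of total worldwide annual turnover of the preceding financial year,
    whichever is higher."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

# Illustrative only: ~EUR 145B is roughly Alphabet's 2019 revenue in euros.
cap = gdpr_max_fine(145e9)
cnil_fine = 50e6  # the EUR 50M (~$57M) CNIL penalty against Google

print(f"theoretical cap: EUR {cap / 1e9:.1f}B")  # EUR 5.8B
print(f"actual fine as share of cap: {cnil_fine / cap:.1%}")
```

On those assumptions the CNIL penalty comes to well under 1% of the theoretical maximum, which is why even the bloc’s largest GDPR fine to date reads as “relative peanuts.”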
However actions against big tech have largely stalled, with scores of complaints being funnelled through Ireland’s Data Protection Commission — on account of a one-stop-shop mechanism in the regulation — causing a major backlog of cases. The Irish DPC has yet to issue decisions on any cross border complaints, though it has said its first ones are imminent — on complaints involving Twitter and Facebook.
Ireland’s data watchdog is also continuing to investigate a number of complaints against Google, following a change Google announced to the legal jurisdiction of where it processes European users’ data — moving them to Google Ireland Limited, based in Dublin, which it said applied from January 22, 2019 — with ongoing investigations by the Irish DPC into a long running complaint related to how Google handles location data and another major probe of its adtech, to name two.
On the GDPR one-stop shop mechanism — and, indirectly, the wider problematic issue of ‘forum shopping’ and European data protection regulation — the French State Council writes: “Google believed that the Irish data protection authority was solely competent to control its activities in the European Union, the control of data processing being the responsibility of the authority of the country where the main establishment of the data controller is located, according to a ‘one-stop-shop’ principle instituted by the GDPR. The Council of State notes however that at the date of the sanction, the Irish subsidiary of Google had no power of control over the other European subsidiaries nor any decision-making power over the data processing, the company Google LLC located in the United States with this power alone.”
In its own statement responding to the court’s decision, the CNIL also notes its view that GDPR’s one-stop-shop mechanism was not applicable in this case — writing that: “It did so by applying the new European framework as interpreted by all the European authorities in the guidelines of the European Data Protection Committee.”
Privacy NGO noyb — one of the privacy campaign groups which lodged the original ‘forced consent’ complaint against Google, all the way back in May 2018 — welcomed the court’s decision on all fronts, including the jurisdiction point.
Commenting in a statement, noyb’s honorary chairman, Max Schrems, said: “It is very important that companies like Google cannot simply declare themselves to be ‘Irish’ to escape the oversight by the privacy regulators.”
A key question is whether CNIL — or another (non-Irish) EU DPA — will be found to be competent to sanction Google in future, following its shift to naming Google Ireland as the processor of European users’ data.
French digital rights group, La Quadrature du Net — which had filed a related complaint against Google, feeding the CNIL’s investigation — also declared victory today, noting it’s the first sanction in a number of GDPR complaints it has lodged against tech giants on behalf of 12,000 citizens.
Nouvelle victoire ! [New victory!]
— La Quadrature du Net (@laquadrature) June 19, 2020
“The rest of the complaints against Google, Facebook, Apple and Microsoft are still under investigation in Ireland. In any case, this is what this authority promises us,” it adds in another tweet.
If your venture fund was not one of the ten investors that backed Reliance Jio Platforms in recent weeks, you won’t be able to plough cash into the fast-growing top Indian telecom network for at least a few quarters now as it is no longer scouting for fresh deals.
Reliance Jio Platforms, which has raised $15.2 billion in the past nine weeks, said today that the $1.5 billion investment from Saudi Arabia’s PIF on Thursday marked the “end of Jio Platforms’ current phase of induction of financial partners.”
Mukesh Ambani, who controls Reliance Industries (the parent firm of Jio Platforms and a range of other businesses), said that Jio Platforms and Reliance Retail, the largest retail chain in the country, “have received strong interest from strategic and financial investors,” but he will now “induct leading global partners in these businesses in the next few quarters.”
India’s richest man added that he plans to publicly list both Jio Platforms and Reliance Retail within the next five years. “With these initiatives, I have no doubt that your company will have one of the strongest balance sheets in the world.”
The announcement today caps perhaps the buzziest fundraising news cycle that lasted for nearly three months. Reliance Jio Platforms, which has amassed over 388 million subscribers in less than four years, announced in April that it had secured $5.7 billion from Facebook.
In the weeks since, the telecom operator has raised an additional $9.5 billion from a roster of nine high-profile investors including Silver Lake, KKR, and General Atlantic.
The huge capital infusion at the height of a global pandemic accounted for more than half of the investment into telecom companies globally this year, according to Bloomberg. By raising $15.2 billion, Jio Platforms, which Ambani describes as a “startup,” alone mopped up more capital than India’s entire tech startup ecosystem raised last year.
On Friday, Ambani also confirmed market speculation about why Reliance Jio Platforms was raising money at all. He said that the capital has helped him repay Reliance Industries’ net debt of $21 billion well ahead of schedule. The oil-to-retail giant, which was debt free in 2012, is now “net debt free,” he said.
Last August, Ambani promised shareholders that Reliance Industries, which is India’s most valued firm, would repay its debt by early 2021.
“Today I am both delighted and humbled to announce that we have fulfilled our promise to the shareholders by making Reliance net debt-free much before our original schedule of 31st March 2021,” he said.
Have you ever wondered why online ads appear for things that you were just thinking about?
There’s no big conspiracy. Ad tech can be creepily accurate.
Tech giant Oracle is one of a few companies in Silicon Valley that has near-perfected the art of tracking people across the internet. The company has spent a decade and billions of dollars buying startups to build its very own panopticon of users’ web browsing data.
One of those startups, BlueKai, which Oracle bought for a little over $400 million in 2014, is barely known outside marketing circles, but it amassed one of the largest banks of web tracking data outside of the federal government.
BlueKai uses website cookies and other tracking tech to follow you around the web. By knowing which websites you visit and which emails you open, marketers can use this vast amount of tracking data to infer as much about you as possible — your income, education, political views, and interests to name a few — in order to target you with ads that should match your apparent tastes. If you click, the advertisers make money.
But for a time, that web tracking data was spilling out onto the open internet because a server was left unsecured and without a password, exposing billions of records for anyone to find.
Security researcher Anurag Sen found the database and reported his finding to Oracle through an intermediary — Roi Carthy, chief executive at cybersecurity firm Hudson Rock and former TechCrunch reporter.
TechCrunch reviewed the data shared by Sen and found names, home addresses, email addresses and other identifiable data in the database. The data also revealed users’ sensitive web browsing activity — from purchases to newsletter unsubscribes.
“There’s really no telling how revealing some of this data can be,” Bennett Cyphers, a staff technologist at the Electronic Frontier Foundation, told TechCrunch.
“Oracle is aware of the report made by Roi Carthy of Hudson Rock related to certain BlueKai records potentially exposed on the Internet,” said Oracle spokesperson Deborah Hellinger. “While the initial information provided by the researcher did not contain enough information to identify an affected system, Oracle’s investigation has subsequently determined that two companies did not properly configure their services. Oracle has taken additional measures to avoid a reoccurrence of this issue.”
Oracle did not name the companies or say what those additional measures were, and declined to answer our questions or comment further.
But the sheer size of the exposed database makes this one of the largest security lapses this year.
BlueKai relies on vacuuming up a never-ending supply of data from a variety of sources, using it to spot trends and deliver ads precisely matched to a person’s interests.
Marketers can either tap into Oracle’s enormous bank of data, which it pulls in from credit agencies, analytics firms, and other sources of consumer data including billions of daily location data points, in order to target their ads. Or marketers can upload their own data obtained directly from consumers, such as the information you hand over when you register an account on a website or when you sign up for a company’s newsletter.
But BlueKai also uses more covert tactics like allowing websites to embed invisible pixel-sized images to collect information about you as soon as you open the page — hardware, operating system, browser and any information about the network connection.
Some of this data, such as the web browser’s “user agent” string, may not seem sensitive on its own, but fused together these signals can create a unique “fingerprint” of a person’s device, which can be used to track that person as they browse the internet.
BlueKai can also tie your mobile web browsing habits to your desktop activity, allowing it to follow you across the internet no matter which device you use.
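To illustrate how individually bland signals combine into a tracking identifier, here is a minimal fingerprinting sketch in Python. It is a toy, not BlueKai’s actual method; the attribute names and the 16-character truncation are arbitrary choices for the example.

```python
import hashlib

def device_fingerprint(attributes: dict) -> str:
    """Hash a set of browser/device signals into a stable identifier.

    Each value (user agent, screen size, timezone, language) is common
    on its own, but the combination often singles out one device.
    """
    canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

laptop = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15",
    "screen": "2560x1600",
    "timezone": "Europe/Berlin",
    "language": "de-DE",
}

# The same device produces the same ID on every site that collects
# these signals, with no cookie required.
print(device_fingerprint(laptop))
```

Because the hash is deterministic, any two sites that collect the same signals compute the same identifier, which is what lets a tracker follow a device across the web even after cookies are cleared.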
Say a marketer wants to run a campaign trying to sell a new car model. In BlueKai’s case, it already has a category of “car enthusiasts” — and many other, more specific categories — that the marketer can use to target with ads. Anyone who’s visited a car maker’s website or a blog that includes a BlueKai tracking pixel might be categorized as a “car enthusiast.” Over time that person is siloed into more and more categories under a profile that learns as much about them as possible in order to target them with those ads.
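The siloing described above can be pictured as a simple rule table mapping visited sites to interest categories. This Python sketch is purely illustrative; the domains, category names, and the any-match rule are invented for the example.

```python
# Toy audience segmentation: map a profile's visited domains to
# interest categories. All names here are hypothetical examples.
SEGMENT_RULES = {
    "car enthusiast": {"carmaker.example.com", "autoblog.example.com"},
    "home & garden": {"homeware.example.com", "gardenshop.example.com"},
}

def segment_profile(visited_domains):
    """Return every category with at least one visited trigger domain."""
    return {
        category
        for category, triggers in SEGMENT_RULES.items()
        if visited_domains & triggers  # any overlap lands the user in the silo
    }

visits = {"carmaker.example.com", "news.example.com"}
print(segment_profile(visits))  # {'car enthusiast'}
```

A real system would use far more signals than domains alone, but the principle is the same: every page visit is a chance to drop the profile into another bucket.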
The technology is far from perfect. Harvard Business Review found earlier this year that the information collected by data brokers, such as Oracle, can vary wildly in quality.
But some of these platforms have proven alarmingly accurate.
In 2012, Target mailed maternity coupons to a high school student after an in-house analytics system figured out she was pregnant — before she had even told her parents — based on the data it had collected about her shopping habits.
Some might argue that’s precisely what these systems are designed to do.
Jonathan Mayer, a computer science professor at Princeton University, told TechCrunch that BlueKai is one of the leading systems for linking data.
“If you have the browser send an email address and a tracking cookie at the same time, that’s what you need to build that link,” he said.
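Mayer’s point is that a single request carrying both identifiers is enough to join them permanently. Here is a minimal sketch of that server-side linking; the function and field names are hypothetical.

```python
from typing import Optional

# Toy identity linking: once any request carries both a tracking cookie
# and an email (say, from a newsletter signup), the two are joined and
# every later anonymous hit from that cookie resolves to the email.
cookie_to_email = {}

def record_request(cookie_id: str, email: Optional[str] = None) -> Optional[str]:
    """Log a hit and return the email now linked to this cookie, if any."""
    if email is not None:
        cookie_to_email[cookie_id] = email
    return cookie_to_email.get(cookie_id)

record_request("cookie-abc")                      # anonymous browsing: no link yet
record_request("cookie-abc", "user@example.com")  # the linking event
print(record_request("cookie-abc"))               # prints user@example.com
```

One such event is all it takes: every page the cookie has touched, before or after, can then be attached to a real-world identity.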
The end goal: the more BlueKai collects, the more it can infer about you, making it easier to target you with ads that might entice you to that magic money-making click.
But marketers can’t just log in to BlueKai and download reams of personal information from its servers, one marketing professional told TechCrunch. The data is sanitized and masked so that marketers never see names, addresses or any other personal data.
As Mayer explained: BlueKai collects personal data; it doesn’t share it with marketers.
Behind the scenes, BlueKai continuously ingests and matches as much raw personal data as it can against each person’s profile, constantly enriching that profile data to make sure it’s up to date and relevant.
But it was exactly that raw data that was spilling out of the exposed database.
TechCrunch found records containing details of private purchases. One record detailed how a German man, whose name we’re withholding, used a prepaid debit card to place a €10 bet on an esports betting site on April 19. The record also contained the man’s address, phone number and email address.
Another record revealed how one of the largest investment holding companies in Turkey used BlueKai to track users on its website. The record detailed how one person, who lives in Istanbul, ordered $899 worth of furniture online from a homeware store. We know because the record contained all of these details, including the buyer’s name, email address and the direct web address for the buyer’s order, no login needed.
We also reviewed a record detailing how one person unsubscribed from an email newsletter run by a consumer electronics company, sent to his iCloud address. The record showed that the person may have been interested in a specific model of car dash-cam. We can even tell from his user agent that his iPhone was out of date and needed a software update.
The data went back for months, according to Sen, who discovered the database. Some logs dated back to August 2019, he said.
“Fine-grained records of people’s web-browsing habits can reveal hobbies, political affiliation, income bracket, health conditions, sexual preferences, and — as evident here — gambling habits,” said the EFF’s Cyphers. “As we live more of our lives online, this kind of data accounts for a larger and larger portion of how we spend our time.”
Oracle declined to say if it informed those whose data was exposed about the security lapse. The company also declined to say if it had warned U.S. or international regulators of the incident.
Under California state law, companies like Oracle are required to publicly disclose data security incidents, but Oracle has not disclosed the lapse to date. When reached, a spokesperson for California’s attorney general’s office declined to say if Oracle had informed the office of the incident.
Under Europe’s General Data Protection Regulation, companies can face fines of up to 4% of their global annual turnover for flouting data protection and disclosure rules.
BlueKai is everywhere — even when you can’t see it.
One estimate says BlueKai tracks over 1% of all web traffic — an unfathomable amount of daily data collection — and tracks some of the world’s biggest websites: Amazon, ESPN, Forbes, Glassdoor, Healthline, Levi’s, MSN.com, Rotten Tomatoes, and The New York Times. Even this very article has a BlueKai tracker because our parent company, Verizon Media, is a BlueKai partner.
But BlueKai is not alone. Nearly every website you visit contains some form of invisible tracking code that watches you as you traverse the internet.
As invasive as it is that invisible trackers are feeding your web browsing data to a gigantic database in the cloud, it’s that very same data that has kept the internet largely free for so long.
To stay free, websites use advertising to generate revenue. The more targeted the advertising, the better the revenue is supposed to be.
Most web users know that internet tracking exists, but few outside marketing circles understand how much data is collected and what is done with it.
Take Equifax, which drew scathing criticism from lawmakers after its 2017 data breach exposed data it had collected on millions of consumers without their explicit consent. Equifax, like BlueKai, relies on consumers skipping over the lengthy privacy policies that govern how websites track them.
In any case, consumers have little choice but to accept the terms. Be tracked or leave the site. That’s the trade-off with a free internet.
But there are dangers with collecting web-tracking data on millions of people.
“Whenever databases like this exist, there’s always a risk the data will end up in the wrong hands and in a position to hurt someone,” said Cyphers.
Cyphers said the data, if in the hands of someone malicious, could contribute to identity theft, phishing or stalking.
“It also makes a valuable target for law enforcement and government agencies who want to piggyback on the data gathering that Oracle already does,” he said.
Even when the data stays where it’s intended, Cyphers said, these vast databases enable “manipulative advertising for things like political issues or exploitative services, and it allows marketers to tailor their messages to specific vulnerable populations.”
“Everyone has different things they want to keep private, and different people they want to keep them private from,” said Cyphers. “When companies collect raw web browsing or purchase data, thousands of little details about real people’s lives get scooped up along the way.”
“Each one of those little details has the potential to put somebody at risk,” he said.
Plume, the Denver-based startup that provides hormone replacement therapies and medical consultations tailored to the trans community, is launching at a moment when its services could not be more needed.
It’s no hyperbole to say that transgender citizens in the United States are under attack. Whether from government policies that are intended to defund their access to insurer-provided medical care, or actual physical assaults, transgender Americans are living in physically and politically perilous times.
That’s one reason why Matthew Wetschler and his co-founder Jerrica Kirkley founded Plume, which provides telehealth services tailored for the transgender community.
The two doctors met and became friends in medical school. From the earliest days, the two were inseparable, Dr. Wetschler recalled. “She and I spent nearly 12 hours a day together,” he said.
After medical school, Wetschler moved to the Bay Area to finish his residency at Stanford and then went on to run a consulting firm that worked primarily with digital health startups. Kirkley, who is transgender, focused on gender therapy in the trans community.
A little over a year ago the two began to discuss the potential for creating a primarily telehealth service for the trans community, Wetschler said.
“We have always shared a belief that the healthcare system can do better for patients and doctors,” he said. And almost no population is quite as exposed to the shortcomings of the current healthcare system as the transgender community.
“I had been increasingly interested in the telehealth space and the emerging trend of leveraging mobile technology to provide unparalleled access to clinical care at the touch of a button,” said Wetschler. “And many of the problems [Kirkley] was seeing with her patients involved finding doctors with expertise and safe sources of medications.”
In many instances, despite the duty of care that physicians have to maintain, transgender patients are subjected to discriminatory practices and even the denial of care. Roughly 20% of transgender patients who seek care are either denied that care or harassed because of their gender identity, Wetschler said.
Many patients don’t have access to the medications they need, which leads as many as 30% of them to seek out those medications on the black market.
It’s an issue for the more than 1.4 million Americans who identify as transgender.
Plume provides a safe, on-demand service for patients who need it, said Wetschler. And it does so for $99 per month.
The company doesn’t perform gender reassignment surgeries, but that’s about the only limitation on the care that the company offers. It can recommend local surgeons who will perform those procedures and it will provide consultations for patients or potential patients considering various hormone-related or surgical therapies. A majority of the Plume care team is transgender, according to Wetschler.
“What we’re proud of with Plume is that we offer a way of accessing trans-specific care regardless of policy or insurance coverage,” said Wetschler.
At the heart of Plume’s services is access to gender-affirming hormone therapy. “This is the fundamental medical treatment for the trans community,” Wetschler said. “The trans experience is unique in that for most it involves navigating a gender and cis-normative healthcare system that may not understand their experiences. It can be highly traumatic.”
Plume offers a medical evaluation, ongoing monitoring and lab assignments and prescriptions. Soon, the company will also provide medication delivery, as well.
For most Americans, there’s a presumption that medical care will be delivered in a non-judgmental and safe way (both psychologically and physically). For many trans Americans there’s a lack of comfort and risk that’s inherent in the end-to-end care experience. Plume is trying to solve for that.
Investors from two of the nation’s top venture capital firms, General Catalyst and Slow Ventures, believe in the company’s vision and have backed it with $2.9 million in seed financing. Springbank Collective is also an investor in the company.
“What I was drawn to with Plume is the commitment and conviction Matthew and Jerrica operate with in providing the trans community — a woefully underserved group — with access to the health care they deserve,” wrote General Catalyst partner Olivia Lew in a statement. “The rollback of healthcare protections for the trans community this past week has only heightened awareness of the dire need for this company. One of the things we’re most excited about in the next wave of health innovation is companies that are using modern platforms like telehealth to serve people’s individual needs with more consumer-friendly, personalized experiences.”
These personalized services become even more important for populations at risk, like the trans community, and they’re also more valuable.
“When people take hormone therapy… there’s an opportunity to have an ongoing longitudinal relationship and that’s something that’s highly valued,” said Wetschler.
Currently, the transgender population spends somewhere between $4.5 billion and $6 billion on medication. And there’s an opportunity to provide better emotional and behavioral support to patients as well, according to Wetschler.
Plume began providing services in Colorado a year ago, and is now available in California, New York, Florida, Texas, Colorado, North Carolina, Virginia, Oregon, Maine and Massachusetts.
There are roughly 700,000 transgender patients who can now avail themselves of the services Plume offers, but the population, and therefore the need, is growing.
“The estimates on the size of the trans population since a decade ago has been growing 20% year over year,” says Wetschler. “And Generation Z is five times more likely than baby boomers to identify as trans. The full visibility of the trans community is yet to be realized.”
Microsoft tried to sell its facial recognition technology to the Drug Enforcement Administration as far back as 2017, according to newly released emails.
The American Civil Liberties Union obtained the emails through a public records lawsuit it filed in October, challenging the secrecy surrounding the DEA’s facial recognition program. The ACLU shared the emails with TechCrunch.
The emails, dated between September 2017 and December 2018, show that Microsoft privately hosted DEA agents at its Reston, Va. office to demonstrate its facial recognition system, and that the DEA later piloted the technology.
It was during this time that Microsoft’s president, Brad Smith, was publicly calling for government regulations covering the use of facial recognition.
But the emails also show that the DEA had misgivings about purchasing the technology, fearing the kind of criticism that the FBI’s use of facial recognition was drawing from government watchdogs at the time.
Critics have long said this face-matching technology violates Americans’ right to privacy, and that the technology disproportionately shows bias against people of color. But despite the rise of facial recognition use by police and in public spaces, Congress has struggled to keep pace and introduce legislation that would oversee the as-yet unregulated space.
But things changed in the wake of the nationwide and global protests that followed the death of George Floyd, which prompted a renewed focus on law enforcement and racial injustice.
An email from a Microsoft account executive inviting DEA agents to its Reston, Va. office to demo its facial recognition technology. (Source: ACLU/supplied)
Microsoft was the third company last week to say it will no longer sell its facial recognition technology to police until more federal regulation is put into place, following in the footsteps of Amazon, which put a one-year moratorium on selling its technology to police. IBM went further, saying it will wind down its facial recognition business entirely.
But Microsoft, like Amazon, did not say if it would no longer sell to federal departments and agencies like the DEA.
“It is bad enough that Microsoft tried to sell a dangerous technology to a law enforcement agency tasked with spearheading the racist drug war, but it gets worse,” said Nathan Freed Wessler, a senior staff attorney at the ACLU. “Even after belatedly promising not to sell face surveillance tech to police last week, Microsoft has refused to say whether it would sell the technology to federal agencies like the DEA.”
“This is troubling given the U.S. Drug Enforcement Administration’s record, but it’s even more disturbing now that Attorney General Bill Barr has reportedly expanded this very agency’s surveillance authorities, which could be abused to spy on people protesting police brutality,” he said.
Lawmakers have since called for a halt to the DEA’s covert surveillance of protesters, powers that were granted by the Justice Department earlier in June as protests spread across the U.S. and around the world.
When reached, DEA spokesperson Michael Miller declined to answer our questions. A spokesperson for Microsoft did not respond to a request for comment.