The UK’s data watchdog has restarted an investigation of adtech practices that, since 2018, have been subject to scores of complaints across Europe under the bloc’s General Data Protection Regulation (GDPR).
The high-velocity trading of Internet users’ personal data can’t possibly comply with the GDPR’s requirement that such information be adequately secured, the complaints contend.
Other concerns attached to real-time bidding (RTB) focus on consent, questioning how this can meet the required legal standard with data being broadcast to so many companies — including sensitive information, such as health data or religious, political and sexual affiliation.
Since the first complaints were filed, the UK’s Information Commissioner’s Office (ICO) has raised its own concerns over what it said are systemic problems with lawfulness in the adtech sector. But last year it announced it was pausing its investigation on account of disruption to businesses from the COVID-19 pandemic.
Today it said it’s unpausing its multi-year probe to keep on prodding.
In an update on its website, ICO deputy commissioner Simon McDougall, who oversees “Regulatory Innovation and Technology” at the agency, writes that the eight-month freeze is over. And the audits are coming.
“We have now resumed our investigation,” he says. “Enabling transparency and protecting vulnerable citizens are priorities for the ICO. The complex system of RTB can use people’s sensitive personal data to serve adverts and requires people’s explicit consent, which is not happening right now.”
“Sharing people’s data with potentially hundreds of companies, without properly assessing and addressing the risk of these counterparties, also raises questions around the security and retention of this data,” he goes on. “Our work will continue with a series of audits focusing on digital market platforms and we will be issuing assessment notices to specific companies in the coming months. The outcome of these audits will give us a clearer picture of the state of the industry.”
It’s not clear what data the ICO still lacks to come to a decision on complaints that are approaching 2.5 years old at this point. But the ICO has committed to resume looking at adtech — including at data brokers, per McDougall, who writes that “we will be reviewing the role of data brokers in this adtech eco-system”.
“The investigation is vast and complex and, because of the sensitivity of the work, there will be times where it won’t be possible to provide regular updates. However, we are committed to publishing our final findings, once the investigation is concluded,” he goes on, managing expectations of any swift resolution to this vintage GDPR complaint.
Commenting on the ICO’s continued reluctance to take enforcement action against adtech despite mounds of evidence of rampant breaches of the law, Johnny Ryan, a senior fellow at the Irish Council for Civil Liberties who was involved in filing the first batch of RTB GDPR complaints — and continues to be a vocal critic of EU regulatory inaction against adtech — told TechCrunch: “It seems to me that the facts are clearly set out in the ICO’s mid 2019 adtech report.
“Indeed, that report merely confirms the evidence that accompanied our complaints in September 2018 in Ireland and the UK. It is therefore unclear why the ICO requires several months further. Nor is it clear why the ICO accepted empty gestures from the IAB and Google a year ago.”
“I have since published evidence of the impact that failure to enforce has had: Including documented use of RTB data to influence an election,” he added. “As that evidence shows, the scale of the vast data breach caused by the RTB system has increased significantly in the three years since I blew the whistle to the ICO in early 2018.”
Despite plentiful data on the scale of the personal data leakage involved in RTB, and widespread concern that all sorts of tangible harms are flowing from adtech’s mass surveillance of Internet users (from discrimination and societal division to voter manipulation), the ICO is in no rush to enforce.
In fact, it quietly closed the 2018 complaint last year — telling the complainants it believed it had investigated the matter “to the extent appropriate”. It’s in the process of being sued by the complainants as a result — for, essentially, doing nothing about their complaint. (The Open Rights Group, which is involved in that legal action, is running this crowdfunder to raise money to take the ICO to court.)
So what does the ICO’s great adtech investigation unpausing mean exactly for the sector?
Not much more than gentle notice that you might be the recipient of an “assessment notice” at some future point, per the latest mildly worded ICO blog post (and judging by its past performance).
Per McDougall, all organizations should be “assessing how they use personal data as a matter of urgency”.
“We already have existing, comprehensive guidance in this area, which applies to RTB and adtech in the same way it does to other types of processing — particularly in respect of consent, legitimate interests, data protection by design and data protection impact assessments (DPIAs),” he goes on, eschewing talk of any firmer consequences following should all that guidance continue being roundly ignored.
He ends the post with a nod to the Competition and Markets Authority’s recent investigation of Google’s Privacy Sandbox proposals (to phase out support for third party cookies on Chrome) — saying the ICO is “continuing” to work with the CMA on that active antitrust complaint. You’ll have to fill in the blanks as to exactly what work it might be doing there — because, again, McDougall isn’t saying.
If it’s a veiled threat to the adtech industry to finally ‘get with the ICO’s privacy program’, or risk not having the regulator fighting adtech’s corner in that crucial antitrust vs privacy complaint, it really is gossamer thin.
The European Parliament is being investigated by the EU’s lead data regulator over a complaint that a website it set up for MEPs to book coronavirus tests may have violated data protection laws.
The complaint, which has been filed by six MEPs and is being supported by the privacy campaign group noyb, alleges third party trackers were dropped without proper consent and that cookie banners presented to visitors were confusing and deceptively designed.
It also alleges personal data was transferred to the US without a valid legal basis, making reference to a landmark legal ruling by Europe’s top court last summer (aka Schrems II).
The European Data Protection Supervisor (EDPS), which oversees EU institutions’ compliance with data rules, confirmed receipt of the complaint and said it has begun investigating.
It also said the “litigious cookies” had been disabled following the complaints, adding that the parliament told it no user data had in fact been transferred outside the EU.
“A complaint was indeed filed by some MEPs about the European Parliament’s coronavirus testing website; the EDPS has started investigating it in accordance with Article 57(1)(e) EUDPR (GDPR for EU institutions),” an EDPS spokesman told TechCrunch. “Following this complaint, the Data Protection Office of the European Parliament informed the EDPS that the litigious cookies were now disabled on the website and confirmed that no user data was sent to outside the European Union.”
“The EDPS is currently assessing this website to ensure compliance with EUDPR requirements. EDPS findings will be communicated to the controller and complainants in due course,” it added.
MEP Alexandra Geese, of Germany’s Greens, filed an initial complaint with the EDPS on behalf of other parliamentarians.
Two of the MEPs that have joined the complaint and are making their names public are Patrick Breyer and Mikuláš Peksa — both members of the Pirate Party, in Germany and the Czech Republic respectively.
We’ve reached out to the European Parliament and the company it used to supply the testing website for comment.
The complaint is noteworthy for a couple of reasons. Firstly because the allegations of a failure to uphold regional data protection rules look pretty embarrassing for an EU institution. Data protection may also feel especially important for “politically exposed persons like Members and staff of the European Parliament”, as noyb puts it.
Back in 2019 the European Parliament was also sanctioned by the EDPS over use of US-based digital campaign company, NationBuilder, to process citizens’ voter data ahead of the spring elections — in the regulator’s first ever such enforcement of an EU institution.
So it’s not the first time the parliament has got in hot water over its attention to detail vis-a-vis third party data processors (the parliament’s COVID-19 test registration website is being provided by a German company called Ecolog Deutschland GmbH). Once may be an oversight, twice starts to look sloppy…
Secondly, the complaint could offer a relatively quick route for a referral to the EU’s top court, the CJEU, to further clarify interpretation of Schrems II — a ruling that has implications for thousands of businesses involved in transferring personal data out of the EU — should there be a follow-on challenge to a decision by the EDPS.
“The decisions of the EDPS can be directly challenged before the Court of Justice of the EU,” noyb notes in a press release. “This means that the appeal can be brought directly to the highest court of the EU, in charge of the uniform interpretation of EU law. This is especially interesting as noyb is working on multiple other cases raising similar issues before national DPAs.”
Guidance for businesses involved in transferring data out of the EU who are trying to understand how to (or often whether they can) be compliant with data protection law, post-Schrems II, is so far limited to what EU regulators have put out.
Further interpretation by the CJEU could bring more clarifying light — and, indeed, less wiggle room for processors wanting to keep schlepping Europeans’ data over the pond legally, depending on how the cookie crumbles (if you’ll pardon the pun).
noyb notes that the complaint asks the EDPS to prohibit transfers that violate EU law.
“Public authorities, and in particular the EU institutions, have to lead by example to comply with the law,” said Max Schrems, honorary chairman of noyb, in a statement. “This is also true when it comes to transfers of data outside of the EU. By using US providers, the European Parliament enabled the NSA to access data of its staff and its members.”
Per the complaint, concerns about third party trackers and data transfers were initially raised to the parliament last October — after an MEP used a tracker scanning tool to analyze the COVID-19 test booking website and found a total of 150 third-party requests and a cookie were placed on her browser.
Specifically, the EcoCare COVID-19 testing registration website was found to drop a cookie from the US-based company Stripe, as well as including many more third-party requests from Google and Stripe.
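The scan described above boils down to comparing the host of each outgoing request against the page’s own host. A minimal sketch of that classification logic is below — the URLs are hypothetical stand-ins, and real scanners match against the Public Suffix List rather than doing naive suffix comparison:

```python
from urllib.parse import urlparse

def classify_requests(page_url, request_urls):
    """Split a page's outgoing requests into first- and third-party,
    by comparing each request's host against the page's own host.
    (Naive suffix match; production scanners use the Public Suffix List.)"""
    page_host = urlparse(page_url).hostname
    first_party, third_party = [], []
    for url in request_urls:
        host = urlparse(url).hostname or ""
        if host == page_host or host.endswith("." + page_host):
            first_party.append(url)
        else:
            third_party.append(url)
    return first_party, third_party
```

Run against a booking page, anything served from `js.stripe.com` or `www.google-analytics.com` would land in the third-party bucket — which is exactly the category of request the MEP’s scan flagged.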
The complaint also notes that a data protection notice on the site informed users that data on their usage generated by the use of Google Analytics is “transmitted to and stored on a Google server in the US”.
Where consent was concerned, the site was found to serve users with two different conflicting data protection notices — with one containing a (presumably copypasted) reference to Brussels Airport.
Different consent flows were also presented, depending on the user’s region, with some visitors being offered no clear opt out button. The cookie notices were also found to contain a ‘dark pattern’ nudge toward a bright green button for ‘accepting all’ processing, as well as confusing wording for unclear alternatives.
A screengrab of the cookie consent prompt that the parliament’s COVID-19 test booking website displayed at the time of writing – with still no clearly apparent opt-out for non-essential cookies (Image credit: TechCrunch)
The EU has stringent requirements for (legally) gathering consents for (non-essential) cookies and other third party tracking technologies, which state that consent must be clearly informed, specific and freely given.
In 2019, Europe’s top court further confirmed that consent must be obtained prior to dropping non-essential trackers. (Health-related data also generally carries a higher consent-bar to process legally in the EU, although in this case the personal information relates to appointment registrations rather than special category medical data).
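That prior-consent rule can be modeled as a simple gate: no non-essential cookie or tracker fires unless the user has already made an affirmative, purpose-specific opt-in. The sketch below is purely illustrative — it is not any real consent-management API, and the cookie names and purpose labels are hypothetical:

```python
# Cookies deemed strictly necessary are exempt from the consent requirement.
ESSENTIAL = {"session", "csrf"}

def may_set_cookie(name, purpose, consents):
    """Return True only if the cookie is strictly necessary, or the user
    has affirmatively opted in to this specific purpose beforehand.
    Absence of a choice (or a pre-ticked default) does not count as consent."""
    if name in ESSENTIAL:
        return True
    return consents.get(purpose) is True

# Before the user makes any choice, analytics trackers must not load:
assert may_set_cookie("_ga", "analytics", {}) is False
```

The key property is the default: with an empty consent record, everything non-essential is blocked — the inverse of a ‘dark pattern’ banner that nudges users toward accepting all processing.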
The complaints allege that EU cookie consent requirements are not being met on the website, while the presence of requests to US-based services (and the reference to storing data in the US) is a legal problem in light of the Schrems II judgement.
The US no longer enjoys legally frictionless flows of personal data out of the EU after the CJEU torpedoed the adequacy arrangement the Commission had granted (invalidating the EU-US Privacy Shield mechanism) — which in turn means transfers of EU people’s data to US-based companies are complicated.
Data controllers are responsible for assessing each such proposed transfer, on a case by case basis. A data transfer mechanism called Standard Contractual Clauses was not invalidated by the CJEU. But the court made it clear SCCs can only be used for transfers to third countries where data protection is essentially equivalent to the legal regime offered in the EU — doing so at the same time as saying the US does not meet that standard.
Guidance from the European Data Protection Board in the wake of the ruling suggests that some EU-US data transfers may be possible to carry out in compliance with European law, such as those involving encrypted data that the receiving US-based entity cannot access.
However the bar for compliance varies depending on the specific context and case.
Additionally, for a subset of companies that are definitely subject to US surveillance law (such as Google) the compliance bar may be impossibly high — as surveillance law is the main legal sticking point for EU-US transfers.
So, once again, it’s not a good look for the parliament to have had a notice on its COVID-19 testing website that said personal data would be transferred to a Google server in the US. (Even if that functionality had not been activated, as seems to have been claimed.)
Another reason the complaint against the European Parliament is noteworthy is that it further highlights how much web infrastructure in use within Europe could be risking legal sanction for failing to comply with regional data protection rules. If the European Parliament can’t get it right, who is?
noyb filed a raft of complaints last year against EU websites that it had identified as still sending data to the US via Google Analytics and/or Facebook Connect integrations shortly after the Schrems II ruling. (Those complaints are being looked into by DPAs across the EU.)
Facebook’s EU data transfers are also very much on the hook here. Earlier this month the tech giant’s lead EU data regulator agreed to ‘swiftly resolve’ a long-standing complaint over its transfers.
Schrems filed that complaint all the way back in 2013. He told us he expects the case to be resolved this year, likely within around six to nine months. So a final decision should come in 2021.
He has previously suggested the only way for Facebook to fix the data transfers issue is to federate its service, storing European users’ data locally. Meanwhile, last year the tech giant was forced to deny it would shut its service in Europe if its lead EU regulator followed through on enforcing a preliminary order to suspend transfers (which it blocked by applying for a judicial review of the Irish DPC’s processes).
The alternative outcome Facebook has been lobbying for is some kind of a political resolution to the legal uncertainty clouding EU-US data transfers. However the European Commission has warned there’s no quick fix — and reform of US surveillance law is needed.
So, with options for continued icing of EU data protection enforcement against US tech giants melting fast in the face of bar-setting CJEU rulings and ongoing strategic litigation like this latest noyb-supported complaint, pressure is only going to keep building for pro-privacy reform of US surveillance law. Not that Facebook has openly come out in support of reforming FISA yet.
Fyllo said it already works with 320 cannabis retailers across 25 states (plus Puerto Rico and Jamaica). According to Chief Marketing Officer Conrad Lisco, this acquisition allows the company to offer the industry’s “first end-to-end marketing solution,” combining consumer data, digital advertising, regulatory compliance (thanks to Fyllo’s acquisition of CannaRegs last year) and, through DataOwl, CRM and loyalty tied into a business’ point-of-sale system.
As an example, founder and CEO Chad Bronstein (previously the chief revenue officer at digital marketing company Amobee) said that retailers will be able to use the Fyllo platform to send promotional texts to regular customers while, crucially, ensuring that those campaigns are fully in compliance with state and local regulations. He added that eventually, the platform could be used beyond cannabis, in other regulated industries.
“Beauty, gambling, etc. — the same things need to happen in every regulated industry, they would all benefit from loyalty and compliance automation,” Bronstein said.
In addition, he argued that mainstream brands are increasingly interested in using data around cannabis and CBD consumers, as borne out in a Forrester study commissioned by Fyllo.
Lisco said this acquisition comes at a crucial time for the cannabis industry, with dispensaries classified as essential businesses in many states, as well as continuing momentum behind marijuana legalization.
“In 2020, cannabis came of age,” he said. “We would say it went from illicit to essential in 10 months … 2021 is really about watching endemic [marijuana] brands try to scale, so that they can capitalize on the explosive growth. They’ve historically been excluded from the kinds of integrated marketing capabilities that other non-endemic [mainstream] brands get to use when they go to market.”
Bronstein said Fyllo aims to bring those capabilities to marijuana brands, first by bringing its compliance capabilities into the DataOwl product. The company also aims to create a national cannabis loyalty platform, allowing a marijuana retailer in one state to easily expand its marketing capabilities into other states in a compliant fashion.
The financial terms of the acquisition were not disclosed. DataOwl co-founders Dan Hirsch and Vartan Arabyan are joining Fyllo, as is the rest of their team, bringing the company’s total headcount to 110.
“By integrating with Fyllo, DataOwl’s solutions will reach the widest possible audience via the industry’s most innovative marketing platform,” Hirsch said in a statement.
Podchaser, a startup building what it calls “IMDB for podcasts,” recently announced that it has raised $4 million in a funding round led by Greycroft.
In other words, it’s a site where — similar to the Amazon-owned Internet Movie Database — users can look up who’s appeared in which podcasts, rate and review those podcasts and add them to lists. In fact, CEO Bradley Davis told me that the startup’s “vibrant, exciting community of podcast nerds” have already created 8.5 million podcast credits in the database.
Davis said this is something he simply wanted to exist and was, in fact, convinced that it had to exist already. When he realized that it didn’t, he posted on Reddit asking whether anyone was willing to build the company with him — which is how he connected with his eventual co-founder and CTO Ben Slinger in Australia. (Podchaser is a fully distributed company, with Davis currently based in Oklahoma City.)
To be clear, Davis doesn’t think podcast nerds are the only ones taking advantage of the listings. Instead, he suggested that it’s useful for anyone looking to learn more about podcasts and discover new ones, with Podchaser’s monthly active users quintupling over the past year.
For example, he said that one of the most popular pages is politician Pete Buttigieg’s profile, where visitors don’t just learn about Buttigieg’s own podcast but see others on which he’s appeared. (You can also use Podchaser to learn more about TechCrunch’s Equity, Mixtape and Original Content podcasts, though those profiles could stand to be filled out a bit more.)
There has been endless discussion about how to fix podcast discovery, and while Davis isn’t claiming that Podchaser will solve it wholesale, he thinks it can be part of the solution — not just through its own database, but through the broader Podcast Taxonomy project that it’s organizing.
“I think if we are successful at standardizing a lot of the terminology, and if we do an analysis of all podcasts, of how popular they are, that [will help many listeners] to cull and find the good stuff,” he said.
Podchaser plans to add new features that will further encourage user contributions, like a gamification system and a discussion system.
While the consumer site is free, the startup recently launched a paid product called Podchaser Pro, which provides reach and demographic data across 1.8 million podcasts. It also monetizes by providing podcast players with access to its credits through an API.
Davis said the startup was “lucky” that it decided to build a database that’s “agnostic” from any specific podcast player.
“So we had a lot of latitude to work with those platforms, we integrate with many of those platforms and you’re going to see a lot of our credits showing up [in podcast players],” he said.
In addition to Greycroft, Advancit Capital, LightShed Ventures, Powerhouse Capital, High Alpha, Hyde Park Venture Partners and Poplar Ventures also participated in the round, as did TrendKite founder A.J. Bruno, Ad Results Media CEO Marshall Williams and Shamrock Capital Partner Mike LaSalle.
“Even in the face of a pandemic, the podcast market continues to grow at a breakneck pace,” said Greycroft co-founder and chairman Alan Patricof in a statement. “The demand from consumers and brands is insatiable. Podchaser’s data and discovery tools are crucial to taking podcasting to new heights.”
A long-running investigation in the European Union focused on the transparency of data-sharing between Facebook and WhatsApp has taken the first major step towards a resolution. Ireland’s Data Protection Commission (DPC) confirmed Saturday it sent a draft decision to fellow EU DPAs towards the back end of last year.
This will trigger a review process of the draft by other DPAs. Majority backing for Facebook’s lead EU data supervisor’s proposed settlement is required under the bloc’s General Data Protection Regulation (GDPR) before a decision can be finalized.
The DPC’s draft WhatsApp decision, which it told us was sent to the other supervisors for review on December 24, is only the second such draft the Irish watchdog has issued to-date in cross-border GDPR cases.
The first case to go through the process was an investigation into a Twitter security breach — which led to the company being issued with a $550,000 fine last month.
The WhatsApp case may look very timely, given the recent backlash over an update to its T&Cs, but it actually dates back to 2018, the year the GDPR began being applied — and relates to WhatsApp Ireland’s compliance with Articles 12-14 of the GDPR (which set out how information must be provided to data subjects whose information is being processed, in order that they are able to exercise their rights).
In a statement, the DPC said:
“As you are aware, the DPC has been conducting an investigation into WhatsApp Ireland’s compliance with Articles 12-14 of the GDPR in terms of transparency, including in relation to transparency around what information is shared with Facebook, since 2018. The DPC has provisionally concluded this investigation and we sent a draft decision to our fellow EU Data Protection Authorities on December 24, 2020 (in accordance with Article 60 of the GDPR in order to commence the co-decision-making process) and we are waiting to receive their comments on this draft decision.
“When the process is completed and a final decision issues, it will make clear the standard of transparency to which WhatsApp is expected to adhere as articulated by EU Data Protection Authorities,” it added.
Ireland has additional ongoing GDPR investigations into other aspects of the tech giant’s business, including an inquiry related to complaints filed back in May 2018 by the EU privacy rights not-for-profit noyb (over so-called ‘forced consent’). In May 2020 the DPC said that separate investigation was at the decision-making phase — but so far it has not confirmed sending a draft decision for review.
It’s also notable that the time between the DPC’s Twitter draft and the final decision being issued — after gaining majority backing from other EU DPAs — was almost seven months.
The Twitter case was also relatively straightforward (a data breach) vs the more complex business of assessing ‘transparency’, so the WhatsApp case seems unlikely to reach a swifter resolution. There are clearly substantial differences of opinion between DPAs on how the GDPR should be enforced across the bloc (in the Twitter case, for example, German DPAs suggested a fine of up to $22M vs Ireland’s initial proposal of a maximum of $300k), although there is some hope that GDPR enforcement of cross-border cases will speed up as DPAs gain experience of the various processes involved in making these co-decisions.
Returning to WhatsApp, the messaging platform has had plenty of problems with transparency in recent weeks — garnering lots of unwelcome attention and concern over the privacy implications of a confusing mandatory update to its T&Cs which has contributed to a major migration of users to alternative chat platforms, such as Signal and Telegram.
The backlash led WhatsApp to announce last week that it was delaying enforcement of the new terms by three months. Last week Italy’s data protection agency also issued a warning over a lack of clarity in the T&Cs — saying it could intervene using an emergency process allowed for by EU law (which would be in addition to the ongoing DPC procedure).
On the WhatsApp T&Cs controversy, the DPC’s deputy commissioner Graham Doyle told us the regulator had received “numerous queries” from confused and concerned stakeholders which he said led it to re-engage with the company. The regulator previously obtained a commitment from WhatsApp that there is “no change to data-sharing practices either in the European Region or the rest of the world”. But it subsequently confirmed it would delay enforcement of the new terms.
“The updates made by WhatsApp last week are about providing clearer, more detailed information to users on how and why they use data. WhatsApp have confirmed to us that there is no change to data-sharing practices either in the European Region or the rest of the world arising from these updates. However, the DPC has received numerous queries from stakeholders who are confused and concerned about these updates,” Doyle said.
“We engaged with WhatsApp on the matter and they confirmed to us that they will delay the date by which people will be asked to review and accept the terms from February 8th to May 15th. In the meantime, WhatsApp will launch information campaigns to provide further clarity about how privacy and security works on the platform. We will continue to engage with WhatsApp on these updates.”
While there’s no doubt Europe’s record of enforcement of its much vaunted data protection laws against tech giants remains a major weak point of the regulation, there are signs that increased user awareness of rights and, more broadly, concern for privacy, is causing a shift in the balance of power in favor of users.
Proper privacy enforcement is still sorely lacking, but Facebook being forced to put a T&Cs update on ice for three months — as its business is subject to ongoing regulatory scrutiny — suggests the days of platform giants being able to move fast and break things are firmly on the wane.
Similarly, for example, Facebook recently had to delay the launch of a dating feature in Europe while it consulted with the DPC. It also remains limited in the data it can share between WhatsApp and Facebook because of the existence of the GDPR — so still can’t share data for ad targeting and product enhancement purposes, even under the new terms.
Europe, meanwhile, is drawing up ex ante rules for platform giants that will place further obligations on how they can operate, with the aim of counteracting abusive behaviors and bolstering competition in digital markets.
Confusion over an update to Facebook-owned chat platform WhatsApp’s terms and conditions has triggered an intervention by Italy’s data protection agency.
The Italian GPDP said today it has contacted the European Data Protection Board (EDPB) to raise concerns about a lack of clear information over what’s changing under the incoming T&Cs.
In recent weeks WhatsApp has been alerting users they must accept new T&Cs in order to keep using the service after February 8.
A similar alert over updated terms has also triggered concerns in India — where a petition was filed today in the Delhi High Court alleging the new terms are a violation of users’ fundamental rights to privacy and pose a threat to national security.
In a notification on its website the Italian agency writes that it believes it is not possible for WhatsApp users to understand the changes that are being introduced under the new terms, nor to “clearly understand which data processing will actually be carried out by the messaging service after February 8”.
Screengrab of the T&Cs alert being shown to WhatsApp users in Europe (Image credit: TechCrunch)
For consent to be a valid legal basis for processing personal data under EU law the General Data Protection Regulation (GDPR) requires that users are properly informed of each specific use and given a free choice over whether their data is processed for each purpose.
The Italian agency adds that it reserves the right to intervene “as a matter of urgency” in order to protect users and enforce EU laws on the protection of personal data.
We’ve reached out to the EDPB with questions about the GPDP’s intervention. The steering body’s role is typically to act as a liaison between EU DPAs. But it also issues guidance on the interpretation of EU law and can step in to cast the deciding vote in cases where there is disagreement on cross-border EU investigations.
Earlier this week Turkish antitrust authorities also announced they are investigating WhatsApp’s updated T&Cs — objecting to what they claimed are differences in how much data will be shared with Facebook under the new terms in Europe and outside.
Meanwhile, on Monday, Ireland’s Data Protection Commission — which is WhatsApp’s lead data regulator in the EU — told us the messaging app has given it a commitment that EU users are not affected by any broader change to data-sharing practices. So Facebook’s lead regulator in the EU has not raised any objections to the new WhatsApp T&Cs.
WhatsApp itself has also claimed there are no changes at all to its data sharing practices anywhere in the world under this update.
Clearly there’s been a communications failure somewhere along the chain — which makes the Italian objection to a lack of clarity in the wording of the new T&Cs seem reasonable.
Reached for comment on the GPDP’s intervention, a WhatsApp spokesperson told us:
How exactly the Italian agency could intervene over the WhatsApp T&Cs is an interesting question. (And, indeed, we’ve reached out to the GPDP with questions.)
The GDPR’s one-stop-shop mechanism means cross-border complaints get funnelled through a lead data supervisor where a company has its main regional base (Ireland in WhatsApp’s case). But as noted above, Ireland has — thus far — said it doesn’t have a problem with WhatsApp’s updated T&Cs.
However, under the GDPR, other DPAs do have powers to act on their own initiative when they believe there is a pressing risk to users’ data.
That happened in 2019, for example, when the Hamburg DPA ordered Google to stop manual reviews of snippets of Google Assistant users’ audio (which it had been reviewing as part of a grading program).
In that case Hamburg informed Google of its intention to use the GDPR’s Article 66 powers — which allow a national agency to order data processing to stop if it believes there is “an urgent need to act in order to protect the rights and freedoms of data subjects” — and Google immediately suspended human reviews across Europe.
The tech giant later amended how the program operates. The Hamburg DPA didn’t even need to use Article 66 — just the mere threat of the order to stop processing was enough.
Some 18 months later, there are signs that many EU data protection agencies — outside a couple of key jurisdictions which oversee the lion’s share of big tech — are becoming frustrated by perceived regulatory inaction against big tech.
So there may be an increased willingness among these agencies to resort to creative procedures of their own to protect citizens’ data. (And it’s certainly interesting to note that France’s CNIL recently slapped Amazon and Google with big fines over cookie consent — acting under the ePrivacy Directive, which does not include a GDPR-style one-stop-shop mechanism.)
In related news this week, an opinion by an advisor to the EU’s top court also appears to be responding to concern at GDPR enforcement bottlenecks.
In the opinion Advocate General Bobek takes the view that the law allows national DPAs to bring their own proceedings in certain situations — including in order to adopt “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case”.
The CJEU ruling on that case is still pending, but the court tends to align with the position of its advisors, so it seems likely we’ll see data protection enforcement activity increase across the board from EU DPAs in the coming years, rather than being stuck waiting for a few DPAs to issue all the major decisions.
In the last decade we’ve seen massive changes in how we consume and interact with our world. The Yellow Pages is a concept that has to be meticulously explained with an impertinent scoff at our own age. We live within our smartphones, within our apps.
While we thrive with the information of the world at our fingertips, we casually throw away any semblance of privacy in exchange for the convenience of this world.
This line we straddle has been drawn with recklessness and calculation by big tech companies over the years as we’ve come to terms with what app manufacturers, large technology companies, and app stores demand of us.
According to Symantec, 89% of our Android apps and 39% of our iOS apps require access to private information. This risky use sends our data to cloud servers, to both amplify the performance of the application (think about the data needed for fitness apps) and store data for advertising demographics.
While large data companies would argue that data is not held for long, or not used in a nefarious manner, when we use the apps on our phones, we create an undeniable data trail. Companies generally keep data on the move, and servers around the world are constantly keeping data flowing, further away from its source.
Once we accept the terms and conditions we rarely read, our private data is no longer such. It is in the cloud, a term which has eluded concrete understanding throughout the years.
A distinction between cloud-based apps and cloud computing must be addressed. Cloud computing at an enterprise level, while argued against ad nauseam over the years, is generally considered to be a secure and cost-effective option for many businesses.
Even back in 2010, Microsoft said 70% of its team was working on things that were cloud-based or cloud-inspired, and the company projected that number would rise to 90% within a year. That was before we started relying on the cloud to store our most personal, private data.
To add complexity to this issue, there are literally apps to protect your privacy from other apps on your smartphone. Tearing more meat off the privacy bone, these apps themselves require a level of access that would generally raise eyebrows if it were any other category of app.
Consider the scenario where you use a key to encrypt data, but then you need to encrypt that key to make it safe. Ultimately, you end up with the most important keys not being encrypted. There is no win-win here. There is only finding a middle ground of contentment in which your apps find as much purchase in your private data as your doctor finds in your medical history.
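The key-wrapping chain described above can be sketched in a few lines — purely as an illustration, with a toy XOR “cipher” standing in for real encryption, since the point is the structure of the problem rather than any specific algorithm:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' used only to illustrate key wrapping -- not real crypto."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Layer 1: a data key encrypts the actual secret.
data_key = os.urandom(16)
ciphertext = xor_cipher(b"my private data", data_key)

# Layer 2: a key-encryption key (KEK) "wraps" the data key.
kek = os.urandom(16)
wrapped_data_key = xor_cipher(data_key, kek)

# The chain has to stop somewhere: the KEK itself sits unencrypted
# (in practice, inside an HSM or OS keystore) -- the root of trust.
recovered_key = xor_cipher(wrapped_data_key, kek)
assert xor_cipher(ciphertext, recovered_key) == b"my private data"
```

However many layers you add, the outermost key is always stored in the clear somewhere — which is exactly the “most important keys not being encrypted” bind described above.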
The cloud is not tangible, nor is it something we as givers of the data can access. Each company has its own cloud servers, each one collecting similar data. But we have to consider why we give up this data. What are we getting in return? We are given access to applications that perhaps make our lives easier or better, but essentially are a service. It’s this service end of the transaction that must be altered.
App developers have to find a method of service delivery that does not require storage of personal data. There are two sides to this. The first is creating algorithms that can function on a local basis, rather than centralized and mixed with other data sets. The second is a shift in the general attitude of the industry, one in which free services are provided for the cost of your personal data (which ultimately is used to foster marketing opportunities).
Of course, asking this of any big data company that thrives on its data collection and marketing process is untenable. So the change has to come from new companies, willing to risk offering cloud privacy while still providing a service worth paying for. Because it wouldn’t be free. It cannot be free, as free is what got us into this situation in the first place.
What we can do right now is at least take a stance of personal vigilance. While there is some personal data that we cannot stem the flow of onto cloud servers around the world, we can at least limit the use of frivolous apps that collect too much data. For instance, games should never need access to our contacts, to our camera and so on. Everything within our phone is connected, it’s why Facebook seems to know everything about us, down to what’s in our bank account.
This sharing takes place on our phone and at the cloud level, and is something we need to consider when accepting the terms on a new app. When we sign into apps with our social accounts, we are just assisting the further collection of our data.
The cloud isn’t some omnipotent enemy here, but it is the excuse and tool that allows the mass collection of our personal data.
The future is likely one in which devices and apps finally become self-sufficient and localized, enabling users to maintain control of their data. The way we access apps and data in the cloud will change as well, as we’ll demand a functional process that forces a methodology change in service provisions. The cloud will be relegated to public data storage, leaving our private data on our devices where it belongs. We have to collectively push for this change, lest we lose whatever semblance of privacy in our data we have left.
The FTC has reached a settlement with Flo, a period- and fertility-tracking app with 100M+ users, over allegations it shared users’ health data with third party app analytics and marketing services like Facebook despite promising to keep users’ sensitive health data private.
Flo must obtain an independent review of its privacy practices and obtain app users’ consent before sharing their health information, under the terms of the proposed settlement.
The action follows a 2019 report in the Wall Street Journal, which conducted an analysis of a number of apps’ data-sharing activity.
It found the fertility tracking app had informed Facebook of in-app activity — such as when a user was having their period or had informed it of an intention to get pregnant. It did not find any way for Flo users to prevent their health information from being sent to Facebook.
In the announcement of a proposed settlement today, the FTC said press coverage of Flo sharing users’ data with third party app analytics and marketing firms, including Facebook and Google, had led to hundreds of complaints.
The app only stopped leaking users’ health data following the negative press coverage, it added.
Under the FTC settlement terms, Flo is prohibited from misrepresenting the purposes for which it (or entities to whom it discloses data) collect, maintain, use, or disclose the data; how much consumers can control these data uses; its compliance with any privacy, security, or compliance program; and how it collects, maintains, uses, discloses, deletes, or protects users’ personal information.
Flo must also notify affected users about the disclosure of their personal information and instruct any third party that received users’ health information to destroy that data.
The app maker has been contacted for comment.
No financial penalty is being levied but the FTC’s proposed settlement is noteworthy as it’s the first time the US regulator has ordered notice of a privacy action.
“Apps that collect, use, and share sensitive health information can provide valuable services but consumers need to be able to trust these apps. We are looking closely at whether developers of health apps are keeping their promises and handling sensitive health information responsibly,” said Andrew Smith, director of the FTC’s Bureau of Consumer Protection, in a statement.
While the settlement received unanimous backing from the five commissioners, two — Rohit Chopra and Rebecca Kelly Slaughter — issued a joint dissenting statement in which they highlight the lack of a finding that Flo breached the US Health Breach Notification Rule, which they argue should have applied in this case.
Congress directed the FTC to protect our sensitive health data through this rule, to complement HIPAA, but the agency has never brought an action under this rule. This should change.
— Rohit Chopra (@chopraftc) January 13, 2021
“In our view, the FTC should have charged Flo with violating the Health Breach Notification Rule. Under the rule, Flo was obligated to notify its users after it allegedly shared their health information with Facebook, Google, and others without their authorization. Flo did not do so, making the company liable under the rule,” they write.
“The Health Breach Notification Rule was first issued more than a decade ago, but the explosion in connected health apps makes its requirements more important than ever. While we would prefer to see substantive limits on firms’ ability to collect and monetize our personal information, the rule at least ensures that services like Flo need to come clean when they experience privacy or security breaches. Over time, this may induce firms to take greater care in collecting and monetizing our most sensitive information,” they add.
Flo is by no means the only period tracking app to have attracted attention for leaking user data in recent years.
A report last year by the Norwegian Consumer Council found fertility/period tracker apps Clue and MyDays unexpectedly sharing data with adtech giants Facebook and Google, for example.
That report also found similarly non-transparent data leaking going on across a range of apps — including dating, religious, make-up and kids’ apps — suggesting widespread breaches of regional data processing laws, which require that, for consent to be valid, users must be properly informed and given a genuinely free choice. App makers have so far faced little enforcement for analytics/marketing-related data leaking in the region, however.
In the US, regulatory action around apps hinges on misleading claims — whether about privacy (in Flo’s case) or in relation to the purposes of data processing, as in a separate settlement the FTC announced earlier this week related to cloud storage app Ever.
Ireland’s Data Protection Commission (DPC) has agreed to swiftly finalize a long-standing complaint against Facebook’s international data transfers which could force the tech giant to suspend data flows from the European Union to the US within a matter of months.
The complaint, which was filed in 2013 by privacy campaigner Max Schrems, relates to the clash between EU privacy rights and US government intelligence agencies’ access to Facebook users’ data under surveillance programs that were revealed in high resolution detail by NSA whistleblower Edward Snowden.
The DPC has committed to a swift resolution of Schrems’ complaint in order to settle a judicial review of its processes which noyb, his privacy campaign group, filed last year in response to the DPC’s decision to pause his complaint and open a new case procedure instead.
Under the terms of the settlement Schrems will also be heard in the DPC’s “own volition” procedure, as well as getting access to all submissions made by Facebook — assuming the Irish courts allow that investigation to go ahead, noyb said today.
And while noyb acknowledged there may be a further pause — if the DPC waits on a High Court judgment of Facebook’s own judicial review of its processes before revisiting the original complaint — Schrems suggests his 7.5-year-old complaint could be headed for a final decision within a matter of months.
“The courts in Ireland would be reluctant to give a deadline and the DPC played that card and said they can’t provide a timeline… So we got the maximum that’s possible under Irish law. Which is ‘swift’,” he told TechCrunch, describing this as “frustrating but the maximum possible”.
Asked for his estimate of when a final decision will at last close out the complaint, he suggested it could be as soon as this summer — but said that more “realistically” it would be fall.
Schrems has been a vocal critic of how the DPC has handled his complaint — and more widely of the slow pace of enforcement of the bloc’s data protection rules vs fast-moving tech giants — with Ireland’s regulator choosing to raise wider concerns about the legality of mechanisms for transferring data from the EU to the US, rather than ordering Facebook to suspend data flows as Schrems had asked in the complaint.
The saga has already had major ramifications, leading to a landmark ruling by Europe’s top court last summer when the CJEU struck down a flagship EU-US data transfer arrangement after it found the US does not provide the same high standards of protection for personal data as the EU does.
The CJEU also made it clear that EU data protection regulators have a duty to step in and suspend transfers to third countries when data is at risk — putting the ball squarely back in Ireland’s court.
Reached for comment on the latest development the DPC told us it would have a response later today. So we’ll update this report when we have it.
The DPC, which is Facebook’s lead data regulator in the EU under the bloc’s General Data Protection Regulation (GDPR), already sent the tech giant a preliminary order to suspend data transfers back in September — following the landmark ruling by the CJEU.
However Facebook immediately filed a legal challenge — couching the DPC’s order as premature, despite the complaint itself being more than seven years old.
noyb said today that it’s expecting Facebook to continue to try to use the Irish courts to delay enforcement of EU law. And the tech giant admitted last year that it’s using the courts to ‘send a signal’ to lawmakers to come up with a political resolution for an issue that affects scores of businesses which also transfer data between the EU and the US, as well as to buy time for a new US administration to be in a position to grapple with the issue.
But the clock is now ticking on how much longer Zuckerberg can play this game of regulatory whack-a-mole. And a final reckoning for Facebook’s EU data flows could come within half a year.
This sets a fairly tight deadline for any negotiations between EU and US lawmakers over a replacement for the defunct Privacy Shield.
European commissioners said last fall that no replacement will be possible without reform of US surveillance law. And whether such radical retooling of US law could come as soon as the summer or even fall seems doubtful — unless there’s a major effort among US tech companies to lobby their own lawmakers to make the necessary changes.
In court documents it filed last year linked to its challenge of the DPC’s preliminary order, Facebook suggested it might have to close service in Europe if EU law is enforced against its data transfers.
However its PR chief, Nick Clegg, swiftly denied it would ever pull service — instead urging EU lawmakers to look favorably on its data-dependent business model by claiming that “personalized advertising” is vital to the EU’s post-COVID-19 economic recovery.
The consensus among the bloc’s digital lawmakers, however, is that tech giants need more regulation, not less.
Separately today, an opinion by an influential advisor to the CJEU could have implications for how swiftly GDPR is enforced in Europe in the future if the court aligns with Advocate General Bobek’s opinion — as he appears to be taking aim at bottlenecks that have formed in key jurisdictions like Ireland as a result of the GDPR’s one-stop-shop mechanism.
So while Bobek confirms the general competence of a lead regulator to investigate in cross-border cases, he also writes that “the lead data protection authority cannot be deemed as the sole enforcer of the GDPR in cross-border situations and must, in compliance with the relevant rules and time limits provided for by the GDPR, closely cooperate with the other data protection authorities concerned, the input of which is crucial in this area”.
He also sets out specific conditions where national DPAs could bring their own proceedings, in his view, including for the purpose of adopting “urgent measures” or to intervene “following the lead data protection authority having decided not to handle a case”.
Responding to the AG’s opinion, the DPC’s deputy commissioner, Graham Doyle, told us: “We, along with our colleague EU DPAs, note the opinion of the Advocate General and await the final judgment of the Court in terms of its interpretation of any relevant One Stop Shop rules.”
Asked for a view on the AG’s remarks, Jef Ausloos, a postdoc researcher in data privacy at the University of Amsterdam, said the opinion conveys “a clear recognition that ACTUAL protection and enforcement might be crippled by the [one-stop-shop] mechanism”.
However he suggested any new openings for DPAs to bypass a lead regulator which could flow from the opinion aren’t likely to shake things up in the short term. “I think the door is open for some changes/bypassing DPC. BUT, only in the long run,” he said.
The spread of misinformation and fake news online has a dangerous impact on public well-being. Misinformation is difficult to fight, and 73% of Americans surveyed by Pew Research ahead of the presidential election expressed little or no confidence in the ability of major tech companies to keep their platforms from being misused. The open-source Starling Framework for Data Integrity was launched to protect the veracity of online content using blockchain technology, creating “birth certificates” for photos and videos and tracking any changes made to them. Numbers Protocol, a Taipei, Taiwan-based startup founded by Starling Framework collaborators, is now commercializing its tech to make it more widely available.
While journalism, especially citizen journalism, is an obvious use case for Numbers’ Capture App, it can also be used by people who want to prove that they created images that are being shared online. Numbers will add more features to the app, including a video camera.
All photos taken by the Capture App have their metadata certified and sealed on the blockchain (users can adjust privacy settings if they, for example, don’t want to share their precise location). Then any changes to the photo, including ones made with editing software, are traced and recorded.
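Purely as an illustration of the “birth certificate” idea — not Numbers’ actual protocol — the mechanism amounts to hashing the pixels and metadata at capture time, then recording each subsequent edit as a new entry that links back to its parent. A minimal sketch, with a plain list standing in for the blockchain:

```python
import hashlib
import json

def certify(image_bytes: bytes, metadata: dict) -> dict:
    """Create a 'birth certificate': a hash binding pixels to their metadata."""
    record = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    # Hash the whole record so later entries can reference it immutably.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

ledger = []  # stand-in for the blockchain: an append-only list of records

original = certify(b"\x89PNG...pixels", {"device": "Capture App", "ts": 0})
ledger.append(original)

# An edit produces new pixels -> a new record that links back to the original.
edited = certify(
    b"\x89PNG...edited",
    {"parent": original["record_hash"], "tool": "crop"},
)
ledger.append(edited)

assert edited["content_hash"] != original["content_hash"]
```

Anyone holding the image can re-hash it and check the result against the ledger; any pixel change that isn’t recorded as a linked edit breaks the chain.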
Numbers plans to add a video function to the app and create a channel where people can publish certified content, with the goal of changing the information industry, co-founder Tammy Yang told TechCrunch.
Before launching Numbers, Yang worked with the Starling Framework, an initiative by Stanford University and the USC Shoah Foundation. The Shoah Foundation’s work includes preserving testimonies from survivors of genocide and mass violence and the Starling Framework’s technology was created to help them safeguard photos and videos. The Starling Framework was also used by Reuters journalists to capture, verify and store photos taken during the U.S. presidential primaries in March. (The Starling Framework’s other collaborators include Filechain, Hala Systems and Protocol Labs).
The Starling Framework worked with the Shoah Foundation and Reuters to integrate its technology into their workflows, since many photojournalists use digital SLRs and programs like Adobe Photoshop. Capture App was created to allow wider access to the same technology.
Fake news and misinformation have created more public awareness of the need to preserve photo integrity, said Yang. While there are other companies that use blockchain tech to protect data and content, Numbers focuses on certifying photos at their point of origin, and then continuing to record any alterations.
“We focus very much on the camera itself, so at the time the photo is taken, the integrity is already preserved,” said Yang. “If content is captured on a camera app and then copied to a content platform, it’s already very difficult to verify its origin. If I take a photo from Facebook and register it on the blockchain, it means nothing. It’s very different if I take a photo with Capture App and immediately create a registration on the blockchain.”
Roboflow, a startup that aims to simplify the process of building computer vision models, today announced that it has raised a $2.1 million seed round co-led by Lachy Groom and Craft Ventures. Additional investors include Segment co-founder Calvin French-Owen, Lob CEO Leore Avidar, Firebase co-founder James Tamplin and early Dropbox engineer Aston Motes, among others. The company is a graduate of this year’s Y Combinator summer class.
Co-founded by Joseph Nelson (CEO) and Brad Dwyer (CTO), Roboflow is the result of the team members’ previous work on AR and AI apps, including Magic Sudoku from 2017. After respectively exiting their last companies, the two co-founders teamed up again to launch a new AR project, this time with a focus on board games. In 2019, the team actually participated in the TC Disrupt hackathon to add chess support to that app — but in the process, the team also realized that it was spending a lot of time trying to solve the same problems that everybody else in the computer vision field was facing.
“In building both those [AR] products, we realized most of our time wasn’t spent on the board game part of it, it was spent on the image management, the annotation management, the understanding of ‘do we have enough images of white queens, for example? Do we have enough images from this angle or this angle? Are the rooms brighter or darker?’ This data mining of understanding in visual imagery is really underdeveloped. We had built a bunch of — at the time — internal tooling to make this easier for us,” Nelson explained. “And in the process of building this company, of trying to make software features for real-world objects, [we] realized that developers didn’t need inspiration. They needed tooling.”
So shortly after participating in the hackathon, the founders started putting together Roboflow and launched the first version in January 2020. While the service started out as a platform for managing large image data sets, it has since grown into an end-to-end solution for handling image management, analysis, pre-processing and augmentation, up to building the image recognition models and putting them into production. As Nelson noted, the team didn’t set out to build an end-to-end solution, but its users kept pushing it to add more features.
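Augmentation — one of the pipeline steps mentioned above — multiplies a labeled dataset by generating altered copies of each image, so a model sees more variation than was actually collected. A toy sketch of the idea, with nested lists of pixel intensities standing in for real images and libraries (this is illustrative, not Roboflow’s implementation):

```python
import random

def augment(image, seed=None):
    """Apply simple augmentations: random horizontal flip + brightness jitter.

    `image` is a 2D list of pixel intensities (0-255); a real pipeline
    would operate on arrays and add crops, rotations, noise, etc.
    """
    rng = random.Random(seed)
    out = [row[:] for row in image]
    if rng.random() < 0.5:                       # random horizontal flip
        out = [row[::-1] for row in out]
    delta = rng.randint(-30, 30)                 # brightness jitter
    out = [[max(0, min(255, p + delta)) for p in row] for row in out]
    return out

image = [[10, 200], [30, 40]]
# One labeled example becomes several training variants.
variants = [augment(image, seed=i) for i in range(4)]
assert len(variants) == 4
```

The label (e.g. “white queen”) carries over unchanged to every variant, which is why augmentation is such a cheap way to stretch a small dataset.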
So far, about 20,000 developers have used the service, with use cases ranging from accelerating cancer research to smart city applications. The thesis here, Nelson said, is that computer vision is going to be useful for every single industry. But not every company has the in-house expertise to set up the infrastructure for building models and putting them into production, so Roboflow aims to provide an easy-to-use platform that individual developers and (over time) large enterprise teams can use to quickly iterate on their ideas.
Roboflow plans to use the new funding to expand its team, which currently consists of five members, both on the engineering and go-to-market side.
“As small cameras become cheaper and cheaper, we’re starting to see an explosion of video and image data everywhere,” Segment co-founder and Roboflow investor French-Owen noted. “Historically, it’s been hard for anyone but the biggest tech companies to harness this data, and actually turn it into a valuable product. Roboflow is building the pipelines for the rest of us. They’re helping engineers take the data that tells a thousand words, and giving them the power to turn that data into recommendations and insights.”
The maker of a defunct cloud photo storage app that pivoted to selling facial recognition services has been ordered to delete user data and any algorithms trained on it, under the terms of an FTC settlement.
The regulator investigated complaints the Ever app — which gained earlier notoriety for using dark patterns to spam users’ contacts — had applied facial recognition to users’ photographs without properly informing them what it was doing with their selfies.
Under the proposed settlement, Ever must delete photos and videos of users who deactivated their accounts and also delete all face embeddings (i.e. data related to facial features which can be used for facial recognition purposes) that it derived from photos of users who did not give express consent to such a use.
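A face embedding is just a numeric vector: a neural network maps a face photo to a point in space such that photos of the same person land close together. A minimal sketch of why that enables recognition, using made-up three-dimensional vectors in place of the hundreds of dimensions real models produce:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings (real systems use 128-512 dimensions).
alice_photo_1 = [0.90, 0.10, 0.30]
alice_photo_2 = [0.85, 0.15, 0.28]  # same person, different photo
bob_photo = [0.10, 0.90, 0.40]      # different person

# Two photos of the same face map to nearby vectors...
same = cosine_similarity(alice_photo_1, alice_photo_2)
# ...while different faces map further apart.
diff = cosine_similarity(alice_photo_1, bob_photo)
assert same > diff
```

This is also why deleting the embeddings matters as much as deleting the photos: the vectors alone are enough to match a face against new images.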
Moreover, it must delete any facial recognition models or algorithms developed with users’ photos or videos.
This full suite of deletion requirements — not just data but anything derived from it and trained off of it — is causing great excitement in legal and tech policy circles, with experts suggesting it could have implications for other facial recognition software trained on data that wasn’t lawfully processed.
Or, to put it another way, tech giants that surreptitiously harvest data to train AIs could find their algorithms in hot water with the US regulator.
This is revolutionary – and fascinating to see the US beats the EU in drawing this consequence https://t.co/20evtGaZM5
— Mireille Hildebrandt (@mireillemoret) January 12, 2021
That could require deleting the core ML models underlying Facebook Newsfeed or Google Search
— ashkan soltani (@ashk4n) January 12, 2021
The quick background here is that the Ever app shut down last August, claiming it had been squeezed out of the market by increased competition from tech giants like Apple and Google.
However the move followed an investigation by NBC News — which in 2019 reported that app maker Everalbum had pivoted to selling facial recognition services to private companies, law enforcement and the military (using the brand name Paravision) — apparently repurposing people’s family snaps to train face reading AIs.
One commissioner, Rohit Chopra, issued a standalone statement in which he warns that current gen facial recognition technology is “fundamentally flawed and reinforces harmful biases”, saying he supports “efforts to enact moratoria or otherwise severely restrict its use”.
“Until such time, it is critical that the FTC meaningfully enforce existing law to deprive wrongdoers of technologies they build through unlawful collection of Americans’ facial images and likenesses,” he adds.
Chopra’s statement highlights the fact that commissioners have previously voted to allow data protection law violators to retain algorithms and technologies that “derive much of their value from ill-gotten data”, as he puts it — flagging an earlier settlement with Google and YouTube under which the tech giant was allowed to retain algorithms and other technologies “enhanced by illegally obtained data on children”.
And he dubs the Ever decision “an important course correction”.
Ever has not been fined under the settlement — something Chopra describes as “unfortunate” (saying it’s related to commissioners “not having restated this precedent into a rule under Section 18 of the FTC Act”).
He also highlights the fact that Ever avoided processing the facial data of a subset of users in States which have laws against facial recognition and the processing of biometric data — citing that as an example of “why it’s important to maintain States’ authority to protect personal data”. (NB: Ever also avoided processing EU users’ biometric data; another region with data protection laws.)
“With the tsunami of data being collected on individuals, we need all hands on deck to keep these companies in check,” he goes on. “State and local governments have rightfully taken steps to enact bans, moratoria, and other restrictions on the use of these technologies. While special interests are actively lobbying for federal legislation to delete state data protection laws, it will be important for Congress to resist these efforts. Broad federal preemption would severely undercut this multifront approach and leave more consumers less protected.
“It will be critical for the Commission, the states, and regulators around the globe to pursue additional enforcement actions to hold accountable providers of facial recognition technology who make false accuracy claims and engage in unfair, discriminatory conduct.”
Paravision has been contacted for comment on the FTC settlement.
Ubiquiti, one of the biggest sellers of networking gear, including routers, webcams and mesh networks, has alerted its customers to a data breach.
In a short email to customers on Monday, the tech company said it became aware of unauthorized access to its systems hosted by a third-party cloud provider. Ubiquiti didn’t name the cloud company, say when the breach happened, or explain what caused the security incident. A company spokesperson did not respond to requests for comment.
But the company confirmed that it “cannot be certain” that customer data had not been exposed.
“This data may include your name, email address, and the one-way encrypted password to your account,” said the email to customers. “The data may also include your address and phone number if you have provided that to us.”
Although the email says passwords are scrambled, the company says users should update their passwords and also enable two-factor authentication, which makes it harder for hackers to take stolen passwords and use them to break into accounts.
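“One-way encrypted” here means hashed: the stored digest can be checked against a login attempt but can’t be reversed to recover the password. A minimal sketch of the standard salted approach using Python’s standard library — illustrative of the general technique, not Ubiquiti’s actual scheme:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    """One-way hash: the stored digest can't be reversed to get the password."""
    salt = salt or os.urandom(16)  # per-user salt defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    # Constant-time comparison avoids leaking info via timing.
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("guess", salt, stored)
```

Even so, weak passwords in a stolen hash database can be cracked offline by brute force — which is why the advice to rotate passwords and enable two-factor authentication still stands.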
Ubiquiti account users can remotely access and manage their routers and devices from the web.
The networking company quickly followed its email with a post on its community pages confirming that the email was authentic, after several customers complained that the message included typos.
“A new law to follow” seems unlikely to have featured on many business wishlists this holiday season, particularly if that law concerned data privacy. Digital privacy management is an area that takes considerable resources to whip into shape, and most SMBs just aren’t equipped for it.
But for 2021, I believe startups in the United States should be demanding that legislators deliver a federal privacy law. Yes, they should demand to be regulated.
For every day that goes by without agreed-upon federal standards for data, these companies lose competitive edge to the rest of the world. Soon there may be no coming back.
Businesses should not view privacy and trust infrastructure requirements as burdensome. They should view them as keys that can unlock the full power of the data they possess. They should stop thinking about privacy as compliance and begin thinking of it as a harmonization of the customer relationship. The rewards flowing to each party from such harmonization are bountiful. The U.S. federal government is in a unique position to help realize those rewards.
To understand what I mean, cast your eyes to Europe, where it’s become clear that the GDPR was nowhere near the final destination of EU data policy. Indeed it was just the launchpad. Europe’s data regime can frustrate (endless cookie banners anyone?), but it has set an agreed-upon standard of protection for citizens and elevated their trust in internet infrastructure.
For example, a Deloitte survey found that 44% of consumers felt that organizations cared more about their privacy after GDPR came into force. With a baseline standard established — seatbelts in every car — Europe is now squarely focused on raising the speed limit.
EU lawmakers recently unveiled plans for “A Europe fit for the Digital Age.” In the words of Internal Market Commissioner Thierry Breton, it’s a plan to make Europe “the most data-empowered continent in the world.”
Here are some pillars of the plan. While reading, imagine that you are a U.S.-based health tech startup. Imagine the disadvantage you would face against a similar, European-based company, if these initiatives came to fruition:
There are so many ways governments can help businesses maximize their data leverage in ways that improve society. But the American public currently has no appetite for that. They don’t trust the internet.
They want to see Mark Zuckerberg and Jeff Bezos sweating it out under Senate Committee questioning. Until we trust our leaders to protect basic online rights, widespread data empowerment initiatives will not be politically viable.
In Europe, the equation is totally different. GDPR was the foundation of a European data strategy, not the capstone.
While the EU powers forward, America’s ability to enact federal privacy reform is stymied by two quintessentially American privacy sticking points:
These are important questions that must be answered as a function of our country’s unique cultural and political history. But currently they’re the roadblocks that stall American industry while the EU, seatbelts secure, begins speeding down the data autobahn.
If you want a visceral example of how this gap is already impacting American businesses, look no further than the fallout of the ECJ’s Schrems II decision in the middle of last summer. Europe’s highest court invalidated a key agreement used to transfer EU data back to the U.S., essentially because there’s no federal law to ensure EU citizens’ data would be protected once it lands in America.
The legal wrangling continues, but the impact of this decision was so considerable that Facebook legitimately threatened to quit operating in Europe if the Schrems II ruling was enforced.
While issues generated for smaller businesses don’t grab as many headlines, rest assured that on the front lines of this issue, I’ve seen many SMBs’ data operations thrown into total chaos. In other words, the geopolitical battle for a data-driven business edge is already well underway. We are losing.
To sum it up, the United States increasingly finds itself in a position that’s unprecedented since the dawn of the internet era: laggard. American tech companies still innovate at a fantastic rate, but America’s inability to marshal private sector practices to reflect evolving public sentiment threatens to become a yoke around the economy’s neck.
America’s catastrophic response to the COVID-19 pandemic fell far short of other nations’ efforts. Our handling of data privacy protection costs far less in human terms, but it grows astronomically more expensive in dollar terms with every passing day.
That’s why I believe America’s startup community should demand federal lawmakers follow the recent example of Europe, India, New Zealand, Brazil, South Africa and Canada. They need to introduce federally guaranteed modern data privacy protections as soon as possible.
Former U.S. cybersecurity official Chris Krebs and former Facebook chief security officer Alex Stamos have founded a new cybersecurity consultancy firm, which already has its first client: SolarWinds.
The two have been hired as consultants to help the Texas-based software maker recover from a devastating breach by suspected Russian hackers, who used the company’s software to plant backdoors in thousands of organizations and to infiltrate at least 10 U.S. federal agencies and several Fortune 500 businesses.
The Treasury Department, State Department and Department of Energy, at least, have been confirmed breached in what has been described as likely the most significant espionage campaign against the U.S. government in years. And while the U.S. government has already pinned the blame on Russia, the scale of the intrusions is not likely to be known for some time.
Krebs was one of the most senior cybersecurity officials in the U.S. government, most recently serving as the director of Homeland Security’s CISA cybersecurity advisory agency from 2018 until he was fired by President Trump for his efforts to debunk false election claims — many of which came from the president himself. Stamos, meanwhile, joined the Stanford Internet Observatory after holding senior cybersecurity positions at Facebook and Yahoo. He also consulted for Zoom amid a spate of security problems.
In an interview with the Financial Times, which broke the story, Krebs said it could take years before the hackers are ejected from infiltrated systems.
SolarWinds chief executive Sudhakar Ramakrishna acknowledged in a blog post that it had brought on the consultants to help the embattled company to be “transparent with our customers, our government partners, and the general public in both the near-term and long-term about our security enhancements.”
It’s the image that’s been seen around the world: one of hundreds of Trump supporters in the private office of House Speaker Nancy Pelosi after storming the Capitol and breaching security in protest of the certification of the election results for President-elect Joe Biden. Police were overrun (when they weren’t posing for selfies) and some lawmakers’ offices were trashed and looted.
As politicians and their staffs were told to evacuate or shelter in place, one photo of a congressional computer left unlocked still with an evacuation notice on the screen spread quickly around the internet. At least one computer was stolen from Sen. Jeff Merkley’s office, reports say.
A supporter of U.S. President Donald Trump leaves a note in the office of U.S. Speaker of the House Nancy Pelosi during the protest inside the U.S. Capitol in Washington, D.C., January 6, 2021. Demonstrators breached security and entered the Capitol as Congress debated the 2020 presidential election Electoral Vote Certification. Image Credits: SAUL LOEB/AFP via Getty Images
Most lawmakers don’t have ready access to classified materials, unless it’s for their work on sensitive committees, such as Judiciary or Intelligence. The classified computers are separate from the rest of the unclassified congressional network and kept in designated sensitive compartmented information facilities, or SCIFs, in locked-down areas of the Capitol building.
“No indication those [classified systems] were breached,” tweeted Mieke Eoyang, a former House Intelligence Committee staffer.
Hi, former HPSCI staffer here.
Congressional offices deal in unclassified information. Most of the things they deal with are open source.
Classified information dealt with in designated Congressional SCIFs. No indication those were breached. https://t.co/Ciel6BW3oU
— Mieke "18 USC 2383" Eoyang (@MiekeEoyang) January 7, 2021
But the breach will likely present a major task for Congress’ IT departments, which will have to figure out what’s been stolen and what security risks could still pose a threat to the Capitol’s network. Kimber Dowsett, a former government security architect, said there was no plan in place to respond to a storming of the building.
My heart goes out to the unsung IT heroes at the Capitol tonight. My guess is they’ve never had to run asset inventory IR before – a daunting, stressful task in a tabletop exercise – and they’re running one (prob w/o a playbook) following a full on assault of the Capitol.
— socially distant, mask wearing bat (@mzbat) January 7, 2021
The threat to Congress’ IT network is probably not as significant as the ongoing espionage campaign against U.S. federal networks. The only saving grace is that so many congressional staffers were working from home during the assault due to the ongoing pandemic; the U.S. yesterday reported a record of almost 4,000 COVID-19 deaths in a single day.
Segment, the startup Twilio bought last fall for $3.2 billion, was just beginning to take off in 2015 when it ran into a scaling problem: It was growing so quickly, the tools it had built to process marketing data on its platform were starting to outgrow the original system design.
Inaction would cause the company to hit a technology wall, managers feared. Every early-stage startup craves growth and Segment was no exception, but it also needed to begin thinking about how to make its data platform more resilient, or risk reaching a point where it could no longer handle the data it was moving through the system. It was — in a real sense — an existential crisis for the young business.
Segment’s engineering team began thinking hard about what a more robust and scalable system would look like. As it turned out, their vision would evolve in a number of ways between the end of 2015 and today, and with each iteration, they would take a leap in terms of how efficiently they allocated resources and processed data moving through its systems.
The project that came out of their efforts was called Centrifuge, and its purpose was to move data through Segment’s data pipes to wherever customers needed it quickly and efficiently at the lowest operating cost. This is the story of how that system came together.
The systemic issues became apparent the way they often do: when customers began complaining. When Tido Carriero, Segment’s chief product development officer, came on board at the end of 2015, he was charged with finding a solution. The issue lay in the original system design, which, like many startups’ early iterations, was built to get the product to market with little thought given to future growth; now the technical debt payment was coming due.
“We had [designed] our initial integrations architecture in a way that just wasn’t scalable in a number of different ways. We had been experiencing massive growth, and our CEO [Peter Reinhardt] came to me maybe three times within a month and reported various scaling challenges that either customers or partners of ours had alerted him to,” said Carriero.
The good news was that it was attracting customers and partners to the platform at a rapid clip, but it could all have come crashing down if the company didn’t improve the underlying system architecture to support the robust growth. As Carriero reports, that made it a stressful time, but having come from Dropbox, he was actually in a position to understand that it’s possible to completely rearchitect the business’s technology platform and live to tell about it.
“One of the things I learned from my past life [at Dropbox] is when you have a problem that’s just so core to your business, at a certain point you start to realize that you are the only company in the world kind of experiencing this problem at this kind of scale,” he said. For Dropbox that was related to storage, and for Segment it was processing large amounts of data concurrently.
In the build-versus-buy equation, Carriero knew that he had to build his way out of the problem. There was nothing out there that could solve Segment’s unique scaling issues. “Obviously that led us to believe that we really need to think about this a little bit differently, and that was when our Centrifuge V2 architecture was born,” he said.
The company began measuring system performance, at the time processing 8,442 events per second. By the time it began building V2 of its architecture, that number had grown to an average of 18,907 events per second.
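Segment’s actual instrumentation isn’t public, but an events-per-second figure like the ones quoted is typically computed with a sliding-window counter. A minimal, hypothetical sketch:

```python
import time
from collections import deque


class ThroughputMeter:
    """Rolling events-per-second counter over a fixed window.

    Illustrative only -- not Segment's actual instrumentation.
    """

    def __init__(self, window_seconds=1.0):
        self.window = window_seconds
        self.timestamps = deque()

    def record(self, now=None):
        """Log one event at the given (or current) monotonic time."""
        self.timestamps.append(time.monotonic() if now is None else now)

    def events_per_second(self, now=None):
        """Evict events older than the window, then average over it."""
        now = time.monotonic() if now is None else now
        while self.timestamps and self.timestamps[0] < now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window


meter = ThroughputMeter(window_seconds=1.0)
for i in range(100):
    meter.record(now=i * 0.005)          # 100 simulated events over 0.5s
print(meter.events_per_second(now=0.5))  # prints 100.0
```

In a real pipeline, counters like this would be tracked per stage and aggregated across workers, so engineers can spot which part of the system falls behind as volume grows.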
TaskRabbit has reset an unknown number of customer passwords after confirming it detected “suspicious activity” on its network.
The IKEA -owned online marketplace for on-demand labor said it reset user passwords out of an abundance of caution and that it “took steps to prevent access to any user accounts,” a TaskRabbit spokesperson told TechCrunch.
The company later confirmed it was a credential stuffing attack, where existing sets of exposed or breached usernames and passwords are matched against different websites to access accounts.
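Credential stuffing works because people reuse passwords across sites: attackers replay username/password pairs from old breaches until some succeed. One common defense, the general approach behind services like Have I Been Pwned’s Pwned Passwords, is to check submitted passwords against a corpus of known-breached hashes and force a reset on a match. A toy sketch (the corpus here is a tiny stand-in, not a real dump):

```python
import hashlib

# Hypothetical stand-in for a corpus of SHA-1 hashes from known credential dumps.
BREACHED_HASHES = {
    hashlib.sha1(pw.encode()).hexdigest()
    for pw in ["123456", "password", "qwerty", "letmein"]
}


def is_breached(password):
    """True if the password appears in the known-breach corpus, meaning any
    account using it is a prime credential-stuffing target and should be reset."""
    return hashlib.sha1(password.encode()).hexdigest() in BREACHED_HASHES


assert is_breached("letmein")
assert not is_breached("correct horse battery staple")
```

Sites that do this, combined with rate limiting and alerts on bursts of failed logins, can catch stuffing campaigns like the one TaskRabbit describes before many accounts are taken over.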
“We acted in an abundance of caution and reset passwords for many TaskRabbit accounts, including all users who had not logged in since May 1, 2020, as well as all users who logged in during the time period of the attack, even though most of the latter activity was attributable to users’ regular use of our services,” the spokesperson said.
“As always, the safety and security of the TaskRabbit community is our priority, and we will continue to be vigilant about protecting our users’ personal information,” said the spokesperson.
TaskRabbit customers were alerted to the incident in a vague email that only noted their password had been recently changed “as a security precaution,” without saying what specifically prompted the account change. TechCrunch confirmed that the email was legitimate.
The password reset email sent to TaskRabbit customers. (Image: Sarah Perez/TechCrunch)
It’s not uncommon for companies to reset passwords after a security incident where customer or account information is accessed or stolen in a breach.
Last year, online apparel marketplace StockX reset customer passwords after initially citing “system updates,” but later admitted it took action after it found suspicious activity on its network. Days later, a hacker provided TechCrunch with 6.8 million StockX account records stolen from the company’s servers.
TaskRabbit’s freelance labor marketplace was founded in 2008, and grew over time from an auction-style platform for negotiating tasks and errands to a more mature and tailored marketplace to match customers with contractors. That eventually attracted the attention of furniture retailer IKEA, which bought the startup in September 2017 after TaskRabbit put itself on the market for a strategic buyer.
The year after the acquisition, however, TaskRabbit had to take its website and app down due to a “cybersecurity incident.” The company later revealed an attacker had gained unauthorized access to its systems. Then-TaskRabbit CEO Stacy Brown-Philpot said the company had contracted with an outside forensics team to identify what customer information had been compromised by the attack, and urged both users and providers to stay vigilant in monitoring their own accounts for suspicious activity.
Following the attack, the company said it was implementing several new security measures and would work on making the log-in process more secure. It also said it would reduce the amount of data retained about taskers and customers as well as “enhance overall network cyber threat detection technology.”
Updated with additional comment from TaskRabbit.
The security industry is reverberating with news of the FireEye breach and the announcement that the U.S. Treasury Department, DHS and potentially several other government agencies, were hacked due (in part, at least) to a supply chain attack on SolarWinds.
These breaches are reminders that nobody is immune to risk or being hacked. I’ve no doubt that both FireEye and SolarWinds take security very seriously, but every company is subject to the same reality: Compromise is inevitable.
The way I judge these events is not by whether someone is hacked, but by how much effort the adversary needed to expend to turn a compromise into a meaningful breach. By all accounts, FireEye put real effort and execution into protecting its sensitive tools and accesses, forcing the Russians to expend stunning effort on the breach.
More evidence of FireEye’s dedication to security can be seen in the speed with which it moved to publish countermeasure tools. While the SolarWinds breach has had stunning immediate fallout, I’ll reserve opining about SolarWinds until we learn the details of the whole event, because while a breach that traverses the supply chain should be exceedingly rare, such breaches will never be stopped entirely.
All this is to say, this news isn’t surprising to me. Security organizations are a top adversarial target, and I would expect a nation-state like Russia to go to great lengths to impede FireEye’s ability to protect its customers. FireEye has trusted relationships with many enterprise organizations, which makes it a juicy target for espionage activities. SolarWinds, with its lengthy list of government and large enterprise customers, is a desirable target for an adversary looking to maximize its efforts.
Image Credits: David Wolpoff
Hack SolarWinds once, and Russia gains access to many of its prized customers. This isn’t the first time a nation-state adversary has gone through the supply chain. Nor is it likely to be the last.
For security leaders, this is a good opportunity to reflect on their reliance and trust in technology solutions. These breaches are reminders of unseen risk debt: Organizations have a huge amount of potential harm built up through their providers that typically isn’t adequately hedged against.
People need to ask the question, “What happens when my MSSP, security vendor or any tech vendor is compromised?” Don’t look at the SolarWinds hack in isolation. Look at every one of your vendors that can push updates into your environment.
No single tool can be relied on to never fail.
You need to expect that FireEye, SolarWinds and every other vendor in your environment will eventually get compromised. When failures occur, you need to know: “Will the remainder of my plans be sufficient, and will my organization be resilient?”
What’s your backup plan when this fails? Will you even know?
If your security program is critically dependent on FireEye (Read: It’s the primary security platform), then your security program is dependent on FireEye implementing, executing and auditing its own program, and you and your management need to be okay with that.
Often, organizations purchase a single security solution to cover multiple functions, like their VPN, firewall, monitoring solution and network segmentation device. But then you have a single point of failure. If the box stops working (or is hacked), everything fails.
From a structural standpoint, it’s hard to have something like SolarWinds be a point of compromise and not have wide-reaching effects. But if you trusted SolarWinds’ Orion platform to talk to and integrate with everything in your environment, then you gambled that a breach like this wouldn’t happen. When I think about utilizing any tool (or service), one question I always ask is: “When this thing fails, or is hacked, how will I know and what will I do?”
Sometimes the answer might be as simple as, “That’s an insurance-level event,” but more often I’m thinking about other ways to get some signal to the defenders. In this case, with SolarWinds as the vector, will something else in my stack still give me an indication that my network is spewing traffic to Russia?
Architecting a resilient security program isn’t easy; in fact, it’s a really hard problem to solve. No product or vendor is perfect, that’s been proven time and again. You need to have controls layered on top of each other. Run through “what happens” scenarios. Organizations focusing on defense in depth, and defending forward, will be in a more resilient position. How many failures does it take for a hacker to get to the goods? It should take more than one mishap for critical data to end up in Russia’s hands.
It’s critical to think in terms of probability and likelihood and put controls in place to prevent accidental changes to baseline security. Least privilege should be the default, and lots of segmenting should prevent rapid lateral motion. Monitoring and alerting should trigger responses, and if any wild deviations occur, the fail safes should activate. Run a red-team security program, see how well you stack up and learn from your mistakes.
Much was made of the security impacts of the FireEye breach. In reality, Russia already has tools commensurate to those taken from FireEye. So while pundits might like to make a big story out of the tools themselves, this is not likely to be reminiscent of other leaks, such as those of NSA tools in 2017.
The exploits released from the NSA were remarkable and immediately useful to adversaries, and it was those exploits, not rootkits and malware (which is what was stolen from FireEye), that drove the temporary spike in risk the industry experienced after the Shadow Brokers hack. In the FireEye case, since it appears no zero-days or exploits were taken, I don’t expect that breach to cause significant shockwaves.
Breaches of this magnitude are going to happen. If they’re something your organization needs to be resilient against, then it’s best to be prepared for them.
Dozens of medical imaging devices built by General Electric are secured with hardcoded default passwords that can’t be easily changed, but could be exploited to access sensitive patient scans, according to new findings by security firm CyberMDX.
The researchers said that an attacker would only need to be on the same network to exploit a vulnerable device, such as by tricking an employee into opening an email with malware. From there, the attacker could use those unchanged hardcoded passwords to obtain whatever patient data was left on the device or disrupt the device from operating properly.
CyberMDX said X-ray machines, CT and MRI scanners, and ultrasound and mammography devices are among the affected devices.
GE uses hardcoded passwords to remotely maintain the devices. But Elad Luz, head of research at CyberMDX, said some customers were not aware that their devices were vulnerable. Luz described the passwords as “hardcoded” because, although they can be changed, customers have to rely on a GE engineer to change them on-site.
The vulnerability has also prompted an alert by Homeland Security’s cybersecurity advisory unit, CISA. Customers of affected devices should contact GE to change the passwords.
Hannah Huntly, a spokesperson for GE Healthcare, said in a statement: “We are not aware of any incident where this potential vulnerability has been exploited in a clinical situation. We have conducted a full risk assessment and concluded that there is no patient safety concern. Maintaining the safety, quality, and security of our devices is our highest priority.”
It’s the latest find by the New York-based healthcare cybersecurity startup. Last year the startup also reported vulnerabilities in other GE equipment, which the company later admitted could have led to patient injury after initially clearing the device for use.
CyberMDX, which works primarily to secure medical devices and improve hospital network security through its cyber intelligence platform while conducting security research on the side, raised $20 million earlier this year, just a month into the COVID-19 pandemic.