When we last heard from BigID at the end of 2020, the company was announcing a $70 million Series D at a $1 billion valuation. Today, it announced a $30 million extension of that round, valuing the company at $1.25 billion just four months later.
This chunk of money comes from private equity firm Advent International, and brings the total raised to over $200 million across four rounds, according to the company. The late-stage startup is attracting all of this capital by building a security and privacy platform. When I spoke to CEO Dimitri Sirota in September 2019 at the time of the $50 million Series C, he described the company’s direction this way:
“We’ve separated the product into some constituent parts. While it’s still sold as a broad-based [privacy and security] solution, it’s much more of a platform now in the sense that there’s a core set of capabilities that we heard over and over that customers want.”
Sirota says he has been putting the money to work, and as the economy improves he is seeing more traction for the product set. “Since December, we’ve added employees as we’ve seen broader economic recovery and increased demand. In tandem, we have been busy building a whole host of new products and offerings that we will announce over the coming weeks that will be transformational for BigID,” he said.
He also said that as with previous rounds, he didn’t go looking for the additional money, but decided to take advantage of the new funds at a higher valuation with a firm that he believes can add value overall. What’s more, the funds should allow the company to expand in ways it might have held off on.
“It was important to us that this wouldn’t be a distraction and that we could balance any funding without the need to over-capitalize, which is becoming a bigger issue in today’s environment. In the end, we took what we thought could bring forward some additional product modules and add a sales team focused on smaller commercial accounts,” Sirota said.
Ashwin Krishnan, a principal on Advent’s technology team in New York, says that BigID is clearly aligned with two trends his firm has been following: the explosion of data being collected, and the increasing focus on managing and securing that data with the goal of ultimately using it to make better decisions.
“When we met with Dimitri and the BigID team, we immediately knew we had found a company with a powerful platform that solves the most challenging problem at the center of these trends and the data question,” Krishnan said.
Past investors in the company include Boldstart Ventures, Bessemer Venture Partners and Tiger Global. Strategic investors include Comcast Ventures, Salesforce Ventures and SAP.io.
Google’s historical collection of location data has got it into hot water in Australia where a case brought by the country’s Competition and Consumer Commission (ACCC) has led to a federal court ruling that the tech giant misled consumers by operating a confusing dual-layer of location settings in what the regulator describes as a “world-first enforcement action”.
The case relates to personal location data collected by Google through Android mobile devices between January 2017 and December 2018.
Per the ACCC, the court ruled that “when consumers created a new Google Account during the initial set-up process of their Android device, Google misrepresented that the ‘Location History’ setting was the only Google Account setting that affected whether Google collected, kept or used personally identifiable data about their location”.
“In fact, another Google Account setting titled ‘Web & App Activity’ also enabled Google to collect, store and use personally identifiable location data when it was turned on, and that setting was turned on by default,” it wrote.
The Court also ruled that Google misled consumers when they later accessed the ‘Location History’ setting on their Android device during the same time period to turn that setting off because it did not inform them that by leaving the ‘Web & App Activity’ setting switched on, Google would continue to collect, store and use their personally identifiable location data.
“Similarly, between 9 March 2017 and 29 November 2018, when consumers later accessed the ‘Web & App Activity’ setting on their Android device, they were misled because Google did not inform them that the setting was relevant to the collection of personal location data,” the ACCC added.
Similar complaints about Google’s location data processing being deceptive — and allegations that it uses manipulative tactics in order to keep tracking web users’ locations for ad-targeting purposes — have been raised by consumer agencies in Europe for years. And in February 2020 the company’s lead data regulator in the region finally opened an investigation. However that probe remains ongoing.
The ACCC, meanwhile, said today that it will be seeking “declarations, pecuniary penalties, publications orders, and compliance orders” following the federal court ruling, though it added that the specifics of its enforcement action will be determined “at a later date”. So it’s not clear exactly when Google will be hit with an order — nor how large a fine it might face.
The tech giant may also seek to appeal the court ruling.
Google said today it’s reviewing its legal options and considering a “possible appeal” — highlighting the fact the Court did not agree wholesale with the ACCC’s case because it dismissed some of the allegations (related to certain statements Google made about the methods by which consumers could prevent it from collecting and using their location data, and the purposes for which personal location data was being used by Google).
Here’s Google’s statement in full:
“The court rejected many of the ACCC’s broad claims. We disagree with the remaining findings and are currently reviewing our options, including a possible appeal. We provide robust controls for location data and are always looking to do more — for example we recently introduced auto delete options for Location History, making it even easier to control your data.”
Mountain View denies doing anything wrong in how it configures location settings — while simultaneously claiming it’s always looking to improve the controls it offers its users — but Google’s settings and defaults have nonetheless got it into hot water with regulators before.
Back in 2019 France’s data watchdog, the CNIL, fined it $57M over a number of transparency and consent failures under the EU’s General Data Protection Regulation. That remains the largest GDPR penalty issued to a tech giant since the regulation came into force a little under three years ago — although France has more recently sanctioned Google $120M under different EU laws for dropping tracking cookies without consent.
Australia, meanwhile, has forged ahead with passing legislation this year that directly targets the market power of Google (and Facebook) — passing a mandatory news media bargaining code in February which aims to address the power imbalance between platform giants and publishers around the reuse of journalism content.
Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.
Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).
Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.
The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and to sign up to join the case if so.
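For anyone automating such a check, haveibeenpwned also exposes a documented v3 API. The sketch below builds (but deliberately does not send) a breach-lookup request; the API key is a placeholder — a real, paid key is required — and the email address is illustrative:

```python
# Sketch: checking an address against Have I Been Pwned's v3 API.
# The endpoint and "hibp-api-key" header follow HIBP's documented API;
# "YOUR-API-KEY" is a placeholder, and the address is made up.
import urllib.parse
import urllib.request

HIBP_API = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def build_breach_request(account: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a breach-lookup request for an account."""
    url = HIBP_API + urllib.parse.quote(account)
    return urllib.request.Request(
        url,
        headers={
            "hibp-api-key": api_key,            # placeholder credential
            "user-agent": "breach-check-demo",  # HIBP requires a user agent
        },
    )

req = build_breach_request("user@example.com", "YOUR-API-KEY")
print(req.full_url)
```

Sending the request would return a list of breaches the account appears in (or a 404 if none) — but note the API covers email lookups; the free Facebook dataset was largely keyed to phone numbers, which the haveibeenpwned site added search support for after this leak.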
Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship status and employer.
Facebook has been contacted for comment on the litigation.
The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.
A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true of Facebook.
With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.
(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).
Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.
Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — which works out to around just 0.1% of the company’s full-year revenue.
That penalty was also for a data breach — but one which, unlike the Facebook leak, had been publicly disclosed when Twitter found it in 2019. Facebook’s failure to disclose the vulnerability it discovered and claimed to fix by September 2019 — which has now led to the leak of data on 533M+ accounts — suggests it should face a higher sanction from the DPC than Twitter received.
However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.
Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.
“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.
It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.
It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders in Europe to step in and take a punt on suing for data-related compensation — with a number of other mass actions announced last year.
In DRI’s case, the focus is evidently on ensuring that digital rights are upheld. But it told RTE that it believes compensation claims that force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.
Facebook, meanwhile, has sought to play down the breach it failed to disclose — claiming it’s ‘old data’ — a deflection that ignores the fact that dates of birth don’t change (nor do most people routinely change their mobile number or email address).
Plenty of the ‘old’ data exposed in this latest massive Facebook data leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.
Senator Ron Wyden (D-OR) has proposed a draft bill that would limit the types of information that could be bought and sold by tech companies abroad, and the countries it could be legally sold in. The legislation is imaginative and not highly specific, but it indicates growing concern at the federal level over the international data trade.
“Shady data brokers shouldn’t get rich selling Americans’ private data to foreign countries that could use it to threaten our national security,” said Sen. Wyden in a statement accompanying the bill. They probably shouldn’t get rich selling Americans’ private data at all, but national security is a good way to grease the wheels.
The Protecting Americans’ Data From Foreign Surveillance Act would be a first step toward categorizing and protecting consumer data as a commodity that’s traded on the global market. Right now there are few if any controls over what data specific to a person — buying habits, movements, political party — can be sold abroad.
This means that, for instance, an American data broker could sell the preferred brands and home addresses of millions of Americans to, say, a Chinese bank doing investment research. Some of this trade is perfectly innocuous, even desirable in order to promote global commerce, but at what point does it become dangerous or exploitative?
There isn’t any official definition of what should and shouldn’t be sold to whom, the way we limit sales of certain intellectual property or weapons. The proposed law would first direct the secretary of Commerce to identify the data we should be protecting and whom it should be protected from.
The general shape of protected data would be that which “if exported by third parties, could harm U.S. national security.” The countries that would be barred from receiving it would be those with inadequate data protection and export controls, recent intelligence operations against the U.S. or laws that allow the government to compel such information to be handed over to them. Obviously this is aimed at the likes of China and Russia, though ironically the U.S. fits the bill pretty well itself.
There would be exceptions for journalism and First Amendment-protected speech, and for encrypted data — for example, storing encrypted messages on servers in one of the targeted countries. The law would also create penalties for executives “who knew or should have known” that their company was illegally exporting data, and create pathways to redress for people harmed or detained in a foreign country owing to illegally exported data. That might apply if, say, another country used an American facial recognition service to spot, stop and arrest someone before they left.
If this all sounds a little woolly, it is — but that’s more or less on purpose. It is not for Congress to invent such definitions as are necessary for a law like this one; that duty falls to expert agencies, which must conduct studies and produce reports that Congress can refer to. This law represents the first handful of steps along those lines: getting the general shape of things straight and giving fair warning that certain classes of undesirable data commerce will soon be illegal — with an emphasis on executive responsibility, something that should make tech companies take notice.
The legislation would need to be sensitive to existing arrangements by which companies spread out data storage and processing for various economic and legal reasons. Free movement of data is to a certain extent necessary for globe-spanning businesses that must interact with one another constantly, and to hobble those established processes with red tape or fees might be disastrous to certain locales or businesses. Presumably this would all come up during the studies, but it serves to demonstrate that this is a very complex, not to say delicate, digital ecosystem the law would attempt to modify.
We’re in the early stages of this type of regulation, and this bill is just getting started in the legislative process, so expect a few months at the very least before we hear anything more on this one.
A cross-party group of 40 MEPs in the European parliament has called on the Commission to strengthen an incoming legislative proposal on artificial intelligence to include an outright ban on the use of facial recognition and other forms of biometric surveillance in public places.
They have also urged EU lawmakers to outlaw automated recognition of people’s sensitive characteristics (such as gender, sexuality, race/ethnicity, health status and disability) — warning that such AI-fuelled practices pose too great a rights risk and can fuel discrimination.
The Commission is expected to present its proposal for a framework to regulate ‘high risk’ applications of AI next week — but a copy of the draft leaked this week (via Politico). And, as we reported earlier, the leaked draft does not include a ban on the use of facial recognition or similar biometric remote identification technologies in public places, despite acknowledging the strength of public concern over the issue.
“Biometric mass surveillance technology in publicly accessible spaces is widely being criticised for wrongfully reporting large numbers of innocent citizens, systematically discriminating against under-represented groups and having a chilling effect on a free and diverse society. This is why a ban is needed,” the MEPs write now in a letter to the Commission which they’ve also made public.
They go on to warn over the risks of discrimination through automated inference of people’s sensitive characteristics — such as in applications like predictive policing or the indiscriminate monitoring and tracking of populations via their biometric characteristics.
“This can lead to harms including violating rights to privacy and data protection; suppressing free speech; making it harder to expose corruption; and have a chilling effect on everyone’s autonomy, dignity and self-expression – which in particular can seriously harm LGBTQI+ communities, people of colour, and other discriminated-against groups,” the MEPs write, calling on the Commission to amend the AI proposal to outlaw the practice in order to protect EU citizens’ rights and the rights of communities who face a heightened risk of discrimination (and therefore heightened risk from discriminatory tools supercharged with AI).
“The AI proposal offers a welcome opportunity to prohibit the automated recognition of gender, sexuality, race/ethnicity, disability and any other sensitive and protected characteristics,” they add.
The leaked draft of the Commission’s proposal does tackle indiscriminate mass surveillance — proposing to prohibit this practice, as well as outlawing general purpose social credit scoring systems.
However the MEPs want lawmakers to go further — warning over weaknesses in the wording of the leaked draft and suggesting changes to ensure that the proposed ban covers “all untargeted and indiscriminate mass surveillance, no matter how many people are exposed to the system”.
They also express alarm that the proposal includes an exemption from the prohibition on mass surveillance for public authorities (or commercial entities working for them) — warning that this risks deviating from existing EU legislation and from interpretations by the bloc’s top court in this area.
“We strongly protest the proposed second paragraph of this Article 4 which would exempt public authorities and even private actors acting on their behalf ‘in order to safeguard public security’,” they write. “Public security is precisely what mass surveillance is being justified with, it is where it is practically relevant, and it is where the courts have consistently annulled legislation on indiscriminate bulk processing of personal data (e.g. the Data Retention Directive). This carve-out needs to be deleted.”
“This second paragraph could even be interpreted to deviate from other secondary legislation which the Court of Justice has so far interpreted to ban mass surveillance,” they continue. “The proposed AI regulation needs to make it very clear that its requirements apply in addition to those resulting from the data protection acquis and do not replace it. There is no such clarity in the leaked draft.”
The Commission has been contacted for comment on the MEPs’ calls but is unlikely to respond ahead of the official reveal of the draft AI regulation — which is expected around the middle of next week.
It remains to be seen whether the AI proposal will undergo any significant amendments between now and then. But MEPs have fired a swift warning shot that fundamental rights must and will be a key feature of the co-legislative debate — and that lawmakers’ claims of a framework to ensure ‘trustworthy’ AI won’t look credible if the rules don’t tackle unethical technologies head on.
Data is the most valuable asset for any business in 2021. If your business is online and collecting customer personal information, your business is dealing in data, which means data privacy compliance regulations will apply to everyone — no matter the company’s size.
Small startups might not think the world’s strictest data privacy laws — the California Consumer Privacy Act (CCPA) and Europe’s General Data Protection Regulation (GDPR) — apply to them, but it’s important to enact best data management practices before a legal situation arises.
Data compliance is not only critical to a company’s daily functions; if done wrong or not done at all, it can be quite costly for companies of all sizes.
For example, failing to comply with the GDPR can result in fines of up to €20 million or 4% of annual revenue, whichever is greater. Under the CCPA, fines can also escalate quickly, to the tune of $2,500 to $7,500 per person whose data is exposed during a data breach.
If the data of 1,000 customers is compromised in a cybersecurity incident, that could add up to $7.5 million at the maximum rate. The company can also be sued in class action claims or suffer reputational damage, resulting in lost business.
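The exposure figures above are simple arithmetic, and a back-of-the-envelope sketch makes the two regimes easy to compare. The revenue figure and record count below are illustrative assumptions, not estimates for any real company:

```python
# Back-of-the-envelope sketch of regulatory exposure, as described above.
# All inputs are illustrative; none of this is legal advice.

GDPR_FLAT_CAP_EUR = 20_000_000  # EUR 20M flat ceiling ...
GDPR_TURNOVER_RATE = 0.04       # ... or 4% of annual revenue, if greater

def gdpr_max_fine(annual_revenue_eur: float) -> float:
    """GDPR's ceiling is the greater of the flat cap and 4% of revenue."""
    return max(GDPR_FLAT_CAP_EUR, GDPR_TURNOVER_RATE * annual_revenue_eur)

def ccpa_breach_exposure(records: int, per_record_usd: float = 7_500) -> float:
    """CCPA statutory damages run $2,500-$7,500 per affected person."""
    return records * per_record_usd

# A hypothetical EUR 1B-revenue firm: 4% of revenue (EUR 40M) exceeds the cap.
print(gdpr_max_fine(1_000_000_000))
# The article's example: 1,000 compromised records at the top CCPA rate.
print(ccpa_breach_exposure(1_000))
```

The point the arithmetic makes is that GDPR exposure scales with company size, while CCPA breach exposure scales with the number of people affected — so even a small startup with a large user base can face a large CCPA bill.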
It is also important to recognize some benefits of good data management. If a company takes a proactive approach to data privacy, it may mitigate the impact of a data breach, which the government can take into consideration when assessing legal fines. In addition, companies can benefit from business insights, reduced storage costs and increased employee productivity, which can all make a big impact on the company’s bottom line.
The costs are not hypothetical: Vodafone Spain was recently fined $9.72 million for GDPR data protection failures, and enforcement trackers show that schools, associations, municipalities, homeowners associations and more are also receiving fines.
GDPR regulators have issued $332.4 million in fines since the law came into force almost three years ago and are becoming more aggressive with enforcement. While California’s attorney general started CCPA enforcement on July 1, 2020, the newly passed California Privacy Rights Act (CPRA) only recently created a state agency to more effectively enforce compliance for any company storing information of residents of California, a major hub of U.S. startups.
That is why data privacy compliance is now key to a successful business. Unfortunately, many startups are at a disadvantage for several reasons, including:
In business today, many believe that consumer privacy and business results are mutually exclusive — to excel in one area is to lack in the other. Consumer privacy is seen by many in the technology industry as an area to be managed.
But the truth is, the companies who champion privacy will be better positioned to win in all areas. This is especially true as the digital industry continues to undergo tectonic shifts in privacy — both in government regulation and browser updates.
By the end of 2022, all major browsers will have phased out third-party cookies — tracking codes placed on a visitor’s computer by a website other than the one they are visiting. Additionally, mobile device makers are limiting the identifiers allowed on their devices and in applications. Across industry verticals, the global enterprise ecosystem now faces a critical moment in which digital advertising will be forever changed.
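The first-party/third-party distinction that browsers enforce comes down to a domain comparison: a cookie whose domain is unrelated to the site the visitor is on is a cross-site tracker from that page’s perspective. A minimal sketch of that check, with made-up domain names (real browsers use the Public Suffix List and more nuanced rules):

```python
# Sketch: what makes a cookie "third-party". A cookie scoped to a domain
# other than (and not a parent of) the visited site's host is cross-site.
# Domain names here are illustrative; browsers apply more complex rules.

def is_third_party(cookie_domain: str, page_host: str) -> bool:
    """True when the cookie's domain is unrelated to the visited site."""
    cookie_domain = cookie_domain.lstrip(".")  # ".example.com" -> "example.com"
    return not (page_host == cookie_domain
                or page_host.endswith("." + cookie_domain))

# A publisher's own session cookie vs. an ad network's tracker:
print(is_third_party("news.example", "news.example"))        # first-party
print(is_third_party(".adtracker.example", "news.example"))  # third-party
```

This is why the phase-out hits ad networks and not publishers’ own logins: a cookie set for the site you are actually visiting keeps working, while one set for an unrelated tracking domain is what gets blocked.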
Up until now, consumers have enjoyed a mostly free internet experience, but as publishers adjust to a cookie-less world, they could see more paywalls and less free content.
They may also see a decrease in the creation of new free apps, mobile gaming, and other ad-supported content unless businesses find new ways to authenticate users and maintain a value exchange of free content for personalized advertising.
When consumers authenticate themselves to brands and sites, they create revenue streams for publishers as well as the opportunity to receive discounts, first-looks, and other specially tailored experiences from brands.
To protect consumer data, companies need to architect internal systems around data custodianship versus acting from a sense of data entitlement. While this is a challenging and massive ongoing evolution, the benefits of starting now are enormous.
Putting privacy front and center creates a sustainable digital ecosystem that enables better advertising and drives business results. There are four steps to consider when building for tomorrow’s privacy-centric world:
As we collectively look to redesign how companies interact with and think about consumers, we should first recognize that putting people first means putting transparency first. When people trust a brand or publishers’ intentions, they are more willing to share their data and identity.
This process, where consumers authenticate themselves — or actively share their phone number, email or other form of identity — in exchange for free content or another form of value, allows brands and publishers to get closer to them.
Facebook’s lead data supervisor in the European Union has opened an investigation into whether the tech giant violated data protection rules vis-a-vis the leak of data reported last week.
Here’s the Irish Data Protection Commission’s statement:
“The Data Protection Commission (DPC) today launched an own-volition inquiry pursuant to section 110 of the Data Protection Act 2018 in relation to multiple international media reports, which highlighted that a collated dataset of Facebook user personal data had been made available on the internet. This dataset was reported to contain personal data relating to approximately 533 million Facebook users worldwide. The DPC engaged with Facebook Ireland in relation to this reported issue, raising queries in relation to GDPR compliance to which Facebook Ireland furnished a number of responses.
The DPC, having considered the information provided by Facebook Ireland regarding this matter to date, is of the opinion that one or more provisions of the GDPR and/or the Data Protection Act 2018 may have been, and/or are being, infringed in relation to Facebook Users’ personal data.
Accordingly, the Commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users by means of the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer features of its service, or whether any provision(s) of the GDPR and/or the Data Protection Act 2018 have been, and/or are being, infringed by Facebook in this respect.”
The move comes after the European Commission intervened to apply pressure on Ireland’s data protection commissioner. Justice commissioner Didier Reynders tweeted on Monday that he had spoken with Helen Dixon about the Facebook data leak.
“The Commission continues to follow this case closely and is committed to supporting national authorities,” he added, going on to urge Facebook to “cooperate actively and swiftly to shed light on the identified issues”.
Facebook has been contacted for comment.
Today I spoke with Helen Dixon @DPCIreland about the #FacebookLeak. The Commission continues to follow this case closely and is committed to supporting national authorities. We also call on @Facebook to cooperate actively and swiftly to shed light on the identified issues.
— Didier Reynders (@dreynders) April 12, 2021
A spokeswoman for the Commission confirmed the virtual meeting between Reynders and Dixon, saying: “Dixon informed the Commissioner about the issues at stake and the different tracks of work to clarify the situation.
“They both urge Facebook to cooperate swiftly and to share the necessary information. It is crucial to shed light on this leak that has affected millions of European citizens.”
“It is up to the Irish data protection authority to assess this case. The Commission remains available if support is needed. The situation will also have to be further analyzed for the future. Lessons should be learned,” she added.
The revelation that a vulnerability in Facebook’s platform enabled unidentified ‘malicious actors’ to extract the personal data (including email addresses, mobile phone numbers and more) of more than 500 million Facebook accounts up until September 2019 — when Facebook claims it fixed the issue — only emerged in the wake of the data being found for free download on a hacker forum earlier this month.
All 533,000,000 Facebook records were just leaked for free.
This means that if you have a Facebook account, it is extremely likely the phone number used for the account was leaked.
— Alon Gal (Under the Breach) (@UnderTheBreach) April 3, 2021
Despite the European Union’s data protection framework (the GDPR) baking in a regime of data breach notifications — with the risk of hefty fines for compliance failures — Facebook did not inform its lead EU data supervisor when it found and fixed the issue. Ireland’s Data Protection Commission (DPC) was left to find out in the press, like everyone else.
Nor has Facebook individually informed the 533M+ users that their information was taken without their knowledge or consent, saying last week it has no plans to do so — despite the heightened risk for affected users of spam and phishing attacks.
Privacy experts have, meanwhile, been swift to point out that the company has still not faced any regulatory sanction under the GDPR — with a number of investigations ongoing into various Facebook businesses and practices and no decisions yet issued in those cases by Ireland’s DPC.
Last month the European Parliament adopted a resolution on the implementation of the GDPR which expressed “great concern” over the functioning of the mechanism — raising particular concern over the Irish data protection authority by writing that it “generally closes most cases with a settlement instead of a sanction and that cases referred to Ireland in 2018 have not even reached the stage of a draft decision pursuant to Article 60(3) of the GDPR”.
The latest Facebook data scandal further amps up the pressure on the DPC — providing further succour to critics of the GDPR who argue the regulation is unworkable under the current foot-dragging enforcement structure, given the major bottlenecks in Ireland (and Luxembourg) where many tech giants choose to locate regional HQ.
On Thursday Reynders made his concern over Ireland’s response to the Facebook data leak public, tweeting to say the Commission had been in contact with the DPC.
He does have reason to be personally concerned. Earlier last week Politico reported that Reynders’ own digits had been among the cache of leaked data, along with those of the Luxembourg prime minister Xavier Bettel — and “dozens of EU officials”. However the problem of weak GDPR enforcement affects everyone across the bloc — some 446M people whose rights are not being uniformly and vigorously upheld.
“A strong enforcement of GDPR is of key importance,” Reynders also remarked on Twitter, urging Facebook to “fully cooperate with Irish authorities”.
Last week Italy’s data protection commission also called on Facebook to immediately offer a service for Italian users to check whether they had been affected by the breach. But Facebook made no public acknowledgment or response to the call. Under the GDPR’s one-stop-shop mechanism the tech giant can limit its regulatory exposure by dealing directly only with its lead EU data supervisor in Ireland.
A two-year Commission review of how the data protection regime is functioning, which reported last summer, already drew attention to problems with patchy enforcement. So a lack of progress on unblocking GDPR bottlenecks is a growing problem for the Commission — which is in the midst of proposing a package of additional digital regulations. That makes the enforcement point a very pressing one: EU lawmakers are being asked how new digital rules will be upheld if existing ones keep being trampled on.
It’s certainly notable that the EU’s executive has proposed a different, centralized enforcement structure for incoming pan-EU legislation targeted at digital services and tech giants. Albeit, getting agreement from all the EU’s institutions and elected representatives on how to reshape platform oversight looks challenging.
And in the meantime the data leaks continue: Motherboard reported Friday on another alarming leak of Facebook data it found being made accessible via a bot on the Telegram messaging platform that gives out the names and phone numbers of users who have liked a Facebook page (in exchange for a fee, unless the page has had fewer than 100 likes).
The publication said this data appears to be separate from the 533M+ scraped dataset — after it ran checks against the larger dataset via the breach advice site Have I Been Pwned. It also asked Alon Gal, the person who discovered the aforementioned leaked Facebook dataset being offered for free download online, to compare data obtained via the bot, and he did not find any matches.
We contacted Facebook about the source of this leaked data and will update this report with any response.
In his tweet about the 500M+ Facebook data leak last week, Reynders made reference to the European Data Protection Board (EDPB), a steering body comprised of representatives from Member State data protection agencies which works to ensure a consistent application of the GDPR.
However the body does not lead on GDPR enforcement — so it’s not clear why he would invoke it. Optics is one possibility, if he was trying to encourage a perception that the EU has vigorous and uniform enforcement structures where people’s data is concerned.
“Under the GDPR, enforcement and the investigation of potential violations lies with the national supervisory authorities. The EDPB does not have investigative powers per se and is not involved in investigations at the national level. As such, the EDPB cannot comment on the processing activities of specific companies,” an EDPB spokeswoman told us when we enquired about Reynders’ remarks.
But she also noted the Commission attends plenary meetings of the EDPB — adding it’s possible there will be an exchange of views among members about the Facebook leak case in the future, as attending supervisory authorities “regularly exchange information on cases at the national level”.
When Microsoft announced it was acquiring Nuance Communications this morning for $19.7 billion, you could be excused for doing a Monday morning double take at the hefty price tag.
That’s surely a lot of money for a company on a $1.4 billion run rate, but Microsoft, which has already partnered with the speech-to-text market leader on several products over the last couple of years, saw a company firmly embedded in healthcare and decided to go all in.
And $20 billion is certainly all in, even for a company the size of Microsoft. But 2020 forced us to change the way we do business, from restaurants to retailers to doctors. In fact, the pandemic in particular changed the way we interact with our medical providers. We learned very quickly that you don’t have to drive to an office, wait in a waiting room, then in an exam room, all to see the doctor for a few minutes.
Instead, we can get on the line, have a quick chat and be on our way. It won’t work for every condition of course — there will always be times the physician needs to see you — but for many meetings such as reviewing test results or for talk therapy, telehealth could suffice.
Microsoft CEO Satya Nadella says that Nuance is at the center of this shift, especially with its use of cloud and artificial intelligence, and that’s why the company was willing to pay the amount it did to get it.
“AI is technology’s most important priority, and healthcare is its most urgent application. Together, with our partner ecosystem, we will put advanced AI solutions into the hands of professionals everywhere to drive better decision-making and create more meaningful connections, as we accelerate growth of Microsoft Cloud in Healthcare and Nuance,” Nadella said in a post announcing the deal.
Microsoft sees this deal doubling what was already a considerable total addressable market to nearly $500 billion. While TAMs always tend to run high, that is still a substantial number.
It also fits with Gartner data, which found that by 2022, 75% of healthcare organizations will have a formal cloud strategy in place. The AI component only adds to that number and Nuance brings 10,000 existing customers to Microsoft including some of the biggest healthcare organizations in the world.
Brent Leary, founder and principal analyst at CRM Essentials, says the deal could provide Microsoft with a ton of health data to help feed the underlying machine learning models and make them more accurate over time.
“There is going to be a ton of health data being captured by the interactions coming through telemedicine interactions, and this could create a whole new level of health intelligence,” Leary told me.
That of course could drive a lot of privacy concerns where health data is involved, and it will be up to Microsoft, which just experienced a major breach on its Exchange email server products last month, to assure the public that their sensitive health data is being protected.
Leary says that ensuring data privacy is going to be absolutely key to the success of the deal. “The potential this move has is pretty powerful, but it will only be realized if the data and insights that could come from it are protected and secure — not only protected from hackers but also from unethical use. Either could derail what could be a game changing move,” he said.
Microsoft also seemed to recognize that when it wrote, “Nuance and Microsoft will deepen their existing commitments to the extended partner ecosystem, as well as the highest standards of data privacy, security and compliance.”
We are clearly on the edge of a sea change when it comes to how we interact with our medical providers in the future. COVID pushed medicine deeper into the digital realm in 2020 out of simple necessity. It wasn’t safe to go into the office unless absolutely necessary.
The Nuance acquisition, which is expected to close some time later this year, could help Microsoft shift deeper into the market. It could even bring Teams into it as a meeting tool, but it’s all going to depend on the trust level people have with this approach, and it will be up to the company to make sure that both healthcare providers and the people they serve have that.
Security researchers say APKPure, a widely popular app for installing older or discontinued Android apps from outside of Google’s app store, contained malicious adware that flooded the victim’s device with unwanted ads.
Kaspersky Lab said that it alerted APKPure on Thursday that its most recent app version, 3.17.18, contained malicious code that siphoned off data from a victim’s device without their knowledge, and pushed ads to the device’s lock screen and in the background to generate fraudulent revenue for the adware operators.
But the researchers said that the malicious code had the capacity to download other malware, potentially putting affected victims at further risk.
The researchers said the APKPure developers likely introduced the malicious code through a third-party software development kit, or SDK, from an unverified source. APKPure removed the malicious code and pushed out a new version, 3.17.19, and the developers no longer list the malicious version on the site.
APKPure was set up in 2014 to allow Android users access to a vast bank of Android apps and games, including old versions, as well as app versions from other regions that are no longer on Android’s official app store Google Play. It later launched an Android app, which also has to be installed outside Google Play, serving as its own app store to allow users to download older apps directly to their Android devices.
APKPure is ranked as one of the most popular sites on the internet.
But security experts have long warned against installing apps from outside the official app stores, as quality and security vary wildly and much Android malware requires victims to install malicious apps from outside the app store. Google scans all Android apps that make it into Google Play, but some have slipped through the cracks before.
TechCrunch contacted APKPure for comment but did not hear back.
The question of whether Facebook will face any regulatory sanction over the latest massive historical platform privacy failure to come to light remains open. But the timeline of the incident looks increasingly awkward for the tech giant.
While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.
That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.
Under the EU regulation data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.
The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.
Not only is @Facebook past the indemnification period of the FTC settlement (June 12 2019), they also may have violated the terms of the settlement requiring them to report breaches of covered information (ht @JustinBrookman ) https://t.co/182LEf4rNO pic.twitter.com/utCnQ4USHI
— ashkan soltani (@ashk4n) April 7, 2021
Yesterday, in its own statement responding to the breach revelations, Facebook’s lead data supervisor in the EU said the provenance of the newly published dataset wasn’t entirely clear, writing that it “seems to comprise the original 2018 (pre-GDPR) dataset” — referring to an earlier breach incident Facebook disclosed in 2018 which related to a vulnerability in its phone lookup functionality that it had said occurred between June 2017 and April 2018 — but also writing that the newly published dataset also looked to have been “combined with additional records, which may be from a later period”.
Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.
Another new detail that emerged in Facebook’s blog post yesterday was the fact that users’ data was scraped not via the aforementioned phone lookup vulnerability but via another method altogether: a contact importer tool vulnerability.
This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.
In this way a spammer (for example), could upload a database of potential phone numbers and link them to not only names but other data like birth date, email address, location — all the better to phish you with.
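The matching technique described above is simple to sketch. As a purely illustrative example (the function, names and numbers below are all invented, not Facebook’s actual systems or data), an attacker who can ask a platform “which of these numbers belong to accounts?” can cheaply join a bulk-generated phone list against profile data:

```python
# Hypothetical illustration of contact-importer abuse: an attacker
# enumerates candidate phone numbers and asks the platform which ones
# match real accounts. All names and numbers here are invented.

def match_contacts(candidate_numbers, platform_directory):
    """Simulate a contact-importer lookup: return profile data for
    every candidate number that belongs to a known account."""
    return {
        number: platform_directory[number]
        for number in candidate_numbers
        if number in platform_directory
    }

# Invented "platform" data standing in for user profiles.
directory = {
    "+15551230001": {"name": "A. Example", "city": "Dublin"},
    "+15551230002": {"name": "B. Example", "city": "Austin"},
}

# An attacker can generate candidate numbers in bulk.
candidates = [f"+1555123{i:04d}" for i in range(10)]

matches = match_contacts(candidates, directory)
print(len(matches))  # 2 matched accounts out of 10 guesses
```

Done at scale, each match links a phone number to a name and other profile details, which is exactly the kind of joined-up record that makes phishing convincing.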
In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely in the period of GDPR being active.
As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms — and to do so without undue delay (ideally within 72 hours of becoming aware of it).
Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.
Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.
Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.
And, indeed, why it’s sought to downplay the significance of the leaked information — dubbing people’s personal information “old data”. (Even as few people regularly change their mobile numbers, email address, full names and biographical information and so on, and no one (legally) gets a new birth date… )
Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.
The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts — thereby making this personal information ‘publicly available for scraping/no longer private/uncovered by data protection legislation’.
This is an argument as obviously absurd as it is viciously hostile to people’s rights and privacy. It’s also an argument that EU data protection regulators must quickly and definitively reject or be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators’ sole purpose is to defend and uphold.
Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook’s privacy-hostile defaults, that still raises key questions of GDPR compliance — because the regulation also requires data controllers to adequately secure personal data and apply privacy by design and default.
Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.
In short, it’s the Cambridge Analytica scandal all over again.
Facebook is trying to get away with continuing to be terrible at privacy and data protection because it’s been so terrible at it in the past — and likely feels confident in keeping on with this tactic because it’s faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)
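As a quick back-of-the-envelope check using the figures above (the $85BN revenue number is an approximation, not an exact figure):

```python
# Rough arithmetic using the figures cited above. $85BN is an
# approximation of Facebook's annual revenue, not an exact number.
ftc_fine = 5e9          # one-time FTC settlement, USD
annual_revenue = 85e9   # approximate annual revenue, USD

# The FTC fine as a share of a single year's revenue...
fine_share = ftc_fine / annual_revenue
print(f"{fine_share:.1%}")  # roughly 5.9% of one year's revenue

# ...versus the GDPR's theoretical maximum of 4% of global annual
# turnover, which could in principle apply per serious violation.
gdpr_max = 0.04 * annual_revenue
print(f"${gdpr_max / 1e9:.1f}B")
```

On those numbers, even the GDPR’s headline 4% maximum would be smaller than the FTC settlement in absolute terms for a single year, which underlines why enforcement frequency and consistency matter as much as the fine ceiling.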
We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.
Then it told us it would not be commenting on its communications with regulators.
Under the GDPR, if a breach poses a high risk to users’ rights and freedoms a data controller is required to notify affected individuals — with the rationale being that prompt notification of a threat can help people take steps to protect themselves from the risks of their data being breached, such as fraud and ID theft.
Yesterday Facebook also said it does not have plans to notify users either.
Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.
Apple is sharing more details today about its upcoming App Tracking Transparency feature, which will allow users to control, on an app-by-app level, whether their data is shared for ad-targeting purposes.
In a sense, anyone using the current version of iOS can see App Tracking Transparency in action, since iOS already includes a Tracking menu in the Privacy settings, and some apps have already started asking users for permission to track them.
But when iOS 14.5 (currently in developer beta) is released to the general public sometime in early spring, Apple will actually start enforcing its new rules, meaning that iPhone users will probably start seeing a lot more requests. Those requests will appear at various points during the usage of an app, but they’ll all carry a standardized message asking whether the app can “track your activity across other companies’ apps and websites,” followed by a customized explanation from the developer.
Once an app has asked for this permission, it will also show up in the Tracking menu, where users can toggle app tracking on and off at any time. They can also enable app tracking across all apps or opt out of these requests entirely with a single toggle.
One point worth emphasizing — something already stated on Apple’s developer website but not entirely clear in media reports (including our own) — is that these rules aren’t limited to the IDFA identifier. Yes, IDFA is what Apple controls directly, but a company spokesperson said that when a user opts out of tracking, Apple will also expect developers to stop using any other identifiers (such as hashed email addresses) to track users for ad targeting purposes, and not to share that information with data brokers.
This does not, however, stop developers from tracking users across multiple apps if all those apps are operated by a single company.
The Apple spokesperson also said that Apple’s own apps will abide by these rules — you won’t see any requests from Apple, however, since it doesn’t track users across third-party apps for ad targeting purposes. (As previously noted, there’s a separate Personalized Ads option that determines whether Apple can use its own first-party data to target ads.)
Facebook has been particularly vocal in criticizing the change, arguing that this will hurt small businesses who use targeting to run effective ad campaigns, and that the change benefits Apple’s bottom line.
Apple has pushed back against criticism in privacy-focused speeches, as well as in a report called A Day in the Life of Your Data, which lays out how users are actually tracked and targeted. In fact, the report has just been updated with more information about ad auctions, ad attribution and Apple’s own advertising products.
European regulators have questions about a Facebook data breach, Clubhouse adds payments and a robotics company has SPAC plans. This is your Daily Crunch for April 6, 2021.
The big story: Facebook faces questions over data breach
A data breach involving personal data (such as email addresses and phone numbers) of more than 500 million Facebook accounts came to light over the weekend thanks to a story in Business Insider. Although Facebook said the breach was related to a vulnerability that was “found and fixed” in August 2019, the Irish Data Protection Commission — Facebook’s lead data regulator in the European Union — suggested that it’s seeking the “full facts” in the matter.
“The newly published dataset seems to comprise the original 2018 (pre-GDPR) dataset and combined with additional records, which may be from a later period,” said deputy commissioner Graham Doyle in a statement. “A significant number of the users are EU users. Much of the data appears to have been data scraped some time ago from Facebook public profiles.”
In addition, it looks like EU regulators may also look into Facebook’s acquisition of customer service company Kustomer.
The tech giants
Apple launches an app for testing devices that work with ‘Find My’ — Find My Certification Asst. is designed for use by Made for iPhone Licensees who need to test their accessories’ interoperability with Apple’s Find My network.
Google Cloud joins the FinOps Foundation — The FinOps Foundation is a relatively new open-source foundation that aims to bring together companies in the “cloud financial management” space to establish best practices and standards.
Facebook confirms ‘test’ of Venmo-like QR codes for person-to-person payments in US — The feature will allow a user to scan a friend’s code with their smartphone’s camera to send or request money.
Startups, funding and venture capital
Clubhouse launches payments so creators can make money — It’s like a virtual tip jar, or a Clubhouse-branded version of Venmo.
Robotic exoskeleton maker Sarcos announces SPAC plans — The deal could potentially value the robotic exoskeleton maker and blank check company at a combined $1.3 billion.
Hipmunk’s founders launch Flight Penguin to bring back Hipmunk-style flight search — I’ve missed Hipmunk.
Advice and analysis from Extra Crunch
Giving EV batteries a second life for sustainability and profit — Automakers and startups are eying ways to reuse batteries before they’re sent for recycling.
Will Topps’ SPAC-led debut expand the bustling NFT market? — Topps and its products are popular with the same set of folks who are very excited about creating rare digital items on particular blockchains.
LG’s exit from the smartphone market comes as no surprise — Why didn’t it happen sooner?
(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)
Everything else
GM to build an electric Chevrolet Silverado pickup truck with more than 400 miles of range — GM is positioning the full-sized pickup for both consumer and commercial markets.
Putting Belfast on the TechCrunch map — TechCrunch’s European Cities Survey 2021 — This is the follow-up to the huge survey of investors we’ve done over the last six or more months, largely in capital cities.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
Encrypted chat app Signal is adding payments to the services it provides, a long-expected move and one the company is taking its time on. A U.K.-only beta program will allow users to trade the cryptocurrency MobileCoin quickly, easily, and most importantly, privately.
If you’re in the U.K., or have some way to appear to be, you’ll notice a new Signal Payments feature in the app when you update. All you need to do to use it is link a MobileCoin wallet after you buy some on the cryptocurrency exchange FTX, the only one that lists it right now.
Once you link up, you’ll be able to instantly send MOB to anyone else with a linked wallet, pretty much as easily as you’d send a chat. (No word on when the beta will expand to other countries or currencies.)
Just as Signal doesn’t have any kind of access to the messages you send or calls you make, your payments are totally private. MobileCoin, which Signal has been working with for a couple years now, was built from the ground up for speed and privacy, using a zero-knowledge proof system and other innovations to make it as easy as Venmo but as secure as … well, Signal. You can read more about their approach in this paper (PDF).
MobileCoin just snagged a little over $11 million in funding last month as rumors swirled that this integration was nearing readiness. Further whispers propelled the value of MOB into the stratosphere as well, nice for those holding it but not for people who want to use it to pay someone back for a meal. All of a sudden you’ve given your friend a Benjamin (or perhaps now, in the U.K., a Turing) for no good reason, or the value of the sandwich money has depreciated precipitously since lunchtime.
There’s no reason you have to hold the currency, of course, but swapping it for stable or fiat currencies every time seems a chore. Speaking to Wired, Signal co-founder Moxie Marlinspike envisioned an automatic trade-out system, though he is rarely so free with information like that if it is something under active development.
While there is some risk that getting involved with cryptocurrency, with the field’s mixed reputation, may dilute or pollute the goodwill Signal has developed as a secure and disinterested service provider, the team there seems to think it’s inevitable. After all, if popular payment services are being monitored the same way your email and social media are, perhaps we ought to nip this one in the bud and go end-to-end encrypted as quickly as possible.
The European Union may investigate Facebook’s $1BN acquisition of customer service platform Kustomer after concerns were referred to it under EU merger rules.
A spokeswoman for the Commission confirmed it received a request to refer the proposed acquisition from Austria under Article 22 of the EU’s Merger Regulation — a mechanism which allows Member States to flag a proposed transaction that’s not notifiable under national filing thresholds (e.g. because the turnover of one of the companies is too low for a formal notification).
The Commission spokeswoman said the case was notified in Austria on March 31.
“Following the receipt of an Article 22 request for referral, the Commission has to transmit the request for referral to other Member States without delay, who will have the right to join the original referral request within 15 working days of being informed by the Commission of the original request,” she told us, adding: “Following the expiry of the deadline for other Member States to join the referral, the Commission will have 10 working days to decide whether to accept or reject the referral.”
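The procedural clock the Commission describes is easy to sketch. Here is a minimal illustration of counting the two working-day windows, assuming (purely for illustration; the Commission statement does not confirm this) that the clock started on the March 31 Austrian notification date, and ignoring public holidays:

```python
from datetime import date, timedelta

def add_working_days(start: date, n: int) -> date:
    """Advance n working days from start, skipping weekends.
    Public holidays are ignored for simplicity."""
    current = start
    while n > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            n -= 1
    return current

# Assumed start date: the March 31, 2021 notification in Austria.
referral = date(2021, 3, 31)

# 15 working days for other Member States to join the referral,
# then 10 working days for the Commission to accept or reject it.
join_deadline = add_working_days(referral, 15)
decision_deadline = add_working_days(join_deadline, 10)
print(join_deadline, decision_deadline)
```

On that (assumed) start date the join window would run into late April and the Commission’s accept-or-reject decision into early May, which is consistent with the “few weeks” framing below.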
We’ll know in a few weeks whether or not the European Commission will take a look at the acquisition — an option that could see the transaction stalled for months, delaying Facebook’s plans for integrating Kustomer’s platform into its empire.
Facebook and Kustomer have been contacted for comment on the development.
The tech giant’s planned purchase of the customer relations management platform was announced last November and quickly raised concerns over what Facebook might do with any personal data held by Kustomer — which could include sensitive information, given sectors served by the platform include healthcare, government and financial services, among others.
Back in February, the Irish Council for Civil Liberties (ICCL) wrote to the Commission and national and EU data protection agencies to raise concerns about the proposed acquisition — urging scrutiny of the “data processing consequences”, and highlighting how Kustomer’s terms allow it to process user data for very wide-ranging purposes.
“Facebook is acquiring this company. The scope of ‘improving our Services’ [in Kustomer’s terms] is already broad, but is likely to grow broader after Kustomer is acquired,” the ICCL warned. “‘Our Services’ may, for example, be taken to mean any Facebook services or systems or projects.”
“The settled caselaw of the European Court of Justice, and the European data protection board, is that ‘improving our services’ and similarly vague statements do not qualify as a ‘processing purpose’,” it added.
The ICCL also said it had written to Facebook asking for confirmation of the post-acquisition processing purposes for which people’s data will be used.
Johnny Ryan, senior fellow at the ICCL, confirmed to TechCrunch it has not had any response from Facebook to those questions.
We’ve also asked Facebook to confirm what it will do with any personal data held on users by Kustomer once it owns the company — and will update this report with any response.
In a separate (recent) episode — involving Google — its acquisition of wearable maker Fitbit went through months of competition scrutiny in the EU and was only cleared by regional regulators after the tech giant made a number of concessions, including committing not to use Fitbit data for ads for ten years.
Until now Facebook’s acquisitions have generally flown under regulators’ radar, including, around a decade ago, when it was sewing up the social space by buying up rivals Instagram and WhatsApp.
Several years later it was forced to pay a fine in the EU over a ‘misleading’ filing — after it combined WhatsApp and Facebook data, despite having told regulators it could not do so.
With so many data scandals now inextricably attached to Facebook, the tech giant is saddled with customer mistrust by default and is facing far greater scrutiny of how it operates — which is now threatening to inject friction into its plans to expand its B2B offering by acquiring a CRM player. So after ‘move fast and break things’ Facebook is having to move slower because of its reputation for breaking stuff.
As governments scrambled to lock down their populations after the COVID-19 pandemic was declared last March, some countries had plans underway to reopen. By June, Jamaica became one of the first countries to open its borders.
Tourism represents about one-fifth of Jamaica’s economy. In 2019 alone, four million travelers visited Jamaica, bringing thousands of jobs to its three million residents. But as COVID-19 stretched into the summer, Jamaica’s economy was in free fall, and tourism was its only way back — even if that came at the expense of public health.
The Jamaican government contracted with Amber Group, a technology company headquartered in Kingston, to build a border entry system allowing residents and travelers back onto the island. The system was named JamCOVID and was rolled out as an app and a website to allow visitors to get screened before they arrive. To cross the border, travelers had to upload a negative COVID-19 test result to JamCOVID before boarding their flight from high-risk countries, including the United States.
Amber Group’s CEO Dushyant Savadia boasted that his company developed JamCOVID in “three days” and that it effectively donated the system to the Jamaican government, which in turn pays Amber Group for additional features and customizations. The rollout appeared to be a success, and Amber Group later secured contracts to roll out its border entry system to at least four other Caribbean islands.
But last month TechCrunch revealed that JamCOVID exposed immigration documents, passport numbers, and COVID-19 lab test results on close to half a million travelers — including many Americans — who visited the island over the past year. Amber Group had set the access to the JamCOVID cloud server to public, allowing anyone to access its data from their web browser.
Whether the data exposure was caused by human error or negligence, it was an embarrassing mistake for a technology company — and, by extension, the Jamaican government — to make.
And that might have been the end of it. Instead, the government’s response became the story.
By the end of the first wave of coronavirus, contact tracing apps were still in their infancy and few governments had plans in place to screen travelers as they arrived at their borders. It was a scramble for governments to build or acquire technology to understand the spread of the virus.
As part of an investigation into a broad range of these COVID-19 apps and services, TechCrunch found that JamCOVID was storing data on an exposed, passwordless server.
This wasn’t the first time TechCrunch found security flaws or exposed data through our reporting. It also was not the first pandemic-related security scare. Israeli spyware maker NSO Group left real location data on an unprotected server that it used for demonstrating its new contact tracing system. Norway was one of the first countries with a contact tracing app, but pulled it after the country’s privacy authority found the continuous tracking of citizens’ location was a privacy risk.
Just as we have with any other story, we contacted who we thought was the server’s owner. We alerted Jamaica’s Ministry of Health to the data exposure on the weekend of February 13. But after we provided specific details of the exposure to ministry spokesperson Stephen Davidson, we did not hear back. Two days later, the data was still exposed.
After we spoke to two American travelers whose data was spilling from the server, we narrowed down the owner of the server to Amber Group. We contacted its chief executive Savadia on February 16, who acknowledged the email but did not comment, and the server was secured about an hour later.
We ran our story that afternoon. After we published, the Jamaican government issued a statement claiming the lapse was “discovered on February 16” and was “immediately rectified” — neither of which was true.
Instead, the government responded by launching a criminal investigation into whether there was any “unauthorized” access to the unprotected data that led to our first story, which we perceived to be a thinly veiled threat directed at this publication. The government said it had contacted its overseas law enforcement partners.
When reached, a spokesperson for the FBI declined to say whether the Jamaican government had contacted the agency.
Things didn’t get much better for JamCOVID. In the days that followed the first story, the government engaged a cloud and cybersecurity consultant, Escala 24×7, to assess JamCOVID’s security. The results were not disclosed, but the company said it was confident there was “no current vulnerability” in JamCOVID. Amber Group also said that the lapse was a “completely isolated occurrence.”
A week later, TechCrunch alerted Amber Group to two more security lapses. Drawn by news of the first incident, a security researcher found exposed private keys and passwords for JamCOVID’s servers and databases hidden on its website, as well as a third lapse that spilled quarantine orders for more than half a million travelers.
Amber Group and the government claimed they faced “cyberattacks, hacking and mischievous players.” In reality, the app was just not that secure.
The security lapses come at a politically inconvenient time for the Jamaican government, as it attempts to launch a national identification system, or NIDS, for the second time. NIDS will store biographic data on Jamaican nationals, including their biometrics, such as their fingerprints.
The repeat effort comes two years after the government’s first law was struck down by Jamaica’s High Court as unconstitutional.
Critics have cited the JamCOVID security lapses as a reason to drop the proposed national database. A coalition of privacy and rights groups cited the recent issues with JamCOVID for why a national database is “potentially dangerous for Jamaicans’ privacy and security.” A spokesperson for Jamaica’s opposition party told local media that there “wasn’t much confidence in NIDS in the first place.”
It’s been more than a month since we published the first story and there are many unanswered questions, including how Amber Group secured the contract to build and run JamCOVID, how the cloud server became exposed, and if security testing was conducted before its launch.
TechCrunch emailed both the Jamaican prime minister’s office and Jamaica’s national security minister Matthew Samuda to ask how much, if anything, the government donated or paid to Amber Group to run JamCOVID and what security requirements, if any, were agreed upon for JamCOVID. We did not get a response.
Amber Group also has not said how much it has earned from its government contracts. Amber Group’s Savadia declined to disclose the value of the contracts to one local newspaper. Savadia did not respond to our emails with questions about its contracts.
Following the second security lapse, Jamaica’s opposition party demanded that the prime minister release the contracts that govern the agreement between the government and Amber Group. Prime Minister Andrew Holness said at a press conference that the public “should know” about government contracts but warned “legal hurdles” may prevent disclosure, such as for national security reasons or when “sensitive trade and commercial information” might be disclosed.
That came days after local newspaper The Jamaica Gleaner had a request to obtain contracts revealing the salaries of state officials denied by the government, under a legal clause that prevents the disclosure of an individual’s private affairs. Critics argue that taxpayers have a right to know how much government officials are paid from public funds.
Jamaica’s opposition party also asked what was done to notify victims.
Government minister Samuda initially downplayed the security lapse, claiming just 700 people were affected and had been notified. We scoured social media for evidence of any notification but found nothing. To date, we’ve found no evidence that the Jamaican government ever informed travelers of the security incident — neither the hundreds of thousands of affected travelers whose information was exposed, nor the 700 people the government claimed it notified, a notice it has not publicly released.
TechCrunch emailed the minister to request a copy of the notice that the government allegedly sent to victims, but we did not receive a response. We also asked Amber Group and Jamaica’s prime minister’s office for comment. We did not hear back.
Many of the victims of the security lapse are from the United States. Neither of the two Americans we spoke to in our first report were notified of the breach.
Spokespeople for the attorneys general of New York and Florida, whose residents’ information was exposed, told TechCrunch that they had not heard from either the Jamaican government or the contractor, despite state laws requiring data breaches to be disclosed.
The reopening of Jamaica’s borders came at a cost. The island saw over a hundred new cases of COVID-19 in the month that followed, the majority arriving from the United States. From June to August, the number of new coronavirus cases each day climbed from single digits to dozens to hundreds.
To date, Jamaica has reported over 39,500 cases and 600 deaths caused by the pandemic.
Last month in parliament, while announcing the country’s annual budget, Prime Minister Holness reflected on the decision to reopen the borders. He said the country’s economic decline last year was “driven by a massive 70% contraction in our tourist industry.” More than 525,000 travelers — both residents and tourists — have arrived in Jamaica since the borders opened, Holness said, a figure slightly more than the number of travelers’ records found on the exposed JamCOVID server in February.
Holness defended reopening the country’s borders.
“Had we not done this the fall out in tourism revenues would have been 100% instead of 75%, there would be no recovery in employment, our balance of payment deficit would have worsened, overall government revenues would have been threatened, and there would be no argument to be made about spending more,” he said.
Both the Jamaican government and Amber Group benefited from opening the country’s borders. The government wanted to revive its falling economy, and Amber Group enriched its business with fresh government contracts. But neither paid enough attention to cybersecurity, and victims of their negligence deserve to know why.
Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more.