When it comes to meeting compliance standards, many startups are navigating a veritable alphabet soup. From GDPR and CCPA to SOC 2, ISO 27001, PCI DSS and HIPAA, companies have been charging toward meeting the compliance standards required to operate their businesses.
Today, every healthcare founder knows their product must comply with HIPAA, and any company working in the consumer space is well aware of GDPR, for example.
But a mistake many high-growth companies make is treating compliance as a catchall phrase that includes security. That assumption can be an expensive and painful error. In reality, compliance means that a company meets a minimum set of controls. Security, on the other hand, encompasses a broad range of best practices and software that help address risks associated with the company’s operations.
It makes sense that startups want to tackle compliance first. Being compliant plays a big role in any company’s geographical expansion to regulated markets and in its penetration to new industries like finance or healthcare. So in many ways, achieving compliance is a part of a startup’s go-to-market kit. And indeed, enterprise buyers expect startups to check the compliance box before signing on as their customer, so startups are rightfully aligning around their buyers’ expectations.
With all of this in mind, it’s not surprising that we’ve witnessed a trend where startups achieve compliance from the very early days and often prioritize this motion over developing an exciting feature or launching a new campaign to bring in leads, for instance.
Compliance is an important milestone for a young company and one that moves the cybersecurity industry forward. It forces startup founders to put security hats on and think about protecting their company, as well as their customers. At the same time, compliance provides comfort to the enterprise buyer’s legal and security teams when engaging with emerging vendors. So why is compliance alone not enough?
First, compliance doesn’t mean security (although it is a step in the right direction). More often than not, young companies are compliant yet still vulnerable in their security posture.
What does that look like? For example, a software company may have met SOC 2 standards that require all employees to install endpoint protection on their devices, but it may have no way to ensure employees actually activate and update the software. Furthermore, the company may lack a centrally managed tool for monitoring and reporting to see if any endpoint breaches have occurred, where, to whom and why. And, finally, the company may not have the expertise to quickly respond to and fix a data breach or attack.
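The gap between “installed” and “enforced” can be made concrete. Below is a hypothetical sketch of the kind of check a centrally managed monitoring tool would run; the field names and the seven-day freshness threshold are invented for illustration, not drawn from any particular SOC 2 control.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    host: str
    agent_installed: bool    # what a compliance checklist typically records
    agent_running: bool      # what actually protects the device
    definitions_age_days: int

def audit(fleet, max_definitions_age_days=7):
    """Return hosts that look compliant on paper but are not protected."""
    findings = []
    for ep in fleet:
        if not ep.agent_installed:
            findings.append((ep.host, "agent not installed"))
        elif not ep.agent_running:
            findings.append((ep.host, "agent installed but not running"))
        elif ep.definitions_age_days > max_definitions_age_days:
            findings.append((ep.host, "agent running with stale definitions"))
    return findings

fleet = [
    Endpoint("laptop-01", True, True, 2),    # genuinely protected
    Endpoint("laptop-02", True, False, 2),   # passes an "installed?" audit, unprotected
    Endpoint("laptop-03", True, True, 45),   # running, but months out of date
]
for host, issue in audit(fleet):
    print(f"{host}: {issue}")
```

A checkbox audit would pass all three laptops; continuous monitoring flags two of them.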
Therefore, although compliance standards are met, several security flaws remain. The end result is that startups can suffer security breaches that end up costing them a bundle. For companies with under 500 employees, the average security breach costs an estimated $7.7 million, according to a study by IBM, not to mention the brand damage and lost trust from existing and potential customers.
Second, an unforeseen danger for startups is that compliance can create a false sense of safety. Receiving a compliance certificate from objective auditors and renowned organizations could give the impression that the security front is covered.
Once startups start gaining traction and signing upmarket customers, that sense of security grows: if the startup managed to win security-minded customers from the F-500, the thinking goes, being compliant must be enough for now, and the startup is probably secure by association. When chasing enterprise deals, it’s the buyer’s expectations that push startups to achieve SOC 2 or ISO 27001 compliance to clear the enterprise security threshold. But in many cases, enterprise buyers don’t ask sophisticated questions or dig deeper into the risk a vendor brings, so startups are never really taken to task on their security systems.
Third, compliance only deals with a defined set of knowns. It doesn’t cover anything unknown or new that has emerged since the last version of the regulatory requirements was written.
For example, APIs are growing in use, but regulations and compliance standards have yet to catch up with the trend. An e-commerce company must be PCI DSS compliant to accept credit card payments, but it may also rely on multiple APIs that have weak authentication or business logic flaws. When the PCI standard was written, APIs weren’t common, so they aren’t covered by the regulations, yet most fintech companies now rely heavily on them. So a merchant may be PCI DSS compliant but use insecure APIs, potentially exposing customers to credit card breaches.
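“Weak authentication” in an API often comes down to patterns as simple as comparing a static key with ordinary string equality. A hedged sketch of the difference, using only Python’s standard library; the secret, payload and function names are invented for illustration:

```python
import hmac
import hashlib

SECRET = b"server-side-shared-secret"  # illustrative; real keys come from a secrets manager

def weak_check(api_key: str, expected: str) -> bool:
    # Vulnerable pattern: plain equality leaks timing information, and a
    # static key passed in a URL or header can be captured and replayed forever.
    return api_key == expected

def sign(body: bytes, secret: bytes = SECRET) -> str:
    # Stronger pattern: each request carries an HMAC over its body, so a
    # captured signature is useless for any other payload.
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str, secret: bytes = SECRET) -> bool:
    expected = sign(body, secret)
    return hmac.compare_digest(expected, signature)  # constant-time comparison

body = b'{"amount": 100, "currency": "USD"}'
sig = sign(body)
print(verify(body, sig))                  # True: untampered request
print(verify(b'{"amount": 99999}', sig))  # False: tampered body fails verification
```

Neither pattern is mandated by PCI DSS, which is exactly the point: a merchant can pass its audit while shipping the weak variant.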
Startups are not to blame for the mix-up between compliance and security. It is difficult for any company to be both compliant and secure, and for startups with limited budget, time or security know-how, it’s especially challenging. In a perfect world, startups would be both compliant and secure from the get-go; it’s not realistic to expect early-stage companies to spend millions of dollars on bulletproofing their security infrastructure. But there are some things startups can do to become more secure.
One of the best ways startups can begin tackling security is with an early security hire. This team member might seem like a “nice to have” that you could put off until the company reaches a major headcount or revenue milestone, but I would argue that a head of security is a key early hire because this person’s job will be to focus entirely on analyzing threats and identifying, deploying and monitoring security practices. Additionally, startups would benefit from ensuring their technical teams are security-savvy and keep security top of mind when designing products and offerings.
Another tactic startups can take to bolster their security is to deploy the right tools. The good news is that startups can do so without breaking the bank; there are many security companies offering open-source, free or relatively affordable versions of their solutions for emerging companies to use, including Snyk, Auth0, HashiCorp, CrowdStrike and Cloudflare.
A full security rollout would include software and best practices for identity and access management, infrastructure, application development, resiliency and governance, but most startups are unlikely to have the time and budget necessary to deploy all pillars of a robust security infrastructure.
Luckily, there are resources like Security 4 Startups that offer a free, open-source framework for startups to figure out what to do first. The guide helps founders identify and solve the most common and important security challenges at every stage, providing a list of entry-level solutions as a solid start to building a long-term security program. In addition, compliance automation tools can help with continuous monitoring to ensure these controls stay in place.
For startups, compliance is critical for establishing trust with partners and customers. But if this trust is eroded after a security incident, it will be nearly impossible to regain it. Being secure, not only compliant, will help startups take trust to a whole other level and not only boost market momentum, but also make sure their products are here to stay.
So instead of equating compliance with security, I suggest expanding the equation to consider that compliance and security equal trust. And trust equals business success and longevity.
Australian security software house Click Studios has told customers not to post emails sent by the company about its data breach, which allowed malicious hackers to push a malicious update to its flagship enterprise password manager Passwordstate to steal customer passwords.
Last week, the company told customers to “commence resetting all passwords” stored in its flagship password manager after the hackers pushed the malicious update to customers over a 28-hour window between April 20-22. The malicious update was designed to contact the attacker’s servers to retrieve malware designed to steal and send the password manager’s contents back to the attackers.
In an email to customers, Click Studios did not say how the attackers compromised the password manager’s update feature, but included a link to a security fix.
But news of the breach only became public after Danish cybersecurity firm CSIS Group published a blog post with details of the attack hours after Click Studios emailed its customers.
Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.
In a Wednesday advisory posted to its website, Click Studios said customers are “requested not to post Click Studios correspondence on Social Media.” It adds: “It is expected that the bad actor is actively monitoring Social Media, looking for information they can use to their advantage, for related attacks.”
“It is expected the bad actor is actively monitoring social media for information on the compromise and exploit. It is important customers do not post information on Social Media that can be used by the bad actor. This has happened with phishing emails being sent that replicate Click Studios email content,” the company said.
Besides a handful of advisories published since the breach was discovered, the company has refused to comment or respond to questions.
It’s also not clear if the company has disclosed the breach to U.S. and EU authorities where the company has customers, but where data breach notification rules obligate companies to disclose incidents. Companies can be fined up to 4% of their annual global revenue for falling foul of Europe’s GDPR rules.
Click Studios chief executive Mark Sandford has not responded to repeated requests (from TechCrunch) for comment. Instead, TechCrunch received the same canned autoresponse from the company’s support email saying that the company’s staff are “focused only on assisting customers technically.”
TechCrunch emailed Sandford again on Thursday for comment on the latest advisory, but did not hear back.
Apple has spent years reinforcing macOS with new security features to make it tougher for malware to break in. But a newly discovered vulnerability broke through most of macOS’ newer security protections with a double-click of a malicious app, a feat not meant to be allowed under Apple’s watch.
Worse, evidence shows that a notorious family of Mac malware had been exploiting this vulnerability for months before Apple patched it this week.
Over the years, Macs have adapted to catch the most common types of malware by putting technical obstacles in their way. macOS flags potentially malicious apps masquerading as documents that have been downloaded from the internet. And if macOS hasn’t reviewed the app — a process Apple calls notarization — or if it doesn’t recognize its developer, the app won’t be allowed to run without user intervention.
But security researcher Cedric Owens said the bug he found in mid-March bypasses those checks and allows a malicious app to run.
Owens told TechCrunch that the bug allowed him to build a potentially malicious app that looks like a harmless document and, when opened, bypasses macOS’ built-in defenses.
“All the user would need to do is double click — and no macOS prompts or warnings are generated,” he told TechCrunch. Owens built a proof-of-concept app disguised as a harmless document that exploits the bug to launch the Calculator app, a way of demonstrating that the bug works without dropping malware. But a malicious attacker could exploit this vulnerability to remotely access a user’s sensitive data simply by tricking a victim into opening a spoofed document, he explained.
The proof-of-concept app disguised as a harmless document running on an unpatched macOS machine. (Image: supplied)
Fearing the potential for attackers to abuse this vulnerability, Owens reported the bug to Apple.
Apple told TechCrunch it fixed the bug in macOS 11.3. Apple also patched earlier macOS versions to prevent abuse, and pushed out updated rules to XProtect, macOS’ in-built anti-malware engine, to block malware from exploiting the vulnerability.
Owens asked Mac security researcher Patrick Wardle to investigate how — and why — the bug works. In a technical blog post today, Wardle explained that the vulnerability triggers due to a logic bug in macOS’ underlying code. The bug meant that macOS was misclassifying certain app bundles and skipping security checks, allowing Owens’ proof-of-concept app to run unimpeded.
In simple terms, macOS apps aren’t a single file but a bundle of different files that the app needs to work, including a property list file (Info.plist) that tells macOS how to treat the bundle, such as which executable to launch. But Owens found that removing this property list file and building the bundle with a particular structure could trick macOS into opening the bundle, and running the code inside, without triggering any warnings.
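For readers unfamiliar with bundles, here is a minimal sketch of the two layouts being compared. It is illustrative only: real bundles contain more files, and the precise trigger conditions for the bug are laid out in Wardle’s write-up.

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"

# A well-formed macOS app bundle: Info.plist tells macOS how to
# classify the bundle and which executable to launch.
mkdir -p MyApp.app/Contents/MacOS
cat > MyApp.app/Contents/Info.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0"><dict>
  <key>CFBundleExecutable</key><string>MyApp</string>
</dict></plist>
EOF
printf '#!/bin/sh\necho hello\n' > MyApp.app/Contents/MacOS/MyApp
chmod +x MyApp.app/Contents/MacOS/MyApp

# The malformed variant omitted the property list entirely, leaving
# only a script whose name matches the bundle. macOS misclassified
# a bundle shaped like this and skipped its policy checks.
mkdir -p Doc.app/Contents/MacOS
printf '#!/bin/sh\necho run\n' > Doc.app/Contents/MacOS/Doc.app
chmod +x Doc.app/Contents/MacOS/Doc.app

ls MyApp.app/Contents Doc.app/Contents/MacOS
```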
Wardle described the bug as rendering macOS’ security features as “wholly moot.” He confirmed that Apple’s security updates have fixed the bug. “The update will now result in the correct classification of applications as bundles and ensure that untrusted, unnotarized applications will (yet again) be blocked, and thus the user protected,” he told TechCrunch.
With knowledge of how the bug works, Wardle asked Mac security company Jamf to see if there was any evidence that the bug had been exploited prior to Owens’ discovery. Jamf detections lead Jaron Bradley confirmed that a sample of the Shlayer malware family exploiting the bug was captured in early January, several months prior to Owens’ discovery. Jamf also published a technical blog post about the malware.
“The malware we uncovered using this technique is an updated version of Shlayer, a family of malware that was first discovered in 2018. Shlayer is known to be one of the most abundant pieces of malware on macOS so we’ve developed a variety of detections for its many variants, and we closely track its evolution,” Bradley told TechCrunch. “One of our detections alerted us to this new variant, and upon closer inspection we discovered its use of this bypass to allow it to be installed without an end user prompt. Further analysis leads us to believe that the developers of the malware discovered the zero-day and adjusted their malware to use it, in early 2021.”
Shlayer is an adware that intercepts encrypted web traffic — including HTTPS-enabled sites — and injects its own ads, making fraudulent ad money for the operators.
“It’s often installed by tricking users into downloading fake application installers or updaters,” said Bradley. “The version of Shlayer that uses this technique does so to evade built-in malware scanning, and to launch without additional ‘Are you sure’ prompts to the user,” he said.
“The most interesting thing about this variant is that the author has taken an old version of it and modified it slightly in order to bypass security features on macOS,” said Bradley.
Wardle has also published a Python script that will help users detect any past exploitation.
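The core heuristic behind such detection, looking for app bundles that lack the property list a well-formed bundle should carry, can be sketched in a few lines. This is an illustrative approximation of the idea, not Wardle’s actual script:

```python
from pathlib import Path

def suspicious_bundles(root: str):
    """Yield .app bundles under root that have a Contents/MacOS directory
    but no Contents/Info.plist -- the malformed shape the bypass relied on."""
    for bundle in Path(root).rglob("*.app"):
        contents = bundle / "Contents"
        if (contents / "MacOS").is_dir() and not (contents / "Info.plist").exists():
            yield bundle

# Usage (on a Mac you might point this at /Applications or ~/Downloads):
for b in suspicious_bundles("."):
    print(f"possible exploit-style bundle: {b}")
```

A bundle flagged here isn’t necessarily malicious, but it matches the structure the bug required, so it warrants a closer look.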
It’s not the first time Shlayer has evaded macOS’ defenses. Last year, Wardle, working with security researcher Peter Dantini, found a sample of Shlayer that had been accidentally notarized by Apple, a process where developers submit their apps to Apple for security checks so the apps can run on millions of Macs unhindered.
If the definition of insanity is doing the same thing over and over and expecting a different outcome, then one might say the cybersecurity industry is insane.
Criminals continue to innovate with highly sophisticated attack methods, but many security organizations still use the same technological approaches they did 10 years ago. The world has changed, but cybersecurity hasn’t kept pace.
Distributed systems, with people and data everywhere, mean the perimeter has disappeared. And the hackers couldn’t be more excited. The same technology approaches, like correlation rules, manual processes, and reviewing alerts in isolation, do little more than remedy symptoms while hardly addressing the underlying problem.
Credentials are supposed to be the front gates of the castle, but as the security operations center (SOC) fails to change, it fails to detect. The cybersecurity industry must rethink its strategy: analyze how credentials are used and stop breaches before they become bigger problems.
Compromised credentials have long been a primary attack vector, but the problem has only grown worse in the mid-pandemic world. The acceleration of remote work has expanded the attack surface as organizations struggle to secure their networks while employees work from unsecured connections. In April 2020, the FBI said that cybersecurity attacks reported to the agency had grown by 400% compared to before the pandemic. Just imagine where that number is now in early 2021.
It only takes one compromised account for an attacker to enter Active Directory and create their own credentials. In such an environment, all user accounts should be considered potentially compromised.
Nearly all of the hundreds of breach reports I’ve read have involved compromised credentials. More than 80% of hacking breaches are now enabled by brute force or the use of lost or stolen credentials, according to the 2020 Data Breach Investigations Report. Among the most effective and commonly used strategies are credential stuffing attacks, where digital adversaries break in, exploit the environment, then move laterally to gain higher-level access.
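Analyzing how credentials are used, rather than merely whether each login succeeds, is what catches this. A toy sketch of the classic stuffing signal, one source IP cycling through many distinct usernames; the threshold and log format are invented for illustration:

```python
from collections import defaultdict

def flag_stuffing(events, max_users_per_ip=5):
    """events: (source_ip, username, success) tuples from an auth log.
    Flags IPs attempting logins across unusually many accounts -- the
    signature of credential stuffing, which per-account lockouts miss
    because each individual account sees only one or two attempts."""
    users_by_ip = defaultdict(set)
    for ip, user, _success in events:
        users_by_ip[ip].add(user)
    return {ip for ip, users in users_by_ip.items() if len(users) > max_users_per_ip}

events = [("10.0.0.5", f"user{i}", False) for i in range(50)]  # one IP, 50 accounts
events += [("192.168.1.9", "alice", False)] * 3                # a user mistyping a password
print(flag_stuffing(events))  # {'10.0.0.5'}
```

Real deployments would add time windows, success-rate anomalies and IP reputation, but even this crude view of credential usage surfaces what alert-by-alert review does not.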
A security lapse at online grocery delivery startup Mercato exposed tens of thousands of customer orders, TechCrunch has learned.
A person with knowledge of the incident told TechCrunch that the incident happened in January after one of the company’s cloud storage buckets, hosted on Amazon’s cloud, was left open and unprotected.
The company fixed the data spill, but has not yet alerted its customers.
Mercato was founded in 2015 and helps over a thousand smaller grocers and specialty food stores get online for pickup or delivery, without having to sign up for delivery services like Instacart or Amazon Fresh. Mercato operates in Boston, Chicago, Los Angeles, and New York, where the company is headquartered.
TechCrunch obtained a copy of the exposed data and verified a portion of the records by matching names and addresses against known existing accounts and public records. The data set contained more than 70,000 orders dating between September 2015 and November 2019, and included customer names, email addresses, home addresses and order details. Each record also included the IP address of the device used to place the order.
The data set also included the personal data and order details of company executives.
It’s not clear how the security lapse happened since storage buckets on Amazon’s cloud are private by default, or when the company learned of the exposure.
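Because buckets are private by default, exposure usually traces back to an explicit grant or policy change, which is also what an audit looks for. A hedged sketch of that check; the grant structure mirrors the shape of a boto3 `get_bucket_acl()` response, but this runs on a plain dict rather than calling AWS:

```python
# AWS's predefined "everyone" and "any authenticated AWS user" group URIs.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def public_grants(acl: dict):
    """Return the permissions an ACL hands to the world at large.
    `acl` has the shape of a boto3 get_bucket_acl() response."""
    risky = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in (ALL_USERS, AUTH_USERS):
            risky.append(grant["Permission"])
    return risky

exposed_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "abc123"}, "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group", "URI": ALL_USERS}, "Permission": "READ"},
    ]
}
print(public_grants(exposed_acl))  # ['READ']
```

A single `READ` grant to `AllUsers` is all it takes to turn a private bucket into a public one, which is why AWS now also offers account-level public access blocks.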
Companies are required to disclose data breaches or security lapses to state attorneys general, but no notices have been published in states where they are required by law, such as California. The data set contained records on more than 1,800 California residents, more than three times the number needed to trigger mandatory disclosure under the state’s data breach notification laws.
It’s also not known if Mercato disclosed the incident to investors ahead of its $26 million Series A raise earlier this month. Velvet Sea Ventures, which led the round, did not respond to emails requesting comment.
In a statement, Mercato chief executive Bobby Brannigan confirmed the incident but declined to answer our questions, citing an ongoing investigation.
“We are conducting a complete audit using a third party and will be contacting the individuals who have been affected. We are confident that no credit card data was accessed because we do not store those details on our servers. We will continually inform all authoritative bodies and stakeholders, including investors, regarding the findings of our audit and any steps needed to remedy this situation,” said Brannigan.
Know something, say something. Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more.
Manhunt, a gay dating app that claims to have 6 million male members, has confirmed it was hit by a data breach in February after a hacker gained access to the company’s accounts database.
In a notice filed with the Washington attorney general’s office, Manhunt said the hacker “gained access to a database that stored account credentials for Manhunt users,” and “downloaded the usernames, email addresses and passwords for a subset of our users in early February 2021.”
The notice did not say how the passwords were scrambled, if at all, to prevent them from being read by humans. Passwords scrambled using weak algorithms can sometimes be decoded into plain text, allowing malicious hackers to break into their accounts.
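The difference matters in practice: a fast, unsalted hash falls quickly to precomputed tables, while a salted, memory-hard function makes bulk cracking expensive. A minimal sketch using Python’s standard-library `hashlib.scrypt`; the cost parameters are illustrative, not a production recommendation:

```python
import hashlib
import hmac
import os

def weak_hash(password: str) -> str:
    # Weak pattern: unsalted MD5. Identical passwords hash identically,
    # and lookup tables recover common passwords almost instantly.
    return hashlib.md5(password.encode()).hexdigest()

def store(password: str) -> tuple[bytes, bytes]:
    # Stronger pattern: a per-user random salt plus the memory-hard
    # scrypt function slows offline cracking dramatically.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = store("hunter2")
print(check("hunter2", salt, digest))  # True
print(check("hunter3", salt, digest))  # False
# Two users with the same password still get different stored digests:
print(store("hunter2")[1] != digest)   # True
```

With the weak variant, a stolen database is effectively a plain-text password list for any commonly used password; with the salted variant, each password must be attacked individually and slowly.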
Following the breach, Manhunt force-reset account passwords and began alerting users in mid-March. Manhunt did not say what percentage of its users had their data stolen or how the data breach happened, but said that more than 7,700 Washington state residents were affected.
The company’s attorneys did not reply to an email requesting comment.
But questions remain about how Manhunt handled the breach. In March, the company tweeted that, “At this time, all Manhunt users are required to update their password to ensure it meets the updated password requirements.” The tweet did not say that user accounts had been stolen.
Manhunt was launched in 2001 by Online-Buddies Inc., which also offered gay dating app Jack’d before it was sold to Perry Street in 2019 for an undisclosed sum. Just months before the sale, Jack’d had a security lapse that exposed users’ private photos and location data.
Dating sites store some of the most sensitive information on their users, and are frequently a target of malicious hackers. In 2015, Ashley Madison, a dating site that encouraged users to have an affair, was hacked, exposing names, and postal and email addresses. Several people died by suicide after the stolen data was posted online. A year later, dating site AdultFriendFinder was hacked, exposing more than 400 million user accounts.
In 2018, same-sex dating app Grindr made headlines for sharing users’ HIV status with data analytics firms.
In other cases, poor security — in some cases none at all — led to data spills involving some of the most sensitive data. In 2019, Rela, a popular dating app for gay and queer women in China, left a server unsecured with no password, allowing anyone to access sensitive data — including sexual orientation and geolocation — on more than 5 million app users. Months later, Jewish dating app JCrush exposed around 200,000 user records.
Facebook’s lead data supervisor in the European Union has opened an investigation into whether the tech giant violated data protection rules vis-a-vis the leak of data reported last week.
Here’s the Irish Data Protection Commission’s statement:
“The Data Protection Commission (DPC) today launched an own-volition inquiry pursuant to section 110 of the Data Protection Act 2018 in relation to multiple international media reports, which highlighted that a collated dataset of Facebook user personal data had been made available on the internet. This dataset was reported to contain personal data relating to approximately 533 million Facebook users worldwide. The DPC engaged with Facebook Ireland in relation to this reported issue, raising queries in relation to GDPR compliance to which Facebook Ireland furnished a number of responses.
The DPC, having considered the information provided by Facebook Ireland regarding this matter to date, is of the opinion that one or more provisions of the GDPR and/or the Data Protection Act 2018 may have been, and/or are being, infringed in relation to Facebook Users’ personal data.
Accordingly, the Commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users by means of the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer features of its service, or whether any provision(s) of the GDPR and/or the Data Protection Act 2018 have been, and/or are being, infringed by Facebook in this respect.”
The move comes after the European Commission intervened to apply pressure on Ireland’s data protection commissioner. Justice commissioner, Didier Reynders, tweeted on Monday that he had spoken with Helen Dixon about the Facebook data leak.
“The Commission continues to follow this case closely and is committed to supporting national authorities,” he added, going on to urge Facebook to “cooperate actively and swiftly to shed light on the identified issues”.
Facebook has been contacted for comment.
Today I spoke with Helen Dixon @DPCIreland about the #FacebookLeak. The Commission continues to follow this case closely and is committed to supporting national authorities. We also call on @Facebook to cooperate actively and swiftly to shed light on the identified issues.
— Didier Reynders (@dreynders) April 12, 2021
A spokeswoman for the Commission confirmed the virtual meeting between Reynders and Dixon, saying: “Dixon informed the Commissioner about the issues at stake and the different tracks of work to clarify the situation.
“They both urge Facebook to cooperate swiftly and to share the necessary information. It is crucial to shed light on this leak that has affected millions of European citizens.”
“It is up to the Irish data protection authority to assess this case. The Commission remains available if support is needed. The situation will also have to be further analyzed for the future. Lessons should be learned,” she added.
The revelation that a vulnerability in Facebook’s platform enabled unidentified ‘malicious actors’ to extract the personal data (including email addresses, mobile phone numbers and more) of more than 500 million Facebook accounts up until September 2019 — when Facebook claims it fixed the issue — only emerged in the wake of the data being found for free download on a hacker forum earlier this month.
All 533,000,000 Facebook records were just leaked for free.
This means that if you have a Facebook account, it is extremely likely the phone number used for the account was leaked.
— Alon Gal (Under the Breach) (@UnderTheBreach) April 3, 2021
Despite the European Union’s data protection framework (the GDPR) baking in a regime of data breach notifications, with the risk of hefty fines for compliance failures, Facebook did not inform its lead EU data supervisor when it found and fixed the issue. Ireland’s Data Protection Commission (DPC) was left to find out in the press, like everyone else.
Nor has Facebook individually informed the 533M+ users that their information was taken without their knowledge or consent, saying last week it has no plans to do so — despite the heightened risk for affected users of spam and phishing attacks.
Privacy experts have, meanwhile, been swift to point out that the company has still not faced any regulatory sanction under the GDPR — with a number of investigations ongoing into various Facebook businesses and practices and no decisions yet issued in those cases by Ireland’s DPC.
Last month the European Parliament adopted a resolution on the implementation of the GDPR which expressed “great concern” over the functioning of the mechanism — raising particular concern over the Irish data protection authority by writing that it “generally closes most cases with a settlement instead of a sanction and that cases referred to Ireland in 2018 have not even reached the stage of a draft decision pursuant to Article 60(3) of the GDPR”.
The latest Facebook data scandal further amps up the pressure on the DPC, providing further succour to critics of the GDPR who argue the regulation is unworkable under the current foot-dragging enforcement structure, given the major bottlenecks in Ireland (and Luxembourg), where many tech giants choose to locate their regional headquarters.
On Thursday Reynders made his concern over Ireland’s response to the Facebook data leak public, tweeting to say the Commission had been in contact with the DPC.
He does have reason to be personally concerned. Earlier last week Politico reported that Reynders’ own digits had been among the cache of leaked data, along with those of the Luxembourg prime minister Xavier Bettel — and “dozens of EU officials”. However the problem of weak GDPR enforcement affects everyone across the bloc — some 446M people whose rights are not being uniformly and vigorously upheld.
“A strong enforcement of GDPR is of key importance,” Reynders also remarked on Twitter, urging Facebook to “fully cooperate with Irish authorities”.
Last week Italy’s data protection commission also called on Facebook to immediately offer a service for Italian users to check whether they had been affected by the breach. But Facebook made no public acknowledgment or response to the call. Under the GDPR’s one-stop-shop mechanism the tech giant can limit its regulatory exposure by direct dealing only with its lead EU data supervisor in Ireland.
A two-year Commission review of how the data protection regime is functioning, which reported last summer, already drew attention to problems with patchy enforcement. So a lack of progress on unblocking GDPR bottlenecks is a growing problem for the Commission, which is in the midst of proposing a package of additional digital regulations. That makes enforcement a very pressing question: EU lawmakers are being asked how new digital rules will be upheld if existing ones keep being trampled on.
It’s certainly notable that the EU’s executive has proposed a different, centralized enforcement structure for incoming pan-EU legislation targeted at digital services and tech giants. Albeit, getting agreement from all the EU’s institutions and elected representatives on how to reshape platform oversight looks challenging.
And in the meantime the data leaks continue: Motherboard reported Friday on another alarming leak of Facebook data it found being made accessible via a bot on the Telegram messaging platform, which gives out the names and phone numbers of users who have liked a Facebook page (in exchange for a fee, unless the page has fewer than 100 likes).
The publication said this data appears to be separate from the 533M+ scraped dataset, after it ran checks against the larger dataset via the breach notification site haveibeenpwned. It also asked Alon Gal, the person who discovered the aforementioned leaked Facebook dataset being offered for free download online, to compare data obtained via the bot, and he did not find any matches.
We contacted Facebook about the source of this leaked data and will update this report with any response.
In his tweet about the 500M+ Facebook data leak last week, Reynders made reference to the European Data Protection Board (EDPB), a steering body made up of representatives from Member State data protection agencies that works to ensure a consistent application of the GDPR.
However the body does not lead on GDPR enforcement — so it’s not clear why he would invoke it. Optics is one possibility, if he was trying to encourage a perception that the EU has vigorous and uniform enforcement structures where people’s data is concerned.
“Under the GDPR, enforcement and the investigation of potential violations lies with the national supervisory authorities. The EDPB does not have investigative powers per se and is not involved in investigations at the national level. As such, the EDPB cannot comment on the processing activities of specific companies,” an EDPB spokeswoman told us when we enquired about Reynders’ remarks.
But she also noted the Commission attends plenary meetings of the EDPB — adding it’s possible there will be an exchange of views among members about the Facebook leak case in the future, as attending supervisory authorities “regularly exchange information on cases at the national level”.
A court in Houston has authorized an FBI operation to “copy and remove” backdoors from hundreds of Microsoft Exchange email servers in the United States, months after hackers used four previously undiscovered vulnerabilities to attack thousands of networks.
The Justice Department announced the operation on Tuesday, which it described as “successful.”
In March, Microsoft discovered a new China state-sponsored hacking group — Hafnium — targeting Exchange servers run from company networks. The four vulnerabilities when chained together allowed the hackers to break into a vulnerable Exchange server and steal its contents. Microsoft fixed the vulnerabilities, but the patches did not remove the backdoors from servers that had already been breached. Within days, other hacking groups began hitting vulnerable servers with the same flaws to deploy ransomware.
The number of infected servers dropped as patches were applied. But hundreds of Exchange servers remained vulnerable because the backdoors are difficult to find and eliminate, the Justice Department said in a statement.
“This operation removed one early hacking group’s remaining web shells which could have been used to maintain and escalate persistent, unauthorized access to U.S. networks,” the statement said. “The FBI conducted the removal by issuing a command through the web shell to the server, which was designed to cause the server to delete only the web shell (identified by its unique file path).”
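The removal step described above amounts to telling the server to delete one specific file and nothing else. A minimal local sketch of that step, in Python — the file path is entirely hypothetical, and in the actual operation the equivalent command was issued remotely through the compromised server’s own web shell, not run locally:

```python
import os

def remove_web_shell(shell_path: str) -> bool:
    """Delete only the file at the given unique path, mirroring the
    'delete only the web shell (identified by its unique file path)'
    step the Justice Department described. Returns True if a file
    was removed, False if nothing was found at that path."""
    if os.path.isfile(shell_path):
        os.remove(shell_path)
        return True
    return False
```

The narrow targeting is the point: scoping the operation to one known file path avoids touching anything else on the hundreds of private servers involved.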
The FBI said it’s attempting to inform, via email, the owners of servers from which it removed the backdoors.
Assistant attorney general John C. Demers said the operation “demonstrates the Department’s commitment to disrupt hacking activity using all of our legal tools, not just prosecutions.”
The Justice Department also said the operation only removed the backdoors, but did not patch the vulnerabilities exploited by the hackers to begin with or remove any malware left behind.
It’s believed this is the first known case of the FBI effectively cleaning up private networks following a cyberattack. In 2016, the Supreme Court moved to allow U.S. judges to issue search and seizure warrants outside of their district. Critics opposed the move at the time, fearing the FBI could ask a friendly court to authorize cyber operations anywhere in the world.
Other countries, like France, have used similar powers before to hijack a botnet and remotely shut it down.
Neither the FBI nor the Justice Department commented by press time.
Risk and compliance startup LogicGate has confirmed a data breach. But unless you’re a customer, you probably didn’t hear about it.
An email sent by LogicGate to customers earlier this month said that on February 23 an unauthorized third party obtained credentials to its Amazon Web Services-hosted cloud storage servers, which store customer backup files for its flagship platform Risk Cloud, a service that helps companies identify and manage their risk and compliance with data protection and security standards. LogicGate says Risk Cloud can also help find security vulnerabilities before they are exploited by malicious hackers.
The credentials “appear to have been used by an unauthorized third party to decrypt particular files stored in AWS S3 buckets in the LogicGate Risk Cloud backup environment,” the email read.
“Only data uploaded to your Risk Cloud environment on or prior to February 23, 2021, would have been included in that backup file. Further, to the extent you have stored attachments in the Risk Cloud, we did not identify decrypt events associated with such attachments,” it added.
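LogicGate’s language — “decrypt events” — suggests the investigation leaned on audit logs such as AWS CloudTrail, where KMS `Decrypt` calls made for SSE-KMS-encrypted S3 objects are recorded with the object’s ARN in the encryption context. As a rough, hypothetical sketch (the bucket name and the exact record layout are assumptions, not details LogicGate disclosed), such events could be filtered from exported log records like this:

```python
def find_decrypt_events(records, bucket_prefix="risk-cloud-backup"):
    """Return CloudTrail-style records showing KMS Decrypt calls against
    objects in a given S3 backup bucket. The bucket name is hypothetical;
    the eventSource/eventName/encryptionContext fields follow the shape
    of real CloudTrail records for SSE-KMS S3 objects."""
    hits = []
    for record in records:
        if record.get("eventSource") != "kms.amazonaws.com":
            continue
        if record.get("eventName") != "Decrypt":
            continue
        ctx = record.get("requestParameters", {}).get("encryptionContext", {})
        if ctx.get("aws:s3:arn", "").startswith(f"arn:aws:s3:::{bucket_prefix}"):
            hits.append(record)
    return hits
```

This is also why the company could say it “did not identify decrypt events” for stored attachments: absent a logged `Decrypt` call for those objects, there is no evidence they were read.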
LogicGate did not say how the AWS credentials were compromised. An email update sent by LogicGate last Friday said the company anticipates finding the root cause of the incident by this week.
But LogicGate has not made any public statement about the breach. It’s also not clear if LogicGate contacted all of its customers or only those whose data was accessed. LogicGate counts Capco, SoFi, and Blue Cross Blue Shield of Kansas City as customers.
We sent a list of questions, including how many customers were affected and whether the company has alerted U.S. state authorities as required by state data breach notification laws. When reached, LogicGate chief executive Matt Kunkel confirmed the breach but declined to comment, citing an ongoing investigation. “We believe it’s best to communicate developments directly to our customers,” he said.
Kunkel would not say, when asked, if the attacker also exfiltrated the decrypted customer data from its servers.
Data breach notification laws vary by state, but companies that fail to report security incidents can face heavy fines. Under Europe’s GDPR rules, companies can face fines of up to 4% of their annual turnover for violations.
In December, LogicGate secured $8.75 million in fresh funding, totaling more than $40 million since it launched in 2015.
Are you a LogicGate customer? Send tips securely over Signal and WhatsApp to +1 646-755-8849. You can also send files or documents using our SecureDrop. Learn more.
Whether Facebook will face any regulatory sanction over the latest massive historical platform privacy failure to come to light remains unclear. But the timeline of the incident looks increasingly awkward for the tech giant.
While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.
That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.
Under the EU regulation data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.
The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.
Not only is @Facebook past the indemnification period of the FTC settlement (June 12 2019), they also may have violated the terms of the settlement requiring them to report breaches of covered information (ht @JustinBrookman ) https://t.co/182LEf4rNO pic.twitter.com/utCnQ4USHI
— ashkan soltani (@ashk4n) April 7, 2021
Yesterday, in its own statement responding to the breach revelations, Facebook’s lead data supervisor in the EU said the provenance of the newly published dataset wasn’t entirely clear, writing that it “seems to comprise the original 2018 (pre-GDPR) dataset” — referring to an earlier breach incident Facebook disclosed in 2018 which related to a vulnerability in its phone lookup functionality that it had said occurred between June 2017 and April 2018 — but also writing that the newly published dataset also looked to have been “combined with additional records, which may be from a later period”.
Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.
Another new detail that emerged in Facebook’s blog post yesterday was the fact users’ data was scraped not via the aforementioned phone lookup vulnerability — but via another method altogether: a contact importer tool vulnerability.
This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.
In this way a spammer, for example, could upload a database of potential phone numbers and link them not only to names but to other data like birth date, email address and location — all the better to phish you with.
In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely in the period of GDPR being active.
As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms — and to do so without undue delay (ideally within 72 hours of becoming aware of it).
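The 72-hour window is simple arithmetic from the moment a controller becomes aware of a breach; a trivial sketch of the clock (note the regulation’s actual standard is “without undue delay and, where feasible,” within this window):

```python
from datetime import datetime, timedelta, timezone

NOTIFY_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest point at which a GDPR Article 33 breach notification to the
    supervisory authority is still 'within 72 hours' of awareness."""
    return aware_at + NOTIFY_WINDOW

# e.g. a controller aware of a breach in August 2019 had roughly
# three days to notify — not roughly two years.
deadline = notification_deadline(datetime(2019, 8, 1, tzinfo=timezone.utc))
```

Measured against that clock, a breach known to Facebook by September 2019 and still undisclosed to its lead regulator in April 2021 is not a borderline case.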
Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.
Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.
Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.
And, indeed, why it’s sought to downplay the significance of the leaked information — dubbing people’s personal information “old data”. (Even as few people regularly change their mobile numbers, email address, full names and biographical information and so on, and no one (legally) gets a new birth date… )
Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.
The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts — thereby making this personal information ‘publicly available for scraping/no longer private/uncovered by data protection legislation’.
This is an argument as obviously absurd as it is viciously hostile to people’s rights and privacy. It’s also an argument that EU data protection regulators must quickly and definitively reject or be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators’ sole purpose is to defend and uphold.
Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook’s privacy-hostile defaults, that still raises key questions of GDPR compliance — because the regulation also requires data controllers to adequately secure personal data and apply privacy by design and default.
Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.
In short, it’s the Cambridge Analytica scandal all over again.
Facebook is trying to get away with continuing to be terrible at privacy and data protection because it’s been so terrible at it in the past — and likely feels confident in keeping on with this tactic because it’s faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)
We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.
Then it told us it would not be commenting on its communications with regulators.
Under the GDPR, if a breach poses a high risk to users’ rights and freedoms a data controller is required to notify affected individuals — with the rationale being that prompt notification of a threat can help people take steps to protect themselves from the risks flowing from their data being breached, such as fraud and ID theft.
Yesterday Facebook also said it does not have plans to notify users either.
Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.
Facebook’s lead data protection regulator in the European Union is seeking answers from the tech giant over a major data breach reported on over the weekend.
The breach was reported on by Business Insider on Saturday, which said personal data (including email addresses and mobile phone numbers) of more than 500M Facebook accounts had been posted to a low-level hacking forum — making the personal information on hundreds of millions of Facebook users’ accounts freely available.
“The exposed data includes the personal information of over 533M Facebook users from 106 countries, including over 32M records on users in the US, 11M on users in the UK, and 6M on users in India,” Business Insider said, noting that the dump includes phone numbers, Facebook IDs, full names, locations, birthdates, bios, and some email addresses.
Facebook responded to the report of the data dump by saying it related to a vulnerability in its platform it had “found and fixed” in August 2019 — dubbing the info “old data” which it also claimed had been reported on in 2019. However, as security experts were quick to point out, most people don’t change their mobile phone number often — so Facebook’s knee-jerk reaction to downplay the breach looks like an ill-thought-out attempt to deflect blame.
It’s also not clear whether all the data is ‘old’, as Facebook’s initial response suggests.
This is old data that was previously reported on in 2019. We found and fixed this issue in August 2019. https://t.co/mPCttLkjzE
— Liz Bourgeois (@Liz_Shepherd) April 3, 2021
There’s plenty of reasons for Facebook to try to downplay yet another data scandal. Not least because, under European Union data protection rules, there are stiff penalties for companies that fail to promptly report significant breaches to relevant authorities. And indeed for breaches themselves — as the bloc’s General Data Protection Regulation (GDPR) bakes in an expectation of security by design and default.
By pushing the claim that the leaked data is “old” Facebook may be hoping to peddle the idea that it predates the GDPR coming into application (in May 2018).
However the Irish Data Protection Commission (DPC), Facebook’s lead data supervisor in the EU, told TechCrunch that it’s not abundantly clear whether that’s the case at this point.
“The newly published dataset seems to comprise the original 2018 (pre-GDPR) dataset and combined with additional records, which may be from a later period,” the DPC’s deputy commissioner, Graham Doyle said in a statement.
“A significant number of the users are EU users. Much of the data appears to have been data scraped some time ago from Facebook public profiles,” he also said.
“Previous datasets were published in 2019 and 2018 relating to a large-scale scraping of the Facebook website which at the time Facebook advised occurred between June 2017 and April 2018 when Facebook closed off a vulnerability in its phone lookup functionality. Because the scraping took place prior to GDPR, Facebook chose not to notify this as a personal data breach under GDPR.”
Doyle said the regulator sought to establish “the full facts” about the breach from Facebook over the weekend and is “continuing to do so” — making it clear that there’s an ongoing lack of clarity on the issue, despite the breach itself being claimed as “old” by Facebook.
The DPC also made it clear that it did not receive any proactive communication from Facebook on the issue — despite the GDPR putting the onus on companies to proactively inform regulators about significant data protection issues. Rather the regulator had to approach Facebook — using a number of channels to try to obtain answers from the tech giant.
Through this approach the DPC said it learnt Facebook believes the information was scraped prior to the changes it made to its platform in 2018 and 2019 in light of vulnerabilities identified in the wake of the Cambridge Analytica data misuse scandal.
A huge database of Facebook phone numbers was found unprotected online back in September 2019.
Facebook had also earlier admitted to a vulnerability with a search tool it offered — revealing in April 2018 that somewhere between 1BN and 2BN users had had their public Facebook information scraped via a feature which allowed people to look up users by inputting a phone number or email — which is one potential source for the cache of personal data.
Last year Facebook also filed a lawsuit against two companies it accused of engaging in an international data scraping operation.
But the fallout from its poor security design choices continues to dog Facebook years after its ‘fix’.
More importantly, the fallout from the massive personal data spill continues to affect Facebook users whose information is now being openly offered for download on the Internet — opening them up to the risk of spam and phishing attacks and other forms of social engineering (such as for attempted identity theft).
There are still more questions than answers about how this “old” cache of Facebook data came to be published online for free on a hacker forum.
The DPC said it was told by Facebook that “the data at issue appears to have been collated by third parties and potentially stems from multiple sources”.
The company also claimed the matter “requires extensive investigation to establish its provenance with a level of confidence sufficient to provide your Office and our users with additional information” — which is a long way of suggesting that Facebook has no idea either.
“Facebook assures the DPC it is giving highest priority to providing firm answers to the DPC,” Doyle also said. “A percentage of the records released on the hacker website contain phone numbers and email address of users.
“Risks arise for users who may be spammed for marketing purposes but equally users need to be vigilant in relation to any services they use that require authentication using a person’s phone number or email address in case third parties are attempting to gain access.”
“The DPC will communicate further facts as it receives information from Facebook,” he added.
At the time of writing Facebook had not responded to a request for comment about the breach.
According to haveibeenpwned’s Troy Hunt, this latest Facebook data dump contains far more mobile phone numbers than email addresses.
He writes that he was sent the data a few weeks ago — initially getting 370M records and later “the larger corpus which is now in very broad circulation”.
“A lot of it is the same, but a lot of it is also different,” Hunt also notes, adding: “There is not one clear source of this data.”
When you think of the core members of the C-suite, you probably think of the usual characters: CEO, CFO, COO and maybe a CMO. Each of these roles is fairly well defined: The CEO controls strategy and ultimately answers to the board; the CFO manages budgets; the CMO gets people to buy more, more often; the COO keeps everything running smoothly. Regardless of the role, all share the same objective: maximize shareholder value.
But the information age is shaking up the C-suite’s composition. The cyber market is exploding in an attempt to secure the modern enterprise: multicloud environments, data generated and stored faster than anyone can keep up with and SaaS applications powering virtually every function across the org, in addition to new types of security postures that coincide with that trend. Whatever the driver, though, this all adds up to the fact that cyber strategy and company strategy are inextricably linked. Consequently, chief information security officers (CISOs) in the C-suite will be just as common and influential as CFOs in maximizing shareholder value.
As investors seek outsized returns, they need to be more engaged with the CISO beyond the traditional security topics.
It’s the early ’90s. A bank heist. A hacker. St. Petersburg and New York City. Offshore bank accounts. Though it sounds like the synopsis of the latest psychological thriller, this is the context for the appointment of the first CISO in 1994.
A hacker in Russia stole $10 million from Citi clients’ accounts by typing away at a keyboard in a dimly lit apartment across the Atlantic. Steve Katz, a security executive, was poached from JP Morgan to join Citi as part of the C-suite to respond to the crisis. His title? CISO.
After he joined, he was told two critical things: First, he would have a blank check to set up a security program to prevent this from happening again, and second, Citi would publicize the hack one month after he started. Katz flew over 200,000 miles during the next few months, visiting corporate treasurers and heads of finance to reassure them their funds were secure. While the impetus for the first CISO was a literal bank heist, the $10 million stolen pales in comparison to what CISOs are responsible for protecting today.
Clothing giant FatFace had a data breach, but doesn’t want you to tell anyone about it.
The company sent an email to customers this week disclosing that it first detected a breach on January 17. A hacker made off with customers’ names, email and postal addresses, and the last four digits of their credit cards. “Full payment card information was not compromised,” the notice reiterated.
But despite going out to thousands of customers, the email said to “keep this email and the information included within it strictly private and confidential,” an entirely unenforceable request.
Under U.K. data protection law, a company must disclose a data breach within 72 hours of becoming aware of an incident, but there are no legal requirements on the customer to keep the information confidential. It didn’t take long for the company to face flak from the public. The company didn’t have much to say in response, asking instead to “DM us with any questions.”
Through a spokesperson at a crisis communications firm, FatFace said: “The notification email was marked private and confidential due to the nature of the communication, which was intended for the individual concerned. Given its contents, we wanted to make this clear, which is why we marked it private and confidential.”
TechCrunch obtained a near-identical email sent to its staff from a former employee who asked not to be named. The email to employees was largely the same as the customer email, but warned that staff may have had their bank account information and their National Insurance numbers — the U.K. equivalent of Social Security — compromised.
FatFace confirmed it was contacting “a select number of employees, former employees and customers” and providing appropriate guidance and support, but would not say specifically how many customers and employees were affected by the breach.
Researchers say a botnet targeting Windows devices is rapidly growing in size, thanks to a new infection technique that allows the malware to spread from computer to computer.
The Purple Fox malware was first spotted in 2018 spreading through phishing emails and exploit kits, a way for threat groups to infect machines using existing security flaws.
But researchers Amit Serper and Ophir Harpaz at security firm Guardicore, which discovered and revealed the new infection effort in a new blog post, say the malware now targets internet-facing Windows computers with weak passwords, giving the malware a foothold to spread more rapidly.
The malware does this by trying to guess weak Windows user account passwords by targeting the server message block, or SMB — a component that lets Windows talk with other devices, like printers and file servers. Once the malware gains access to a vulnerable computer, it pulls a malicious payload from a network of close to 2,000 older and compromised Windows web servers and quietly installs a rootkit, keeping the malware persistently anchored to the computer while also making it much harder to be detected or removed.
Once infected, the malware then closes the firewall ports it used to infect the computer in the first place, likely to prevent reinfection or to stop other threat groups from hijacking the already-hacked computer, the researchers said.
The malware then generates a list of internet addresses and scans the internet for vulnerable devices with weak passwords to infect further, creating a growing network of ensnared devices.
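The worm’s scanning step boils down to finding hosts that accept connections on SMB’s TCP port 445. A defensive sketch of that same reachability check — useful for verifying your own hosts aren’t exposing SMB to the internet, not a reproduction of the malware:

```python
import socket

def smb_port_open(host: str, port: int = 445, timeout: float = 1.0) -> bool:
    """Return True if the host accepts TCP connections on the SMB port —
    the internet-facing exposure that Purple Fox's scanner hunts for.
    A refused or timed-out connection both count as 'not exposed'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Purple Fox pairs this reachability check with password guessing, so blocking port 445 at the network perimeter removes the worm’s foothold entirely.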
Botnets are formed when hundreds or thousands of hacked devices are enlisted into a network run by criminal operators, which are often then used to launch denial-of-service attacks that pummel organizations with junk traffic with the aim of knocking them offline. But with control of these devices, criminal operators can also use botnets to spread malware and spam, or to deploy file-encrypting ransomware on the infected computers.
But this kind of wormable botnet presents a greater risk as it spreads largely on its own.
Serper, Guardicore’s vice president of security research for North America, said the wormable infection technique is “cheaper” to run than its earlier phishing and exploit kit effort.
“The fact that it’s an opportunistic attack that constantly scans the internet and looks for more vulnerable machines means that the attackers can sort of ‘set it and forget it’,” he said.
It appears to be working. Purple Fox infections have rocketed by 600% since May 2020, according to data from Guardicore’s own network of internet sensors. The actual number of infections is likely to be far higher, amounting to more than 90,000 infections in the past year.
Guardicore published indicators of compromise to help networks identify if they have been infected. The researchers do not know what the botnet will be used for but warned that its growing size presents a risk to organizations.
“We assume that this is laying the groundwork for something in the future,” said Serper.