Apple has delayed plans to roll out its child sexual abuse material (CSAM) detection technology that it chaotically announced last month, citing feedback from customers and policy groups.
That feedback, if you recall, has been largely negative. The Electronic Frontier Foundation said this week it had amassed more than 25,000 signatures from consumers. On top of that, close to 100 policy and rights groups, including the American Civil Liberties Union, also called on Apple to abandon plans to roll out the technology.
In a statement on Friday morning, Apple told TechCrunch:
“Last month we announced plans for features intended to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material. Based on feedback from customers, advocacy groups, researchers and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features.”
Apple’s so-called NeuralHash technology is designed to identify known CSAM on a user’s device without Apple possessing the image or knowing its contents. Because a user’s photos stored in iCloud are encrypted, though not end-to-end encrypted, NeuralHash instead scans for known CSAM on the user’s device, which Apple claims is more privacy-friendly than the blanket server-side scanning that other cloud providers use.
But security experts and privacy advocates have expressed concern that the system could be abused by highly resourced actors, like governments, to implicate innocent victims or to manipulate the system to detect other materials that authoritarian nation states find objectionable.
Within a few weeks of announcing the technology, researchers said they were able to create “hash collisions” using NeuralHash, effectively tricking the system into thinking two entirely different images were the same.
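To see why hash collisions matter here, consider a toy perceptual hash. NeuralHash itself is proprietary and far more sophisticated; the "average hash" below is a simple stand-in that shares the relevant property: it maps images to short fingerprints, so two entirely different inputs can produce the same fingerprint. The pixel values are invented for illustration.

```python
# Toy perceptual "average hash": one bit per pixel, set if the pixel is
# brighter than the image's mean. Distinct images can collide, which is
# the failure mode researchers demonstrated against NeuralHash.

def average_hash(pixels):
    """Hash a grayscale image given as a flat list of 0-255 ints."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p > mean else "0" for p in pixels)

# Two clearly different 2x2 "images"...
image_a = [200, 10, 10, 10]
image_b = [90, 80, 80, 80]

# ...that produce identical fingerprints: a hash collision.
print(average_hash(image_a))  # 1000
print(average_hash(image_b))  # 1000
print(average_hash(image_a) == average_hash(image_b))  # True
```

A real attacker who can construct such collisions could make a benign image trigger a match against the known-CSAM hash set, which is the abuse scenario the researchers flagged.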
iOS 15 is expected out in the next few weeks.
This report has been updated with more detail about NeuralHash and to clarify iCloud Photos are encrypted but not end-to-end encrypted.
The FBI has warned that the Chinese government is using both in-person and digital techniques to intimidate, silence and harass U.S.-based Uyghur Muslims.
The Chinese government has long been accused of human rights abuses over its treatment of the Uyghur population and other mostly Muslim ethnic groups in China’s Xinjiang region. More than a million Uyghurs have been detained in internment camps, according to a United Nations human rights committee, and many other Uyghurs have been targeted and hacked by state-backed cyberattacks. China has repeatedly denied the claims.
In recent months, the Chinese government has become increasingly aggressive in its efforts to shut down foreign critics, including those based in the United States and other Western democracies. These efforts have now caught the attention of the FBI.
In an unclassified bulletin, the FBI warned that officials are using transnational repression — a term that refers to foreign government transgression of national borders through physical and digital means to intimidate or silence members of diaspora and exile communities — in an attempt to compel compliance from U.S.-based Uyghurs and other Chinese refugees and dissidents, including Tibetans, Falun Gong members, and Taiwan and Hong Kong activists.
“Threatened consequences for non-compliance routinely include detainment of a U.S.-based person’s family or friends in China, seizure of China-based assets, sustained digital and in-person harassment, Chinese government attempts to force repatriation, computer hacking and digital attacks, and false representation online,” the FBI bulletin warns.
The bulletin was reported by video surveillance news site IPVM.
The FBI highlighted four instances of U.S.-based individuals facing harassment. In one case from June, the Chinese government imprisoned dozens of family members of six U.S.-based Uyghur journalists in retaliation for their continued reporting on China and its repression of Uyghurs for the U.S. government-funded news service Radio Free Asia. The bulletin said that between 2019 and March 2021, Chinese officials used WeChat to call and text a U.S.-based Uyghur to discourage her from publicly discussing Uyghur mistreatment. Members of this person’s family were later detained in Xinjiang detention camps.
“The Chinese government continues to conduct this activity, even as the U.S. government has sanctioned Chinese officials and increased public and diplomatic messaging to counter China’s human rights and democratic abuses in Xinjiang over the past year,” the FBI states. “This transnational repression activity violates US laws and individual rights.”
The FBI has urged U.S. law enforcement personnel, as well as members of the public, to report any suspected incidents of Chinese government harassment.
The Federal Trade Commission has unanimously voted to ban the spyware maker SpyFone and its chief executive Scott Zuckerman from the surveillance industry, the first order of its kind, after the agency accused the company of harvesting mobile data on thousands of people and leaving it on the open internet.
The agency said SpyFone “secretly harvested and shared data on people’s physical movements, phone use, and online activities through a hidden device hack,” allowing the spyware purchaser to “see the device’s live location and view the device user’s emails and video chats.”
SpyFone is one of many so-called “stalkerware” apps that are marketed under the guise of parental control but are often used by spouses to spy on their partners. The spyware works by being surreptitiously installed on someone’s phone, often without their permission, to steal their messages, photos, web browsing history, and real-time location data. The FTC also charged that the spyware maker exposed victims to additional security risks because the spyware runs at the “root” level of the phone, which allows the spyware to access off-limits parts of the device’s operating system. A premium version of the app included a keylogger and “live screen viewing,” the FTC says.
But the FTC said that SpyFone’s “lack of basic security” exposed those victims’ data, because of an unsecured Amazon cloud storage server that was spilling the data its spyware was collecting from more than 2,000 victims’ phones. SpyFone said it partnered with a cybersecurity firm and law enforcement to investigate, but the FTC says it never did.
Practically, the order means SpyFone and its chief executive Zuckerman are banned from “offering, promoting, selling, or advertising any surveillance app, service, or business,” making it harder for the company to operate. But FTC Commissioner Rohit Chopra said in a separate statement that stalkerware makers should also face criminal sanctions under U.S. computer hacking and wiretap laws.
The FTC has also ordered the company to delete all the data it “illegally” collected, and, also for the first time, notify victims that the app had been secretly installed on their devices.
In a statement, the FTC’s consumer protection chief Samuel Levine said: “This case is an important reminder that surveillance-based businesses pose a significant threat to our safety and security.”
The EFF, which two years ago launched the Coalition Against Stalkerware, a group of companies that detect, combat and raise awareness of stalkerware, praised the FTC’s order. “With the FTC now turning its focus to this industry, victims of stalkerware can begin to find solace in the fact that regulators are beginning to take their concerns seriously,” said EFF’s Eva Galperin and Bill Budington in a blog post.
This is the FTC’s second order against a stalkerware maker. In 2019, the FTC settled with Retina-X after the company was hacked several times and eventually shut down.
Over the years, several other stalkerware makers were either hacked or inadvertently exposed their own systems, including mSpy, Mobistealth, and Flexispy. Another stalkerware maker, ClevGuard, left thousands of hacked victims’ phone data on an exposed cloud server.
If you or someone you know needs help, the National Domestic Violence Hotline (1-800-799-7233) provides 24/7 free, confidential support to victims of domestic abuse and violence. If you are in an emergency situation, call 911.
Did you receive a notification and want to tell your story? You can contact this reporter on Signal and WhatsApp at +1 646-755-8849 or firstname.lastname@example.org by email.
Corelight, a San Francisco-based startup that claims to offer the industry’s first open network detection and response (NDR) platform, has raised $75 million in Series D investment led by Energy Impact Partners.
The round — which also includes a strategic investment from Capital One Ventures, Crowdstrike Falcon Fund and Gaingels — brings Corelight’s total raised to $160 million, following a $50 million Series C in October 2019, a $25 million Series B in September 2018 and a $9.2 million Series A in July 2017.
While it’s raised plenty of capital in the past few years, the startup isn’t planning its exit just yet. Brian Dye, CEO of Corelight, tells TechCrunch that given Corelight’s market opportunity and performance — the startup claims to be the fastest-growing NDR player at scale — it plans to invest in growth and expects to raise additional capital in the future.
“Public listing time frames are always hard to forecast, and we view the private markets as attractive in the short term, so we expect to remain private for the next couple years and will look at market conditions then to decide our next step,” Dye said, adding that Corelight plans to use its latest investment to fuel the acceleration of its global market presence and to develop new data and cloud-based offerings.
“Aside from go-to-market expansion, we are investing to ensure that the insight we provide both continues to lead the industry and can be readily used by customers of all types,” he added.
Corelight, which competes with the likes of FireEye and STG-owned McAfee, was founded in 2013 when Dr. Vern Paxson, a professor of computer science at the University of California, Berkeley, joined forces with Robin Sommer and Seth Hall to build a network visibility solution on top of an open source framework called Zeek (formerly Bro).
Paxson began developing Zeek in 1995 when he was working at Lawrence Berkeley National Laboratory (LBNL). The software is now widely regarded as the gold standard for both network security monitoring and network traffic analysis and has been deployed by thousands of organizations around the world, including the U.S. Department of Energy, various agencies in the U.S. government and research universities like Indiana University, Ohio State and Stanford.
The U.S. Securities and Exchange Commission has fined several brokerage firms a total of $750,000 for exposing the sensitive personally identifiable information of thousands of customers and clients after hackers took over employee email accounts.
A total of eight entities belonging to three companies have been sanctioned by the SEC, including Cetera (Advisor Networks, Investment Services, Financial Specialists, Advisors and Investment Advisers), Cambridge Investment Research (Investment Research and Investment Research Advisors) and KMS Financial Services.
In a press release, the SEC announced that it had sanctioned the firms for failures in their cybersecurity policies and procedures that allowed hackers to gain unauthorized access to cloud-based email accounts, exposing the personal information of thousands of customers and clients at each firm.
In the case of Cetera, the SEC said that cloud-based email accounts of more than 60 employees were infiltrated by unauthorized third parties for more than three years, exposing at least 4,388 clients’ personal information.
The order states that none of the accounts featured the protections required by Cetera’s policies, and the SEC also charged two of the Cetera entities with sending breach notifications to clients containing “misleading language suggesting that the notifications were issued much sooner than they actually were after discovery of the incidents.”
The SEC’s order against Cambridge concludes that the personal information exposure of at least 2,177 Cambridge customers and clients was the result of lax cybersecurity practices at the firm.
“Although Cambridge discovered the first email account takeover in January 2018, it failed to adopt and implement firm-wide enhanced security measures for cloud-based email accounts of its representatives until 2021, resulting in the exposure and potential exposure of additional customer and client records and information,” the SEC said.
The order against KMS is similar; the SEC’s order states that the data of almost 5,000 customers and clients were exposed as a result of the company’s failure to adopt written policies and procedures requiring additional firm-wide security measures until May 2020.
“Investment advisers and broker-dealers must fulfill their obligations concerning the protection of customer information,” said Kristina Littman, chief of the SEC Enforcement Division’s Cyber Unit. “It is not enough to write a policy requiring enhanced security measures if those requirements are not implemented or are only partially implemented, especially in the face of known attacks.”
All of the parties agreed to resolve the charges and to not commit future violations of the charged provisions, without admitting or denying the SEC’s findings. As part of the settlements, Cetera will pay a penalty of $300,000, while Cambridge and KMS will pay fines of $250,000 and $200,000 respectively.
Cambridge told TechCrunch that it does not comment on regulatory matters, but said it has and does maintain a comprehensive information security group and procedures to ensure clients’ accounts are fully protected. Cetera and KMS have yet to respond.
This latest action by the SEC comes just weeks after the Commission ordered London-based publishing and education giant Pearson to pay a $1 million fine for misleading investors about a 2018 data breach at the company.
Apple’s plan to digitize your wallet is slowly taking shape. What started with boarding passes and venue tickets later became credit cards, subway tickets, and student IDs. Next on Apple’s list to digitize are driver’s licenses and state IDs, which it plans to support in its iOS 15 update expected out later this year.
But to get there it needs help from state governments, since it’s the states that issue driver’s licenses and other forms of state identification, and every state issues IDs differently. Apple said today it has so far secured two states, Arizona and Georgia, to bring digital driver’s licenses and state IDs to its Wallet app.
Connecticut, Iowa, Kentucky, Maryland, Oklahoma and Utah are expected to follow, though no rollout timeline was given.
Apple said in June that it would begin supporting digital licenses and IDs, and that the TSA would be the first agency to begin accepting a digital license from an iPhone at several airports, since only a state ID is required for traveling by air domestically within the United States. The TSA will allow you to present your digital ID by tapping your iPhone on an identity reader. Apple says the feature is secure and doesn’t require handing over or unlocking your phone.
The digital license and ID data is stored on your iPhone but a driver’s license must be verified by the participating state. That has to happen at scale and speed to support millions of drivers and travelers while preventing fake IDs from making it through.
The goal of digitizing licenses and IDs is convenience, rather than fixing a problem. But the move hasn’t exactly drawn confidence from privacy experts, who bemoan Apple’s lack of transparency about how it built this technology and what it ultimately gets out of it.
Apple still has not said much about how the digital ID technology works, or what data the state obtains as part of the process to enroll a digital license. Apple is working on a new security verification feature that takes selfies to validate the user. That’s not to say these systems are inherently problematic, but there are privacy questions that Apple will have to address down the line.
But the fragmented picture of digital licenses and IDs across the U.S. isn’t likely to get less murky overnight, even after Apple enters the picture. A recent public records request by MuckRock showed Apple was in contact with some states as early as 2019 about bringing digital licenses and IDs to iPhones, including California and Illinois, yet neither state was among those Apple announced today.
A cybersecurity company says a popular smart home security system has a pair of vulnerabilities that can be exploited to disarm the system altogether.
Rapid7 found the vulnerabilities in the Fortress S03, a home security system that relies on Wi-Fi to connect cameras, motion sensors and sirens to the internet, allowing owners to remotely monitor their home anywhere with a mobile app. The security system also uses a radio-controlled key fob to let homeowners arm or disarm their house from outside their front door.
But the cybersecurity company said the vulnerabilities include an unauthenticated API and an unencrypted radio signal that can be easily intercepted.
Rapid7 revealed details of the two vulnerabilities on Tuesday after not hearing back from Fortress for three months, the standard window of time that security researchers give companies to fix bugs before details are made public. Rapid7 said the only acknowledgment of its email came when Fortress closed its support ticket a week later without comment.
Fortress owner Michael Hofeditz opened but did not respond to several emails sent by TechCrunch, according to an email open tracker. An email from Bottone Reiling, a Massachusetts law firm representing Fortress, called the claims “false, purposely misleading and defamatory,” but did not specify which claims it considers false, or say whether Fortress has mitigated the vulnerabilities.
Rapid7 said that Fortress’ unauthenticated API can be remotely queried over the internet without the server checking if the request is legitimate. The researchers said by knowing a homeowner’s email address, the server would return the device’s unique IMEI, which in turn could be used to remotely disarm the system.
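The class of flaw Rapid7 describes can be sketched in a few lines. The handler below is a hypothetical reconstruction, not Fortress’ actual API: its names, fields and values are invented. The point is what is missing, namely any check that the caller is authenticated or owns the account.

```python
# Hypothetical sketch of an unauthenticated API: a device identifier is
# returned keyed only on an email address, and knowing the identifier is
# then treated as proof of authorization. All data here is invented.

DEVICES = {"homeowner@example.com": {"imei": "356938035643809", "armed": True}}

def lookup_device(email):
    # Vulnerable: no session token, API key or ownership check required.
    device = DEVICES.get(email)
    return device["imei"] if device else None

def disarm(imei):
    # Vulnerable: possession of the IMEI alone authorizes the action.
    for device in DEVICES.values():
        if device["imei"] == imei:
            device["armed"] = False
            return True
    return False

# An attacker who merely knows the homeowner's email can disarm the system.
imei = lookup_device("homeowner@example.com")
disarm(imei)
print(DEVICES["homeowner@example.com"]["armed"])  # False
```

The fix is equally simple to state: the lookup endpoint should require an authenticated session tied to the account before returning anything.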
The other flaw takes advantage of the unencrypted radio signals sent between the security system and the homeowner’s key fob. That allowed Rapid7 to capture and replay the signals for “arm” and “disarm” because the radio waves weren’t scrambled properly.
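A replay attack of this kind works whenever the over-the-air payload is static. The toy model below is not Fortress’ protocol; it contrasts a fixed unencrypted code, which a sniffer defeats, with a standard rolling-code scheme, which rejects stale payloads. Class names and the key are invented.

```python
import hmac, hashlib

class StaticCodeAlarm:
    """Fob sends the same unencrypted bytes every time (the flaw)."""
    DISARM_PAYLOAD = b"\x02disarm\x03"  # fixed bytes, sent in the clear

    def __init__(self):
        self.armed = True

    def receive(self, payload):
        if payload == self.DISARM_PAYLOAD:
            self.armed = False

class RollingCodeAlarm:
    """Each payload carries an incrementing counter authenticated with a
    shared key; a replayed (stale) counter is rejected."""

    def __init__(self, key):
        self.key = key
        self.last_counter = 0
        self.armed = True

    def make_payload(self, counter):
        mac = hmac.new(self.key, str(counter).encode(), hashlib.sha256).digest()
        return (counter, mac)

    def receive(self, payload):
        counter, mac = payload
        expected = hmac.new(self.key, str(counter).encode(), hashlib.sha256).digest()
        if counter > self.last_counter and hmac.compare_digest(mac, expected):
            self.last_counter = counter
            self.armed = False
            return True
        return False

# Static code: one captured transmission disarms the system forever after.
alarm = StaticCodeAlarm()
captured = StaticCodeAlarm.DISARM_PAYLOAD   # attacker sniffs this once
alarm.receive(captured)
print("static code disarmed by replay:", not alarm.armed)   # True

# Rolling code: the first use works, the replay is rejected.
rolling = RollingCodeAlarm(b"shared-secret")
payload = rolling.make_payload(1)
print("legitimate use accepted:", rolling.receive(payload))  # True
print("replay accepted:", rolling.receive(payload))          # False
```

Garage door openers and car fobs have used rolling codes for decades for exactly this reason, which is why the static signal here stood out to researchers.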
Arvind Vishwakarma of Rapid7 said homeowners could register with a plus-tagged email address containing a long, unique string of letters and numbers, which would effectively act as a stand-in for a password, since an attacker would need to know that exact address to query the API. But there was little homeowners could do about the radio signal bug until Fortress addresses it.
Fortress has not said if it has fixed or plans to fix the vulnerabilities. It’s not clear if Fortress is able to fix the vulnerabilities without replacing the hardware. It’s not known if Fortress builds the device itself or buys the hardware from another manufacturer.
One of the most unfortunate fault lines in climate change politics today is the lack of cooperation between environmentalists and the national security community. Left-wing climate activists don’t exactly hang out with more right-leaning military strategists, the former often seeing the latter as destructive anti-ecological marauders, while the latter often assume the former are unrealistic pests who would prioritize trees and dolphins over human safety.
Yet, climate change is forcing the two to work ever closer together, as uncomfortable as that might be.
In “All Hell Breaking Loose,” emeritus professor and prolific author Michael T. Klare has written a meta-assessment of the Pentagon’s strategic assessments from the last two decades on how climate will shape America’s security environment. Sober and repetitive but not grim, the book is an eye-opening look at how the defense community is coping with one of the most vexing global challenges today.
Climate change weakens the security environment in practically every domain, and in ways that might not be obvious to the non-defense specialist. For the U.S. Navy, which relies on coastal access to shipyards and ports, rising sea levels threaten to diminish and even occasionally demolish its mission readiness, such as when Atlantic hurricanes hit Virginia, one of the largest centers for naval infrastructure in the United States.
While perhaps obvious, it bears repeating that the U.S. military is as much a landlord as a fighting force, with hundreds of bases spread across the country and around the world. A large percentage of these installations face climate-related challenges that can affect mission readiness, and the cost to harden these facilities is likely to reach tens of billions of dollars — and perhaps even more.
Then there is the question of energy. The Pentagon is unsurprisingly one of the greatest users of energy in the world, requiring power for bases, jet fuel for planes and fuel for ships on a global scale. Procurement managers are obviously concerned about costs, but their real concern is availability — they need to have reliable fuel options in even the most chaotic environments. That critical priority is increasingly tenuous with climate change, as transit options for oil can be disrupted by everything from a bad storm to a ship stuck in the Suez Canal.
This is where the Pentagon’s mission and the interests of green-minded activists align heavily, if not perfectly. Klare provides examples of how the Pentagon is investing in areas like biofuels, decentralized grid technology, batteries and more as it looks to secure resiliency for its fighting forces. The Pentagon’s budgetary resources might be scorned by critics, but it’s uniquely positioned to pay the so-called green premiums for more reliable energy in ways that few institutions can realistically afford.
That political alignment continues when it comes to humanitarian response, although for vastly different reasons. One of the Pentagon’s chief concerns with global warming is that it will be increasingly waylaid from its highest priority missions — such as protecting against China, Russia, Iran and other long-time adversaries — into responding to humanitarian crises. As one of the only American institutions with the equipment and logistical know-how capable of deploying thousands of responders to disaster zones, the Pentagon is the go-to source for deployments. For Defense, the difficulty is that the armed forces aren’t trained for humanitarian missions — they’re trained for fighting wars. Attacking ISIS-K and managing a camp of climate refugees are decidedly different skills.
Climate activists are fighting for a more stable and equitable world, one that doesn’t lead to millions of climate refugees fleeing from famine and scorching temperatures. The Pentagon similarly wants to shore up fragile states in the hopes of avoiding deployments outside of its core mission. The two groups speak different languages and have different motivations, but the objectives are much the same.
The most interesting dynamic of climate change and national security is, of course, how the global strategic map changes. Russia is a major winner, and Klare provides an exacting account of how the Pentagon is securing the Arctic now that the ice has melted and shipping lanes have opened at the pole for much of the year, and soon year-round. For the first time, America has run training missions for its armed forces on how to operate in the Arctic and prepare for potential contingencies in the region.
Klare’s book is readable, and its subject is electrifyingly fascinating, but this is not a brilliantly written text by any stretch of the imagination. I dubbed it a meta-assessment because it absolutely reads as if it were written by a team of defense planning specialists in the E Ring. It’s a multi-hundred-page think tank paper — and as a reader, you either have the stamina to read that or you don’t.
More caustically, the book’s research and primary citations center on the Pentagon’s assessment reports and Congressional testimony, plus some secondary reporting in newspapers and elsewhere. There is little to no mention of direct interviews with the participants, and that’s a major problem given the extremely political nature of climate change in modern U.S. discourse. Klare certainly observes the politics, but we don’t know what generals and the civilian defense leadership would really say if they didn’t have to sign off publicly on a government report. It’s a massive gulf — and it raises the question of how true a picture of the Pentagon’s thinking we really get from this volume.
Nonetheless, the book is an important contribution, and a reminder that the national security community — while protective of its interests — can also be an important vanguard for change on climate disruption. Activists and wonks should drop the animosity and talk to each other a bit more often, as there are alliances to be made.
All Hell Breaking Loose: The Pentagon’s Perspective on Climate Change by Michael T. Klare
Metropolitan Books, 2019, 304 pages
Linux is set for a big release this Sunday, August 29, setting the stage for enterprise and cloud applications for months to come. The 5.14 kernel update will include security and performance improvements.
A particular area of interest for both enterprise and cloud users is security, and to that end, Linux 5.14 will help with several new capabilities. Mike McGrath, vice president of Linux engineering at Red Hat, told TechCrunch that the kernel update includes a feature known as core scheduling, which is intended to help mitigate processor-level vulnerabilities like Spectre and Meltdown, which first surfaced in 2018. One of the ways that Linux users have had to mitigate those vulnerabilities is by disabling hyper-threading on CPUs and therefore taking a performance hit.
“More specifically, the feature helps to split trusted and untrusted tasks so that they don’t share a core, limiting the overall threat surface while keeping cloud-scale performance relatively unchanged,” McGrath explained.
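The grouping idea behind core scheduling can be sketched abstractly: tasks carry a trust tag (the kernel calls it a cookie), and tasks with different tags are never placed on sibling hyper-threads of the same physical core. The toy placement below is only an illustration of that invariant, not the kernel's actual scheduler; the task names and the greedy strategy are invented.

```python
# Toy model of the core-scheduling invariant: a physical core (with
# `siblings_per_core` hyper-threads) only ever runs tasks that share one
# trust cookie, so untrusted code never shares a core with trusted code.

def assign_to_cores(tasks, siblings_per_core=2):
    """Greedy placement of (name, cookie) tasks onto cores."""
    cores = []  # each core: {"cookie": ..., "tasks": [...]}
    for name, cookie in tasks:
        for core in cores:
            if core["cookie"] == cookie and len(core["tasks"]) < siblings_per_core:
                core["tasks"].append(name)
                break
        else:
            cores.append({"cookie": cookie, "tasks": [name]})
    return cores

tasks = [("vm-guest", "untrusted"), ("web-server", "trusted"),
         ("crypto-worker", "trusted"), ("sandboxed-job", "untrusted")]

for core in assign_to_cores(tasks):
    print(core["cookie"], core["tasks"])
# untrusted ['vm-guest', 'sandboxed-job']
# trusted ['web-server', 'crypto-worker']
```

The payoff is what McGrath describes: hyper-threading stays enabled (both siblings on each core are busy), yet untrusted tasks can no longer mount sibling-to-sibling side-channel attacks against trusted ones.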
Another area of security innovation in Linux 5.14 is a feature that has been in development for over a year and a half and will better protect system memory. Attacks against Linux and other operating systems often target memory as a primary attack surface. With the new kernel, a capability known as memfd_secret() will enable an application running on a Linux system to create a memory range that is inaccessible to anyone else, including the kernel.
“This means cryptographic keys, sensitive data and other secrets can be stored there to limit exposure to other users or system activities,” McGrath said.
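Since there is no libc wrapper for the new syscall yet, an application has to invoke it directly. The sketch below does so from Python via ctypes; the syscall number 447 is what recent kernel headers assign to memfd_secret on x86_64 and other architectures, but verify it against your own headers. On kernels older than 5.14, or with secretmem disabled, the call fails gracefully.

```python
# Minimal sketch of calling memfd_secret() via the raw syscall interface.
# Assumption: syscall number 447 (check <sys/syscall.h> on your system).
import ctypes, errno

SYS_memfd_secret = 447
libc = ctypes.CDLL(None, use_errno=True)

fd = libc.syscall(SYS_memfd_secret, 0)
if fd < 0:
    # Expected on pre-5.14 kernels (ENOSYS) or when secretmem is disabled.
    err = ctypes.get_errno()
    print("memfd_secret unavailable:", errno.errorcode.get(err, err))
else:
    # fd refers to a memory region even the kernel cannot read; the usual
    # next step is to ftruncate() it to size and mmap() it for secrets.
    print("memfd_secret fd:", fd)
```

In C the pattern is the same: syscall(SYS_memfd_secret, 0), then mmap the returned descriptor and store keys or tokens in that region.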
At the heart of the open source Linux operating system that powers much of the cloud and enterprise application delivery is what is known as the Linux kernel. The kernel is the component that provides the core functionality for system operations.
The Linux 5.14 kernel release has gone through seven release candidates over the last two months and benefits from the contributions of 1,650 different developers. Those that contribute to Linux kernel development include individual contributors, as well as large vendors like Intel, AMD, IBM, Oracle and Samsung. One of the largest contributors to any given Linux kernel release is IBM’s Red Hat business unit. IBM acquired Red Hat for $34 billion in a deal that closed in 2019.
“As with pretty much every kernel release, we see some very innovative capabilities in 5.14,” McGrath said.
While Linux 5.14 will be out soon, it often takes time until it is adopted inside of enterprise releases. McGrath said that Linux 5.14 will first appear in Red Hat’s Fedora community Linux distribution and will be a part of the future Red Hat Enterprise Linux 9 release. Gerald Pfeifer, CTO for enterprise Linux vendor SUSE, told TechCrunch that his company’s openSUSE Tumbleweed community release will likely include the Linux 5.14 kernel within ‘days’ of the official release. On the enterprise side, he noted that SUSE Linux Enterprise 15 SP4, due next spring, is scheduled to come with Kernel 5.14.
The new Linux update follows a major milestone for the open source operating system, as it was 30 years ago this past Wednesday that creator Linus Torvalds (pictured above) first publicly announced the effort. Over that time Linux has gone from being a hobbyist effort to powering the infrastructure of the internet.
McGrath commented that Linux is already the backbone for the modern cloud and Red Hat is also excited about how Linux will be the backbone for edge computing – not just within telecommunications, but broadly across all industries, from manufacturing and healthcare to entertainment and service providers, in the years to come.
The longevity and continued importance of Linux for the next 30 years is assured in Pfeifer’s view. He noted that over the decades Linux and open source have opened up unprecedented potential for innovation, coupled with openness and independence.
“Will Linux, the kernel, still be the leader in 30 years? I don’t know. Will it be relevant? Absolutely,” he said. “Many of the approaches we have created and developed will still be pillars of technological progress 30 years from now. Of that I am certain.”
The May 2021 executive order from the White House on improving U.S. cybersecurity includes a provision for a software bill of materials (SBOM), a formal record containing the details and supply chain relationships of various components used in building a software product.
An SBOM is the full list of every item that’s needed to build an application. It enumerates all parts, including open-source software (OSS) dependencies (direct), transitive OSS dependencies (indirect), open-source packages, vendor agents, vendor application programming interfaces (APIs) and vendor software development kits.
Software developers and vendors often create products by assembling existing open-source and commercial software components, the executive order notes. An SBOM is useful to those who develop or manufacture software, those who select or purchase software and those who operate the software.
As the executive order describes, an SBOM enables software developers to make sure open-source and third-party components are up to date. Buyers can use an SBOM to perform vulnerability or license analysis, both of which can be used to evaluate risk in a product. And those who operate software can use SBOMs to quickly determine whether they are at potential risk of a newly discovered vulnerability.
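The vulnerability analysis a buyer would perform amounts to a join between two lists: the components in the SBOM and a feed of known-vulnerable package versions. The sketch below shows that check in miniature; every component, version and advisory ID in it is invented for illustration.

```python
# Cross-reference SBOM components against a known-vulnerability feed.
# All names and advisory IDs below are made up.

sbom_components = [
    {"name": "openssl-ish", "version": "1.0.2"},
    {"name": "json-parser", "version": "2.4.1"},
    {"name": "http-lib",    "version": "0.9.9"},
]

known_vulnerabilities = {
    ("openssl-ish", "1.0.2"): "CVE-XXXX-0001",
    ("left-padder", "1.0.0"): "CVE-XXXX-0002",
}

def check_sbom(components, vuln_feed):
    """Return (component, advisory) pairs for every match in the feed."""
    return [
        (c["name"], vuln_feed[(c["name"], c["version"])])
        for c in components
        if (c["name"], c["version"]) in vuln_feed
    ]

print(check_sbom(sbom_components, known_vulnerabilities))
# [('openssl-ish', 'CVE-XXXX-0001')]
```

This is why the order stresses a machine-readable format: the check only scales when it can be automated against every product an organization runs.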
“A widely used, machine-readable SBOM format allows for greater benefits through automation and tool integration,” the executive order says. “The SBOMs gain greater value when collectively stored in a repository that can be easily queried by other applications and systems. Understanding the supply chain of software, obtaining an SBOM and using it to analyze known vulnerabilities are crucial in managing risk.”
An SBOM is intrinsically hierarchical. The finished product sits at the top, and the hierarchy includes all of its dependencies providing a foundation for its functionality. Any one of these parts can be exploited in this hierarchical structure, leading to a ripple effect.
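That ripple effect is easy to make concrete: a flaw in the deepest transitive dependency affects every component above it, all the way to the finished product. The dependency graph below is invented, but the traversal is exactly what SBOM tooling does.

```python
# Toy SBOM hierarchy: the finished product at the top, transitive
# dependencies below. A vulnerability anywhere in the tree "ripples" up
# to everything that depends on it. Package names are invented.

DEPENDENCIES = {
    "my-app":        ["web-framework", "json-parser"],
    "web-framework": ["http-lib", "logging-lib"],
    "json-parser":   [],
    "http-lib":      ["tls-lib"],
    "logging-lib":   [],
    "tls-lib":       [],
}

def transitive_deps(component):
    """All components `component` depends on, directly or indirectly."""
    seen = set()
    stack = list(DEPENDENCIES.get(component, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(DEPENDENCIES.get(dep, []))
    return seen

def affected_by(vulnerable):
    """Every component whose dependency tree contains `vulnerable`."""
    return {c for c in DEPENDENCIES if vulnerable in transitive_deps(c)}

# A flaw in the deepest dependency ripples up to the finished product.
print(sorted(affected_by("tls-lib")))  # ['http-lib', 'my-app', 'web-framework']
```

Even in this six-package toy, one leaf vulnerability implicates half the graph; in a real application with thousands of transitive dependencies, the blast radius is what makes the hierarchy worth recording.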
Not surprisingly, given the potential impact, there has been a lot of talk about the proposed SBOM provision since the executive order was announced. This is certainly true within the cybersecurity community. Anytime there are attacks such as the ones against Equifax or SolarWinds that involve software vulnerabilities being exploited, there is renewed interest in this type of concept.
Clearly, the intention of an SBOM is good. If software vendors are not upgrading dependencies to eliminate security vulnerabilities, the thinking is we need to be able to ask the vendors to share their lists of dependencies. That way, the fear of customer or public ridicule might encourage the software producers to do a better job of upgrading dependencies.
However, this is an old and outmoded way of thinking. Modern applications and microservices use many dependencies. It’s not uncommon for a small application to use tens of dependencies, which in turn might use other dependencies. Soon the list of dependencies used by a single application can run into the hundreds. And if a modern application consists of a few hundred microservices, which is not uncommon, the list of dependencies can run into the thousands.
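A rough sketch of why those lists balloon: walking a dependency graph (the one below is entirely hypothetical) shows how a handful of direct dependencies fans out into a much longer transitive list.

```python
from collections import deque

# Hypothetical dependency graph: each package maps to its direct dependencies.
DEPENDENCY_GRAPH = {
    "my-app": ["web-framework", "orm", "http-client"],
    "web-framework": ["template-engine", "router", "http-client"],
    "orm": ["db-driver", "sql-builder"],
    "http-client": ["tls-lib", "url-parser"],
    "template-engine": ["html-escaper"],
    "router": [],
    "db-driver": ["tls-lib"],
    "sql-builder": [],
    "tls-lib": ["crypto-lib"],
    "url-parser": [],
    "html-escaper": [],
    "crypto-lib": [],
}

def transitive_dependencies(package: str) -> set:
    """Breadth-first walk collecting every direct and indirect
    dependency of the given package."""
    seen = set()
    queue = deque(DEPENDENCY_GRAPH.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDENCY_GRAPH.get(dep, []))
    return seen

# Three direct dependencies already expand to eleven in total.
print(len(DEPENDENCY_GRAPH["my-app"]), len(transitive_dependencies("my-app")))
```

In a real application each of those leaf packages would itself pull in dozens more, which is how a single service quietly accumulates hundreds of entries.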
If a software vendor were to publish such an extensive list, how would the end users of that software really benefit? Yes, we could also ask the vendor to flag which of those dependencies are vulnerable, but let's say that list runs into the hundreds. Now what?
Clearly, having to upgrade hundreds of vulnerable dependencies is not a trivial task. A software vendor would be constantly deciding between adding new functionality that generates revenue and allows the company to stay ahead of its competitors versus upgrading dependencies that don’t do either.
If the government formalizes an SBOM mandate and starts to financially penalize vendors that have vulnerable dependencies, then, given the complexity associated with upgrading dependencies, software vendors might simply choose to pay fines rather than risk losing revenue or competitive advantage in the market.
Revenue drives market capitalization, which in turn drives executive and employee compensation. Fines, as small as they are, have negligible impact on the bottom line. In a purely economic sense, the choice is fairly obvious.
In addition, software vendors typically do not want to publish lists of all their dependencies because that provides a lot of information to hackers and other bad actors as well as to competitors. It’s bad enough that cybercriminals are able to find vulnerabilities on their own. Providing lists of dependencies gives them even more possible resources to discover weaknesses.
Customers and users of the software, for their part, don’t want to know all the dependencies. What would they gain from studying a list of hundreds of dependencies? Rather, software vendors and their customers want to know which dependencies, if any, make the application vulnerable. That really is the key question.
Prioritizing software composition analysis (SCA) means dependencies are analyzed in the context of the application that uses them, which can dramatically shrink the list of dependencies that actually make the application vulnerable.
Instead of publishing a list of 1,000 dependencies, or 100 that are vulnerable, organizations can publish a far more manageable list in the single digits. That is a problem that organizations can much more easily deal with. Sometimes a software vendor can fix an issue without having to upgrade the dependency. For example, it can make changes in the code, which is not always possible if we are merely looking for the list of vulnerable dependencies.
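As an illustration of that filtering idea, the sketch below (with made-up package names and CVE identifiers) reduces a scanner's raw vulnerable-dependency list to only those dependencies the application actually exercises, which is what a contextual SCA analysis aims to establish:

```python
# Hypothetical inputs: the full vulnerable-dependency list a scan might
# produce, and the set of packages whose code the application actually
# invokes (as a reachability analysis would determine).
vulnerable_dependencies = {
    "old-xml-parser": "CVE-2021-0001",
    "legacy-image-lib": "CVE-2021-0002",
    "tls-lib": "CVE-2021-0003",
    "unused-test-helper": "CVE-2021-0004",
}
reachable_packages = {"tls-lib", "http-client", "orm"}

def actionable_vulnerabilities(vulns: dict, reachable: set) -> dict:
    """Keep only vulnerabilities in dependencies the application
    actually exercises; the rest pose far less practical risk."""
    return {pkg: cve for pkg, cve in vulns.items() if pkg in reachable}

# Four flagged dependencies shrink to the one that matters here: tls-lib.
print(actionable_vulnerabilities(vulnerable_dependencies, reachable_packages))
```

The real analytical work, of course, is in producing the reachability data; but once it exists, the triage step is this simple, and the remediation list becomes tractable.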
There is no reason to disdain the concept of an SBOM outright. By all means, let's make software vendors responsible for being transparent about what goes into their products. Plenty of organizations have paid a steep price, in the form of data breaches and other cybersecurity attacks, because of software vulnerabilities that could have been prevented.
Indeed, it’s heartening to see the federal government take cybersecurity so seriously and propose ways to enhance the protection of applications and data.
However, let’s make SBOM specific to the list of dependencies that actually make the application vulnerable. This serves both the vendor and its customers by cutting directly to the sources of vulnerabilities that can do damage. That way, we can address the issues at hand without creating unnecessary burdens.
The meeting, which also included attendees from the financial and education sectors, was held following months of high-profile cyberattacks against critical infrastructure and several U.S. government agencies, along with a glaring cybersecurity skills gap; according to data from CyberSeek, there are currently almost 500,000 cybersecurity jobs across the U.S. that remain unfilled.
“Most of our critical infrastructure is owned and operated by the private sector, and the federal government can’t meet this challenge alone,” Biden said at the start of the meeting. “I’ve invited you all here today because you have the power, the capacity and the responsibility, I believe, to raise the bar on cybersecurity.”
In order to help the U.S. in its fight against a growing number of cyberattacks, Big Tech pledged to invest billions of dollars to strengthen cybersecurity defenses and to train skilled cybersecurity workers.
Apple has vowed to work with its 9,000-plus suppliers in the U.S. to drive “mass adoption” of multi-factor authentication and security training, according to the White House, as well as to establish a new program to drive continuous security improvements throughout the technology supply chain.
Google said it will invest more than $10 billion over the next five years to expand zero-trust programs, help secure the software supply chain and enhance open-source security. The search and ads giant has also pledged to train 100,000 Americans in fields like IT support and data analytics, learning in-demand skills including data privacy and security.
“Robust cybersecurity ultimately depends on having the people to implement it,” said Kent Walker, Google’s global affairs chief. “That includes people with digital skills capable of designing and executing cybersecurity solutions, as well as promoting awareness of cybersecurity risks and protocols among the broader population.”
Microsoft, meanwhile, said it's committing $20 billion to integrate cybersecurity by design and deliver “advanced security solutions.” It also announced that it will immediately make available $150 million in technical services to help federal, state and local governments with upgrading security protection, and will expand partnerships with community colleges and nonprofits for cybersecurity training.
Other attendees included Amazon Web Services (AWS), Amazon’s cloud computing arm, and IBM. The former has said it will make its security awareness training available to the public and equip all AWS customers with hardware multi-factor authentication devices, while IBM said it will help to train more than 150,000 people in cybersecurity skills over the next five years.
Many have welcomed Big Tech's commitments. David Carroll, managing director at Nominet Cyber, told TechCrunch that these latest initiatives set a “powerful precedent” and show “the gloves are well and truly off.” But some within the cybersecurity industry remain skeptical.
“So 500,000 open cybersecurity jobs and almost that same amount or more looking for jobs,” said Khalilah Scott, founder of TechSecChix, a foundation for supporting women in technology, in a tweet. “Make it make sense.”
Cloud security startup Monad, which offers a platform for extracting and connecting data from various security tools, has launched from stealth with $17 million in Series A funding led by Index Ventures.
Monad was founded on the belief that enterprise cybersecurity is a growing data management challenge, as organizations try to understand and interpret the masses of information that’s siloed within disconnected logs and databases. Once an organization has extracted data from their security tools, Monad’s Security Data Platform enables them to centralize that data within a data warehouse of choice, and normalize and enrich the data so that security teams have the insights they need to secure their systems and data effectively.
“Security is fundamentally a big data problem,” said Christian Almenar, CEO and co-founder of Monad. “Customers are often unable to access their security data in the streamlined manner that DevOps and cloud engineering teams need to build their apps quickly while also addressing their most pressing security and compliance challenges. We founded Monad to solve this security data challenge and liberate customers’ security data from siloed tools to make it accessible via any data warehouse of choice.”
The startup’s Series A funding round, which was also backed by Sequoia Capital, brings its total amount of investment raised to $19 million and comes 12 months after its Sequoia-led seed round. The funds will enable Monad to scale its development efforts for its security data cloud platform, the startup said.
Monad was founded in May 2020 by security veterans Christian Almenar and Jacolon Walker. Almenar previously co-founded serverless security startup Intrinsic which was acquired by VMware in 2019, while Walker served as CISO and security engineer at OpenDoor, Collective Health, and Palantir.
The UK government has named the person it wants to take over as its chief data protection watchdog, with sitting commissioner Elizabeth Denham overdue to vacate the post: The Department of Digital, Culture, Media and Sport (DCMS) today said its preferred replacement is New Zealand’s privacy commissioner, John Edwards.
Edwards, who has a legal background, has spent more than seven years heading up the Office of the Privacy Commissioner in New Zealand — in addition to other roles with public bodies in his home country.
He is perhaps best known to the wider world for his verbose Twitter presence and for taking a public dislike to Facebook: In the wake of the 2018 Cambridge Analytica data misuse scandal, Edwards publicly announced that he was deleting his account with the social network, accusing Facebook of not complying with the country's privacy laws.
An anti-‘Big Tech’ stance aligns with the UK government’s agenda to tame the tech giants as it works to bring in safety-focused legislation for digital platforms and reforms of competition rules that take account of platform power.
If confirmed in the role — the DCMS committee has to approve Edwards’ appointment; plus there’s a ceremonial nod needed from the Queen — he will be joining the regulatory body at a crucial moment as digital minister Oliver Dowden has signalled the beginnings of a planned divergence from the European Union’s data protection regime, post-Brexit, by Boris Johnson’s government.
Dial back the clock five years and prior digital minister, Matt Hancock, was defending the EU’s General Data Protection Regulation (GDPR) as a “decent piece of legislation” — and suggesting to parliament that there would be little room for the UK to diverge in data protection post-Brexit.
But Hancock is now out of government (aptly enough after a data leak showed him breaching social distancing rules by kissing his aide inside a government building), and the government mood music around data has changed key to something far more brash — with sitting digital minister Dowden framing unfettered (i.e. deregulated) data-mining as “a great opportunity” for the post-Brexit UK.
For months now, ministers have been eyeing how to rework the UK's current (legacy) EU-based data protection framework — to, essentially, reduce user rights in favor of soundbites heavy on claims of slashing ‘red tape’ and turbocharging data-driven ‘innovation’. Of course the government isn't saying the quiet part out loud; its press releases talk about using “the power of data to drive growth and create jobs while keeping high data protection standards”. But those standards are being reframed as a fig leaf to enable a new era of data capture and sharing by default.
Dowden has said that the emergency data-sharing which was waved through during the pandemic — when the government used the pressing public health emergency to justify handing NHS data to a raft of tech giants — should be the ‘new normal’ for a post-Brexit UK. So, tl;dr, get used to living in a regulatory crisis.
A special taskforce, which was commissioned by the prime minister to investigate how the UK could reshape its data policies outside the EU, also issued a report this summer — in which it recommended scrapping some elements of the UK’s GDPR altogether — branding the regime “prescriptive and inflexible”; and advocating for changes to “free up data for innovation and in the public interest”, as it put it, including pushing for revisions related to AI and “growth sectors”.
The government is now preparing to reveal how it intends to act on its appetite to ‘reform’ (read: reduce) domestic privacy standards — with proposals for overhauling the data protection regime incoming next month.
Speaking to the Telegraph for a paywalled article published yesterday, Dowden trailed one change that he said he wants to make which appears to target consent requirements — with the minister suggesting the government will remove the legal requirement to gain consent to, for example, track and profile website visitors — all the while framing it as a pro-consumer move; a way to do away with “endless” cookie banners.
Only cookies that pose a ‘high risk’ to privacy would still require consent notices, per the report — whatever that means.
Oliver Dowden, the UK Minister for Digital, Culture, Media and Sport, says that the UK will break away from GDPR, and will no longer require cookie warnings, other than those posing a 'high risk'.https://t.co/2ucnppHrIm pic.twitter.com/RRUdpJumYa
— dan barker (@danbarker) August 25, 2021
“There’s an awful lot of needless bureaucracy and box ticking and actually we should be looking at how we can focus on protecting people’s privacy but in as light a touch way as possible,” the digital minister also told the Telegraph.
The draft of this Great British ‘light touch’ data protection framework will emerge next month, so all the detail is still to be set out. But the overarching point is that the government intends to redefine UK citizens’ privacy rights, using meaningless soundbites — with Dowden touting a plan for “common sense” privacy rules — to cover up the fact that it intends to reduce the UK’s currently world class privacy standards and replace them with worse protections for data.
If you live in the UK, how much privacy and data protection you get will depend upon how much ‘innovation’ ministers want to ‘turbocharge’ today — so, yes, be afraid.
It will then fall to Edwards — once/if approved in post as head of the ICO — to nod any deregulation through in his capacity as the post-Brexit information commissioner.
We can speculate that the government hopes to slip through the devilish detail of how it will torch citizens' privacy rights behind flashy, distracting rhetoric about ‘taking action against Big Tech’. But time will tell.
Data protection experts are already warning of a regulatory stooge.
The Telegraph, meanwhile, suggests Edwards is seen by the government as an ideal candidate to ensure the ICO takes a “more open and transparent and collaborative approach” in its future dealings with business.
In a particularly eyebrow-raising detail, the newspaper goes on to report that the government is exploring the idea of requiring the ICO to carry out “economic impact assessments” — to, in the words of Dowden, ensure that “it understands what the cost is on business” before introducing new guidance or codes of practice.
All too soon, UK citizens may find that — in the ‘sunny post-Brexit uplands’ — they are afforded exactly as much privacy as the market deems acceptable to give them. And that Brexit actually means watching your fundamental rights being traded away.
In a statement responding to Edwards’ nomination, Denham, the outgoing information commissioner, appeared to offer some lightly coded words of warning for government, writing [emphasis ours]: “Data driven innovation stands to bring enormous benefits to the UK economy and to our society, but the digital opportunity before us today will only be realised where people continue to trust their data will be used fairly and transparently, both here in the UK and when shared overseas.”
The lurking iceberg for the government is of course that if it wades in and rips up a carefully balanced, gold-standard privacy regime on a soundbite-centric whim — replacing a pan-European standard with ‘anything goes’ rules of its (or the market's) choosing — it's setting the UK up for a post-Brexit future of domestic data misuse scandals.
You only have to look at the dire parade of data breaches over in the US to glimpse what’s coming down the pipe if data protection standards are allowed to slip. The government publicly bashing the privacy sector for adhering to lax standards it deregulated could soon be the new ‘get popcorn’ moment for UK policy watchers…
UK citizens will surely soon learn of unfair and unethical uses of their data under the ‘light touch’ data protection regime — i.e. when they read about it in the newspaper.
Such an approach will indeed be setting the country on a path where mistrust of digital services becomes the new normal. And that of course will be horrible for digital business over the longer run. But Dowden appears to lack even a surface understanding of Internet basics.
The UK is also of course setting itself on a direct collision course with the EU if it goes ahead and lowers data protection standards.
This is because its current data adequacy deal with the bloc, which allows EU citizens' data to continue flowing freely to the UK, is precariously placed: it was granted only on the basis that the UK was, at the time it was inked, still aligned with the GDPR.
So Dowden’s rush to rip up protections for people’s data presents a clear risk to the “significant safeguards” needed to maintain EU adequacy.
Back in June, when the Commission signed off on the UK’s adequacy deal, it clearly warned that “if anything changes on the UK side, we will intervene”. Moreover, the adequacy deal is also the first with a baked in sunset clause — meaning it will automatically expire in four years.
So even if the Commission avoids taking proactive action over slipping privacy standards in the UK there is a hard deadline — in 2025 — when the EU’s executive will be bound to look again in detail at exactly what Dowden & Co. have wrought. And it probably won’t be pretty.
The longer term UK ‘plan’ (if we can put it that way) appears to be to replace domestic economic reliance on EU data flows — by seeking out other jurisdictions that may be friendly to a privacy-light regime governing what can be done with people’s information.
Hence — also today — DCMS trumpeted an intention to secure what it billed as “new multi-billion pound global data partnerships” — saying it will prioritize striking ‘data adequacy’ “partnerships” with the US, Australia, the Republic of Korea, Singapore, the Dubai International Financial Centre and Colombia.
Future partnerships with India, Brazil, Kenya and Indonesia will also be prioritized, it added — with the government department cheerfully glossing over the fact it’s UK citizens’ own privacy that is being deprioritized here.
“Estimates suggest there is as much as £11 billion worth of trade that goes unrealised around the world due to barriers associated with data transfers,” DCMS writes in an ebullient press release.
As it stands, the EU is of course the UK’s largest trading partner. And statistics from the House of Commons library on the UK’s trade with the EU — which you won’t find cited in the DCMS release — underline quite how tiny this potential Brexit ‘data bonanza’ is, given that UK exports to the EU stood at £294 billion in 2019 (43% of all UK exports).
So even the government’s ‘economic’ case to water down citizens’ privacy rights looks to be puffed up with the same kind of misleadingly vacuous nonsense as ministers’ reframing of a post-Brexit UK as ‘Global Britain’.
Everyone hates cookie banners, sure, but that's a case for strengthening, not weakening, people's privacy — for making non-tracking the default setting online and outlawing manipulative dark patterns, so that Internet users don't constantly have to affirm they want their information protected. Instead, the UK may be poised to get rid of annoying cookie consent ‘friction’ by allowing a free-for-all on people's data.
Just days after Elastic announced the acquisition of build.security, the company is making yet another security acquisition. As part of its second-quarter earnings announcement this afternoon, Elastic disclosed that it is acquiring Vancouver, Canada-based security vendor CMD. Financial terms of the deal are not being publicly disclosed.
CMD‘s technology provides runtime security for cloud infrastructure, helping organizations gain better visibility into processes that are running. The startup was founded in 2016 and has raised $21.6 million in funding to date. The company’s last round was a $15 million Series B that was announced in 2019, led by GV.
Elastic CEO and co-founder Shay Banon told TechCrunch that his company will be welcoming the employees of CMD into his company, but did not disclose precisely how many would be coming over. CMD CEO and co-founder Santosh Krishan and his fellow co-founder Jake King will both be taking executive roles within Elastic.
Both build.security and CMD are set to become part of Elastic’s security organization. The two technologies will be integrated into the Elastic Stack platform that provides visibility into what an organization is running, as well as security insights to help limit risk. Elastic has been steadily growing its security capabilities in recent years, acquiring Endgame Security in 2019 for $234 million.
Banon explained that, as organizations increasingly move to the cloud and make use of Kubernetes, they are looking for more layers of introspection and protection for Linux. That’s where CMD’s technology comes in. CMD’s security service is built with an open source technology known as eBPF. With eBPF, it’s possible to hook into a Linux operating system for visibility and security control. Work is currently ongoing to extend eBPF for Windows workloads, as well.
CMD isn't the only startup that has been building on eBPF. Isovalent, which announced a $29 million Series A round led by Andreessen Horowitz and Google in November 2020, is also active in the space. The Linux Foundation also recently announced the creation of an eBPF Foundation, with the participation of Facebook, Google, Microsoft, Netflix and Isovalent.
Fundamentally, Banon sees a clear alignment between what CMD was building and what Elastic aims to deliver for its users.
“We have a saying at Elastic — while you observe, why not protect?” Banon said. “With CMD, if you look at everything that they do, they also have this deep passion and belief that it starts with observability.”
It will take time for Elastic to integrate the CMD technology into the Elastic Stack, though it won’t be too long. Banon noted that one of the benefits of acquiring a startup is that it’s often easier to integrate than a larger, more established vendor.
“With all of these acquisitions that we make we spend time integrating them into a single product line,” Banon said.
That means Elastic needs to take the technology that other companies have built and fold it into its stack and that sometimes can take time, Banon explained. He noted that it took two years to integrate the Endgame technology after that acquisition.
“Typically that lends itself to us joining forces with smaller companies with really innovative technology that can be more easily taken and integrated into our stack,” Banon said.