The average corporate security organization spends $18 million annually but is largely ineffective at preventing breaches, IP theft and data loss. Why? The fragmented approach we’re currently using in the security operations center (SOC) does not work.
Here’s a quick refresher on security operations and how we got where we are today: A decade ago, we protected our applications and websites by monitoring event logs — digital records of every activity that occurred in our cyber environment, ranging from logins to emails to configuration changes. Logs were audited, flags were raised, suspicious activities were investigated, and data was stored for compliance purposes.
The security-driven data stored in a data lake can be in its native format, structured or unstructured, and therefore dimensional, dynamic and heterogeneous, which gives data lakes their distinction and advantage over data warehouses.
As malicious actors and adversaries became more active, and their tactics, techniques and procedures (or TTPs, in security parlance) grew more sophisticated, simple logging evolved into an approach called “security information and event management” (SIEM), which involves using software to provide real-time analysis of security alerts generated by applications and network hardware. SIEM software uses rule-driven correlation and analytics to turn raw event data into potentially valuable intelligence.
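The rule-driven correlation at the heart of a SIEM can be illustrated with a toy detection rule. This is purely a sketch: the thresholds, event field names and alert format are invented for illustration, and real engines are vastly richer.

```python
from collections import defaultdict, deque

# Toy correlation rule (illustrative only, not any vendor's engine):
# flag an IP that produces 5+ failed logins within a 60-second window.
WINDOW_SECONDS = 60
THRESHOLD = 5

def correlate(events):
    """events: iterable of dicts like
    {"ts": 100, "ip": "10.0.0.1", "action": "login_failed"}."""
    failures = defaultdict(deque)  # ip -> timestamps of recent failures
    alerts = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["action"] != "login_failed":
            continue
        q = failures[ev["ip"]]
        q.append(ev["ts"])
        # drop failures that fell out of the sliding window
        while q and ev["ts"] - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= THRESHOLD:
            alerts.append({"ip": ev["ip"], "count": len(q), "ts": ev["ts"]})
            q.clear()  # avoid duplicate alerts for the same burst
    return alerts
```

The value, as the article notes, is turning a stream of raw events into a short list of things worth a human's attention.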
Although it was no magic bullet (it’s challenging to implement and make everything work properly), the ability to find the so-called “needle in the haystack” and identify attacks in progress was a huge step forward.
Today, SIEMs still exist, and the market is largely led by Splunk and IBM QRadar. Of course, the technology has advanced significantly because new use cases emerge constantly. Many companies have finally moved into cloud-native deployments and are leveraging machine learning and sophisticated behavioral analytics. However, new enterprise SIEM deployments are fewer, costs are greater, and — most importantly — the overall needs of the CISO and the hard-working team in the SOC have changed.
First, data has exploded and SIEM is too narrowly focused. The mere collection of security events is no longer sufficient because the aperture on this dataset is too narrow. While there is likely a massive amount of event data to capture and process, you are missing out on vast amounts of additional information, such as OSINT (open-source intelligence), consumable external threat feeds, malware and IP reputation databases, and reports of dark web activity. There are endless sources of intelligence, far too many for the dated architecture of a SIEM.
Additionally, costs have exploded alongside data. Data explosion + hardware + license costs = spiraling total cost of ownership. With so much infrastructure, both physical and virtual, the amount of information being captured has grown enormously. Machine-generated data has grown 50x, while the average security budget grows just 14% year over year.
The cost to store all of this information makes the SIEM cost-prohibitive. The average cost of a SIEM has skyrocketed to close to $1 million annually, and that covers only license and hardware costs. The economics force SOC teams to capture and/or retain less information in an attempt to keep costs in check, which further reduces the SIEM’s effectiveness. I recently spoke with a SOC team that wanted to query large datasets for evidence of fraud, but doing so in Splunk was cost-prohibitive and a slow, arduous process, leading the team to explore alternatives.
The shortcomings of the SIEM approach today are dangerous and terrifying. A recent Ponemon Institute survey of almost 600 IT security leaders found that, despite spending an average of $18.4 million annually and using an average of 47 products, a whopping 53% of IT security leaders “did not know if their products were even working.” It’s clearly time for change.
Ransomware attacks on the JBS beef plant, and the Colonial Pipeline before it, have sparked a now familiar set of reactions. There are promises of retaliation against the groups responsible, the prospect of company executives being brought in front of Congress in the coming months, and even a proposed executive order on cybersecurity that could take months to fully implement.
But once again, amid this flurry of activity, we must ask or answer a fundamental question about the state of our cybersecurity defense: Why does this keep happening?
I have a theory on why. In software development, there is a concept called “technical debt.” It describes the costs companies pay when they choose to build software the easy (or fast) way instead of the right way, cobbling together temporary solutions to satisfy a short-term need. Over time, as teams struggle to maintain a patchwork of poorly architected applications, tech debt accrues in the form of lost productivity or poor customer experience.
Our nation’s cybersecurity defenses are laboring under the burden of a similar debt. Only the scale is far greater, the stakes are higher and the interest is compounding. The true cost of this “cybersecurity debt” is difficult to quantify. Though we still do not know the exact cause of either attack, we do know beef prices will be significantly impacted and gas prices jumped 8 cents on news of the Colonial Pipeline attack, costing consumers and businesses billions. The damage done to public trust is incalculable.
How did we get here? The public and private sectors are spending more than $4 trillion a year in the digital arms race that is our modern economy. The goal of these investments is speed and innovation. But in pursuit of these ambitions, organizations of all sizes have assembled complex, uncoordinated systems — running thousands of applications across multiple private and public clouds, drawing on data from hundreds of locations and devices.
Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt.
We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken.
First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.
There is another way: Open, hybrid cloud architectures can connect and standardize security across any kind of infrastructure, from private data centers to public clouds, to the edges of the network. This unifies the security workflow and increases the visibility of threats across the entire network (including the third- and fourth-party networks where data flows) and orchestrates the response. It essentially eliminates weak links without having to move data or applications — a design point that should be embraced across the public and private sectors.
The second step is to close the remaining loopholes in the data security supply chain. President Biden’s executive order requires federal agencies to encrypt data that is being stored or transmitted. We have an opportunity to take that a step further and also address data that is in use. As more organizations outsource the storage and processing of their data to cloud providers, expecting real-time data analytics in return, this represents an area of vulnerability.
Many believe this vulnerability is simply the price we pay for outsourcing digital infrastructure to another company. But this is not true. Cloud providers can, and do, protect their customers’ data with the same ferocity as they protect their own. They do not need access to the data they store on their servers. Ever.
Ensuring this requires confidential computing, which encrypts data at rest, in transit and in use. Confidential computing makes it technically impossible for anyone without the encryption key to access the data, not even your cloud provider. At IBM, for example, our customers run workloads in the IBM Cloud with full privacy and control. They are the only ones who hold the key. We could not access their data even if compelled by a court order or ransom request. It is simply not an option.
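The key-ownership principle behind this is easy to demonstrate in miniature. Real confidential computing relies on hardware enclaves and standard ciphers, so the toy one-time pad below is only an illustration of the core idea: the provider stores ciphertext, the customer keeps the key, and without the key the stored bytes are unreadable.

```python
import secrets

# Toy illustration of "only the customer holds the key" (a one-time pad,
# chosen because it needs nothing beyond the standard library; it is NOT
# how confidential computing is actually implemented).

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    key = secrets.token_bytes(len(plaintext))  # key stays with the customer
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# The provider stores only `ciphertext`; the customer keeps `key`.
ciphertext, key = encrypt(b"customer record")
```

Without `key`, the provider (or anyone who subpoenas it) holds bytes it cannot decrypt.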
Paying down the principal on any kind of debt can be daunting, as anyone with a mortgage or student loan can attest. But this is not a low-interest loan. As the JBS and Colonial Pipeline attacks clearly demonstrate, the cost of not addressing our cybersecurity debt spans far beyond monetary damages. Our food and fuel supplies are at risk, and entire economies can be disrupted.
I believe that with the right measures — strong public and private collaboration — we have an opportunity to construct a future that brings forward the combined power of security and technological advancement built on trust.
Facebook is facing a fresh pair of antitrust probes in Europe.
The UK’s Competition and Markets Authority (CMA) and the EU’s Competition Commission both announced formal investigations into the social media giant’s operations today — with what’s likely to have been co-ordinated timing.
The competition regulators will scrutinize how Facebook uses data from advertising customers and users of its single sign-on tool — specifically looking at whether it uses this data as an unfair lever against competitors in markets such as classified ads.
The pair also said they will seek to work closely together as their independent investigations progress.
With the UK outside the European trading bloc (post-Brexit), the national competition watchdog has a freer rein to pursue investigations that may be similar to or overlap with antitrust probes the EU is also undertaking.
And the two Facebook investigations do appear similar on the surface — with both broadly focused on how Facebook uses advertising data. (Though outcomes could of course differ.)
The danger for Facebook, here, is that a greater degree of scrutiny will be applied to its business as a result of dual regulatory action — with the opportunity for joint working and cross-referencing of its responses (not to mention a little investigative competition between the UK and the EU’s agencies).
The CMA said it’s looking at whether Facebook has gained an unfair advantage over competitors providing services for online classified ads and online dating through how it gathers and uses certain data.
Facebook plays in both spaces of course, via Facebook Marketplace and Facebook Dating respectively.
In a statement on its action, CMA CEO, Andrea Coscelli, said: “We intend to thoroughly investigate Facebook’s use of data to assess whether its business practices are giving it an unfair advantage in the online dating and classified ad sectors. Any such advantage can make it harder for competing firms to succeed, including new and smaller businesses, and may reduce customer choice.”
The European Commission’s investigation will — similarly — focus on whether Facebook violated the EU’s competition rules by using advertising data gathered from advertisers in order to compete with them in markets where it is active.
It only cites classified ads, though, as its example of the neighbouring market of particular concern for its probe.
The EU’s probe has another element, though, as it said it’s also looking at whether Facebook ties its online classified ads service to its social network in breach of the bloc’s competition rules.
In a separate (national) action, Germany’s competition authority opened a similar probe into Facebook tying Oculus to use of a Facebook account at the end of last year. So Facebook now has multiple antitrust probes on its plate in Europe, adding to its woes from the massive multistate antitrust lawsuit filed against it on home turf, also back in December 2020.
“When advertising their services on Facebook, companies, which also compete directly with Facebook, may provide it commercially valuable data. Facebook might then use this data in order to compete against the companies which provided it,” the Commission noted in a press release.
“This applies in particular to online classified ads providers, the platforms on which many European consumers buy and sell products. Online classified ads providers advertise their services on Facebook’s social network. At the same time, they compete with Facebook’s own online classified ads service, ‘Facebook Marketplace’.”
The Commission added that a preliminary investigation it already undertook has raised concerns Facebook is distorting the market for online classified ads services. It will now take an in-depth look in order to make a full judgement on whether the social media behemoth is breaking EU competition rules.
Commenting in a statement, EVP Margrethe Vestager, who also heads up competition policy for the bloc, added: “Facebook is used by almost 3 billion people on a monthly basis and almost 7 million firms advertise on Facebook in total. Facebook collects vast troves of data on the activities of users of its social network and beyond, enabling it to target specific customer groups. We will look in detail at whether this data gives Facebook an undue competitive advantage in particular on the online classified ads sector, where people buy and sell goods every day, and where Facebook also competes with companies from which it collects data. In today’s digital economy, data should not be used in ways that distort competition.”
Reached for comment on the latest European antitrust probes, Facebook sent us this statement:
“We are always developing new and better services to meet evolving demand from people who use Facebook. Marketplace and Dating offer people more choices and both products operate in a highly competitive environment with many large incumbents. We will continue to cooperate fully with the investigations to demonstrate that they are without merit.”
Until now, Facebook has been a bit of a blind spot for the Commission’s competition authority — with multiple investigations and enforcements chalked up by the bloc against other tech giants, most notably Google and Amazon.
But Vestager’s Facebook ‘dry patch’ has now formally come to an end.
Cookie pop-ups getting you down? Complaints that the web is ‘unusable’ in Europe because of frustrating and confusing ‘data choices’ notifications that get in the way of what you’re trying to do online certainly aren’t hard to find.
What is hard to find is the ‘reject all’ button that lets you opt out of non-essential cookies which power unpopular stuff like creepy ads. Yet the law says there should be an opt-out clearly offered. So people who complain that EU ‘regulatory bureaucracy’ is the problem are taking aim at the wrong target.
EU law on cookie consent is clear: Web users should be offered a simple, free choice — to accept or reject.
The problem is that most websites simply aren’t compliant. They choose to make a mockery of the law by offering a skewed choice: Typically a super simple opt-in (to hand them all your data) vs a highly confusing, frustrating, tedious opt-out (and sometimes even no reject option at all).
Make no mistake: This is ignoring the law by design. Sites are choosing to try to wear people down so they can keep grabbing their data by only offering the most cynically asymmetrical ‘choice’ possible.
However, since that’s not how cookie consent is supposed to work under EU law, sites that do this are opening themselves up to large fines under the General Data Protection Regulation (GDPR) and/or the ePrivacy Directive for flouting the rules.
See, for example, these two whopping fines handed to Google and Amazon in France at the back end of last year for dropping tracking cookies without consent…
While those fines were certainly head-turning, we haven’t generally seen much EU enforcement on cookie consent — yet.
This is because data protection agencies have mostly taken a softly-softly approach to bringing sites into compliance. But there are signs enforcement is going to get a lot tougher. For one thing, DPAs have published detailed guidance on what proper cookie compliance looks like — so there are zero excuses for getting it wrong.
Some agencies had also been offering compliance grace periods to allow companies time to make the necessary changes to their cookie consent flows. But it’s now a full three years since the EU’s flagship data protection regime (GDPR) came into application. So, again, there’s no valid excuse to still have a horribly cynical cookie banner. It just means a site is trying its luck by breaking the law.
There is another reason to expect cookie consent enforcement to dial up soon, too: European privacy group noyb is today kicking off a major campaign to clean up the trashfire of non-compliance — with a plan to file up to 10,000 complaints against offenders over the course of this year. And as part of this action it’s offering freebie guidance for offenders to come into compliance.
Today it’s announcing the first batch of 560 complaints already filed against sites, large and small, located all over the EU (33 countries are covered). noyb said the complaints target companies that range from large players like Google and Twitter to local pages “that have relevant visitor numbers”.
“A whole industry of consultants and designers develop crazy click labyrinths to ensure imaginary consent rates. Frustrating people into clicking ‘okay’ is a clear violation of the GDPR’s principles. Under the law, companies must facilitate users to express their choice and design systems fairly. Companies openly admit that only 3% of all users actually want to accept cookies, but more than 90% can be nudged into clicking the ‘agree’ button,” said noyb chair and long-time EU privacy campaigner, Max Schrems, in a statement.
“Instead of giving a simple yes or no option, companies use every trick in the book to manipulate users. We have identified more than fifteen common abuses. The most common issue is that there is simply no ‘reject’ button on the initial page,” he added. “We focus on popular pages in Europe. We estimate that this project can easily reach 10,000 complaints. As we are funded by donations, we provide companies a free and easy settlement option — contrary to law firms. We hope most complaints will quickly be settled and we can soon see banners become more and more privacy friendly.”
To scale its action, noyb developed a tool that automatically parses cookie consent flows to identify compliance problems (such as no opt-out being offered at the top layer, confusing button coloring, or bogus ‘legitimate interest’ opt-ins, to name a few of the many chronicled offences) and automatically creates a draft report, which can be emailed to the offender after it’s been reviewed by a member of the not-for-profit’s legal staff.
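The simplest of those checks — is there a reject button on the banner’s first layer at all? — can be sketched in a few lines. This is purely illustrative of the idea and is not noyb’s actual code; the word list and finding text are invented.

```python
from html.parser import HTMLParser

# Hypothetical set of labels we'd accept as a genuine reject option.
REJECT_WORDS = {"reject all", "reject", "decline", "refuse"}

class BannerAudit(HTMLParser):
    """Toy check in the spirit of noyb's tool: does the first layer of a
    consent banner contain a button that plainly rejects cookies?"""
    def __init__(self):
        super().__init__()
        self._in_button = False
        self.has_reject = False

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self._in_button = True

    def handle_endtag(self, tag):
        if tag == "button":
            self._in_button = False

    def handle_data(self, data):
        if self._in_button and data.strip().lower() in REJECT_WORDS:
            self.has_reject = True

def audit(banner_html: str) -> list:
    parser = BannerAudit()
    parser.feed(banner_html)
    findings = []
    if not parser.has_reject:
        findings.append("no reject option on the first layer")
    return findings
```

A real crawler must also render JavaScript, inspect CSS for deceptive contrast, and walk sub-menus, which is why noyb pairs the automation with human legal review.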
It’s an innovative, scalable approach to tackling systematically cynical cookie manipulation in a way that could really move the needle and clean up the trashfire of horrible cookie pop-ups.
noyb is even giving offenders a warning first — and a full month to clean up their ways — before it will file an official complaint with their relevant DPA (which could lead to an eye-watering fine).
Its first batch of complaints are focused on the OneTrust consent management platform (CMP), one of the most popular template tools used in the region — and which European privacy researchers have previously shown (cynically) provides its client base with ample options to set non-compliant choices like pre-checked boxes… Talk about taking the biscuit.
A noyb spokeswoman said it’s started with OneTrust because its tool is popular but confirmed the group will expand the action to cover other CMPs in the future.
The first batch of noyb’s cookie consent complaints reveals the rotten depth of dark patterns being deployed — with 81% of the 500+ pages not offering a reject option on the initial page (meaning users have to dig into sub-menus to try to find it); and 73% using “deceptive colors and contrasts” to try to trick users into clicking the ‘accept’ option.
noyb’s assessment of this batch also found that a full 90% did not provide a way to easily withdraw consent as the law requires.
Cookie compliance problems found in the first batch of sites facing complaints (Image credit: noyb)
It’s a snapshot of truly massive enforcement failure. But dodgy cookie consents are now operating on borrowed time.
Asked if it was able to work out how prevalent cookie abuse might be across the EU based on the sites it crawled, noyb’s spokeswoman said it was difficult to determine, owing to technical difficulties encountered through its process, but she said an initial intake of 5,000 websites was whittled down to 3,600 sites to focus on. And of those it was able to determine that 3,300 violated the GDPR.
That still left 300 — as either having technical issues or no violations — but, again, the vast majority (90%) were found to have violations. And with so much rule-breaking going on it really does require a systematic approach to fixing the ‘bogus consent’ problem — so noyb’s use of automation tech is very fitting.
More innovation is also on the way from the not-for-profit — which told us it’s working on an automated system that will allow Europeans to “signal their privacy choices in the background, without annoying cookie banners”.
At the time of writing it couldn’t provide us with more details on how that will work (presumably it will be some kind of browser plug-in) but said it will be publishing more details “in the next weeks” — so hopefully we’ll learn more soon.
A browser plug-in that can automatically detect and select the ‘reject all’ button (even if only from a subset of the most prevalent CMPs) sounds like it could revive the ‘do not track’ dream. At the very least, it would be a powerful weapon to fight back against the scourge of dark patterns in cookie banners and kick non-compliant cookies to digital dust.
When it comes to meeting compliance standards, many startups are dominating the alphabet. From GDPR and CCPA to SOC 2, ISO27001, PCI DSS and HIPAA, companies have been charging toward meeting the compliance standards required to operate their businesses.
Today, every healthcare founder knows their product must meet HIPAA compliance, and any company working in the consumer space would be well aware of GDPR, for example.
But a mistake many high-growth companies make is that they treat compliance as a catchall phrase that includes security. Thinking this way could be an expensive and painful error. In reality, compliance means that a company meets a minimum set of controls. Security, on the other hand, encompasses a broad range of best practices and software that help address risks associated with the company’s operations.
It makes sense that startups want to tackle compliance first. Being compliant plays a big role in any company’s geographical expansion to regulated markets and in its penetration to new industries like finance or healthcare. So in many ways, achieving compliance is a part of a startup’s go-to-market kit. And indeed, enterprise buyers expect startups to check the compliance box before signing on as their customer, so startups are rightfully aligning around their buyers’ expectations.
With all of this in mind, it’s not surprising that we’ve witnessed a trend where startups achieve compliance from the very early days and often prioritize this motion over developing an exciting feature or launching a new campaign to bring in leads, for instance.
Compliance is an important milestone for a young company and one that moves the cybersecurity industry forward. It forces startup founders to put security hats on and think about protecting their company, as well as their customers. At the same time, compliance provides comfort to the enterprise buyer’s legal and security teams when engaging with emerging vendors. So why is compliance alone not enough?
First, compliance doesn’t mean security (although it is a step in the right direction). More often than not, young companies are compliant while remaining vulnerable in their security posture.
What does this look like? For example, a software company may have met SOC 2 standards that require all employees to install endpoint protection on their devices, but it may have no way to ensure employees actually activate and update the software. Furthermore, the company may lack a centrally managed tool for monitoring and reporting on whether any endpoint breaches have occurred, where, to whom and why. And, finally, the company may not have the expertise to quickly respond to and fix a data breach or attack.
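The gap between “the policy requires an agent” and “the agent is actually running and current” is exactly what central monitoring closes. A minimal sketch of such a check, assuming a hypothetical inventory format (real EDR/MDM tools expose similar data through their APIs):

```python
from datetime import datetime, timedelta

# Hypothetical policy: signatures older than a week count as stale.
MAX_SIGNATURE_AGE = timedelta(days=7)

def noncompliant_endpoints(inventory, now):
    """Flag devices where endpoint protection is nominally deployed but not
    actually active and up to date -- the gap a compliance checklist misses.
    `inventory` is a list of dicts from a (hypothetical) agent-report feed."""
    flagged = []
    for device in inventory:
        reasons = []
        if not device.get("agent_installed"):
            reasons.append("agent not installed")
        elif not device.get("agent_running"):
            reasons.append("agent installed but not running")
        last_update = device.get("last_signature_update")
        if last_update is None or now - last_update > MAX_SIGNATURE_AGE:
            reasons.append("signatures stale or never updated")
        if reasons:
            flagged.append((device["hostname"], reasons))
    return flagged
```

Run daily against the fleet, a report like this turns a paper control into an enforced one.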
Therefore, although compliance standards are met, several security flaws remain. The end result is that startups can suffer security breaches that end up costing them a bundle. For companies with under 500 employees, the average security breach costs an estimated $7.7 million, according to a study by IBM, not to mention the brand damage and lost trust from existing and potential customers.
Second, an unforeseen danger for startups is that compliance can create a false sense of safety. Receiving a compliance certificate from objective auditors and renowned organizations could give the impression that the security front is covered.
Once startups start gaining traction and signing upmarket customers, that sense of security grows: The thinking goes that if security-minded Fortune 500 customers signed on, being compliant must be enough for now, and the startup is probably secure by association. When charging after enterprise deals, it’s the buyer’s expectations that push startups to achieve SOC 2 or ISO27001 compliance to satisfy the enterprise security threshold. But in many cases, enterprise buyers don’t ask sophisticated questions or go deeper into understanding the risk a vendor brings, so startups are never really called to task on their security systems.
Third, compliance only deals with a defined set of knowns. It doesn’t cover anything that is unknown and new since the last version of the regulatory requirements was written.
For example, APIs are growing in use, but regulations and compliance standards have yet to catch up with the trend. An e-commerce company must be PCI DSS compliant to accept credit card payments, but it may also leverage multiple APIs that have weak authentication or business logic flaws. When the PCI standard was written, APIs weren’t common, so they aren’t covered by the regulations, yet most fintech companies now rely heavily on them. So a merchant may be PCI DSS compliant but use insecure APIs, potentially exposing customers to credit card breaches.
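To make the “weak authentication” point concrete, here is a minimal sketch of the kind of API-level control a compliance standard may not mandate: verifying that each request body is signed with a per-client secret. The client IDs, secrets and signing scheme here are invented for illustration, not a specific standard.

```python
import hashlib
import hmac

# Hypothetical per-client shared secrets (in production these would live
# in a secrets manager, never in source code).
SECRETS = {"client-42": b"s3cr3t-key"}

def sign(client_id: str, body: bytes) -> str:
    """Client side: compute an HMAC-SHA256 signature over the request body."""
    return hmac.new(SECRETS[client_id], body, hashlib.sha256).hexdigest()

def verify(client_id: str, body: bytes, signature: str) -> bool:
    """Server side: recompute and compare the signature.
    compare_digest is constant-time, resisting timing attacks."""
    expected = hmac.new(SECRETS.get(client_id, b""), body,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

An API that accepts any request bearing a static key in a header, by contrast, lets a leaked key or tampered payload through unchallenged — and no PCI checkbox catches the difference.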
Startups are not to blame for the mix-up between compliance and security. It is difficult for any company to be both compliant and secure, and for startups with limited budget, time or security know-how, it’s especially challenging. In a perfect world, startups would be both compliant and secure from the get-go, but it’s not realistic to expect early-stage companies to spend millions of dollars on bulletproofing their security infrastructure. There are, however, some things startups can do to become more secure.
One of the best ways startups can begin tackling security is with an early security hire. This team member might seem like a “nice to have” that you could put off until the company reaches a major headcount or revenue milestone, but I would argue that a head of security is a key early hire because this person’s job will be to focus entirely on analyzing threats and identifying, deploying and monitoring security practices. Additionally, startups would benefit from ensuring their technical teams are security-savvy and keep security top of mind when designing products and offerings.
Another tactic startups can take to bolster their security is to deploy the right tools. The good news is that startups can do so without breaking the bank; there are many security companies offering open-source, free or relatively affordable versions of their solutions for emerging companies to use, including Snyk, Auth0, HashiCorp, CrowdStrike and Cloudflare.
A full security rollout would include software and best practices for identity and access management, infrastructure, application development, resiliency and governance, but most startups are unlikely to have the time and budget necessary to deploy all pillars of a robust security infrastructure.
Luckily, there are resources like Security 4 Startups that offer a free, open-source framework for startups to figure out what to do first. The guide helps founders identify and solve the most common and important security challenges at every stage, providing a list of entry-level solutions as a solid start to building a long-term security program. In addition, compliance automation tools can help with continuous monitoring to ensure these controls stay in place.
For startups, compliance is critical for establishing trust with partners and customers. But if this trust is eroded after a security incident, it will be nearly impossible to regain it. Being secure, not only compliant, will help startups take trust to a whole other level and not only boost market momentum, but also make sure their products are here to stay.
So instead of equating compliance with security, I suggest expanding the equation to consider that compliance and security equal trust. And trust equals business success and longevity.
TikTok has a month to respond to concerns raised by European consumer protection agencies earlier this year, EU lawmakers said today.
The Commission has launched what it described as “a formal dialogue” with the video sharing platform over its commercial practices and policy.
Areas of specific concern include hidden marketing, aggressive advertising techniques targeted at children, and certain contractual terms in TikTok’s policies that could be considered misleading and confusing for consumers, per the Commission.
Commenting in a statement, justice commissioner Didier Reynders added: “The current pandemic has further accelerated digitalisation. This has brought new opportunities but it has also created new risks, in particular for vulnerable consumers. In the European Union, it is prohibited to target children and minors with disguised advertising such as banners in videos. The dialogue we are launching today should support TikTok in complying with EU rules to protect consumers.”
The background to this is that back in February the European Consumer Organisation (BEUC) sent the Commission a report calling out a number of TikTok’s policies and practices — including what it said were unfair terms and copyright practices. It also flagged the risk of children being exposed to inappropriate content on the platform, and accused TikTok of misleading data processing and privacy practices.
Complaints were filed around the same time by consumer organisations in 15 EU countries — urging those national authorities to investigate the social media giant’s conduct.
The multi-pronged EU action means TikTok not only has the Commission scrutinizing the detail of its small print, but also faces questions from a network of national consumer protection authorities — co-led by the Swedish Consumer Agency and the Irish Competition and Consumer Protection Commission (which handles privacy issues related to the platform).
Nonetheless, the BEUC queried why the Commission hasn’t yet launched a formal enforcement procedure.
“We hope that the authorities will stick to their guns in this ‘dialogue’ which we understand is not yet a formal launch of an enforcement procedure. It must lead to good results for consumers, tackling all the points that BEUC raised. BEUC also hopes to be consulted before an agreement is reached,” a spokesperson for the organization told us.
Also reached for comment, TikTok sent us this statement on the Commission’s action, attributed to its director of public policy, Caroline Greer:
“As part of our ongoing engagement with regulators and other external stakeholders over issues such as consumer protection and transparency, we are engaging in a dialogue with the Irish Consumer Protection Commission and the Swedish Consumer Agency and look forward to discussing the measures we’ve already introduced. In addition, we have taken a number of steps to protect our younger users, including making all under-16 accounts private-by-default, and disabling their access to direct messaging. Further, users under 18 cannot buy, send or receive virtual gifts, and we have strict policies prohibiting advertising directly appealing to those under the age of digital consent.”
The company told us it uses age verification for personalized ads — saying users must have verified that they are 13+ to receive these ads; as well as being over the age of digital consent in their respective EU country; and also having consented to receive targeted ads.
However TikTok’s age verification technology has been criticized as weak before now — and recent emergency child-safety-focused enforcement action by the Italian national data protection agency has led to TikTok having to pledge to strengthen its age verification processes in the country.
The Italian enforcement action also resulted in TikTok removing more than 500,000 accounts suspected of belonging to users aged under 13 earlier this month — raising further questions about whether it can really claim that under-13s aren’t routinely exposed to targeted ads on its platform.
In further background remarks it sent us, TikTok claimed it has clear labelling of sponsored content. But it also noted it’s made some recent changes — such as switching the label it applies on video advertising from ‘sponsored’ to ‘ad’ to make it clearer.
It also said it’s working on a toggle that lets users prominently disclose when their content contains advertising, with the aim of making it clearer to viewers when they may be exposed to advertising from other users.
TikTok said the tool is currently in beta testing in Europe but it said it expects to move to general availability this summer and will also amend its ToS to require users to use this toggle whenever their content contains advertising. (But without adequate enforcement that may just end up as another overlooked and easily abused setting.)
The company recently announced a transparency center in Europe in a move that looks intended to counter some of the concerns being raised about its business in the region, as well as to prepare it for the increased oversight that’s coming down the pipe for all digital platforms operating in the EU — as the bloc works to update its digital rulebook.
Boston-based Filtered.ai has raised a $7 million round to accelerate its hiring cadence and build out the go-to-market model for its engineering- and developer-focused hiring service, it recently announced.
TechCrunch caught up with the company to discuss not only why it decided to leave bootstrapping behind, but also to dig into how its service could widen the market for some technical roles.
The startup was born back in 2016, when founder and CEO Paul Bilodeau started working on it as an internal project while employed at a consultancy. Filtered later split from the consulting group in 2019, signing a term sheet to raise capital in March of 2020. Then COVID-19 arrived, and things got a bit turbulent.
But before we get lost in the money side of things, let’s talk about what the company does.
Filtered’s product is interesting as it could help shake up a hiring system for technical roles for startups that is rife with bias and wasted time. If you are friends with any developers, for example, or data scientists, you are aware of how not-good their hiring process can be.
To pick two issues: Resumes are often a pretty poor indicator of talent, and on-site whiteboard sessions are super unpopular. Filtered is taking on both by providing skills-based take-home tests with AI aboard to help detect fraud. The hiring company can play back those sessions to see how candidates approached problems. Filtered also allows companies to ask candidates open-ended interview questions via video, removing the need for formulaic phone screens that are only good for providing full employment to junior HR staff.
Filtered claims that its system can get companies to the point of making offers more quickly.
That’s all well and good, but what TechCrunch was most curious about was what the startup’s service might manage when it comes to making hiring more equitable. If it’s more skills-focused than resume-centric, does that shake up who gets hired? It does, the company thinks. Once resumes lose some of their luster, and candidates are vetted on skills over keyword optimization in their applications, “diversity just happens,” Bilodeau explained.
Let’s get back to the money. The timing here was unfortunate: Filtered’s anticipated venture capital round landed right as the COVID-19 pandemic set in.
So, Bilodeau told TechCrunch in an interview that his startup effectively raised capital on a drip basis throughout 2020, until it finally closed its round in the fourth quarter of the year. That timing was somewhat fortuitous for its investors — Silicon Valley Data Capital and the AI Fund — as Filtered’s CEO said that quarter was the company’s best in its history.
From bootstrapping to taking on capital, what changed at Filtered that led it to decide to raise external funding? Per Bilodeau, he didn’t want to raise money. And he said that crowing about fundraising news is somewhat nonsensical, likening it to sharing on LinkedIn that he took out a mortgage on a house.
But as Filtered wanted to hire proactively instead of when it closed a new deal, picking up new funds made sense. The startup also wanted to work more on its marketing efforts, shake up its pricing and move toward a land-and-expand model from an enterprise sales focus. More money would make all of that a bit easier, so it took on capital.
Looking ahead, we’re hoping that Filtered can somehow quantify the impact it has on hiring diverse folks for technical roles. If it’s material, that could be even more exciting than rapid revenue growth.
Europe’s lead data protection regulator has opened two investigations into EU institutions’ use of cloud services from U.S. giants Amazon and Microsoft, under the so-called Cloud II contracts that European bodies, institutions and agencies inked earlier with AWS and Microsoft.
A separate investigation has also been opened into the European Commission’s use of Microsoft Office 365 to assess compliance with earlier recommendations, the European Data Protection Supervisor (EDPS) said today.
Wojciech Wiewiórowski is probing the EU’s use of U.S. cloud services as part of a wider compliance strategy announced last October following a landmark ruling by the Court of Justice (CJEU) — aka, Schrems II — which struck down the EU-US Privacy Shield data transfer agreement and cast doubt upon the viability of alternative data transfer mechanisms in cases where EU users’ personal data is flowing to third countries where it may be at risk from mass surveillance regimes.
In October, the EU’s chief privacy regulator asked the bloc’s institutions to report on their transfers of personal data to non-EU countries. This analysis confirmed that data is flowing to third countries, the EDPS said today. And that it’s flowing to the U.S. in particular — on account of EU bodies’ reliance on large cloud service providers (many of which are U.S.-based).
That’s hardly a surprise. But the next step could be very interesting as the EDPS wants to determine whether those historical contracts (which were signed before the Schrems II ruling) align with the CJEU judgement or not.
Indeed, the EDPS warned today that they may not — which could thus require EU bodies to find alternative cloud service providers in the future (most likely ones located within the EU, to avoid any legal uncertainty). So this investigation could be the start of a regulator-induced migration in the EU away from U.S. cloud giants.
Commenting in a statement, Wiewiórowski said: “Following the outcome of the reporting exercise by the EU institutions and bodies, we identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations. I am aware that the ‘Cloud II contracts’ were signed in early 2020 before the ‘Schrems II’ judgement and that both Amazon and Microsoft have announced new measures with the aim to align themselves with the judgement. Nevertheless, these announced measures may not be sufficient to ensure full compliance with EU data protection law and hence the need to investigate this properly.”
Amazon and Microsoft have been contacted with questions regarding any special measures they have applied to these Cloud II contracts with EU bodies.
The EDPS said it wants EU institutions to lead by example. And that looks important given how, despite a public warning from the European Data Protection Board (EDPB) last year — saying there would be no regulatory grace period for implementing the implications of the Schrems II judgement — there haven’t been any major data transfer fireworks yet.
The most likely reason for that is a fair amount of head-in-the-sand reaction and/or superficial tweaks made to contracts in the hopes of meeting the legal bar (but which haven’t yet been tested by regulatory scrutiny).
Final guidance from the EDPB is also still pending, although the Board put out detailed advice last fall.
The CJEU ruling made it plain that EU law in this area cannot simply be ignored. So as the bloc’s data regulators start scrutinizing contracts that are taking data out of the EU, some of these arrangements are, inevitably, going to be found wanting — and their associated data flows ordered to stop.
To wit: A long-running complaint against Facebook’s EU-US data transfers — filed by the eponymous Max Schrems, a long-time EU privacy campaigner and lawyer, all the way back in 2013 — is slowly winding toward just such a possibility.
Last fall, following the Schrems II ruling, the Irish regulator gave Facebook a preliminary order to stop moving Europeans’ data over the pond. Facebook sought to challenge that in the Irish courts but lost its attempt to block the proceeding earlier this month. So it could now face a suspension order within months.
How Facebook might respond is anyone’s guess but Schrems suggested to TechCrunch last summer that the company will ultimately need to federate its service, storing EU users’ data inside the EU.
The Schrems II ruling does generally look like it will be good news for EU-based cloud service providers which can position themselves to solve the legal uncertainty issue (even if they aren’t as competitively priced and/or scalable as the dominant US-based cloud giants).
Fixing U.S. surveillance law, meanwhile — so that it gets independent oversight and accessible redress mechanisms for non-citizens in order to no longer be considered a threat to EU people’s data, as the CJEU judges have repeatedly found — is certainly likely to take a lot longer than ‘months’. If indeed the US authorities can ever be convinced of the need to reform their approach.
Still, if EU regulators finally start taking action on Schrems II — by ordering high-profile EU-US data transfers to stop — that might help concentrate US policymakers’ minds toward surveillance reform. Otherwise local storage may be the new normal.
Security researchers say at-home exercise giant Peloton and its closest rival Echelon were not stripping user-uploaded profile photos of their metadata, in some cases exposing users’ real-world location data.
Almost every file, photo or document contains metadata, which is data about the file itself, such as how big it is, when it was created, and by whom. Photos and videos will often also include the location where they were taken. That location data helps online services tag your photos and videos with the restaurant or landmark you visited.
But those online services — especially social platforms, where you see people’s profile photos — are supposed to remove location data from the file’s metadata so other users can’t snoop on where you’ve been, since location data can reveal where you live, work, where you go, and who you see.
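Stripping a photo's location data server-side is a small amount of work. As a rough illustration (not Peloton's or Echelon's actual pipeline, and with a function name and simplified parsing assumed for the example), the sketch below removes the APP1 segments, which is where EXIF data, including GPS coordinates, lives in a baseline JPEG:

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Drop APP1 (Exif) segments, which can carry GPS coordinates,
    from a JPEG byte stream. Minimal sketch: assumes a well-formed
    baseline JPEG and skips edge cases a production parser handles."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker opens every JPEG
        raise ValueError("not a JPEG")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes) - 1:
        if jpeg_bytes[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, copy it all
            out += jpeg_bytes[i:]
            break
        # Segment length field covers itself plus the payload
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + length]
        # Keep everything except APP1 segments whose payload starts "Exif"
        if not (marker == 0xE1 and segment[4:8] == b"Exif"):
            out += segment
        i += 2 + length
    return bytes(out)
```

In practice, many services simply re-encode uploads, which drops metadata as a side effect; the segment-level view just shows what actually has to be removed.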
Jan Masters, a security researcher at Pen Test Partners, found the metadata exposure as part of a wider look at Peloton’s leaky API. TechCrunch verified the bug by uploading a profile photo with GPS coordinates of our New York office, and checking the metadata of the file while it was on the server.
The bugs were privately reported to both Peloton and Echelon.
Peloton fixed its API issues earlier this month but said it needed more time to fix the metadata bug and to strip existing profile photos of any location data. A Peloton spokesperson confirmed the bugs were fixed last week. Echelon fixed its version of the bug earlier this month. But TechCrunch held this report until we had confirmation that both companies had fixed the bug and that metadata had been stripped from old profile photos.
It’s not known how long the bug existed or if anyone maliciously exploited it to scrape users’ personal information. Any copies, whether cached or scraped, could represent a significant privacy risk to users whose location identifies their home address, workplace, or other private location.
Parler infamously didn’t scrub metadata from user-uploaded photos, which exposed the locations of millions of users when archivists exploited weaknesses in the platform’s API to download its entire contents. Others, like Slack, have been slow to adopt metadata stripping, even if they got there in the end.
This morning Datacy, a startup with its headquarters in Wilmington, Delaware, announced that it has closed $2.4 million in new funding to continue building its consumer-friendly data collection and monetization service.
The company is effectively an argument that the preceding sentence is possible. Datacy is a tool that allows individuals to collect their browsing data, manage it, have it anonymized and aggregated with others and then sold. The end-user gets 85% of the resulting revenue, while Datacy takes 15%.
Its model has found financial backing, with its new capital coming from Trend Forward Capital, Truesight Ventures, Redhawk VC, the Female Founders Alliance and others. The startup raised the funds using a convertible note that was capped at $9.5 million, though TechCrunch is not certain whether or not there were other terms associated with the fundraising mechanism.
Regardless, Datacy’s model fits into the modestly more privacy-forward stance that the technology world has taken in recent years; Apple is not the only company looking to make hay off of what some consider to be rising consumer interest in keeping their activities, and data, to themselves. But what Datacy wants to do is merge the consumer privacy impulse with profit.
According to company co-founder Paroma Indilo, her startup is not a cookie blocker. She told TechCrunch that if someone wants to block data collection, there are good tools for the task in the market already. What Datacy wants to do, she said, is evolve from its current status as a control platform into the way that data is shared and exchanged, built atop user consent. With monetization, we’d add.
It’s a better vision for the future than the hellscape adtech and data-vendor market that we’ve become accustomed to.
Today the startup has live beta users, allowing it to learn and collect initial data. The company is waiting until it has 50,000 users to open the business side of its operation to all; Indilo told TechCrunch that individual data is not worth much, but in aggregate it can be worth quite a lot. So it's reasonable for the startup to hold off on scaling up its sales operations until it has a larger user base.
It may not be too long until Datacy reaches that 50,000 user mark. From a current base of 10,000, and what Indilo described as 30% monthly growth via word of mouth, it could hit that mark in a half-year or so.
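That "half-year or so" squares with simple compound growth; plugging in the figures from the article (10,000 users today, 30% month-over-month growth, a 50,000-user goal), the math works out to a little over six months:

```python
import math

# Months for Datacy's reported 10,000 users to reach its 50,000 goal
# at the 30% month-over-month growth described in the article.
base, target, monthly_growth = 10_000, 50_000, 1.30
months = math.log(target / base) / math.log(monthly_growth)
print(round(months, 1))  # 6.1
```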
Datacy is one of those early-stage bets that has a lot of potential, but also a notable helping of risk. If it can attract the masses it needs to prove out the economics of its model, its payments to its user base could make growth a self-fulfilling destiny. But if user growth slows, it could fail to reach the scale its model needs to work at all.
So it’s a good use of venture capital, in other words. We’ll check back in with Datacy in a few months to see how close it is to its 50,000 user goal. And how its bet that consumers want their data back is playing out.