
Democratic bill would suspend Section 230 protections when social networks boost anti-vax conspiracies

By Taylor Hatmaker

Two Democratic senators introduced a bill Thursday that would strip away the liability shield that social media platforms hold dear when those companies boost anti-vaccine conspiracies and other kinds of health misinformation.

The Health Misinformation Act, introduced by Senators Amy Klobuchar (D-MN) and Ben Ray Luján (D-NM), would create a new carve-out in Section 230 of the Communications Decency Act to hold platforms liable for algorithmically promoted health misinformation and conspiracies. Platforms rely on Section 230 to protect them from legal liability for the vast amount of user-created content they host.

“For far too long, online platforms have not done enough to protect the health of Americans,” Klobuchar said. “These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”

The bill would specifically alter Section 230’s language to revoke liability protections in the case of “health misinformation that is created or developed through the interactive computer service” if that misinformation is amplified through an algorithm. The proposed exception would only kick in during a declared national public health crisis, like the advent of Covid-19, and wouldn’t apply in normal times. The bill would task the Secretary of the Department of Health and Human Services (HHS) with defining health misinformation.

“Features that are built into technology platforms have contributed to the spread of misinformation and disinformation, with social media platforms incentivizing individuals to share content to get likes, comments, and other positive signals of engagement, which rewards engagement rather than accuracy,” the bill reads.

The bill also makes mention of the “disinformation dozen” — just twelve people, including anti-vaccine activist Robert F. Kennedy Jr. and a grab bag of other conspiracy theorists, who account for a massive swath of the anti-vax misinformation ecosystem. Many of the individuals on the list still openly spread their messaging through social media accounts on Twitter, Facebook and other platforms.

Section 230’s defenders generally view the idea of new carve-outs to the law as dangerous. Because Section 230 is such a foundational piece of the modern internet, enabling everything from Yelp and Reddit to the comment section below this post, they argue that the potential for unforeseen second order effects means the law should be left intact.

But some members of Congress — both Democrats and Republicans — see Section 230 as a valuable lever in their quest to regulate major social media companies. While the White House is pursuing its own path to craft consequences for overgrown tech companies through the Justice Department and the FTC, Biden’s office said earlier this week that the president is “reviewing” Section 230 as well. But as Trump also discovered, weakening Section 230 is a task that only Congress is positioned to accomplish — and even that is still a long shot.

While the new Democratic bill is narrowly targeted as far as proposed changes to Section 230 go, it’s also unlikely to attract bipartisan support.

Republicans are also interested in stripping away some of Big Tech’s liability protections, but generally hold the view that platforms remove too much content rather than too little. They are also more likely to sow misinformation about the Covid-19 vaccines themselves, framing vaccination as a partisan issue. Whether the bill goes anywhere or not, it’s clear that an alarming portion of Americans have no intention of getting vaccinated — even with a much more contagious variant on the rise and colder months on the horizon.

“As COVID-19 cases rise among the unvaccinated, so has the amount of misinformation surrounding vaccines on social media,” Luján said of the proposed changes to Section 230. “Lives are at stake.”

DNSFilter secures $30M Series A to step up fight against DNS-based threats

By Carly Page

DNSFilter, an artificial intelligence startup that provides DNS protection to enterprises, has secured $30 million in Series A funding from Insight Partners.

DNSFilter, as its name suggests, offers DNS-based web content filtering and threat protection. Unlike the majority of its competitors, which include the likes of Palo Alto Networks and Webroot, the startup uses proprietary AI technology to continuously scan billions of domains daily, identifying anomalies and potential vectors for malware, ransomware, phishing, and fraud.
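
At its simplest, DNS-level protection means answering the resolution step itself based on a verdict about the domain: known-bad names simply never resolve. The sketch below is a minimal illustration of that idea in Python; the classifier and domain names are hypothetical stand-ins, not DNSFilter's actual technology.

    from typing import Optional
    import socket

    BLOCKED_CATEGORIES = {"malware", "phishing", "botnet"}

    def classify(domain: str) -> str:
        # Stand-in for the real-time AI classification described above; a
        # production system would score page content, hosting and history.
        known_bad = {"evil-updates.example": "malware"}  # hypothetical entry
        return known_bad.get(domain, "benign")

    def filtered_resolve(domain: str) -> Optional[str]:
        # A filtering resolver refuses to answer for flagged domains (in
        # practice it would return NXDOMAIN or point to a block page).
        if classify(domain) in BLOCKED_CATEGORIES:
            return None
        return socket.gethostbyname(domain)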

“Most of our competitors either rent or lease a database from some third party,” Ken Carnesi, co-founder and CEO of DNSFilter, tells TechCrunch. “We do that in-house, and it’s through artificial intelligence that’s scanning these pages in real-time.”

The company, which counts the likes of Lenovo, Newegg, and Nvidia among its 14,000 customers, claims this industry-first technology catches threats an average of five days before competitors and is capable of identifying 76% of domain-based threats. By the end of 2021, DNSFilter says it will block more than 1.1 million threats daily.

DNSFilter has seen rapid growth over the past 12 months as a result of the mass shift to remote working and the increase in cyber threats and ransomware attacks that followed. The startup saw eightfold growth in customer activity, doubled its global headcount to just over 50 employees, and partnered with Canadian software house N-Able to push into the lucrative channel market.  

“DNSFilter’s rapid growth and efficient customer acquisition are a testament to the benefits and ease of use compared to incumbents,” said Thomas Krane, principal at Insight Partners, who has been appointed a director on DNSFilter’s board. “The traditional model of top-down, hardware-centric network security is disappearing in favor of solutions that readily plug in at the device level and can cater to highly distributed workforces.”

Prior to this latest funding round, which was also backed by Arthur Ventures (the lead investor in DNSFilter’s seed round), CrowdStrike co-founder and former chief technology officer Dmitri Alperovitch also joined DNSFilter’s board of directors.

Carnesi said the addition of Alperovitch to the board will help the company get its technology into the hands of enterprise customers. “He’s helping us to shape the product to be a good fit for enterprise organizations, which is something that we’re doing as part of this round — shifting focus to be primarily mid-market and enterprise,” he said.

The company also recently added former CrowdStrike vice president Jen Ayers as its chief operating officer. “She used to manage their entire managed threat hunting team, so she’s definitely coming on for the security side of things as we build out our domain intelligence team further,” Carnesi said.

With its newly raised funds, DNSFilter will further expand its headcount, with plans to add more than 80 new employees globally over the next 12 months.

“There’s a lot more that we can do for security via DNS, and we haven’t really started on that yet,” Carnesi said. “We plan to do things that people won’t believe were possible via DNS.”

The company, which acquired Web Shrinker in 2018, also expects there to be more acquisitions on the cards going forward. “There are some potential companies that we’d be looking to acquire to speed up our advancement in certain areas,” Carnesi said.

Arctic Wolf secures $150M at Series F, tripling its valuation

By Carly Page

Arctic Wolf, a managed cybersecurity company that offers a “security operations-as-a-concierge” service, has raised $150 million at Series F.

This round was led by Viking Global Investors, Owl Rock, and other existing investors, and lands less than a year after the company’s last round of investment, when it became one of the first managed detection and response (MDR) companies to secure a valuation of over $1 billion. This latest round brings its total amount of funding raised to date to just shy of $500 million, and sees the company’s valuation soar from $1.3 billion to $4.3 billion.

“This is a recognition on our part, and our investors’ part, of the challenge that our industry is facing,” Arctic Wolf CEO Brian NeSmith told TechCrunch.

As a result of this challenging cybersecurity landscape, fueled by pandemic turbulence and a mass shift to remote working, Arctic Wolf has seen impressive growth over the last 12 months. The company, which provides round-the-clock security monitoring for small and mid-sized organizations through its cloud security operations platform, saw its revenues double on rapid platform adoption growth, with nearly 60% of its 3,000 customers using at least three of its security operations solutions. This, the company claims, makes it the fastest-growing company at scale in the fastest-growing area of the cybersecurity market.

The company’s headcount has also increased dramatically: it onboarded approximately 400 employees over the past 12 months and plans to add 500 new roles in the coming year.

The newly raised funds will be used to keep its momentum going, NeSmith said, and to step up its mergers and acquisitions strategy. Arctic Wolf has made three acquisitions since it was founded in 2012 — including cybersecurity vulnerability assessment startup RootSecure in 2018 — and it’s planning to increase this number significantly over the next 12 months.

“We’ve got letters of intent for a couple more, and I expect that over the next year we’ll probably do between 5 and 10 acquisitions,” said NeSmith.

With Series F funding under its belt, Arctic Wolf is now starting to think about its exit strategy. NeSmith tells TechCrunch that while the company is weighing up its options, an IPO is likely the next logical move for the company. 

“I think ultimately the exit is IPO. That’s the most likely outcome,” he says. “Frankly, from some of the companies I’ve seen IPO over the last 3-6 months, we could be a public company today. We’re a little more measured, so we want to realize that being public is not an end point, you’re just changing the way you run the company.”


The Accellion data breach continues to get messier

By Carly Page

Morgan Stanley has joined the growing list of Accellion hack victims — more than six months after attackers first breached the vendor’s 20-year-old file-sharing product. 

The investment banking firm — which is no stranger to data breaches — confirmed in a letter this week that attackers stole personal information belonging to its customers by hacking into the Accellion FTA server of its third-party vendor, Guidehouse. In a letter sent to those affected, first reported by Bleeping Computer, Morgan Stanley admitted that threat actors stole an unknown number of documents containing customers’ addresses and Social Security numbers.

The documents were encrypted, but the letter said that the hackers also obtained the decryption key, though Morgan Stanley said the files did not contain passwords that could be used to access customers’ financial accounts.

“The protection of client data is of the utmost importance and is something we take very seriously,” a Morgan Stanley spokesperson told TechCrunch. “We are in close contact with Guidehouse and are taking steps to mitigate potential risks to clients.”

Just days before news of the Morgan Stanley data breach came to light, an Arkansas-based healthcare provider confirmed it had also suffered a data breach as a result of the Accellion attack. Just weeks before that, so did UC Berkeley. While data breaches tend to grow past initially reported figures, the fact that organizations are still coming out as Accellion victims more than six months later shows that the business software provider still hasn’t managed to get a handle on the incident.

The cyberattack was first uncovered on December 23, and Accellion initially claimed the FTA vulnerability was patched within 72 hours, before it was later forced to explain that new vulnerabilities were discovered. Accellion’s next (and final) update came in March, when the company claimed that all known FTA vulnerabilities — which authorities say were exploited by FIN11 and the Clop ransomware gang — had been remediated.

But incident responders said Accellion’s response to the incident wasn’t as smooth as the company let on, claiming the company was slow to raise the alarm regarding the potential danger to FTA customers.

The Reserve Bank of New Zealand, for example, raised concerns about the timeliness of alerts it received from Accellion. In a statement, the bank said it was reliant on Accellion to alert it to any vulnerabilities in the system — but never received any warnings in December or January.

“In this instance, their notifications to us did not leave their system and hence did not reach the Reserve Bank in advance of the breach. We received no advance warning,” said RBNZ governor Adrian Orr.

This, according to a discovery made by KPMG International, was due to the fact that the email tool used by Accellion failed to work: “Software updates to address the issue were released by the vendor in December 2020 soon after it discovered the vulnerability. The email tool used by the vendor, however, failed to send the email notifications and consequently the Bank was not notified until 6 January 2021,” KPMG’s assessment said.

“We have not sighted evidence that the vendor informed the Bank that the System vulnerability was being actively exploited at other customers. This information, if provided in a timely manner is highly likely to have significantly influenced key decisions that were being made by the Bank at the time.”

In March, back when it was releasing updates about the ongoing breach, Accellion was keen to emphasize that it was planning to retire the 20-year-old FTA product in April and that it had been working for three years to transition clients onto its new platform, Kiteworks. A press release from the company in May says 75% of Accellion customers have already migrated to Kiteworks, a figure that also highlights the fact that 25% are still clinging to its now-retired FTA product. 

This, along with Accellion now taking a more hands-off approach to the incident, means that the list of victims could keep growing. It’s currently unclear how many victims the attack has claimed so far, though recent tallies put the list at around 300. This list includes Qualys, Bombardier, Shell, Singtel, the University of Colorado, the University of California, Transport for New South Wales, the Office of the Washington State Auditor, grocery giant Kroger and law firm Jones Day.

“When a patch is issued for software that has been actively exploited, simply patching the software and moving on isn’t the best path,” Tim Mackey, principal security strategist at the Synopsys Cybersecurity Research Center, told TechCrunch. “Since the goal of patch management is protecting systems from compromise, patch management strategies should include reviews for indications of previous compromise.”

Accellion declined to comment.

LinkedIn formally joins EU Code on hate speech takedowns

By Natasha Lomas

Microsoft-owned LinkedIn has committed to doing more to quickly purge illegal hate speech from its platform in the European Union by formally signing up to a self-regulatory initiative that seeks to tackle the issue through a voluntary Code of Conduct.

In a statement today, the European Commission announced that the professional social network has joined the EU’s Code of Conduct on Countering Illegal Hate Speech Online, with justice commissioner Didier Reynders welcoming LinkedIn’s (albeit tardy) participation, and adding in a statement that the code “is and will remain an important tool in the fight against hate speech, including within the framework established by digital services legislation”.

“I invite more businesses to join, so that the online world is free from hate,” Reynders added.

While LinkedIn’s name wasn’t formally associated with the voluntary Code before now, it said it has “supported” the effort via parent company Microsoft, which was already signed up.

In a statement on its decision to formally join now, it also said:

“LinkedIn is a place for professional conversations where people come to connect, learn and find new opportunities. Given the current economic climate and the increased reliance jobseekers and professionals everywhere are placing on LinkedIn, our responsibility is to help create safe experiences for our members. We couldn’t be clearer that hate speech is not tolerated on our platform. LinkedIn is a strong part of our members’ professional identities for the entirety of their career — it can be seen by their employer, colleagues and potential business partners.”

In the EU, ‘illegal hate speech’ can mean content that espouses racist or xenophobic views, or which seeks to incite violence or hatred against groups of people because of their race, skin color, religion or ethnic origin.

A number of Member States have national laws on the issue — and some have passed their own legislation specifically targeted at the digital sphere. So the EU Code is supplementary to any actual hate speech legislation. It is also non-legally binding.

The initiative kicked off back in 2016 — when a handful of tech giants (Facebook, Twitter, YouTube and Microsoft) agreed to accelerate takedowns of illegal speech (or well, attach their brand names to the PR opportunity associated with saying they would).

Since the Code became operational, a handful of other tech platforms have joined — with video sharing platform TikTok signing up last October, for example.

But plenty of digital services (notably messaging platforms) still aren’t participating. Hence the Commission’s call for more digital services companies to get on board.

At the same time, the EU is in the process of firming up hard rules in the area of illegal content.

Last year the Commission proposed broad updates (aka the Digital Services Act) to existing ecommerce rules, setting operational ground rules that it said are intended to bring online laws in line with offline legal requirements — in areas such as illegal content, and indeed illegal goods. So, in the coming years, the bloc will get a legal framework that tackles — at least at a high level — the hate speech issue, not merely a voluntary Code.

The EU also recently adopted legislation on terrorist content takedowns (this April) — which is set to start applying to online platforms from next year.

But it’s interesting to note that, on the perhaps more controversial issue of hate speech (which can deeply intersect with freedom of expression), the Commission wants to maintain a self-regulatory channel alongside incoming legislation — as Reynders’ remarks underline.

Brussels evidently sees value in having a mixture of ‘carrots and sticks’ where hot button digital regulation issues are concerned. Especially in the controversial ‘danger zone’ of speech regulation.

So, while the DSA is set to bake in standardized ‘notice and response’ procedures to help digital players swiftly respond to illegal content, by keeping the hate speech Code around it means there’s a parallel conduit where key platforms could be encouraged by the Commission to commit to going further than the letter of the law (and thereby enable lawmakers to sidestep any controversy if they were to try to push more expansive speech moderation measures into legislation).

The EU has — for several years — had a voluntary Code of Practice on Online Disinformation too. (And a spokeswoman for LinkedIn confirmed it has been signed up to that since its inception, also through its parent company Microsoft.)

And while lawmakers recently announced a plan to beef that Code up — to make it “more binding”, as they oxymoronically put it — it certainly isn’t planning to legislate on that (even fuzzier) speech issue.

In further public remarks today on the hate speech Code, the Commission said that a fifth monitoring exercise in June 2020 showed that on average companies reviewed 90% of reported content within 24 hours and removed 71% of content that was considered to be illegal hate speech.

It added that it welcomed the results — but also called for signatories to redouble their efforts, especially around providing feedback to users and in how they approach transparency around reporting and removals.

The Commission has also repeatedly called for platforms signed up to the disinformation Code to do more to tackle the tsunami of ‘fake news’ being fenced on their platforms, including — on the public health front — what it last year dubbed a coronavirus infodemic.

The COVID-19 crisis has undoubtedly contributed to concentrating lawmakers’ minds on the complex issue of how to effectively regulate the digital sphere and likely accelerated a number of EU efforts.

 

Edge Delta raises $15M Series A to take on Splunk

By Frederic Lardinois

Seattle-based Edge Delta, a startup that is building a modern distributed monitoring stack that is competing directly with industry heavyweights like Splunk, New Relic and Datadog, today announced that it has raised a $15 million Series A funding round led by Menlo Ventures and Tim Tully, the former CTO of Splunk. Previous investors MaC Venture Capital and Amity Ventures also participated in this round, which brings the company’s total funding to date to $18 million.

“Our thesis is that there’s no way that enterprises today can continue to analyze all their data in real time,” said Edge Delta co-founder and CEO Ozan Unlu, who has worked in the observability space for about 15 years already (including at Microsoft and Sumo Logic). “The way that it was traditionally done with these primitive, centralized models — there’s just too much data. It worked 10 years ago, but gigabytes turned into terabytes and now terabytes are turning into petabytes. That whole model is breaking down.”


He acknowledges that traditional big data warehousing works quite well for business intelligence and analytics use cases. But that’s not real time, and it also involves moving a lot of data from where it’s generated to a centralized warehouse. The promise of Edge Delta is that it can offer all of the capabilities of this centralized model by allowing enterprises to start to analyze their logs, metrics, traces and other telemetry right at the source. This, in turn, also allows them to get visibility into all of the data that’s generated there, unlike many of today’s systems, which only provide insights into a small slice of this information.

Competing services tend to have agents that run on a customer’s machine, but those typically only compress the data, encrypt it and then send it on to its final destination. Edge Delta’s agent, by contrast, starts analyzing the data right at the local level. With that, if you want to, for example, graph error rates from your Kubernetes cluster, you wouldn’t have to gather all of this data and send it off to your data warehouse, where it has to be indexed before it can be analyzed and graphed.

With Edge Delta, you could instead have every single node draw its own graph, which Edge Delta can then combine later on. With this approach, Edge Delta argues, its agent is able to offer significant performance benefits, often by orders of magnitude. It also allows businesses to run their machine learning models at the edge.
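
The core idea is a map/reduce split: each node reduces its raw telemetry to a small aggregate, and only those aggregates travel over the network. Here is a minimal sketch of that pattern in Python; the log format and field names are illustrative assumptions, not Edge Delta's actual agent.

    from collections import Counter

    def node_summary(log_lines):
        # Runs on each node: collapse raw logs into a tiny local aggregate.
        counts = Counter()
        for line in log_lines:
            counts["total"] += 1
            if " ERROR " in line:  # assumed log format
                counts["errors"] += 1
        return counts  # a few bytes shipped upstream instead of the raw logs

    def fleet_error_rate(summaries):
        # Runs centrally: merge per-node aggregates into one fleet-wide rate.
        combined = sum(summaries, Counter())
        return combined["errors"] / max(combined["total"], 1)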


“What I saw before I was leaving Splunk was that people were sort of being choosy about where they put workloads for a variety of reasons, including cost control,” said Menlo Ventures’ Tim Tully, who joined the firm only a couple of months ago. “So this idea that you can move some of the compute down to the edge and lower latency and do machine learning at the edge in a distributed way was incredibly fascinating to me.”

Edge Delta is able to offer a significantly cheaper service, in large part because it doesn’t have to run a lot of compute and manage huge storage pools itself since a lot of that is handled at the edge. And while the customers obviously still incur some overhead to provision this compute power, it’s still significantly less than what they would be paying for a comparable service. The company argues that it typically sees about a 90 percent improvement in total cost of ownership compared to traditional centralized services.


Edge Delta charges based on volume, and it isn’t shy about comparing its prices with Splunk’s, doing so right on its pricing calculator. Indeed, in talking to Tully and Unlu, Splunk was clearly on everybody’s mind.

“There’s kind of this concept of unbundling of Splunk,” Unlu said. “You have Snowflake and the data warehouse solutions coming in from one side, and they’re saying, ‘hey, if you don’t care about real time, go use us.’ And then we’re the other half of the equation, which is: actually, there’s a lot of real-time operational use cases and this model is actually better for those massive stream processing datasets that you’re required to analyze in real time.”

But despite this competition, Edge Delta can still integrate with Splunk and similar services. Users can still take their data, ingest it through Edge Delta and then pass it on to the likes of Sumo Logic, Splunk, AWS’s S3 and other solutions.


“If you follow the trajectory of Splunk, we had this whole idea of building this business around IoT and Splunk at the Edge — and we never really quite got there,” Tully said. “I think what we’re winding up seeing collectively is the edge actually means something a little bit different. […] The advances in distributed computing and sophistication of hardware at the edge allows these types of problems to be solved at a lower cost and lower latency.”

The Edge Delta team plans to use the new funding to expand its team and support all of the new customers that have shown interest in the product. For that, it is building out its go-to-market and marketing teams, as well as its customer success and support teams.

 

An internal code repo used by New York State’s IT office was exposed online

By Zack Whittaker

A code repository used by the New York state government’s IT department was left exposed on the internet, allowing anyone to access the projects inside, some of which contained secret keys and passwords associated with state government systems.

The exposed GitLab server was discovered on Saturday by Dubai-based SpiderSilk, a cybersecurity company credited with discovering data spills at Samsung, Clearview AI and MoviePass.

Organizations use GitLab to collaboratively develop and store their source code — as well as the secret keys, tokens and passwords needed for the projects to work — on servers that they control. But the exposed server was accessible from the internet and configured so that anyone from outside the organization could create a user account and log in unimpeded, SpiderSilk’s chief security officer Mossab Hussin told TechCrunch.

When TechCrunch visited the GitLab server, the login page showed it was accepting new user accounts. It’s not known exactly how long the GitLab server was accessible in this way, but historic records from Shodan, a search engine for exposed devices and databases, show the GitLab server was first detected on the internet on March 18.
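
For context, open registration on a self-managed GitLab instance is a configuration choice rather than an exotic bug. On an Omnibus-style install, for example, self-service sign-ups can be switched off in /etc/gitlab/gitlab.rb; this is a sketch, as exact setting names vary by GitLab version and newer releases also expose the toggle in the admin UI.

    # /etc/gitlab/gitlab.rb (assumed Omnibus-style install)
    gitlab_rails['gitlab_signup_enabled'] = false  # no self-service accounts
    # then apply the change: sudo gitlab-ctl reconfigure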

SpiderSilk shared several screenshots showing that the GitLab server contained secret keys and passwords associated with servers and databases belonging to New York State’s Office of Information Technology Services. Fearing the exposed server could be maliciously accessed or tampered with, the startup asked for help in disclosing the security lapse to the state.

TechCrunch alerted the New York governor’s office to the exposure a short time after the server was found. Several emails to the governor’s office with details of the exposed GitLab server were opened but were not responded to. The server went offline on Monday afternoon.

Scot Reif, a spokesperson for New York State’s Office of Information Technology Services, said the server was “a test box set up by a vendor, there is no data whatsoever, and it has already been decommissioned by ITS.” (Reif declared his response “on background” and attributable to a state official, which would require both parties agree to the terms in advance, but we are printing the reply as we were not given the opportunity to reject the terms.)

When asked, Reif would not say who the vendor was or if the passwords on the server were changed. Several projects on the server were marked “prod,” common shorthand for “production” and a term for servers that are actively in use. Reif also would not say if the incident was reported to the state’s Attorney General’s office. When reached, a spokesperson for the Attorney General did not comment by press time.

TechCrunch understands the vendor is Indotronix-Avani, a New York-based company with offices in India, owned by venture capital firm Nigama Ventures. Several screenshots show some of the GitLab projects were modified by a project manager at Indotronix-Avani. The vendor touts New York State on its website, along with other government customers, including the U.S. State Department and the U.S. Department of Defense.

Indotronix-Avani spokesperson Mark Edmonds did not respond to requests for comment.


After raising $10M, Ryte launches ‘Carbon KPI’ to measure the CO2 footprint of web sites

By Mike Butcher

As we become more and more aware of the kind of impact we are having on this planet we call home, just about everything is having its CO2 impact measured. Who knew, until recently, that streaming Netflix might have a measurable impact on the environment? But given that vast swathes of the internet are populated by websites, as well as streaming services, they too must have some sort of impact.

It transpires that a new service has identified how to gauge that, and now it’s raised venture capital to scale.

Ryte raised €8.5 million ($10M) in a previously undisclosed round led by Bayern Kapital out of Munich and Octopus Investments out of London earlier this year for its Website User Experience Platform.

It has now launched the ‘Ryte Website Carbon KPI’, with which it aims to help make 5% of all websites carbon neutral by 2023.

Ryte says it worked with data scientists and environmental experts to develop the ability to accurately measure the carbon impact of clients’ websites. According to the carbon transition think tank the Shift Project, the carbon footprint of our gadgets, the internet, and the systems supporting them accounts for about 3.7% of global greenhouse emissions. And this trend is rising rapidly as the world digitizes itself, especially post-pandemic.

Ryte has now engaged its data scientist, Katharina Meraner, who has a PhD in climate science and global warming, and input from ClimatePartner, to launch this new service.

Andy Bruckschloegl, CEO of Ryte said: “There are currently 189 million active websites. Our goal is to make 5% of all active websites, or 9.5 million websites, climate neutral by the end of 2023 with the help of our platform, strong partners, social media activities, and much more. Time is ticking and making websites carbon neutral is really easy compared to other industries and processes.”

Ryte says it is also collaborating with a reforestation project in San Jose, Nicaragua, to allow its customers to offset their remaining emissions through the purchase of climate certificates.

Using a proprietary algorithm, Ryte says it measures the code of the entire website and average page size, as well as monthly traffic by channel, then produces a calculation of the amount of CO2 the site is responsible for.
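
Ryte has not published that algorithm, but public calculators in this space follow broadly the same shape: estimate data transferred, convert to energy, then to CO2. A rough Python sketch under those assumptions follows; the constants are illustrative figures of the kind public calculators use, not Ryte's numbers.

    KWH_PER_GB = 1.8         # assumed energy cost of transferring 1 GB
    GRAMS_CO2_PER_KWH = 442  # assumed average grid carbon intensity

    def monthly_co2_grams(page_size_mb: float, monthly_pageviews: int) -> float:
        # Data transferred per month (GB), then energy, then emissions.
        gb_transferred = page_size_mb * monthly_pageviews / 1024
        return gb_transferred * KWH_PER_GB * GRAMS_CO2_PER_KWH

On those figures, a 2 MB page served 100,000 times a month works out to roughly 155 kg of CO2 per month.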

Admittedly, there are similar services, but these are ad hoc and not connected to a platform. A simple Google search brings up sites like Websitecarbon, Ecosistant, and academic papers. But as far as I can tell, a startup like this hasn’t put this kind of service into its platform yet.

“Teaming up with Ryte will help raise awareness on how information technology contributes to climate change – while at the same time providing tools to make a difference. Ryte’s industry-leading carbon calculator enables thousands of website owners to understand their carbon footprint, to offset unavoidable carbon emissions and thus lay a basis for a comprehensive climate action strategy,” commented Tristan A. Foerster, Co-CEO ClimatePartner.

Update: Google is delaying its deprecation of tracking cookies

By Natasha Lomas

Update: Google has now confirmed the delay, writing in a blog post that its engagement with UK regulators over the so-called “Privacy Sandbox” means support for tracking cookies won’t start being phased out in Chrome until the second half of 2023.

Our original report follows below… 

Adtech giant Google appears to be leaning toward postponing a long-planned deprecation of third-party tracking cookies.

The plan dates back to 2019, when it announced the long-term initiative that will make it harder for online marketers and advertisers to track web users, including by deprecating third-party cookies in Chrome.

Then in January 2020 it said it would make the switch within two years, which would mean by 2022.

Google confirmed to TechCrunch that it has a Privacy Sandbox announcement incoming today — set for 4pm BST/5pm CET — after we contacted it to ask for confirmation of information we’d heard, via our own sources.

We’ve been told Google’s new official timeline for implementation will be 2023.

However, a spokesman for the tech giant danced around providing a direct confirmation — saying that “an update” is incoming shortly.

“We do have an announcement today that will shed some light on Privacy Sandbox updates,” the spokesman also told us.

He had responded to our initial email — which had asked Google to confirm that it will postpone the implementation of Privacy Sandbox to 2023; and for any statement on the delay — with an affirmation (“yep”) so, well, a delay looks likely. But we’ll see how exactly Google will spin that in a few minutes when it publishes the incoming Privacy Sandbox announcement.

Google has previously said it would deprecate support for third-party cookies by 2022 — which naturally implies that the wider Privacy Sandbox stack of related adtech would also need to be in place by then.

Earlier this year it slightly hedged the 2022 timeline, saying in January that any changes would not be made before 2022.

The issue for Google is that regulatory scrutiny of its plan has stepped up — following antitrust complaints from the adtech industry which faces huge changes to how it can track and target Internet users.

In Europe, the UK’s Competition and Markets Authority has been working with the UK’s Information Commissioner’s Office to understand the competition and privacy implications of Google’s planned move. And, earlier this month, the CMA issued a notification of intention to accept proposed commitments from Google that would enable the regulator to block any deprecation of cookies if it’s not happy it can be done in a way that’s good for competition and privacy.

At the time we asked Google how the CMA’s involvement might impact the Privacy Sandbox timeline but the company declined to comment.

Increased regulatory oversight of Big Tech will have plenty of ramifications — most obviously it means the end of any chance for giants like Google to ‘move fast and break things’.

Zero trust unicorn Illumio closes $225M Series F led by Thoma Bravo

By Carly Page

Illumio, a self-styled zero trust unicorn, has closed a $225 million Series F funding round at a $2.75 billion valuation. 

The round was led by Thoma Bravo, which recently bought cybersecurity vendor Proofpoint for $12.3 billion, and supported by Franklin Templeton, Hamilton Lane, and Blue Owl Capital.

The round lands more than two years after Illumio’s Series E funding round in which it raised $65 million, and fueled speculation of an impending IPO. The company’s founder, Andrew Rubin, still isn’t ready to be pressed on whether the company plans to go public, though he told TechCrunch: “If we do our job right, and if we make our customers successful, I’d like to think that would be part of our journey.”

Illumio’s latest funding round is well-timed. Not only does it come amid a huge rise in successful cyberattacks, from the SolarWinds hack in early 2020 to the more recent attack on Colonial Pipeline, which show that some of the more traditional cybersecurity measures are no longer working, but it also comes just weeks after President Joe Biden issued an executive order pushing federal agencies to implement significant cybersecurity initiatives, including a zero trust architecture.

“And just a couple of weeks ago, Anne Neuberger [deputy national security adviser for cybersecurity] put out a memo on White House stationery to all of corporate America saying we’re living through a ransomware pandemic, and here’s six things that we’re imploring you to do,” Rubin says. “One of them was to segment your network.”

Illumio focuses on protecting data centers and cloud networks through something it calls micro-segmentation, which it claims makes it easier to manage and guard against potential breaches, as well as to contain a breach if one occurs. This zero trust approach to security — a concept centered on the belief that businesses should not automatically trust anything inside or outside their perimeters — has never been more important for organizations, according to Illumio.
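
Conceptually, micro-segmentation replaces one big trusted network with an explicit allow-list of workload-to-workload flows, denying everything else by default. The Python sketch below illustrates the policy model only; the segment names and ports are hypothetical, and Illumio's actual enforcement happens at the workload and host level.

    # Explicit allow-list of (source segment, destination segment, port) flows.
    ALLOWED_FLOWS = {
        ("web-tier", "app-tier", 8443),
        ("app-tier", "db-tier", 5432),
    }

    def is_allowed(src: str, dst: str, port: int) -> bool:
        # Zero trust default: deny unless the flow is explicitly permitted,
        # even for traffic that never leaves the internal network.
        return (src, dst, port) in ALLOWED_FLOWS

    # A compromised web server trying to reach the database directly is
    # blocked, which is the breach-containment property described above.
    assert not is_allowed("web-tier", "db-tier", 5432)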

“Cyber events are no longer constrained to cyber space,” says Rubin. “That’s why people are finally saying that, after 30 years of relying solely on detection to keep us safe, we cannot rely on it 100% of the time. Zero trust is now becoming the mantra.”

Illumio tells TechCrunch it will use the newly raised funds to make a “huge” investment in its field operations and channel partner network, and to invest in innovation, engineering and its product. 

The late-stage startup, which was founded in 2013 and is based in California, says more than 10% of Fortune 100 companies — including Morgan Stanley, BNP Paribas SA and Salesforce — now use its technology to protect their data centers, networks and other applications. It saw 100% international growth during the pandemic, and says it’s also broadening its customer base across more industries. 

The company has now raised more than $550 million from investors including Andreessen Horowitz, General Catalyst and Formation 8.

Extra Crunch roundup: SaaS founder salaries, break-even neobanks, Google Search tips

By Annie Siebert

Usually, a teacher who grades students on a curve is boosting the efforts of those who didn’t perform well on the test. In the case of cloud companies, however, it’s the other way around.

As of Q1 2021, startups in this sector have median Series A rounds around $8 million, reports PitchBook. With $100+ million Series D rounds becoming more common, company valuations are regularly boosted into the billions.

Andy Stinnes, a general partner at Cloud Apps Capital Partners, says founders who are between angel and Series A should seek out investors who are satisfied with $200,000 to $500,000 in ARR.


Full Extra Crunch articles are only available to members.
Use discount code ECFriday to save 20% off a one- or two-year subscription.


Usually specialist firms, these VCs are open to betting on startups that haven’t yet found product-market fit.

“At this phase of development, you need a committed partner who has both the time and the experience to guide you,” says Stinnes.

These observations aren’t just for active investors: This post is also a framework for new and seasoned founders who are getting ready to knock on doors and ask strangers for money.

Thanks very much for reading Extra Crunch this week!

Walter Thompson
Senior Editor, TechCrunch
@yourprotagonist

Maybe neobanks will break even after all

Alex returned from a week of vacation with a dispatch about the profitability of neobanks Revolut, Chime and Monzo.

“In short, while American consumer fintech Chime has disclosed positive EBITDA — an adjusted profitability metric — many neobanks that we’ve seen numbers from have demonstrated a stark inability to paint a path to profitability,” he writes.

“That could be changing.”

How to land the top spot in Google Search with featured snippets in 2021


“Google search is not what it used to be,” Ryan Sammy, the director of strategy at growth-marketing agency Fractl, writes in a guest post. “We all want to be No. 1 on the search results page, but these days, getting to that position isn’t enough. It might be worth your while to instead go after the top featured snippet position.”

Sammy writes that earning the featured snippet spot is “one of the best things you can do for your SEO.” But how do you land your page in the coveted snippet perch?

 

What does Red Hat’s sale to IBM tell us about Couchbase’s valuation?


After NoSQL provider Couchbase filed to go public, joining the ranks of the Great IPO Rush of 2021, Alex Wilhelm looked into its business model and financial performance, with a goal of better understanding the company — and market comps.

Alex used Red Hat, which recently sold to IBM for around $34 billion, as a comp, determining Couchbase “is worth around $900 million” if you use the Red Hat math.

“The Red Hat-Couchbase comparison is not perfect; 2019 is ages ago in technology time, the database company is smaller and other differences exist between the two companies,” Alex notes. “But Red Hat does allow us the confidence to state that Couchbase will be able to best its final private valuation in its public debut.”

How much to pay yourself as a SaaS founder


Anna Heim interviewed SaaS entrepreneurs and investors to find out how much early-stage founders should pay themselves.

Startups run by CEOs who take home a small salary tend to do better over the long run, but there are other points to consider, such as geography, marital status, and frankly, what quality of life you desire.

Waterly founder Chris Sosnowski raised his own pay to $14/hour last year; at his prior job, his salary topped $100,000.

“We had saved money up for over a year before we cut out my pay,” he told Anna. “I can live my life without entertainment … so that’s what we did for 2020.”

How much are you willing to sacrifice?

The early-stage venture capital market is weird and chaotic

Alex Wilhelm and Anna Heim had been hearing that Series A raises were coming later, while Series Bs were coming in quick succession after startups landed an A.

That piqued their curiosity, so they put feelers out to a bunch of investors to understand what’s going on in early-stage venture capital markets.

In the first of a two-part series, Alex and Anna examine why seed stage is so chaotic, why As are slow, and why Bs are fast. In their first dispatch, they looked at the U.S. market.

Have you worked with a talented individual or agency who helped you find and keep more users? Respond to our survey and help us find the best startup growth marketers!

Vantage raises $4M to help businesses understand their AWS costs

By Frederic Lardinois

Vantage, a service that helps businesses analyze and reduce their AWS costs, today announced that it has raised a $4 million seed round led by Andreessen Horowitz. A number of angel investors, including Brianne Kimmel, Julia Lipton, Stephanie Friedman, Calvin French-Owen, Ben and Moisey Uretsky, Mitch Wainer and Justin Gage, also participated in this round.

Vantage started out with a focus on making the AWS console a bit easier to use — and helping businesses figure out what they are spending their cloud infrastructure budgets on in the process. But as Vantage co-founder and CEO Ben Schaechter told me, it was the cost transparency features that really caught on with users.

“We were advertising ourselves as being an alternative AWS console with a focus on developer experience and cost transparency,” he said. “What was interesting is — even in the early days of early access before the formal GA launch in January — I would say more than 95% of the feedback that we were getting from customers was entirely around the cost features that we had in Vantage.”


Like any good startup, the Vantage team looked at this and decided to double down on these features and highlight them in its marketing, though it kept the existing AWS Console-related tools as well. The reason the other tools didn’t quite take off, Schaechter believes, is because more and more, AWS users have become accustomed to infrastructure-as-code to do their own automatic provisioning. And with that, they spend a lot less time in the AWS Console anyway.

“But one consistent thing — across the board — was that people were having a really, really hard time twelve times a year, where they would get a shock AWS bill and had to figure out what happened. What Vantage is doing today is providing a lot of value on the transparency front there,” he said.

Over the course of the last few months, the team added a number of new features to its cost transparency tools, including machine learning-driven predictions (both on the overall account level and service level) and the ability to share reports across teams.
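
The raw material for this kind of reporting is available from AWS itself via the Cost Explorer API; the product work is in the predictions and shareable reports layered on top. A minimal boto3 sketch of the underlying query (dates and account setup are placeholders):

    import boto3

    ce = boto3.client("ce")  # AWS Cost Explorer

    # Daily unblended cost for June 2021, broken down by service.
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2021-06-01", "End": "2021-07-01"},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    for day in resp["ResultsByTime"]:
        for group in day["Groups"]:
            service = group["Keys"][0]
            cost = group["Metrics"]["UnblendedCost"]["Amount"]
            print(day["TimePeriod"]["Start"], service, cost)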


While Vantage expects to add support for other clouds in the future, likely starting with Azure and then GCP, that’s actually not what the team is focused on right now. Instead, Schaechter noted, the team plans to add support for bringing in data from third-party cloud services instead.

“The number one line item for companies tends to be AWS, GCP, Azure,” he said. “But then, after that, it’s Datadog, Cloudflare, Sumo Logic, things along those lines. Right now, there’s no way to see a P&L or an ROI from a cloud usage-based perspective. Vantage can be the tool that’s showing you, essentially, all of your cloud costs in one space.”

That is likely the vision the investors bought into as well, and even though Vantage is now going up against enterprise tools like Apptio’s Cloudability and VMware’s CloudHealth, Schaechter doesn’t seem to be all that worried about the competition. He argues that these are tools that were born in a time when AWS had only a handful of services and only a few ways of interacting with those. He believes that Vantage, as a modern self-service platform, will have quite a few advantages over these older services.

“You can get up and running in a few clicks. You don’t have to talk to a sales team. We’re helping a large number of startups at this stage all the way up to the enterprise, whereas Cloudability and CloudHealth are, in my mind, kind of antiquated enterprise offerings. No startup is choosing to use those at this point, as far as I know,” he said.

The team, which until now mostly consisted of Schaechter and his co-founder and CTO Brooke McKim, bootstrapped the company up to this point. Now they plan to use the new capital to build out the team (the company is actively hiring right now), both on the development and go-to-market side.

The company offers a free starter plan for businesses that track up to $2,500 in monthly AWS cost, with paid plans starting at $30 per month for those who need to track larger accounts.

EU is now investigating Google’s adtech over antitrust concerns

By Natasha Lomas

EU antitrust authorities are finally taking a broad and deep look into Google’s adtech stack and role in the online ad market — confirming today that they’ve opened a formal investigation.

Google has already been subject to three major EU antitrust enforcements over the past five years — against Google Shopping (2017), Android (2018) and AdSense (2019). But the European Commission has, until now, avoided officially wading into the broader issue of its role in the adtech supply chain. (The AdSense investigation focused on Google’s search ad brokering business, though Google claims the latest probe represents the next stage of that 2019 enquiry, rather than stemming from a new complaint).

The Commission said that the new Google antitrust investigation will assess whether it has violated EU competition rules by “favouring its own online display advertising technology services in the so called ‘ad tech’ supply chain, to the detriment of competing providers of advertising technology services, advertisers and online publishers”.

Display advertising spending in the EU in 2019 was estimated to be approximately €20BN, per the Commission.

“The formal investigation will notably examine whether Google is distorting competition by restricting access by third parties to user data for advertising purposes on websites and apps, while reserving such data for its own use,” it added in a press release.

Earlier this month, France’s competition watchdog fined Google $268M in a case related to self-preferencing within the adtech market — which the watchdog found constituted an abuse by Google of a dominant position for ad servers for website publishers and mobile apps.

In that instance Google sought a settlement — proposing a number of binding interoperability agreements which the watchdog accepted. So it remains to be seen whether the tech giant may seek to push for a similar outcome at the EU level.

There is one cautionary signal in that respect in the Commission’s press release which makes a point of flagging up EU data protection rules — and highlighting the need to take into account the protection of “user privacy”.

That’s an interesting side-note for the EU’s antitrust division to include, given some of the criticism that France’s Google adtech settlement has attracted — for risking cementing abusive user exploitation (in the form of adtech privacy violations) into the sought-for online advertising market rebalancing.

Or as Cory Doctorow neatly explains it in this Twitter thread: “The last thing we want is competition in practices that harm the public.”

Aka, unless competition authorities wise up to the data abuses being perpetuated by dominant tech platforms — such as through enlightened competition authorities engaging in close joint-working with privacy regulators (in the EU this is, at least, possible since there’s regulation in both areas) — there’s a very real risk that antitrust enforcement against Big (ad)Tech could simply supercharge the user-hostile privacy abuses that surveillance giants have only been able to get away with because of their market muscle.

So, tl;dr, ill-thought through antitrust enforcement actually risks further eroding web users’ rights… and that would indeed be a terrible outcome. (Unless you’re Google; then it would represent successfully playing one regulator off against another at the expense of users.)

The last thing we want is competition in practices that harm the public – we don't want companies to see who can commit the most extensive human rights abuses at the lowest costs. That's not something we want to render more efficient. https://t.co/qDPr6OtP90


— Cory Doctorow (@doctorow) June 8, 2021

The need for competition and privacy regulators to work together to purge Big Tech market abuses has become an active debate in Europe — where a few pioneering regulators (like Germany’s FCO) are ahead of the pack.

The UK’s Competition and Markets Authority (CMA) and Information Commissioner’s Office (ICO) also recently put out a joint statement — laying out their conviction that antitrust and data protection regulators must work together to foster a thriving digital economy that’s healthy across all dimensions — i.e. for competitors, yes, but also for consumers.

A recent CMA proposed settlement related to Google’s planned replacement for tracking cookies — aka ‘Privacy Sandbox’, which has also been the target of antitrust complaints by publishers — was notable in baking in privacy commitments and data protection oversight by the ICO in addition to the CMA carrying out its competition enforcement role.

It’s fair to say that the European Commission has lagged behind such pioneers in appreciating the need for synergistic regulatory joint-working, with the EU’s antitrust chief roundly ignoring — for example — calls to block Google’s acquisition of Fitbit over the data advantage it would entrench, in favor of accepting a few ‘concessions’ to waive the deal through.

So it’s interesting to see the EU’s antitrust division here and now — at the very least — virtue signalling an awareness of the problem of regional regulators approaching competition and privacy as if they exist in firewalled silos.

Whether this augurs the kind of enlightened regulatory joint working — to achieve holistically healthy and dynamic digital markets — which will certainly be essential if the EU is to effectively grapple with surveillance capitalism very much remains to be seen. But we can at least say that the inclusion of the below statement in an EU antitrust division press release represents a change of tone (and that, in itself, looks like a step forward…):

“Competition law and data protection laws must work hand in hand to ensure that display advertising markets operate on a level playing field in which all market participants protect user privacy in the same manner.”

Returning to the specifics of the EU’s Google adtech probe, the Commission says it will be particularly examining:

  • The obligation to use Google’s services Display & Video 360 (‘DV360’) and/or Google Ads to purchase online display advertisements on YouTube.
  • The obligation to use Google Ad Manager to serve online display advertisements on YouTube, and potential restrictions placed by Google on the way in which services competing with Google Ad Manager are able to serve online display advertisements on YouTube.
  • The apparent favouring of Google’s ad exchange “AdX” by DV360 and/or Google Ads and the potential favouring of DV360 and/or Google Ads by AdX.
  • The restrictions placed by Google on the ability of third parties, such as advertisers, publishers or competing online display advertising intermediaries, to access data about user identity or user behaviour which is available to Google’s own advertising intermediation services, including the Doubleclick ID.
  • Google’s announced plans to prohibit the placement of third party ‘cookies’ on Chrome and replace them with the “Privacy Sandbox” set of tools, including the effects on online display advertising and online display advertising intermediation markets.
  • Google’s announced plans to stop making the advertising identifier available to third parties on Android smart mobile devices when a user opts out of personalised advertising, and the effects on online display advertising and online display advertising intermediation markets.

Commenting on the investigation in a statement, Commission EVP and competition chief, Margrethe Vestager, added:

“Online advertising services are at the heart of how Google and publishers monetise their online services. Google collects data to be used for targeted advertising purposes, it sells advertising space and also acts as an online advertising intermediary. So Google is present at almost all levels of the supply chain for online display advertising. We are concerned that Google has made it harder for rival online advertising services to compete in the so-called ad tech stack. A level playing field is of the essence for everyone in the supply chain. Fair competition is important — both for advertisers to reach consumers on publishers’ sites and for publishers to sell their space to advertisers, to generate revenues and funding for content. We will also be looking at Google’s policies on user tracking to make sure they are in line with fair competition.”

Contacted for comment on the Commission investigation, a Google spokesperson sent us this statement:

“Thousands of European businesses use our advertising products to reach new customers and fund their websites every single day. They choose them because they’re competitive and effective. We will continue to engage constructively with the European Commission to answer their questions and demonstrate the benefits of our products to European businesses and consumers.”

Google also claimed that publishers keep around 70% of the revenue when using its products — saying in some instances it can be more.

It also suggested that publishers and advertisers often use multiple technologies simultaneously, further claiming that it builds its own technologies to be interoperable with more than 700 rival platforms for advertisers and 80 rival platforms for publishers.

UK’s ICO warns over ‘Big Data’ surveillance threat of live facial recognition in public

By Natasha Lomas

The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.

Publishing an opinion today on the use of this biometric surveillance in public — setting out what she dubs the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that the investigations her office has already undertaken into planned applications of the tech have found problems in every case.

“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.

“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.

“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”

“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.

“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘Big Data’ systems — LFR is supercharged CCTV.”

The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.

Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.

In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have been tapping into the tech — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use.)

But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.

A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide ranging exemptions that have drawn plenty of criticism.

There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.

The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.

A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.

(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)

For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka, its implementation of the EU GDPR, which was transposed into national law before Brexit), per the ICO opinion — including the data protection principles set out in UK GDPR Article 5: lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.

Controllers must also enable individuals to exercise their rights, the opinion said.

“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.

“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”

The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.

If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.

Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive — and can be used to estimate or infer other characteristics, such as a person’s age, sex, gender or ethnicity.

Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening. 

Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”

The ICO has previously published an opinion on the use of LFR by police forces — which she said also sets “a high threshold for its use”. (And a few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn led some into legal hot water on issues like bias.)

Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organizations — with the commissioner arguing that while there are risks with use of the technology there could also be instances where it has high utility (such as in the search for a missing child).

“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.

Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.

“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.

“Without trust, the benefits the technology may offer are lost,” she also warned.

There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’. Because if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on) — it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).

The UK having a data adequacy agreement from the EU is dependent on the UK having essentially equivalent protections for people’s data. Without this coveted data adequacy status UK companies will immediately face far greater legal hurdles to processing the data of EU citizens (as the US now does, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…

Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.

Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.

 

Adtech ‘data breach’ GDPR complaint is headed to court in EU

By Natasha Lomas

New York-based IAB Tech Lab, a standards body for the digital advertising industry, is being taken to court in Germany by the Irish Council for Civil Liberties (ICCL) in a piece of privacy litigation targeted at the high-speed online ad auction process known as real-time bidding (RTB).

While that may sound pretty obscure, the case essentially loops in the entire ‘data industrial complex’ of adtech players, large and small, which make money by profiling Internet users and selling access to their attention — from giants like Google and Facebook to other household names (the ICCL’s PR also name-checks Amazon, AT&T, Twitter and Verizon, the latter being the parent company of TechCrunch — presumably because all participate in online ad auctions that can use RTB), as well as the smaller (typically non-household name) adtech entities and data brokers that are also involved in handling people’s data to run the high-velocity background auctions that target behavioral ads at web users.

The driving force behind the lawsuit is Dr Johnny Ryan, a former adtech insider turned whistleblower who’s now a senior fellow at the ICCL — and who has dubbed RTB the biggest data breach of all time.

He points to the IAB Tech Lab’s audience taxonomy documents which provide codes for what can be extremely sensitive information that’s being gathered about Internet users, based on their browsing activity, such as political affiliation, medical conditions, household income, or even whether they may be a parent to a special needs child.

The lawsuit contends that other industry documents vis-a-vis the ad auction system confirm there are no technical measures to limit what companies can do with people’s data, nor who they might pass it on to.

The lack of security inherent to the RTB process also means other entities not directly involved in the adtech bidding chain could potentially intercept people’s information — when it should instead be protected from unauthorized access, per EU law…
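To make that data flow concrete, here’s a rough, hypothetical sketch — expressed as a Python dict — of the kind of bid request broadcast to auction participants under the IAB’s OpenRTB protocol. Every value below is invented; real requests can carry far more detail.

```python
# Hypothetical, heavily simplified OpenRTB-style bid request — the kind of
# payload broadcast to many bidders during a real-time ad auction.
# All values here are invented for illustration.
bid_request = {
    "id": "auction-abc123",                                 # unique auction ID
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],   # the ad slot for sale
    "site": {"page": "https://news-site.example/article"},  # what the user is reading
    "device": {
        "ua": "Mozilla/5.0 (example user agent)",
        "ip": "203.0.113.7",                                # can geolocate the user
    },
    "user": {
        "id": "publisher-side-user-id",
        "buyeruid": "bidder-side-cookie-id",                # links the user across parties
        # Audience segments keyed to a taxonomy; sensitive categories
        # (health, politics, income) can travel as opaque codes like these.
        "data": [{"segment": [{"id": "taxonomy-code-123"}]}],
    },
}
```

Once a payload like this fans out to dozens or hundreds of bidders, nothing in the protocol itself constrains what recipients do with it — which is the crux of the complaint.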

Ryan and others have been filing formal complaints about RTB’s security problem for years, arguing the system breaches a core principle of Europe’s General Data Protection Regulation (GDPR) — which requires that personal data be “processed in a manner that ensures appropriate security… including protection against unauthorised or unlawful processing and against accidental loss” — and which, they contend, simply isn’t possible given how RTB functions.

The problem is that Europe’s data protection agencies have failed to act. Which is why Ryan, via the ICCL, has decided to take the more direct route of filing a lawsuit.

“There aren’t many DPAs around the union that haven’t received evidence of what I think is the biggest data breach of all time but it started with the UK and Ireland — neither of which took, I think it’s fair to say, any action. They both said they were doing things but nothing has changed,” he tells TechCrunch, explaining why he’s decided to take the step of litigating to try to enforce Internet users’ data protection rights.

“I want to take the most efficient route to protecting people’s rights around data,” he adds.

Per Ryan, the Irish Data Protection Commission (DPC) has still not sent a statement of issues relating to the RTB complaint he lodged with it back in 2018 — years ago now. In May 2019 the DPC did announce it was opening a formal investigation into Google’s adtech, following the RTB complaints, but the case remains open and unresolved. (We’ve contacted the DPC with questions about its progress on the investigation and will update with any response.)

Since the GDPR came into application in Europe in May 2018 there has been growth in privacy lawsuits — including class action style suits — so litigation funders may be spying an opportunity to cash in on the growing enforcement gap left by resource-strapped and, well, risk-averse data protection regulators.

A similar complaint about RTB lodged with the UK’s Information Commissioner’s Office (ICO) also led to a lawsuit being filed last year — albeit in that case it was against the watchdog itself for failing to take any action. (The ICO’s last missive to the adtech industry told it to — uhhhh — expect audits.)

“The GDPR was supposed to create a situation where the average person does not need to wear a tin-foil hat, they do not need to be paranoid or take action to become well informed. Instead, supervisory authorities protect them. And these supervisory authorities — paid for by the tax payer — have very strong powers. They can gain admission to any documents and any premises. It’s not about fines I don’t think, just. They can tell the biggest most powerful companies in the world to stop doing what they’re doing with our data. That’s the ultimate power,” says Ryan. “So GDPR sets up these guardians — these potentially very empowered guardians — but they’ve not used those powers… That’s why we’re acting.”

“I do wish that I’d litigated years ago,” he adds. “There’s lots of reasons why I didn’t do that — I do wish, though, that this litigation was unnecessary because supervisory authorities protected me and you. But they didn’t. So now, as Irish politics like to say in the middle of a crisis, we are where we are. But this is — hopefully — several nails in the coffin [of RTB].”

We are going to court. Our lawsuit takes aim at Google, Facebook, Amazon, Twitter, Verizon, AT&T and the entire online advertising/tracking industry by challenging industry rules set by IAB TechLab. @ICCLtweet https://t.co/D7NkyAILQg

— Johnny Ryan (@johnnyryan) June 16, 2021

The lawsuit has been filed in Germany because, Ryan says, they’ve been able to establish that IAB Tech Lab — which is NY-based and has no official establishment in Europe — has representation (a consultancy it hired) based in the country. Hence they believe there is a clear route to litigate the case at the Landgericht Hamburg.

While Ryan has been indefatigably sounding the alarm about RTB for years, he’s prepared to clock up more mileage going direct through the courts to see the matter through.

And to keep hammering home his message to the adtech industry that it must clean up its act and that recent attempts to maintain the privacy-hostile status quo — by trying to rebrand and repackage the same old data shuffle under shiny new claims of ‘privacy’ and ‘responsibility’ — simply won’t wash. So the message is really: Reform or die.

“This may very well end up at the ECJ [European Court of Justice]. And that would take a few years but long before this ends up at the ECJ I think it’ll be clear to the industry now that it’s time to reform,” he adds.

IAB Tech Lab has been contacted for comment on the ICCL’s lawsuit.

Ryan is by no means the only person sounding the alarm over adtech. Last year the European Parliament called for tighter controls on behavioral ads to be baked into reforms of the region’s digital rules — calling for regulation to favor less intrusive, contextual forms of advertising which do not rely on mass surveillance of Internet users.

Even Google has said it wants to deprecate support for tracking cookies in favor of a new stack of technology proposals it dubs the ‘Privacy Sandbox’ (although its proposed alternative — targeting groups of Internet users based on interests derived from tracking their browsing habits — has been criticized as potentially amplifying problems of predatory and exploitative ad targeting, so may not represent a truly clean break with the rights-hostile adtech status quo).

The IAB is also facing another major privacy law challenge in Europe — where complaints against a widely used framework it designed for websites to obtain Internet users’ consent to being tracked for ads online led to scrutiny by Belgium’s data protection agency. And, last year, its investigatory division found that the IAB Europe’s Transparency and Consent Framework (TCF) fails to meet the required standards of data protection under the GDPR.

The case went in front of the litigation chamber last week.

A verdict — and any enforcement action by the Belgian DPA over the IAB Europe’s TCF — remains pending.

CISA launches platform to let hackers report security bugs to US federal agencies

By Zack Whittaker

The Cybersecurity and Infrastructure Security Agency has launched a vulnerability disclosure program allowing ethical hackers to report security flaws to federal agencies.

The platform, launched with the help of cybersecurity companies Bugcrowd and Endyna, will allow civilian federal agencies to receive, triage and fix security vulnerabilities from the wider security community.

The move to launch the platform comes less than a year after the federal cybersecurity agency, better known as CISA, directed the civilian federal agencies that it oversees to develop and publish their own vulnerability disclosure policies. These policies are designed to set the rules of engagement for security researchers by outlining what (and how) online systems can be tested, and which can’t be.

It’s not uncommon for private companies to run VDP programs to allow hackers to report bugs, often in conjunction with a bug bounty that pays hackers for their work. But while the U.S. Department of Defense has for years warmed to hackers, the civilian federal government has been slow to follow.

Bugcrowd, which last year raised $30 million at Series D, said the platform will “give agencies access to the same commercial technologies, world-class expertise, and global community of helpful ethical hackers currently used to identify security gaps for enterprise businesses.”

The platform will also help CISA share information about security flaws with other agencies.

The platform launches after a bruising few months for government cybersecurity, including a Russian-led espionage campaign against at least nine U.S. federal government agencies by hacking software house SolarWinds, and a China-linked cyberattack that backdoored thousands of Microsoft Exchange servers, including in the federal government.

Facebook changes misinfo rules to allow posts claiming COVID-19 is man-made

By Taylor Hatmaker

Facebook made a few noteworthy changes to its misinformation policies this week, including the news that the company will now allow claims that COVID was created by humans — a theory that contradicts the previously prevailing assumption that humans picked up the virus naturally from animals.

“In light of ongoing investigations into the origin of COVID-19 and in consultation with public health experts, we will no longer remove the claim that COVID-19 is man-made from our apps,” a Facebook spokesperson told TechCrunch. “We’re continuing to work with health experts to keep pace with the evolving nature of the pandemic and regularly update our policies as new facts and trends emerge.”

The company is adjusting its rules about pandemic misinformation in light of international investigations legitimating the theory that the virus could have escaped from a lab. While that theory clearly has enough credibility to be investigated at this point, it is often interwoven with demonstrably false misinformation about fake cures, 5G towers causing COVID and most recently the false claim that the AstraZeneca vaccine implants recipients with a Bluetooth chip.

Earlier this week, President Biden ordered a multi-agency intelligence report evaluating if the virus could have accidentally leaked out of a lab in Wuhan, China. Biden called this possibility one of two “likely scenarios.”

“… Shortly after I became President, in March, I had my National Security Advisor task the Intelligence Community to prepare a report on their most up-to-date analysis of the origins of COVID-19, including whether it emerged from human contact with an infected animal or from a laboratory accident,” Biden said in an official White House statement, adding that there isn’t sufficient evidence to make a final determination.

Claims that the virus was man-made or lab-made have circulated widely since the pandemic’s earliest days, even as the scientific community largely maintained that the virus probably made the jump from an infected animal to a human via natural means. But many questions remain about the origins of the virus and the U.S. has yet to rule out the possibility that the virus emerged from a Chinese lab — a scenario that would be a bombshell for international relations.

Prior to the COVID policy change, Facebook announced that it would finally implement harsher punishments against individuals who repeatedly peddle misinformation. The company will now throttle the News Feed reach of all posts from accounts that are found to habitually share known misinformation, restrictions it previously put in place for Pages, Groups, Instagram accounts and websites that repeatedly break the same rules.

Europe to press the adtech industry to help fight online disinformation

By Natasha Lomas

The European Union plans to beef up its response to online disinformation, with the Commission saying today it will step up efforts to combat harmful but not illegal content — including by pushing for smaller digital services and adtech companies to sign up to voluntary rules aimed at tackling the spread of this type of manipulative and often malicious content.

EU lawmakers pointed to risks such as the threat to public health posed by the spread of harmful disinformation about COVID-19 vaccines as driving the need for tougher action.

Concerns about the impacts of online disinformation on democratic processes are another driver, they said.

A new more expansive code of practice on disinformation is now being prepared — and will, they hope, be finalized in September, to be ready for application at the start of next year.

The Commission’s gear change is a fairly public acceptance that the EU’s voluntary code of practice — an approach Brussels has taken since 2018 — has not worked out as hoped. And, well, we did warn them.

A push to get the adtech industry on board with demonetizing viral disinformation is certainly overdue.

It’s clear the online disinformation problem hasn’t gone away. Some reports have suggested problematic activity — like social media voter manipulation and computational propaganda — has been getting worse in recent years, rather than better.

However, getting visibility into the true scale of the disinformation problem remains a huge challenge, given those best placed to know (ad platforms) don’t freely open their systems to external researchers. And that’s something else the Commission would like to change.

Signatories to the EU’s current code of practice on disinformation are:

Google, Facebook, Twitter, Microsoft, TikTok, Mozilla, DOT Europe (former EDiMA), the World Federation of Advertisers (WFA) and its Belgian counterpart, the Union of Belgian Advertisers (UBA); the European Association of Communications Agencies (EACA) and its national members from France, Poland and the Czech Republic — respectively, Association des Agences Conseils en Communication (AACC), Stowarzyszenie Komunikacji Marketingowej/Ad Artis Art Foundation (SAR) and Asociace Komunikacnich Agentur (AKA); the Interactive Advertising Bureau (IAB Europe), Kreativitet & Kommunikation, and Goldbach Audience (Switzerland) AG.

EU lawmakers said they want to broaden participation by getting smaller platforms to join, as well as recruiting all the various players in the adtech space whose tools provide the means for monetizing online disinformation.

Commissioners said they want to see the code covering a “whole range” of actors in the online advertising industry (i.e. rather than the current handful).

It’s certainly notable how little of the wider adtech ecosystem appears on that list beyond industry associations. (We’ve reached out to IAB Europe to ask whether it plans to sign up to the strengthened code and will update this report with any response.)

In its press release today the Commission also said it wants platforms and adtech players to exchange information on disinformation ads that have been refused by one of them — so there can be a more coordinated response to shut out bad actors.

As for those who are signed up already, the Commission’s report card on their performance was bleak.

Speaking during a press conference, internal market commissioner Thierry Breton said that only one of the five platform signatories to the code has “really” lived up to its commitments — which was presumably a reference to the first five tech giants in the above list (aka: Google, Facebook, Twitter, Microsoft and TikTok).

Breton demurred on doing an explicit name-and-shame of the four others — who he said have not “at all” done what was expected of them — saying it’s not the Commission’s place to do that.

Rather he said people should decide among themselves which of the platform giants that signed up to the code have failed to live up to their commitments. (Signatories since 2018 have pledged to take action to disrupt ad revenues of accounts and websites that spread disinformation; to enhance transparency around political and issue-based ads; tackle fake accounts and online bots; to empower consumers to report disinformation and access different news sources while improving the visibility and discoverability of authoritative content; and to empower the research community so outside experts can help monitor online disinformation through privacy-compliant access to platform data.)

Frankly it’s hard to imagine who from the above list of five tech giants might actually be meeting the Commission’s bar. (Microsoft perhaps, on account of its relatively modest social activity vs the others.)

Safe to say, there’s been a lot more hot air (in the form of selective PR) on the charged topic of disinformation vs hard accountability from the major social platforms over the past three years.

So it’s perhaps no accident that Facebook chose today to puff up its historical efforts to combat what it refers to as “influence operations” — aka “coordinated efforts to manipulate or corrupt public debate for a strategic goal” — by publishing what it couches as a “threat report” detailing what it’s done in this area between 2017 and 2020.

Influence ops refer to online activity that may be conducted by hostile foreign governments or by malicious agents seeking, in this case, to use Facebook’s ad tools as a mass manipulation tool — perhaps to try to skew an election result or influence the shape of looming regulations. And Facebook’s ‘threat report’ states that the tech giant took down and publicly reported only 150 such operations over the report period.

Yet as we know from Facebook whistleblower Sophie Zhang, the scale of the problem of mass malicious manipulation activity on Facebook’s platform is vast and its response to it is both under-resourced and PR-led. (A memo written by the former Facebook data scientist, covered by BuzzFeed last year, detailed a lack of institutional support for her work and how takedowns of influence operations could almost immediately respawn — without Facebook doing anything.)

NB: If it’s Facebook’s “broader enforcement against deceptive tactics that do not rise to the level of [Coordinated Inauthentic Behavior]” that you’re looking for, rather than efforts against ‘influence operations’, it has a whole other report for that — the Inauthentic Behavior Report! — because of course Facebook gets to mark its own homework when it comes to tackling fake activity, and shapes its own level of transparency since there are no legally binding reporting rules on disinformation.

Legally binding rules on handling online disinformation aren’t in the EU’s pipeline either — but commissioners said today that they wanted a beefed up and “more binding” code.

They do have some levers to pull here via a wider package of digital reforms that’s coming (aka the Digital Services Act).

The DSA will bring in legally binding rules for how platforms handle illegal content and they intend the tougher disinformation code to plug into that (in the form of what they call a “co-regulatory backstop for the measures that will be included in the revised and strengthened Code”).

It still won’t be legally binding but it may earn compliant platforms wider DSA ‘credit’. So it looks like disinformation-muck-spreaders’ arms are set to be twisted in a pincer regulatory move by making sure this stuff is looped into the legally binding DSA.

Still, Brussels maintains that it does not want to legislate around disinformation.

The risk is that a centralized approach might smell like censorship — and the Commission sounds keen to avoid that charge at all costs.

The digital regulation packages the EU has put forward since the 2019 college took up its mandate aim generally to increase transparency, safety and accountability online, its values and transparency commissioner, Vera Jourova, said today.

Breton also said that now is the “right time” to deepen obligations under the disinformation code — with the DSA incoming — and also to give the platforms time to adapt (and involve themselves in discussions on shaping additional obligations).

In another interesting remark he also talked about regulators needing to “be able to audit platforms” — in order to be able to “check what is happening with the algorithms that push these practices”. Though quite how audit powers can be made to fit with a voluntary, non-legally binding code of practice remains to be seen.

Discussing areas where the current code has fallen short Jourova pointed to inconsistencies of application across different EU Member States and languages.

She also said the Commission is keen for the beefed up code to do more to enable and empower users to act when they see something dodgy online — such as by providing users with tools to flag problem content.

Platforms should also provide users with the ability to appeal disinformation content takedowns (to avoid the risk of opinions being incorrectly removed).

The focus for the code would be on tackling false “facts not opinions”, she emphasized, saying the Commission wants platforms to “embed fact-checking into their systems” and for the code to work towards a “decentralized care of facts”.

She went on to say that the current signatories to the code haven’t provided external researchers with the kind of data access the Commission would like to see — to support greater transparency into (and accountability around) the disinformation problem.

The code does require monthly (for COVID-19 disinformation), six-monthly or yearly reports from signatories (depending on the entity’s size), but what’s been provided so far doesn’t add up to a comprehensive picture of disinformation activity and platform response, she said.

She also warned that online manipulation tactics are fast evolving and highly innovative — while saying the Commission would nonetheless like to see signatories agree on a set of identifiable “problematic techniques” to help speed up responses.

EU lawmakers will be coming with a specific plan for tackling political ads transparency in November, she noted.

They are also, in parallel, working on how to respond to the threat posed to European democracies by foreign interference cyberops — such as the aforementioned influence operations often found hosted on Facebook’s platform.

The commissioners did not give many details of those plans today but Jourova said it’s “high time to impose costs on perpetrators” — suggesting that some interesting possibilities may be under consideration, such as trade sanctions for state-backed disinformation ops (although attribution would be one challenge).

Breton said countering foreign influence over the “informational space” is important work to defend the values of European democracy.

He also said the Commission’s anti-disinformation efforts would focus on support for education to help equip citizens with the necessary critical thinking capabilities to navigate the huge quantities of variable quality information that now surrounds them.

 

Peloton and Echelon profile photo metadata exposed riders’ real-world locations

By Zack Whittaker

Security researchers say at-home exercise giant Peloton and its closest rival Echelon were not stripping user-uploaded profile photos of their metadata, in some cases exposing users’ real-world location data.

Almost every file, photo or document contains metadata — data about the file itself, such as how big it is, when it was created, and by whom. Photos and videos will often also include the location where they were taken. That location data helps online services tag your photos or videos as having been taken at this restaurant or that landmark.

But those online services — especially social platforms, where you see people’s profile photos — are supposed to remove location data from the file’s metadata so other users can’t snoop on where you’ve been, since location data can reveal where you live and work, where you go, and who you see.
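By way of illustration, here’s a minimal sketch — assuming a Python backend and the Pillow imaging library — of the kind of server-side scrubbing that prevents this class of leak: re-saving an uploaded image with its pixel data only. The file names are placeholders.

```python
# Minimal sketch: re-save an uploaded image with pixel data only, so EXIF
# metadata (including any GPS coordinates) never reaches other users.
# File names below are placeholders.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # fresh image, no metadata
        clean.putdata(list(img.getdata()))     # copy the pixels only
        clean.save(dst_path)

strip_metadata("uploaded_profile_photo.jpg", "public_profile_photo.jpg")
```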

Jan Masters, a security researcher at Pen Test Partners, found the metadata exposure as part of a wider look at Peloton’s leaky API. TechCrunch verified the bug by uploading a profile photo with GPS coordinates of our New York office, and checking the metadata of the file while it was on the server.

The bugs were privately reported to both Peloton and Echelon.

Peloton fixed its API issues earlier this month but said it needed more time to fix the metadata bug and to strip existing profile photos of any location data. A Peloton spokesperson confirmed the bugs were fixed last week. Echelon fixed its version of the bug earlier this month. But TechCrunch held this report until we had confirmation that both companies had fixed the bug and that metadata had been stripped from old profile photos.

It’s not known how long the bug existed or if anyone maliciously exploited it to scrape users’ personal information. Any copies, whether cached or scraped, could represent a significant privacy risk to users whose location identifies their home address, workplace, or other private location.

Parler infamously didn’t scrub metadata from user-uploaded photos, which exposed the locations of millions of users when archivists exploited weaknesses in the platform’s API to download its entire contents. Others, like Slack, have been slow to adopt metadata stripping, even if they got there in the end.

Read more:

Echelon exposed riders’ account data, thanks to a leaky API

By Zack Whittaker


Peloton wasn’t the only at-home workout giant exposing private account data. Rival exercise giant Echelon also had a leaky API that let virtually anyone access riders’ account information.

Fitness technology company Echelon, like Peloton, offers a range of workout hardware — bikes, rowers and a treadmill — as a cheaper alternative for members to exercise at home. Its app also lets members join virtual classes without the need for workout equipment.

But Jan Masters, a security researcher at Pen Test Partners, found that Echelon’s API allowed him to access the account data — including name, city, age, sex, phone number, weight, birthday and workout statistics and history — of any other member in a live or pre-recorded class. The API also disclosed some information about members’ workout equipment, such as its serial number.

Masters, if you recall, found a similar bug with Peloton’s API, which let him make unauthenticated requests and pull private user account data directly from Peloton’s servers without the server ever checking to make sure he (or anyone else) was allowed to request it.

Echelon’s API allows its members’ devices and apps to talk with Echelon’s servers over the internet. The API was supposed to check if the member’s device was authorized to pull user data by checking for an authorization token. But Masters said the token wasn’t needed to request data.

Masters also found another bug that allowed members to pull data on any other member because of weak access controls on the API. Masters said this bug made it easy to enumerate user account IDs and scrape account data from Echelon’s servers. Facebook, LinkedIn, Peloton and Clubhouse have all fallen victim to scraping attacks that abuse access to APIs to pull in data about users on their platforms.
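For illustration only — this is not Echelon’s actual code — here’s a minimal sketch of the two server-side checks that were reportedly missing: requiring a valid token at all (authentication), then checking that the token’s owner may see the requested record (authorization). The endpoint, token store and names are all hypothetical.

```python
# Hypothetical sketch of the two missing checks: (1) authentication — is
# there a valid token at all? (2) authorization — may this token's owner
# see this particular user's record? Names and storage are invented.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
TOKENS = {"opaque-session-token": "user-42"}  # stand-in for a real session store

@app.route("/api/users/<int:user_id>")
def get_user(user_id: int):
    token = request.headers.get("Authorization", "").removeprefix("Bearer ")
    requester = TOKENS.get(token)
    if requester is None:
        abort(401)  # missing/invalid token: the reported bug let requests through here
    if requester != f"user-{user_id}":
        abort(403)  # valid token but someone else's record: blocks ID enumeration
    return jsonify({"id": user_id, "name": "Example Rider"})
```

The second check is what stops the ID-enumeration scraping described above: a valid session alone is no longer enough to walk through every account ID.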

Ken Munro, founder of Pen Test Partners, disclosed the vulnerabilities to Echelon on January 20 in a Twitter direct message, since the company doesn’t have a public-facing vulnerability disclosure process (which it says is now “under review”). But the researchers did not hear back during the 90 days after the report was submitted, the standard amount of time security researchers give companies to fix flaws before their details are made public.

TechCrunch asked Echelon for comment, and was told that the security flaws identified by Masters — which he wrote up in a blog post — were fixed in January.

“We hired an outside service to perform a penetration test of systems and identify vulnerabilities. We have taken appropriate actions to correct these, most of which were implemented by January 21, 2021. However, Echelon’s position is that the User ID is not PII [personally identifiable information],” said Chris Martin, Echelon’s chief information security officer, in an email.

Echelon did not name the outside security company. And while the company said it keeps detailed logs, it did not say whether it had found any evidence of malicious exploitation.

But Munro disputed the company’s claim of when it fixed the vulnerabilities, and provided TechCrunch with evidence that one of the vulnerabilities was not fixed until at least mid-April, and that another could still be exploited as recently as this week.

When asked for clarity, Echelon did not address the discrepancies. “[The security flaws] have been remediated,” Martin reiterated.

Echelon also confirmed it fixed a bug that allowed users under the age of 13 to sign up. Many companies block access for children under 13 to avoid having to comply with the Children’s Online Privacy Protection Act, or COPPA, a U.S. law that puts strict rules on what data companies can collect on children. TechCrunch was able to create an Echelon account this week with an age under 13, despite the page saying: “Minimum age of use is 13 years old.”
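A working age gate is a small amount of code; here’s a minimal sketch, with an illustrative cutoff and invented names — the point being that the check has to run on the server, not just in the signup page’s UI.

```python
# Minimal sketch: compute age from a birthdate server-side and reject
# under-13 signups regardless of what the signup form's UI allows.
# The cutoff and function names are illustrative.
from datetime import date
from typing import Optional

MINIMUM_AGE_YEARS = 13

def is_old_enough(birthdate: date, today: Optional[date] = None) -> bool:
    today = today or date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= MINIMUM_AGE_YEARS

# A signup handler would refuse the request when this returns False.
assert is_old_enough(date(2000, 1, 1), today=date(2021, 6, 1))       # adult
assert not is_old_enough(date(2015, 6, 2), today=date(2021, 6, 1))   # under 13
```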
