Environmental, social and governance (ESG) factors should be key considerations for CTOs and technology leaders scaling next generation companies from day one. Investors are increasingly prioritizing startups that focus on ESG, with the growth of sustainable investing skyrocketing.
What’s driving this shift in mentality across every industry? It’s simple: Consumers are no longer willing to support companies that don’t prioritize sustainability. According to a survey conducted by IBM, the COVID-19 pandemic has elevated consumers’ focus on sustainability and their willingness to pay out of their own pockets for a sustainable future. In tandem, federal action on climate change is increasing, with the U.S. rejoining the Paris Climate Agreement and a recent executive order on climate commitments.
Over the past few years, we have seen an uptick in organizations setting long-term sustainability goals. However, CEOs and chief sustainability officers typically forecast these goals, and they are often long term and aspirational — leaving the near and midterm implementation of ESG programs to operations and technology teams.
CTOs are a crucial part of the planning process, and in fact, can be the secret weapon to help their organization supercharge their ESG targets. Below are a few immediate steps that CTOs and technology leaders can take to achieve sustainability and make an ethical impact.
As more businesses digitize and more consumers use devices and cloud services, the energy needed by data centers continues to rise. In fact, data centers account for an estimated 1% of worldwide electricity usage. However, a forecast from IDC shows that the continued adoption of cloud computing could prevent the emission of more than 1 billion metric tons of carbon dioxide from 2021 through 2024.
Make compute workloads more efficient: First, it’s important to understand the links between computing, power consumption and greenhouse gas emissions from fossil fuels. Making your app and compute workloads more efficient will reduce costs and energy requirements, thus reducing the carbon footprint of those workloads. In the cloud, tools like compute instance auto scaling and sizing recommendations make sure you’re not running too many or overprovisioned cloud VMs based on demand. You can also move to serverless computing, which does much of this scaling work automatically.
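As a rough illustration of the kind of check that auto scaling and sizing recommendations automate, here is a minimal Python sketch that flags underutilized VMs as downsizing candidates. The instance names, utilization figures and 20% threshold are hypothetical, not drawn from any provider's API:

```python
def rightsizing_candidates(instances, cpu_threshold=0.20):
    """Return instances whose average CPU utilization suggests they
    could be downsized, cutting both cost and energy draw."""
    return [
        name
        for name, avg_cpu in instances.items()
        if avg_cpu < cpu_threshold
    ]

fleet = {
    "web-1": 0.55,    # busy; leave alone
    "batch-7": 0.08,  # mostly idle; downsize or schedule off-hours
    "db-2": 0.35,
}
print(rightsizing_candidates(fleet))  # ['batch-7']
```

Cloud providers surface equivalent recommendations natively; the point is that idle capacity is both a cost line item and a carbon line item.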
Deploy compute workloads in regions with lower carbon intensity: Until recently, choosing cloud regions meant considering factors like cost and latency to end users. But carbon is another factor worth considering. While the compute capabilities of regions are similar, their carbon intensities typically vary. Some regions have access to more carbon-free energy production than others, and consequently the carbon intensity for each region is different.
So, choosing a cloud region with lower carbon intensity is often the simplest and most impactful step you can take. Alistair Scott, co-founder and CTO of cloud infrastructure startup Infracost, underscores this sentiment: “Engineers want to do the right thing and reduce waste, and I think cloud providers can help with that. The key is to provide information in workflow, so the people who are responsible for infraprovisioning can weigh the CO2 impact versus other factors such as cost and data residency before they deploy.”
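To make the trade-off Scott describes concrete, here is a hedged sketch of weighing carbon intensity against a latency budget when picking a region. The region names, carbon intensities and latencies are invented for illustration:

```python
# Hypothetical figures; real carbon-intensity data varies by provider
# and region and is published by several cloud vendors.
REGIONS = [
    {"name": "region-a", "carbon_gco2_kwh": 70, "latency_ms": 120},
    {"name": "region-b", "carbon_gco2_kwh": 450, "latency_ms": 20},
    {"name": "region-c", "carbon_gco2_kwh": 110, "latency_ms": 45},
]

def pick_region(regions, max_latency_ms):
    """Lowest-carbon region that still meets the latency budget."""
    eligible = [r for r in regions if r["latency_ms"] <= max_latency_ms]
    return min(eligible, key=lambda r: r["carbon_gco2_kwh"])["name"]

print(pick_region(REGIONS, max_latency_ms=50))  # region-c
```

Loosening the latency budget to 200 ms would let the selection fall through to the much cleaner region-a, which is exactly the cost-versus-CO2 conversation Scott wants surfaced in the provisioning workflow.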
Another step is to estimate your specific workload’s carbon footprint using open-source software like Cloud Carbon Footprint, a project sponsored by ThoughtWorks. Etsy has open-sourced a similar tool called Cloud Jewels that estimates energy consumption based on cloud usage information. This is helping them track progress toward their target of reducing their energy intensity by 25% by 2025.
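The estimation approach behind tools like Cloud Carbon Footprint and Cloud Jewels can be summarized in a few lines: convert usage into energy via a wattage coefficient, then into emissions via grid carbon intensity. The coefficients below are placeholder assumptions, not the tools' published values:

```python
WATTS_PER_VCPU = 2.0       # assumed average draw per vCPU (watts)
PUE = 1.2                  # assumed data-center power usage effectiveness
GRID_GCO2_PER_KWH = 400.0  # assumed regional grid carbon intensity

def estimate_emissions_kg(vcpu_hours):
    """Rough kg CO2e for a compute workload, Cloud Jewels-style."""
    kwh = vcpu_hours * WATTS_PER_VCPU / 1000 * PUE
    return kwh * GRID_GCO2_PER_KWH / 1000

# 10,000 vCPU-hours in a month:
print(round(estimate_emissions_kg(10_000), 2))  # 9.6
```

The real tools derive these coefficients per provider, per instance family and per region, which is what makes their estimates useful for tracking a target like Etsy's.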
Beyond reducing environmental impact, CTOs and technology leaders can have significant, direct and meaningful social impact.
Include societal benefits in the design of your products: As a CTO or technology founder, you can help ensure that societal benefits are prioritized in your product roadmaps. For example, if you’re a fintech CTO, you can add product features to expand access to credit in underserved populations. Startups like LoanWell are on a mission to increase access to capital for those typically left out of the financial system and make the loan origination process more efficient and equitable.
When it comes to product design, a product needs to be as useful and effective as it is sustainable. By treating sustainability and societal impact as core elements of product innovation, you have an opportunity to differentiate yourself in socially beneficial ways. For example, Lush has been a pioneer of package-free solutions, and launched Lush Lens — a virtual packaging app leveraging cameras on mobile phones and AI to overlay product information. The company hit 2 million scans in its efforts to tackle the beauty industry’s excessive use of plastic packaging.
Responsible AI practices should be ingrained in the culture to avoid social harms: Machine learning and artificial intelligence have become central to the advanced, personalized digital experiences everyone is accustomed to — from product and content recommendations to spam filtering, trend forecasting and other “smart” behaviors.
It is therefore critical to incorporate responsible AI practices, so benefits from AI and ML can be realized by your entire user base and that inadvertent harm can be avoided. Start by establishing clear principles for working with AI responsibly, and translate those principles into processes and procedures. Think about AI responsibility reviews the same way you think about code reviews, automated testing and UX design. As a technical leader or founder, you get to establish what the process is.
Promoting governance does not stop with the board and CEO; CTOs play an important role, too.
Create a diverse and inclusive technology team: Compared to individual decision-makers, diverse teams make better decisions 87% of the time. Additionally, Gartner research found that in a diverse workforce, performance improves by 12% and intent to stay by 20%.
It is important to reinforce and demonstrate why diversity, equity and inclusion is important within a technology team. One way you can do this is by using data to inform your DEI efforts. You can establish a voluntary internal program to collect demographics, including gender, race and ethnicity, and this data will provide a baseline for identifying diversity gaps and measuring improvements. Consider going further by baking these improvements into your employee performance process, such as objectives and key results (OKRs). Make everyone accountable from the start, not just HR.
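As a sketch of what "using data to inform your DEI efforts" can look like in practice, the snippet below computes representation gaps against benchmark shares. The groups, counts and benchmarks are made up for illustration:

```python
def representation_gaps(team_counts, benchmark_shares):
    """Positive gap = group is underrepresented vs. the benchmark."""
    total = sum(team_counts.values())
    return {
        group: round(share - team_counts.get(group, 0) / total, 3)
        for group, share in benchmark_shares.items()
    }

team = {"women": 12, "men": 48}
benchmark = {"women": 0.5, "men": 0.5}
print(representation_gaps(team, benchmark))  # {'women': 0.3, 'men': -0.3}
```

Tracked quarter over quarter, gaps like these give hiring and retention OKRs a measurable baseline rather than a vague aspiration.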
These are just a few of the ways CTOs and technology leaders can contribute to ESG progress in their companies. The first step, however, is to recognize the many ways you as a technology leader can make an impact from day one.
Luxembourg’s National Commission for Data Protection (CNPD) has hit Amazon with a record-breaking €746 million ($887 million) GDPR fine over the way it uses customer data for targeted advertising purposes.
Amazon disclosed the ruling in an SEC filing on Friday in which it slammed the decision as baseless and added that it intended to defend itself “vigorously in this matter.”
“Maintaining the security of our customers’ information and their trust are top priorities,” an Amazon spokesperson said in a statement. “There has been no data breach, and no customer data has been exposed to any third party. These facts are undisputed.
“We strongly disagree with the CNPD’s ruling, and we intend to appeal. The decision relating to how we show customers relevant advertising relies on subjective and untested interpretations of European privacy law, and the proposed fine is entirely out of proportion with even that interpretation.”
The penalty is the result of a 2018 complaint by French privacy rights group La Quadrature du Net, a group that claims to represent the interests of thousands of Europeans to ensure their data isn’t used by big tech companies to manipulate their behavior for political or commercial purposes. The complaint, which also targets Apple, Facebook, Google and LinkedIn and was filed on behalf of more than 10,000 customers, alleges that Amazon manipulates customers for commercial means by choosing what advertising and information they receive.
La Quadrature du Net welcomed the fine issued by the CNPD, which “comes after three years of silence that made us fear the worst.”
“The model of economic domination based on the exploitation of our privacy and free will is profoundly illegitimate and contrary to all the values that our democratic societies claim to defend,” the group added in a blog post published on Friday.
The CNPD has also ruled that Amazon must commit to changing its business practices. However, the regulator has not publicly commented on its decision, and Amazon didn’t specify what revised business practices it is proposing.
The record penalty, which trumps the €50 million GDPR penalty levied against Google in 2019, comes amid heightened scrutiny of Amazon’s business in Europe. In November last year, the European Commission announced formal antitrust charges against the company, saying the retailer has misused its position to compete against third-party businesses using its platform. At the same time, the Commission opened a second investigation into its alleged preferential treatment of its own products on its site and those of its partners.
DevOps is fundamentally about collaboration and agility. Unfortunately, when we add security and compliance to the picture, the message gets distorted.
The term “DevSecOps” has come into fashion the past few years with the intention of seamlessly integrating security and compliance into the DevOps framework. However, the reality is far from the ideal: Security tools have been bolted onto the existing DevOps process along with new layers of automation, and everyone’s calling it “DevSecOps.” This is a misguided approach that fails to embrace the principles of collaboration and agility.
Integrating security into DevOps to deliver DevSecOps demands changed mindsets, processes and technologies. Security and risk management leaders must adhere to the collaborative, agile nature of DevOps for security testing to be seamless in development, making the “Sec” in DevSecOps transparent. — Neil MacDonald, Gartner
In an ideal world, all developers would be trained and experienced in secure coding practices from front end to back end and be skilled in preventing everything from SQL injection to authorization framework exploits. Developers would also have all the information they need to make security-related decisions early in the design phase.
If a developer is working on a type of security control they haven’t worked on before, an organization should provide the appropriate training before there is a security issue.
Once again, the reality falls short of the ideal. While CI/CD automation has given developers ownership over the deployment of their code, those developers are still hampered by a lack of visibility into relevant information that would help them make better decisions before even sitting down to write code.
The entire concept of discovering and remediating vulnerabilities earlier in the development process is already, in some ways, out of date. A better approach is to provide developers with the information and training they need to prevent potential risks from becoming vulnerabilities in the first place.
Consider a developer who is assigned to add PII fields to an internet-facing API. The authorization controls in the cloud API gateway are critical to the security of the new feature. “Shifting left and extending right” doesn’t mean that a scanning tool or security architect should detect a security risk earlier in the process — it means that a developer should have all the context needed to prevent the vulnerability before it even happens. Continuous feedback is key to up-leveling the security knowledge of developers by orders of magnitude.
Despite their rich engineering talent, blockchain entrepreneurs in the EU often struggle to find backing due to the dearth of large funds and investment expertise in the space. But a big move is taking place at the EU level today, as the European Investment Fund makes a significant investment in a blockchain and digital assets venture fund.
Fabric Ventures, a Luxembourg-based VC billed as backing the “Open Economy,” has closed $130 million for its 2021 fund, $30 million of which is coming from the European Investment Fund (EIF). Other backers of the new fund include 33 founders, partners, and executives from Ethereum, (Transfer)Wise, PayPal, Square, Google, PayU, Ledger, Raisin, Ebury, PPRO, NEAR, Felix Capital, LocalGlobe, Earlybird, Accelerator Ventures, Aztec Protocol, Aragon, Orchid, MySQL, Verifone, OpenOcean, Claret Capital, and more.
This makes it the first EIF-backed fund mandated to invest in digital assets and blockchain technology.
EIF Chief Executive Alain Godard said: “We are very pleased to be partnering with Fabric Ventures to bring to the European market this fund specializing in Blockchain technologies… This partnership seeks to address the need [in Europe] and unlock financing opportunities for entrepreneurs active in the field of blockchain technologies – a field of particular strategic importance for the EU and our competitiveness on the global stage.”
The subtext here is that the EIF wants some exposure to these new, decentralized platforms, potentially as a bulwark against the centralized platforms coming out of the US and China.
And yes, while the price of Bitcoin has yo-yo’d, there is now $100 billion invested in the decentralized finance sector and a $1.5 billion NFT market. This technology isn’t going anywhere.
Fabric hasn’t come from nowhere, either. Various Fabric Ventures team members have been involved in Orchestream, the Honeycomb Project at Sun Microsystems, Tideway, RPX, Automic, Yoyo Wallet, and Orchid.
Richard Muirhead is Managing Partner, and is joined by partners Max Mersch and Anil Hansjee. Hansjee becomes General Partner after leaving PayPal’s Venture Fund, which he led for EMEA. The team has experience in token design, market infrastructure, and community governance.
The same team started the Firestartr fund in 2012, backing Tray.io, Verse, Railsbank, Wagestream, Bitstamp, and others.
Muirhead said: “It is now well acknowledged that there is a need for a web that is user-owned and, consequently, more human-centric. There are astonishing people crafting this digital fabric for the benefit of all. We are excited to support those people with our latest fund.”
On a call with TechCrunch Muirhead added: “The thing to note here is that there’s a recognition at European Commission level, that this area is one of geopolitical significance for the EU bloc. On the one hand, you have the ‘wild west’ approach of North America, and, arguably, on the other is the surveillance state of the Chinese Communist Party.”
He said: “The European Commission, I think, believes that there is a third way for the individual, and to use this new wave of technology for the individual. Also for businesses. So we can have networks and marketplaces of individuals sharing their data for their own benefit, and businesses in supply chains sharing data for their own mutual benefits. So that’s the driving view.”
Two Democratic senators introduced a bill Thursday that would strip away the liability shield that social media platforms hold dear when those companies boost anti-vaccine conspiracies and other kinds of health misinformation.
The Health Misinformation Act, introduced by Senators Amy Klobuchar (D-MN) and Ben Ray Luján (D-NM), would create a new carve-out in Section 230 of the Communications Decency Act to hold platforms liable for algorithmically-promoted health misinformation and conspiracies. Platforms rely on Section 230 to protect them from legal liability for the vast amount of user-created content they host.
“For far too long, online platforms have not done enough to protect the health of Americans,” Klobuchar said. “These are some of the biggest, richest companies in the world and they must do more to prevent the spread of deadly vaccine misinformation.”
The bill would specifically alter Section 230’s language to revoke liability protections in the case of “health misinformation that is created or developed through the interactive computer service” if that misinformation is amplified through an algorithm. The proposed exception would only kick in during a declared national public health crisis, like the advent of Covid-19, and wouldn’t apply in normal times. The bill would task the Secretary of the Department of Health and Human Services (HHS) with defining health misinformation.
“Features that are built into technology platforms have contributed to the spread of misinformation and disinformation, with social media platforms incentivizing individuals to share content to get likes, comments, and other positive signals of engagement, which rewards engagement rather than accuracy,” the bill reads.
The bill also makes mention of the “disinformation dozen” — just twelve people, including anti-vaccine activist Robert F. Kennedy Jr. and a grab bag of other conspiracy theorists, who account for a massive swath of the anti-vax misinformation ecosystem. Many of the individuals on the list still openly spread their messaging through social media accounts on Twitter, Facebook and other platforms.
Section 230’s defenders generally view the idea of new carve-outs to the law as dangerous. Because Section 230 is such a foundational piece of the modern internet, enabling everything from Yelp and Reddit to the comment section below this post, they argue that the potential for unforeseen second order effects means the law should be left intact.
But some members of Congress — both Democrats and Republicans — see Section 230 as a valuable lever in their quest to regulate major social media companies. While the White House is pursuing its own path to craft consequences for overgrown tech companies through the Justice Department and the FTC, Biden’s office said earlier this week that the president is “reviewing” Section 230 as well. But as Trump also discovered, weakening Section 230 is a task that only Congress is positioned to accomplish — and even that is still a long shot.
While the new Democratic bill is narrowly targeted as far as proposed changes to Section 230 go, it’s also unlikely to attract bipartisan support.
Republicans are also interested in stripping away some of Big Tech’s liability protections, but they generally hold the view that platforms remove too much content rather than too little. They are also more likely to sow misinformation about the Covid-19 vaccines themselves, framing vaccination as a partisan issue. Whether the bill goes anywhere or not, it’s clear that an alarming portion of Americans have no intention of getting vaccinated — even with a much more contagious variant on the rise and colder months on the horizon.
“As COVID-19 cases rise among the unvaccinated, so has the amount of misinformation surrounding vaccines on social media,” Luján said of the proposed changes to Section 230. “Lives are at stake.”
The Biden administration tripled down on its commitment to reining in powerful tech companies Tuesday, nominating committed Big Tech critic Jonathan Kanter to lead the Justice Department’s antitrust division.
Kanter is a lawyer with a long track record of representing smaller companies like Yelp in antitrust cases against Google. He currently practices law at his own firm, which specializes in advocacy for state and federal antitrust enforcement.
“Throughout his career, Kanter has also been a leading advocate and expert in the effort to promote strong and meaningful antitrust enforcement and competition policy,” the White House press release stated. Progressives celebrated the nomination as a win, though some of Biden’s new antitrust hawks have enjoyed support from both political parties.
Jonathan Kanter's nomination to lead @TheJusticeDept’s Antitrust Division is tremendous news for workers and consumers. He’s been a leader in the fight to check consolidated corporate power and strengthen competition in our markets. https://t.co/mLQACA0c4j
— Elizabeth Warren (@SenWarren) July 20, 2021
The Justice Department already has a major antitrust suit against Google in the works. The lawsuit, filed by Trump’s own Justice Department, accuses the company of “unlawfully maintaining monopolies” through anti-competitive practices in its search and search advertising businesses. If successfully confirmed, Kanter would be positioned to steer the DOJ’s big case against Google.
In a 2016 NYT op-ed, Kanter argued that Google is notorious for relying on an anti-competitive “playbook” to maintain its market dominance. Kanter pointed to Google’s long history of releasing free ad-supported products and eventually restricting competition through “discriminatory and exclusionary practices” in a given corner of the market.
Kanter is just the latest high-profile Big Tech critic that’s been elevated to a major regulatory role under Biden. Last month, Biden named fierce Amazon critic Lina Khan as FTC chair upon her confirmation to the agency. In March, Biden named another noted Big Tech critic, Columbia law professor Tim Wu, to the National Economic Council as a special assistant for tech and competition policy.
All signs point to the Biden White House gearing up for a major federal fight with Big Tech. Congress is working on a set of Big Tech bills, but in lieu of — or in tandem with — legislative reform, the White House can flex its own regulatory muscle through the FTC and DOJ.
In new comments to MSNBC, the White House confirmed that it is also “reviewing” Section 230 of the Communications Decency Act, a potent snippet of law that protects platforms from liability for user-generated content.
PayPal-owned payments app Venmo will no longer offer a public, global feed of users’ transactions, as part of a significant redesign focused on expanding the app’s privacy controls and better highlighting some of Venmo’s newer features. The company says it will instead only show users their “friends feed” — meaning, the app’s social feed where you can see just your friends’ transactions.
Venmo has struggled over the years to balance its desire to add a social element to its peer-to-peer payments network with the need to offer users privacy.
A few years ago, the company was forced to settle a complaint with the FTC over its handling of privacy disclosures in the app along with other issues related to the security and privacy of user transactions. One of the concerns at the time was a setting that made all transactions public by default — a feature the FTC said wasn’t being properly explained to customers. As part of the settlement, Venmo had to inform both new and existing users how to limit the visibility of their transactions, among other changes.
However, privacy issues have continued to follow Venmo over the years. More recently, BuzzFeed News was able to track down President Biden’s secret Venmo account because of the lack of privacy around Venmo friend lists, for example. Afterwards, the company rolled out friend-list privacy controls to address the issue.
In the newly updated app, Venmo will still highlight this friend-list privacy setting so users can choose whether or not they want their profile to appear on other people’s friend lists. Users will also still be able to add or remove contacts from their friend list at any time, block people, and set their transaction privacy to public, private or friends-only, either as they post or retroactively. It’s unclear what advantage posting publicly offers, though, as the global, public feed is gone. Instead, public transactions will be visible to a user’s nonfriends only when someone visits their profile directly.
In addition to the privacy changes, Venmo’s redesign aims to make it easier for people to discover the app’s new features, the company says.
Now, a new bottom navigation option will allow users to toggle between their social feed, Venmo’s products like the Venmo Card and crypto, and their personal profile. The newly elevated “Cards” section will allow Venmo Credit and Debit cardholders to manage their cards and access their rewards and offers, as before. Meanwhile, the “Crypto” tab will let users learn and explore the world of crypto, view real-time trends and buy, sell or hold different types of cryptocurrencies.
Venmo first added support for crypto earlier this year, following parent company PayPal’s move to do the same, and now offers access to Bitcoin, Ethereum, Litecoin and Bitcoin Cash. Before, the option appeared as a small button next to the “Pay or Request” button at the bottom of the screen, which contributed to Venmo’s cluttered feel.
The updated app will also include support for new payment types and expanded purchase protections, which Venmo announced last month, and said would arrive on July 20. Customers will now be able to indicate if their purchase is for “goods and services” when they transact with a seller, which will make the transactions eligible for Venmo’s purchase protection plan — even if the seller doesn’t have a proper “business” account.
Because this now charges sellers a fee of 1.9% plus 10 cents per transaction, there has been some backlash from users who either misunderstood the changes or simply didn’t like them. But the move could help boost Venmo’s revenue.
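For reference, the disclosed fee works out as follows; this is a simple sketch of the 1.9%-plus-10-cents calculation, and the rounding convention is an assumption on my part:

```python
def seller_fee_cents(amount_cents):
    # 1.9% of the sale amount plus a flat 10 cents,
    # rounded to the nearest cent (assumed convention)
    return round(amount_cents * 0.019) + 10

print(seller_fee_cents(10_000))  # 200 -> a $2.00 fee on a $100 sale
print(seller_fee_cents(2_000))   # 48  -> a $0.48 fee on a $20 sale
```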
PayPal said in February that Venmo grew users 32% over 2020 to reach 70 million active accounts, and expects the app to generate nearly $900 million in revenue this year — likely in part thanks to this and other new initiatives, like its crypto transaction fees.
Beyond the more functional changes and the privacy updates, Venmo’s redesign also modernizes the look-and-feel of the app itself, which had become a little dated and overly busy. As Venmo had expanded its array of services, the hamburger (three line) menu in the top right of the old version of the app had turned into a long list of options and settings. Now that’s gone. The app uses new iconography, an updated font, and lots of white space to make it feel fresh and clean.
The app’s changes also somewhat de-emphasize the importance of the social feed itself. Although it may still default to that tab, other options now have equal footing with tabs of their own, instead of being hidden away in a menu or in a smaller button.
Venmo says the redesigned Venmo app will begin to roll out today to select customers and will be available to all users across the U.S. over the next few weeks.
Carbon tracking is very much the new hot thing in tech, and we’ve previously covered more generalist startups doing this at scale for companies, such as Plan A Earth out of Berlin.
But there’s clearly an opportunity to get deep into a vertical sector and tailor solutions to it.
That’s the plan of Vaayu, a carbon-tracking platform aimed specifically at retailers. It has now raised $1.57 million in pre-seed funding in a round led by CapitalT. Several angels also took part, including Atomico’s Angel Program, Planet Positive LP, Saarbrücker 21, Expedite Ventures, and NP-Hard Ventures.
Carbon tracking for the retail fashion industry, in particular, is urgently needed. Unfortunately, the fashion industry remains responsible for 10% of annual global carbon emissions, which adds up to more than all international flights and maritime shipping combined.
Vaayu says it integrates with various point-of-sale systems, such as Shopify and Webflow. It then pulls in data on logistics, operations, and packaging to monitor, measure, and reduce their carbon emissions. Normally, retailers calculate emissions once a year, which is obviously far less accurate.
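Below is a hypothetical sketch of the kind of per-order roll-up such a platform might compute from point-of-sale data. The emission factors are invented for illustration, not Vaayu's:

```python
# Placeholder emission factors (kg CO2e); real platforms derive these
# from logistics, operations and packaging data per retailer.
EMISSION_FACTORS = {
    "shipping_per_km_kg": 0.0002,     # per km, per parcel
    "packaging_per_parcel_kg": 0.05,  # per parcel
}

def order_footprint_kg(shipping_km, parcels):
    """Rough per-order footprint from shipping distance and parcel count."""
    shipping = shipping_km * parcels * EMISSION_FACTORS["shipping_per_km_kg"]
    packaging = parcels * EMISSION_FACTORS["packaging_per_parcel_kg"]
    return round(shipping + packaging, 3)

print(order_footprint_kg(shipping_km=300, parcels=2))  # 0.22
```

Computing this continuously per transaction, rather than once a year, is what makes the measurement accurate enough to drive specific reductions.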
Vaayu was founded in 2020 by Namrata Sandhu (CEO), former head of sustainability at fashion retailer Zalando, along with Anita Daminov (CPO) and Luca Schmid (CTO). Vaayu currently has 25 global brand customers, including Missoma, Armed Angels, and Organic Basics.
Commenting on the fundraise, Namrata Sandhu, CEO, Vaayu, said: “We have only nine short years left to achieve the UN’s goal of reducing carbon emissions by 50% by 2030 and as the third-largest contributor to global emissions, retailers need to take action — and fast. Vaayu is here to help retailers measure, monitor, and reduce their carbon footprint at scale across the entire supply chain — something that I know from my own experience can be complex and expensive.
Speaking over a call, Sandhu told me: “Putting the focus on retail basically allows us to automate the calculation, which means in three clicks you can get your carbon footprint right away. That then allows us to get really accurate data, and with that, we can basically do reductions specific to the business but using software, rather than any kind of manual intervention or a kind of ‘intermediate’ state where you need to put together an Excel sheet. Because we focus on retail we can automate the entire process and also automate the reductions.”
“We are delighted to be backed by female-led CapitalT who understood us and our vision right from the start. We look forward to developing Vaayu further in the coming months so we can reach as many retailers as possible and help put the brakes on the impending climate crisis,” she added.
Janneke Niessen, founding partner, CapitalT commented: “We are very excited to join Vaayu on their mission to reduce carbon emission for retailers worldwide. The Vaayu product is very scalable and its quick and easy implementation allows for fast adoption. We are confident that with this experienced team, Vaayu will soon be one of the fastest-growing climate tech companies in Europe and the world.”
Bioengineering may soon provide compelling, low-carbon alternatives in industries where even the best methods produce significant emissions. Utilizing natural and engineered biological processes has led to low-carbon textiles from AlgiKnit, cell-cultured premium meats from Orbillion and fuels captured from waste emissions via LanzaTech — and leaders from those companies will be joining us onstage for the Extreme Tech Challenge Global Finals on July 22.
We’re co-hosting the event, with panels like this one all day and a pitch-off that will feature a number of innovative startups with a sustainability angle.
I’ll be moderating a panel on using bioengineering to create change directly in industries with large carbon footprints: textiles, meat production and manufacturing.
AlgiKnit is a startup that is sourcing raw material for fabric from kelp, which is an eco-friendly alternative to textile crop monocultures and artificial materials like acrylic. CEO Aaron Nesser will speak to the challenge of breaking into this established industry and overcoming preconceived notions of what an algae-derived fabric might be like (spoiler: it’s like any other fabric).
Orbillion Bio is one of the new crop of alternative protein companies offering cell-cultured meats (just don’t call them “lab” or “vat” grown) to offset the incredibly wasteful livestock industry. But it’s more than just growing a steak — there are regulatory and market barriers aplenty that CEO Patricia Bubner can speak to, as well as the technical challenge.
LanzaTech works with factories to capture emissions as they’re emitted, collecting the useful particles that would otherwise clutter the atmosphere and repurposing them in the form of premium fuels. This is a delicate and complex process that needs to be a partnership, not just a retrofitting operation, so CEO Jennifer Holmgren will speak to their approach to convincing the industry to work with them at the ground floor.
It should be a very interesting conversation, so tune in on July 22 to hear these and other industry leaders focused on sustainability discuss how innovation at the startup level can contribute to the fight against climate change. Plus it’s free!
The European consumer protection umbrella group, the Beuc, said today that together with eight of its member organizations it has filed a complaint with the European Commission and with the European network of consumer authorities.
“The complaint is first due to the persistent, recurrent and intrusive notifications pushing users to accept WhatsApp’s policy updates,” it wrote in a press release.
“The content of these notifications, their nature, timing and recurrence put an undue pressure on users and impair their freedom of choice. As such, they are a breach of the EU Directive on Unfair Commercial Practices.”
After earlier telling users that notifications about the need to accept the new policy would become persistent, interfering with their ability to use the service, WhatsApp later rowed back from its own draconian deadline.
However, the app continues to bug users to accept the update — with no option to decline it (users can close the policy prompt but are unable to reject the new terms or stop the app from repeatedly popping up a screen asking them to accept the update).
“In addition, the complaint highlights the opacity of the new terms and the fact that WhatsApp has failed to explain in plain and intelligible language the nature of the changes,” the Beuc went on. “It is basically impossible for consumers to get a clear understanding of what consequences WhatsApp’s changes entail for their privacy, particularly in relation to the transfer of their personal data to Facebook and other third parties. This ambiguity amounts to a breach of EU consumer law which obliges companies to use clear and transparent contract terms and commercial communications.”
The organization pointed out that WhatsApp’s policy updates remain under scrutiny by privacy regulators in Europe — which it argues is another factor that makes Facebook’s aggressive attempts to push the policy on users highly inappropriate.
And while this consumer-law focused complaint is separate to the privacy issues the Beuc also flags — which are being investigated by EU data protection authorities (DPAs) — it has called on those regulators to speed up their investigations, adding: “We urge the European network of consumer authorities and the network of data protection authorities to work in close cooperation on these issues.”
The Beuc has produced a report setting out its concerns about the WhatsApp ToS change in more detail — where it hits out at the “opacity” of the new policies, further asserting:
“WhatsApp remains very vague about the sections it has removed and the ones it has added. It is up to users to seek out this information by themselves. Ultimately, it is almost impossible for users to clearly understand what is new and what has been amended. The opacity of the new policies is in breach of Article 5 of the UCTD [Unfair Contract Terms Directive] and is also a misleading and unfair practice prohibited under Article 5 and 6 of the UCPD [Unfair Commercial Practices Directive].”
Reached for comment on the consumer complaint, a WhatsApp spokesperson told us:
“Beuc’s action is based on a misunderstanding of the purpose and effect of the update to our terms of service. Our recent update explains the options people have to message a business on WhatsApp and provides further transparency about how we collect and use data. The update does not expand our ability to share data with Facebook, and does not impact the privacy of your messages with friends or family, wherever they are in the world. We would welcome an opportunity to explain the update to Beuc and to clarify what it means for people.”
The Commission was also contacted for comment on the Beuc’s complaint — we’ll update this report if we get a response.
The complaint is just the latest pushback in Europe over the controversial terms change by Facebook-owned WhatsApp — which triggered a privacy warning from Italy back in January, followed by an urgency procedure in Germany in May when Hamburg’s DPA banned the company from processing additional WhatsApp user data.
Earlier this year, though, Facebook’s lead data regulator in the EU, Ireland’s Data Protection Commission, appeared to accept Facebook’s reassurances that the ToS changes do not affect users in the region.
German DPAs were less happy, though. And Hamburg invoked emergency powers allowed for in the General Data Protection Regulation (GDPR) in a bid to circumvent a mechanism in the regulation that (otherwise) funnels cross-border complaints and concerns via a lead regulator — typically where a data controller has their regional base (in Facebook/WhatsApp’s case that’s Ireland).
Such emergency procedures are time-limited to three months. But the European Data Protection Board (EDPB) confirmed today that its plenary meeting will discuss the Hamburg DPA’s request for it to make an urgent binding decision — which could see the Hamburg DPA’s intervention set on a more lasting footing, depending upon what the EDPB decides.
In the meantime, calls for Europe’s regulators to work together to better tackle the challenges posed by platform power are growing, with a number of regional competition authorities and privacy regulators actively taking steps to dial up their joint working — in a bid to ensure that expertise across distinct areas of law doesn’t stay siloed and, thereby, risk disjointed enforcement, with conflicting and contradictory outcomes for Internet users.
There seems to be a growing understanding on both sides of the Atlantic of the need for a joined-up approach to regulating platform power, and for ensuring powerful platforms don’t simply get let off the hook.
The Biden administration just introduced a sweeping, ambitious plan to forcibly inject competition into some consolidated sectors of the American economy — the tech sector prominent among them — through executive action.
“Today President Biden is taking decisive action to reduce the trend of corporate consolidation, increase competition, and deliver concrete benefits to America’s consumers, workers, farmers, and small businesses,” a new White House fact sheet on the forthcoming order states.
The order, which Biden will sign Friday, initiates a comprehensive “whole-of-government” approach that loops in more than a dozen different agencies at the federal level to regulate monopolies, protect consumers and curtail bad behavior from some of the world’s biggest corporations.
In the fact sheet, the White House lays out its plans to take regulation of big business into its own hands at the federal level. As far as tech is concerned, that comes largely through emboldening the FTC and the Justice Department — two federal agencies with antitrust enforcement powers.
Most notably for Big Tech, which is already bracing for regulatory existential threats, the White House explicitly asserts here that those agencies have legal cover to “challenge prior bad mergers that past Administrations did not previously challenge” — i.e., unwinding acquisitions that built a handful of tech companies into the behemoths they are today. The order calls on antitrust agencies to enforce antitrust laws “vigorously.”
Federal scrutiny will prioritize “dominant internet platforms, with particular attention to the acquisition of nascent competitors, serial mergers, the accumulation of data, competition by ‘free’ products, and the effect on user privacy.” Facebook, Google and Amazon are particularly on notice here, though Apple isn’t likely to escape federal attention either.
“Over the past 10 years, the largest tech platforms have acquired hundreds of companies — including alleged ‘killer acquisitions’ meant to shut down a potential competitive threat,” the White House wrote in the fact sheet. “Too often, federal agencies have not blocked, conditioned, or, in some cases, meaningfully examined these acquisitions.”
The biggest tech companies have regularly defended their longstanding strategy of buying up the competition by arguing that because those acquisitions went through without friction at the time, they shouldn’t be viewed as illegal in hindsight. In no uncertain terms, the new executive order makes it clear that the Biden administration isn’t having any of it.
The White House also specifically singles out internet service providers for scrutiny, ordering the FCC to prioritize consumer choice and institute broadband “nutrition labels” that clearly state speed caps and hidden fees. The FCC began working on the labels in the Obama administration but the work was scrapped after Trump took office.
The order also directly calls on the FCC to restore net neutrality rules, which were stripped in 2017 to the widespread horror of open internet advocates and most of the tech industry outside of the service providers that stood to benefit.
The White House will also tell the FTC to create new privacy rules meant to guard consumers against surveillance and the “accumulation of extraordinary amounts of sensitive personal information,” which free services like Facebook, YouTube and others have leveraged to build their vast empires. The White House also taps the FTC to create rules that protect smaller businesses from being preempted by large platforms, which in many cases abuse their market dominance with a different sort of data-based surveillance to out-compete up-and-coming competitors.
Finally, the executive order encourages the FTC to put right-to-repair rules in place that would free consumers from constraints that discourage DIY and third-party repairs. A new White House Competition Council under the director of the National Economic Council will coordinate the federal execution of the proposals laid out in the new order.
The antitrust effort from the executive branch mirrors parallel actions in the FTC and Congress. In the FTC, Biden has installed a fearsome antitrust crusader in Lina Khan, a young legal scholar and fierce Amazon critic who proposes a philosophical overhaul to the way the federal government defines monopolies. Khan now leads the FTC as its chair.
In Congress, a bipartisan flurry of bills intended to rein in the tech industry is slowly wending its way toward becoming law, though plenty of hurdles remain. Last month, the House Judiciary Committee debated the six bills, which were crafted separately to help them survive opposing lobbying pushes from the tech industry. These legislative efforts could modernize antitrust laws, which have failed to keep pace with the modern realities of giant, internet-based businesses.
“Competition policy needs new energy and approaches so that we can address America’s monopoly problem,” Sen. Amy Klobuchar, a prominent tech antitrust hawk in Congress, said of the executive order. “That means legislation to update our antitrust laws, but it also means reimagining what the federal government can do to promote competition under our current laws.”
Citing the acceleration of corporate consolidation in recent decades, the White House argues that a handful of large corporations dominates across industries, including healthcare, agriculture and tech, and that consumers, workers and smaller competitors pay the price for their outsized success. The administration will focus antitrust enforcement on those corners of the market, as well as evaluating the labor market and worker protections on the whole.
“Inadequate competition holds back economic growth and innovation … Economists find that as competition declines, productivity growth slows, business investment and innovation decline, and income, wealth, and racial inequality widen,” the White House wrote.
The EU, for all its lethargy, faults and fetishization of bureaucracy, is, ultimately, a good idea. It might be 64 years since the formation of the European Common Market, but it is 29 years since the EU’s formation in the Maastricht Treaty, and this international entity is definitely still acting like an indecisive millennial, happy to flit around tech startup policy. It’s long past time for this digital nomad to commit to one ‘location’ on how it treats startups.
If there’s one thing we can all agree on, this is a unique moment in time. The COVID-19 pandemic has accelerated the acceptance of technology globally, especially in Europe. Thankfully, tech companies and startups have proven to be more resilient than much of the established economy. As a result, the EU’s political leaders have started to look towards the innovation economy for a more sustainable future in Europe.
But this moment has not come soon enough.
The European tech scene still lags behind its U.S. and Asian counterparts in the number of startups created, talent in the tech sector, financing rounds, and IPOs and exits. It doesn’t help, of course, that the European market is so fragmented, and will be for a long time to come.
But there is absolutely no excuse when it comes to the EU’s obligations to reform startup legislation, taxation, and the development of talent, to “level the playing field” against the US and Asian tech giants.
But, to put it bluntly: The EU can’t seem to get its shit together around startups.
Consider this litany of proposals.
Starting as far back as 2016, we had the Start-Up and Scale-Up Initiative. We even had the Scale-Up Manifesto in the same year. Then there were the Cluj Recommendations (2019), and the Not Optional campaign for options reform in 2020.
Let’s face it: The community of VCs, founders and startup associations in Europe has been saying mostly the same things to national and European leaders for years.
Finally, this year, we got something approaching a summation of all these efforts.
Portugal, which has the European presidency for the first half of this year, took the bull by the horns and created something approaching a final draft of what the EU needs.
After, again, intense consultations with European ecosystem stakeholders, it identified eight best practices to level the playing field, covering the gamut of issues: fast startup creation, talent, stock options, regulatory innovation and access to finance. You name it, it covered it.
These were then put into the Startup Nations Standard and presented to the European Council at Digital Day on March 19th, together with the European Commission’s DG CNECT and its Commissioner Thierry Breton. I wrote about this at the time.
Would the EU finally get a grip, and sign up for these evidently workable proposals?
It seemed, at least, that we might be getting somewhere. Some 25 member states signed the declaration that day, and perhaps for the first time, the political consensus seemed to be forming around this policy.
Indeed, a body set up to shepherd the initiative (the European Startup Nations Alliance) was even announced by Portuguese Prime Minister António Costa which, he said, would be tasked with monitoring, developing and optimizing the standards, collecting data from the member states on their success and failure, and reporting on its findings in a bi-annual conference aligned with the changing presidency of the European Council.
It would seem we could pop open a chilled bottle of DOC Bairrada Espumante and celebrate that Europe might finally start implementing at least the basics from these suggested policies.
But no. With the pandemic still raging, it seemed the EU’s leaders still had plenty of time on their hands to ponder these subjects.
Thus it was that the Scaleup Europe initiative emerged from the mind of Emmanuel Macron, assembling a select group of 150+ of Europe’s leading tech founders, investors, researchers, corporate CEOs, and government officials to do some more pondering about startups. And then there was the Global Powerhouse Initiative of DG Research & Innovations Commissioner Mariya Gabriel.
Yes, ladies and gentlemen. We were about to go through this process all over again, with the EU acting as if it had the memory span of a giant goldfish.
Now, I’m not arguing that all these collective actions are a bad thing. But, by golly, European startups need more decisive action than this.
As things stand, instead of implementing the very reasonable Portuguese proposals, we will now have to wait for the EU’s wheels to slowly turn until the French presidency comes around next year.
That said, with any luck, a body to oversee the implementation of tech startup policy that is mandated by the European community, composed of organisations like La French Tech, Startup Portugal and Startup Estonia, might finally seem within reach.
But to anyone from the outside, it feels again as if the gnashing of EU policy teeth will have to go on yet longer. With the French calling for a ‘La French Tech for Europe’ and the Portuguese having already launched ESNA, the efforts seem far from coordinated.
In the final analysis, tech startup founders and investors could not care less where this new body comes from or which country launches it.
After years of contributions, years of consultations, the time for action is now.
It’s time for EU member states to agree, and move forward, helping other member states catch up based on established best practices.
It’s time for the long-awaited European Tech Giants to blossom, take on the US-born Big Tech Giants, and for Europe to finally punch its weight.
Microsoft-owned LinkedIn has committed to doing more to quickly purge illegal hate speech from its platform in the European Union by formally signing up to a self-regulatory initiative that seeks to tackle the issue through a voluntary Code of Conduct.
In statement today, the European Commission announced that the professional social network has joined the EU’s Code of Conduct on Countering Illegal Hate Speech Online, with justice commissioner, Didier Reynders, welcoming LinkedIn’s (albeit tardy) participation, and adding in a statement that the code “is and will remain an important tool in the fight against hate speech, including within the framework established by digital services legislation”.
“I invite more businesses to join, so that the online world is free from hate,” Reynders added.
While LinkedIn’s name wasn’t formally associated with the voluntary Code before now, it said it has “supported” the effort via parent company Microsoft, which was already signed up.
In a statement on its decision to formally join now, it also said:
“LinkedIn is a place for professional conversations where people come to connect, learn and find new opportunities. Given the current economic climate and the increased reliance jobseekers and professionals everywhere are placing on LinkedIn, our responsibility is to help create safe experiences for our members. We couldn’t be clearer that hate speech is not tolerated on our platform. LinkedIn is a strong part of our members’ professional identities for the entirety of their career — it can be seen by their employer, colleagues and potential business partners.”
In the EU ‘illegal hate speech’ can mean content that espouses racist or xenophobic views, or which seeks to incite violence or hatred against groups of people because of their race, skin color, religion or ethnic origin etc.
A number of Member States have national laws on the issue — and some have passed their own legislation specifically targeted at the digital sphere. So the EU Code is supplementary to any actual hate speech legislation. It is also non-legally binding.
The initiative kicked off back in 2016 — when a handful of tech giants (Facebook, Twitter, YouTube and Microsoft) agreed to accelerate takedowns of illegal speech (or well, attach their brand names to the PR opportunity associated with saying they would).
Since the Code became operational, a handful of other tech platforms have joined — with video sharing platform TikTok signing up last October, for example.
But plenty of digital services (notably messaging platforms) still aren’t participating. Hence the Commission’s call for more digital services companies to get on board.
At the same time, the EU is in the process of firming up hard rules in the area of illegal content.
Last year the Commission proposed broad updates (aka the Digital Services Act) to existing ecommerce rules to set operational ground rules that it said are intended to bring online laws in line with offline legal requirements — in areas such as illegal content, and indeed illegal goods. So, in the coming years, the bloc will get a legal framework that tackles — at least at a high level — the hate speech issue, not merely a voluntary Code.
The EU also recently adopted legislation on terrorist content takedowns (this April) — which is set to start applying to online platforms from next year.
But it’s interesting to note that, on the perhaps more controversial issue of hate speech (which can deeply intersect with freedom of expression), the Commission wants to maintain a self-regulatory channel alongside incoming legislation — as Reynders’ remarks underline.
Brussels evidently sees value in having a mixture of ‘carrots and sticks’ where hot button digital regulation issues are concerned. Especially in the controversial ‘danger zone’ of speech regulation.
So, while the DSA is set to bake in standardized ‘notice and response’ procedures to help digital players swiftly respond to illegal content, by keeping the hate speech Code around it means there’s a parallel conduit where key platforms could be encouraged by the Commission to commit to going further than the letter of the law (and thereby enable lawmakers to sidestep any controversy if they were to try to push more expansive speech moderation measures into legislation).
The EU has — for several years — had a voluntary Code of Practice on Online Disinformation too. (And a spokeswoman for LinkedIn confirmed it has been signed up to that since its inception, also through its parent company Microsoft.)
And while lawmakers recently announced a plan to beef that Code up — to make it “more binding”, as they oxymoronically put it — it certainly isn’t planning to legislate on that (even fuzzier) speech issue.
In further public remarks today on the hate speech Code, the Commission said that a fifth monitoring exercise in June 2020 showed that on average companies reviewed 90% of reported content within 24 hours and removed 71% of content that was considered to be illegal hate speech.
It added that it welcomed the results — but also called for signatories to redouble their efforts, especially around providing feedback to users and in how they approach transparency around reporting and removals.
The Commission has also repeatedly called on platforms signed up to the disinformation Code to do more to tackle the tsunami of ‘fake news’ being spread on their platforms, including — on the public health front — what it last year dubbed a coronavirus infodemic.
The COVID-19 crisis has undoubtedly contributed to concentrating lawmakers’ minds on the complex issue of how to effectively regulate the digital sphere and likely accelerated a number of EU efforts.
As we become more and more aware of the kind of impact we are having on the planet we call home, just about everything is having its CO2 impact measured. Who knew, until recently, that streaming Netflix might have a measurable impact on the environment? And given that vast swathes of the internet are populated by websites, as well as streaming services, they too must have some sort of impact.
It transpires that a new service has worked out how to gauge that, and it has now raised venture capital to scale.
Ryte raised €8.5 million ($10M) in a previously undisclosed round led by Bayern Kapital out of Munich and Octopus Investments out of London earlier this year for its Website User Experience Platform.
It has now launched the ‘Ryte Website Carbon KPI’, which claims to be able to help make 5% of all websites carbon neutral by 2023.
Ryte says it worked with data scientists and environmental experts to develop the ability to accurately measure the carbon impact of clients’ websites. According to carbon transition think tank The Shift Project, the carbon footprint of our gadgets, the internet and the systems supporting them accounts for about 3.7% of global greenhouse emissions. And that figure is rising rapidly as the world digitizes, especially post-pandemic.
Ryte has now engaged its data scientist, Katharina Meraner, who has a PhD in climate science and global warming, along with input from ClimatePartner, to launch this new service.
Andy Bruckschloegl, CEO of Ryte said: “There are currently 189 million active websites. Our goal is to make 5% of all active websites, or 9.5 million websites, climate neutral by the end of 2023 with the help of our platform, strong partners, social media activities, and much more. Time is ticking and making websites carbon neutral is really easy compared to other industries and processes.”
Ryte says it is also collaborating with a reforestation project in San Jose, Nicaragua, to allow its customers to offset their remaining emissions through the purchase of climate certificates.
Using a proprietary algorithm, Ryte says it measures the code of the entire website, the average page size and monthly traffic by channel, then calculates the amount of CO2 the site emits.
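Ryte’s algorithm is proprietary, but publicly documented website carbon calculators follow a similar recipe: estimate the data transferred per month, convert that to energy, then multiply by a grid carbon intensity figure. The sketch below illustrates that general approach only — the constants (energy per gigabyte, grid intensity) and the function itself are illustrative assumptions, not Ryte’s actual model.

```python
# Illustrative sketch of a per-site carbon estimate, loosely modeled on
# public website-carbon methodologies. NOT Ryte's proprietary algorithm;
# both constants below are assumed, commonly cited ballpark figures.

KWH_PER_GB = 0.81          # assumed energy used per GB of data transferred
GRID_G_CO2_PER_KWH = 442   # assumed average grid carbon intensity (g CO2/kWh)

def monthly_co2_grams(avg_page_bytes: float, monthly_page_views: int) -> float:
    """Estimate a site's monthly CO2 emissions in grams from its
    average page weight and traffic volume."""
    gb_transferred = avg_page_bytes * monthly_page_views / 1e9
    return gb_transferred * KWH_PER_GB * GRID_G_CO2_PER_KWH

# Example: a 2 MB average page served 100,000 times a month
print(monthly_co2_grams(2_000_000, 100_000))  # ≈ 71,604 g, i.e. ~72 kg CO2
```

A real product would refine this with per-channel traffic splits, caching behavior and the hosting provider’s actual energy mix, which is presumably where Ryte’s platform integration adds value.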
Admittedly, there are similar services, but these are ad hoc and not connected to a platform. A simple Google search brings up sites like Websitecarbon and Ecosistant, as well as academic papers. But as far as I can tell, no startup has built this kind of service into its platform until now.
“Teaming up with Ryte will help raise awareness on how information technology contributes to climate change – while at the same time providing tools to make a difference. Ryte’s industry-leading carbon calculator enables thousands of website owners to understand their carbon footprint, to offset unavoidable carbon emissions and thus lay a basis for a comprehensive climate action strategy,” commented Tristan A. Foerster, Co-CEO ClimatePartner.
Update: Google has now confirmed the delay, writing in a blog post that its engagement with UK regulators over the so-called “Privacy Sandbox” means support for tracking cookies won’t start being phased out in Chrome until the second half of 2023.
Our original report follows below…
Adtech giant Google appears to be leaning toward postponing a long-planned deprecation of third-party tracking cookies.
The plan dates back to 2019, when it announced a long-term initiative to make it harder for online marketers and advertisers to track web users, including by deprecating third-party cookies in Chrome.
Then in January 2020 it said it would make the switch within two years. Which would mean by 2022.
Google confirmed to TechCrunch that it has a Privacy Sandbox announcement incoming today — set for 4pm BST/5pm CET — after we contacted it to ask for confirmation of information we’d heard, via our own sources.
We’ve been told Google’s new official timeline for implementation will be 2023.
However a spokesman for the tech giant danced around providing a direct confirmation — saying that “an update” is incoming shortly.
“We do have an announcement today that will shed some light on Privacy Sandbox updates,” the spokesman also told us.
He had responded to our initial email — which had asked Google to confirm that it will postpone the implementation of Privacy Sandbox to 2023; and for any statement on the delay — with an affirmation (“yep”) so, well, a delay looks likely. But we’ll see how exactly Google will spin that in a few minutes when it publishes the incoming Privacy Sandbox announcement.
Google has previously said it would deprecate support for third-party cookies by 2022 — which naturally implies that the wider Privacy Sandbox stack of related adtech would also need to be in place by then.
Earlier this year it slightly hedged the 2022 timeline, saying in January that any changes would not be made before 2022.
The issue for Google is that regulatory scrutiny of its plan has stepped up — following antitrust complaints from the adtech industry which faces huge changes to how it can track and target Internet users.
In Europe, the UK’s Competition and Markets Authority has been working with the UK’s Information Commissioner’s Office to understand the competition and privacy implications of Google’s planned move. And, earlier this month, the CMA issued a notification of intention to accept proposed commitments from Google that would enable the regulator to block any deprecation of cookies if it’s not happy it can be done in a way that’s good for competition and privacy.
At the time we asked Google how the CMA’s involvement might impact the Privacy Sandbox timeline but the company declined to comment.
Increased regulatory oversight of Big Tech will have plenty of ramifications — most obviously it means the end of any chance for giants like Google to ‘move fast and break things’.
While the cryptocurrency market’s most recent hype wave seems to be dying down after a spectacular rise, Andreessen Horowitz’s crypto arm is reaffirming its commitment to startups building blockchain projects with a hulking new $2.2 billion crypto fund.
It’s the firm’s largest vertical-specific fund ever — by quite a bit.
Andreessen Horowitz’s 2018 crypto fund ushered in $300 million of LP commitments and its second fund, which it closed in April of last year, clocked in at $515 million. The new multi-billion dollar fund not only showcases how institutional backers are growing more comfortable with cryptocurrencies, but also how Andreessen Horowitz’s assets under management have been quickly swelling to compete with other deep-pocketed firms including the ever-prolific Tiger Global.
With this announcement, Andreessen Horowitz now has some $18.8 billion in assets under management.
LPs are likely far less wary of taking a chance on crypto after Andreessen Horowitz’s stake in Coinbase was worth some $11.2 billion at the time of the direct listing’s first trades, though the stock has slid back some 30% in recent months as the crypto market has shrunk.
Some of the firm’s other major crypto bets include NBA Top Shot maker Dapper Labs which hit a $7.5 billion valuation this spring. Blockchain infrastructure startup Dfinity raised at a $9.5 billion valuation this past September. Last year, the firm led the Series A of Uniswap, which is poised to be a major player in the Ethereum ecosystem. In addition to equity investments, a16z has also made major bets on the currencies themselves.
A report from Newcomer last month said a16z was targeting a $2 billion crypto fund and that it had already unloaded some of its crypto holdings before most cryptocurrencies took a major dive in recent weeks.
Crypto Fund III will continue to be managed by GPs Chris Dixon and Katie Haun, but the firm has also begun spinning out a more robust management team around the crypto vertical.
Anthony Albanese, who joined the firm last year from the NYSE, has been appointed COO of the division. Tomicah Tillemann, who previously served as a senior advisor to now-President Joe Biden and as chairman of the Global Blockchain Business Council, will be a16z Crypto’s Global Head of Policy. Rachael Horwitz is also coming aboard as an operating partner leading marketing and communications for a16z crypto, joining from Google after a stint as Coinbase’s first VP of communications.
A couple of other folks are also coming on in an advisory capacity, including entrepreneur Alex Price, and a couple of others who will likely be a tad helpful in regulatory maneuverings: Bill Hinman, formerly of the SEC, and Brent McIntosh, who recently served as Under Secretary of the Treasury for International Affairs.
EU antitrust authorities are finally taking a broad and deep look into Google’s adtech stack and role in the online ad market — confirming today that they’ve opened a formal investigation.
Google has already been subject to three major EU antitrust enforcements over the past five years — against Google Shopping (2017), Android (2018) and AdSense (2019). But the European Commission has, until now, avoided officially wading into the broader issue of its role in the adtech supply chain. (The AdSense investigation focused on Google’s search ad brokering business, though Google claims the latest probe represents the next stage of that 2019 enquiry, rather than stemming from a new complaint.)
The Commission said that the new Google antitrust investigation will assess whether it has violated EU competition rules by “favouring its own online display advertising technology services in the so called ‘ad tech’ supply chain, to the detriment of competing providers of advertising technology services, advertisers and online publishers”.
Display advertising spending in the EU in 2019 was estimated to be approximately €20BN, per the Commission.
“The formal investigation will notably examine whether Google is distorting competition by restricting access by third parties to user data for advertising purposes on websites and apps, while reserving such data for its own use,” it added in a press release.
Earlier this month, France’s competition watchdog fined Google $268M in a case related to self-preferencing within the adtech market — which the watchdog found constituted an abuse by Google of a dominant position for ad servers for website publishers and mobile apps.
In that instance Google sought a settlement — proposing a number of binding interoperability agreements which the watchdog accepted. So it remains to be seen whether the tech giant may seek to push for a similar outcome at the EU level.
There is one cautionary signal in that respect in the Commission’s press release which makes a point of flagging up EU data protection rules — and highlighting the need to take into account the protection of “user privacy”.
That’s an interesting side-note for the EU’s antitrust division to include, given some of the criticism that France’s Google adtech settlement has attracted — for risking cementing abusive user exploitation (in the form of adtech privacy violations) into the sought-after rebalancing of the online advertising market.
Or as Cory Doctorow neatly explains it in this Twitter thread: “The last thing we want is competition in practices that harm the public.”
Aka: unless competition authorities wise up to the data abuses being perpetrated by dominant tech platforms — for example, by engaging in close joint working with privacy regulators (in the EU this is at least possible, since there’s regulation in both areas) — there’s a very real risk that antitrust enforcement against Big (ad)Tech could simply supercharge the user-hostile privacy abuses that surveillance giants have only been able to get away with because of their market muscle.
So, tl;dr, ill-thought-through antitrust enforcement actually risks further eroding web users’ rights… and that would indeed be a terrible outcome. (Unless you’re Google; then it would represent successfully playing one regulator off against another at the expense of users.)
The last thing we want is competition in practices that harm the public – we don't want companies to see who can commit the most extensive human rights abuses at the lowest costs. That's not something we want to render more efficient. https://t.co/qDPr6OtP90
— Cory Doctorow (@doctorow) June 8, 2021
The need for competition and privacy regulators to work together to purge Big Tech market abuses has become an active debate in Europe — where a few pioneering regulators (like Germany’s FCO) are ahead of the pack.
The UK’s Competition and Markets Authority (CMA) and Information Commissioner’s Office (ICO) also recently put out a joint statement — laying out their conviction that antitrust and data protection regulators must work together to foster a thriving digital economy that’s healthy across all dimensions — i.e. for competitors, yes, but also for consumers.
A recent CMA proposed settlement related to Google’s planned replacement for tracking cookies — aka ‘Privacy Sandbox’, which has also been the target of antitrust complaints by publishers — was notable in baking in privacy commitments and data protection oversight by the ICO in addition to the CMA carrying out its competition enforcement role.
It’s fair to say that the European Commission has lagged behind such pioneers in appreciating the need for synergistic regulatory joint-working, with the EU’s antitrust chief roundly ignoring — for example — calls to block Google’s acquisition of Fitbit over the data advantage it would entrench, in favor of accepting a few ‘concessions’ to wave the deal through.
So it’s interesting to see the EU’s antitrust division here and now — at the very least — virtue signalling an awareness of the problem of regional regulators approaching competition and privacy as if they exist in firewalled silos.
Whether this augurs the kind of enlightened regulatory joint working — to achieve holistically healthy and dynamic digital markets — which will certainly be essential if the EU is to effectively grapple with surveillance capitalism very much remains to be seen. But we can at least say that the inclusion of the below statement in an EU antitrust division press release represents a change of tone (and that, in itself, looks like a step forward…):
“Competition law and data protection laws must work hand in hand to ensure that display advertising markets operate on a level playing field in which all market participants protect user privacy in the same manner.”
Returning to the specifics of the EU’s Google adtech probe, the Commission says it will be particularly examining:
Commenting on the investigation in a statement, Commission EVP and competition chief, Margrethe Vestager, added:
“Online advertising services are at the heart of how Google and publishers monetise their online services. Google collects data to be used for targeted advertising purposes, it sells advertising space and also acts as an online advertising intermediary. So Google is present at almost all levels of the supply chain for online display advertising. We are concerned that Google has made it harder for rival online advertising services to compete in the so-called ad tech stack. A level playing field is of the essence for everyone in the supply chain. Fair competition is important — both for advertisers to reach consumers on publishers’ sites and for publishers to sell their space to advertisers, to generate revenues and funding for content. We will also be looking at Google’s policies on user tracking to make sure they are in line with fair competition.”
Contacted for comment on the Commission investigation, a Google spokesperson sent us this statement:
“Thousands of European businesses use our advertising products to reach new customers and fund their websites every single day. They choose them because they’re competitive and effective. We will continue to engage constructively with the European Commission to answer their questions and demonstrate the benefits of our products to European businesses and consumers.”
Google also claimed that publishers keep around 70% of the revenue when using its products — saying in some instances it can be more.
It also suggested that publishers and advertisers often use multiple technologies simultaneously, further claiming that it builds its own technologies to be interoperable with more than 700 rival platforms for advertisers and 80 rival platforms for publishers.
The need for markets-focused competition watchdogs and consumer-centric privacy regulators to think outside their respective ‘legal silos’ and find creative ways to work together to tackle the challenge of big tech market power was the impetus for a couple of fascinating panel discussions organized by the Centre for Economic Policy Research (CEPR), which were livestreamed yesterday but are available to view on-demand here.
The conversations brought together key regulatory leaders from Europe and the US — giving a glimpse of what the future shape of digital markets oversight might look like at a time when fresh blood has just been injected to chair the FTC so regulatory change is very much in the air (at least around tech antitrust).
CEPR’s discussion premise is that integration, not merely intersection, of competition and privacy/data protection law is needed to get a proper handle on platform giants that have, in many cases, leveraged their market power to force consumers to accept an abusive ‘fee’ of ongoing surveillance.
That fee both strips consumers of their privacy and helps tech giants perpetuate market dominance by locking out interesting new competition (which can’t get the same access to people’s data so operates at a baked in disadvantage).
A running theme in Europe for a number of years now, since a 2018 flagship update to the bloc’s data protection framework (GDPR), has been the ongoing under-enforcement around the EU’s ‘on-paper’ privacy rights — which, in certain markets, means regional competition authorities are now actively grappling with exactly how and where the issue of ‘data abuse’ fits into their antitrust legal frameworks.
The regulators assembled for CEPR’s discussion included, from the UK, the Competition and Markets Authority’s CEO Andrea Coscelli and the information commissioner, Elizabeth Denham; from Germany, the FCO’s Andreas Mundt; from France, Henri Piffaut, VP of the French competition authority; and from the EU, the European Data Protection Supervisor himself, Wojciech Wiewiórowski, who advises the EU’s executive body on data protection legislation (and is the watchdog for EU institutions’ own data use).
The UK’s CMA now sits outside the EU, of course — giving the national authority a higher-profile role in global mergers & acquisitions decisions (vs pre-Brexit), and the chance to help shape key standards in the digital sphere via the investigations and procedures it chooses to pursue (and it has been moving very quickly on that front).
The CMA has a number of major antitrust probes open into tech giants — including looking into complaints against Apple’s App Store and others targeting Google’s plan to deprecate support for third-party tracking cookies (aka the so-called ‘Privacy Sandbox’) — the latter being an investigation where the CMA has actively engaged the UK’s privacy watchdog (the ICO) to work with it.
Only last week the competition watchdog said it was minded to accept a set of legally binding commitments that Google has offered which could see a quasi ‘co-design’ process taking place, between the CMA, the ICO and Google, over the shape of the key technology infrastructure that ultimately replaces tracking cookies. So a pretty major development.
Germany’s FCO has also been very active against big tech this year — making full use of an update to the national competition law which gives it the power to take proactive interventions around large digital platforms with major competitive significance — with open procedures now against Amazon, Facebook and Google.
The Bundeskartellamt was already a pioneer in pushing to loop EU data protection rules into competition enforcement in digital markets in a strategic case against Facebook, as we’ve reported before. That closely watched (and long running) case — which targets Facebook’s ‘superprofiling’ of users, based on its ability to combine user data from multiple sources to flesh out a single high dimension per-user profile — is now headed to Europe’s top court (so likely has more years to run).
But during yesterday’s discussion Mundt confirmed that the FCO’s experience litigating that case helped shape key amendments to the national law that’s given him beefier powers to tackle big tech. (And he suggested it’ll be a lot easier to regulate tech giants going forward, using these new national powers.)
“Once we have designated a company to be of ‘paramount significance’ we can prohibit certain conduct much more easily than we could in the past,” he said. “We can prohibit, for example, that a company impedes other undertakings by data processing that is relevant for competition. We can prohibit that a use of service depends on the agreement to data collection with no choice — this is the Facebook case, indeed… When this law was negotiated in parliament, parliament very much referred to the Facebook case and in a certain sense this entwinement of competition law and data protection law is written in a theory of harm in the German competition law.
“This makes a lot of sense. If we talk about dominance and if we assess that this dominance has come into place because of data collection and data possession and data processing you need a parameter in how far a company is allowed to gather the data to process it.”
“The past is also the future because this Facebook case… has always been a big case. And now it is up to the European Court of Justice to say something on that,” he added. “If everything works well we might get a very clear ruling saying… as far as the ECN [European Competition Network] is concerned how far we can integrate GDPR in assessing competition matters.
“So Facebook has always been a big case — it might get even bigger in a certain sense.”
France’s competition authority and its national privacy regulator (the CNIL), meanwhile, have also been joint working in recent years.
Including over a competition complaint against Apple’s pro-user privacy App Tracking Transparency feature (which last month the antitrust watchdog declined to block). So there’s evidence there, too, of respective oversight bodies seeking to bridge legal silos in order to crack the code of how to effectively regulate tech giants — giants whose market power, panellists agreed, is predicated on earlier failures of competition law enforcement that allowed platforms to buy up rivals and sew up access to user data, entrenching advantage at the expense of user privacy and locking out the possibility of future competitive challenge.
The contention is that monopoly power predicated upon data access also locks consumers into an abusive relationship with platform giants which can then, in the case of ad giants like Google and Facebook, extract huge costs (paid not in monetary fees but in user privacy) for continued access to services that have also become digital staples — amping up the ‘winner takes all’ characteristic seen in digital markets (which is obviously bad for competition too).
Yet, traditionally at least, Europe’s competition authorities and data protection regulators have been focused on separate workstreams.
The consensus from the CEPR panels was very much that that is both changing and must change if civil society is to get a grip on digital markets — and wrest control back from tech giants to ensure that consumers and competitors aren’t both left trampled into the dust by data-mining giants.
Denham said her motivation to dial up collaboration with other digital regulators was the UK government entertaining the idea of creating a one-stop-shop ‘Internet’ super regulator. “What scared the hell out of me was the policymakers, the legislators, floating the idea of one regulator for the Internet. I mean what does that mean?” she said. “So I think what the regulators did is we got to work, we got busy, we became creative, got out of our silos to try to tackle these companies — the likes of which we have never seen before.
“And I really think what we have done in the UK — and I’m excited if others think it will work in their jurisdictions — but I think that what really pushed us is that we needed to show policymakers and the public that we had our act together. I think consumers and citizens don’t really care if the solution they’re looking for comes from the CMA, the ICO, Ofcom… they just want somebody to have their back when it comes to protection of privacy and protection of markets.
“We’re trying to use our regulatory levers in the most creative way possible to make the digital markets work and protect fundamental rights.”
During the earlier panel, the CMA’s Simeon Thornton, a director at the authority, made some interesting remarks vis-a-vis its (ongoing) Google ‘Privacy Sandbox’ investigation — and the joint working it’s doing with the ICO on that case — asserting that “data protection and respecting users’ rights to privacy are very much at the heart of the commitments upon which we are currently consulting”.
“If we accept the commitments Google will be required to develop the proposals according to a number of criteria including impacts on privacy outcomes and compliance with data protection principles, and impacts on user experience and user control over the use of their personal data — alongside the overriding objective of the commitments which is to address our competition concerns,” he went on, adding: “We have worked closely with the ICO in seeking to understand the proposals and if we do accept the commitments then we will continue to work closely with the ICO in influencing the future development of those proposals.”
“If we accept the commitments that’s not the end of the CMA’s work — on the contrary that’s when, in many respects, the real work begins. Under the commitments the CMA will be closely involved in the development, implementation and monitoring of the proposals, including through the design of trials for example. It’s a substantial investment from the CMA and we will be dedicating the right people — including data scientists, for example, to the job,” he added. “The commitments ensure that Google addresses any concerns that the CMA has. And if outstanding concerns cannot be resolved with Google they explicitly provide for the CMA to reopen the case and — if necessary — impose any interim measures necessary to avoid harm to competition.
“So there’s no doubt this is a big undertaking. And it’s going to be challenging for the CMA, I’m sure of that. But personally I think this is the sort of approach that is required if we are really to tackle the sort of concerns we’re seeing in digital markets today.”
Thornton also said: “I think as regulators we do need to step up. We need to get involved before the harm materializes — rather than waiting after the event to stop it from materializing, rather than waiting until that harm is irrevocable… I think it’s a big move and it’s a challenging one but personally I think it’s a sign of the future direction of travel in a number of these sorts of cases.”
Also speaking during the regulatory panel session was FTC commissioner Rebecca Slaughter — a dissenter on the $5BN fine it hit Facebook with back in 2019 for violating an earlier consent order (as she argued the settlement provided no deterrent to address underlying privacy abuse, leaving Facebook free to continue exploiting users’ data) — as well as Chris D’Angelo, the chief deputy AG of the New York Attorney General, which is leading a major states antitrust case against Facebook.
Slaughter pointed out that the FTC already combines a consumer focus with attention on competition but said that historically there has been separation of divisions and investigations — and she agreed on the need for more joined-up working.
She also advocated for US regulators to get out of a pattern of ineffective enforcement in digital markets on issues like privacy and competition where companies have, historically, been given — at best — what amounts to wrist slaps that don’t address root causes of market abuse, perpetuating both consumer abuse and market failure. And be prepared to litigate more.
As regulators toughen up their stipulations they will need to be prepared for tech giants to push back — and therefore be prepared to sue instead of accepting a weak settlement.
“That is what is most galling to me that even where we take action, in our best faith good public servants working hard to take action, we keep coming back to the same questions, again and again,” she said. “Which means that the actions we are taking aren’t working. We need different action to keep us from having the same conversation again and again.”
Slaughter also argued that it’s important for regulators not to pile all the burden of avoiding data abuses on consumers themselves.
“I want to sound a note of caution around approaches that are centered around user control,” she said. “I think transparency and control are important. I think it is really problematic to put the burden on consumers to work through the markets and the use of data, figure out who has their data, how it’s being used, make decisions… I think you end up with notice fatigue; I think you end up with decision fatigue; you get very abusive manipulation of dark patterns to push people into decisions.
“So I really worry about a framework that is built at all around the idea of control as the central tenet or the way we solve the problem. I’ll keep coming back to the notion of what instead we need to be focusing on is where is the burden on the firms to limit their collection in the first instance, prohibit their sharing, prohibit abusive use of data and I think that that’s where we need to be focused from a policy perspective.
“I think there will be ongoing debates about privacy legislation in the US and while I’m actually a very strong advocate for a better federal framework with more tools that facilitate aggressive enforcement but I think if we had done it ten years ago we probably would have ended up with a notice and consent privacy law and I think that that would have not been a great outcome for consumers at the end of the day. So I think the debate and discussion has evolved in an important way. I also think we don’t have to wait for Congress to act.”
As regards more radical solutions to the problem of market-denting tech giants — such as breaking up sprawling and (self-servingly) interlocking services empires — the message from Europe’s most ‘digitally switched on’ regulators seemed to be don’t look to us for that; we are going to have to stay in our lanes.
So tl;dr — if antitrust and privacy regulators’ joint working just sums to more intelligent fiddling around the edges of digital market failure, and it’s break-ups of US tech giants that are really needed to reboot digital markets, then it’s going to be up to US agencies to wield the hammers. (Or, as Coscelli elegantly phrased it: “It’s probably more realistic for the US agencies to be in the lead in terms of structural separation if and when it’s appropriate — rather than an agency like ours [working from inside a mid-sized economy such as the UK’s].”)
The lack of any representative from the European Commission on the panel was an interesting omission in that regard — perhaps hinting at ongoing ‘structural separation’ between DG Comp and DG Justice where digital policymaking streams are concerned.
The current competition chief, Margrethe Vestager — who also heads up digital strategy for the bloc, as an EVP — has repeatedly expressed reluctance to impose radical ‘break up’ remedies on tech giants. She also recently preferred to wave through another Google digital merger (its acquisition of fitness wearable Fitbit) — agreeing to accept a number of ‘concessions’ and ignoring major mobilization by civil society (and indeed EU data protection agencies) urging her to block it.
Yet in an earlier CEPR discussion session, another panellist — Yale University’s Dina Srinivasan — pointed to the challenges of trying to regulate the behavior of companies when there are clear conflicts of interest, unless and until you impose structural separation as she said has been necessary in other markets (like financial services).
“In advertising we have an electronically traded market with exchanges and we have brokers on both sides. In a competitive market — when competition was working — you saw that those brokers were acting in the best interest of buyers and sellers. And as part of carrying out that function they were sort of protecting the data that belonged to buyers and sellers in that market, and not playing with the data in other ways — not trading on it, not doing conduct similar to insider trading or even front running,” she said, giving an example of how that changed as Google gained market power.
“So Google acquired DoubleClick, made promises to continue operating in that manner, the promises were not binding and on the record — the enforcement agencies or the agencies that cleared the merger didn’t make Google promise that they would abide by that moving forward and so as Google gained market power in that market there’s no regulatory requirement to continue to act in the best interests of your clients, so now it becomes a market power issue, and after they gain enough market power they can flip data ownership and say ‘okay, you know what before you owned this data and we weren’t allowed to do anything with it but now we’re going to use that data to for example sell our own advertising on exchanges’.
“But what we know from other markets — and from financial markets — is when you flip data ownership and you engage in conduct like that that allows the firm to now build market power in yet another market.”
The CMA’s Coscelli picked up on Srinivasan’s point — saying it was a “powerful” one, and that the challenges of policing “very complicated” situations involving conflicts of interests is something that regulators with merger control powers should be bearing in mind as they consider whether or not to green light tech acquisitions.
(Just one example of a merger in the digital space that the CMA is still scrutinizing is Facebook’s acquisition of animated GIF platform Giphy. And it’s interesting to speculate whether, had brexit happened a little faster, the CMA might have stepped in to block Google’s Fitbit merger where the EU wouldn’t.)
Coscelli also flagged the issue of regulatory under-enforcement in digital markets as a key one, saying: “One of the reasons we are today where we are is partially historic under-enforcement by competition authorities on merger control — and that’s a theme that is extremely interesting and relevant to us because after the exit from the EU we now have a bigger role in merger control on global mergers. So it’s very important to us that we take the right decisions going forward.”
“Quite often we intervene in areas where there is under-enforcement by regulators in specific areas… If you think about it when you design systems where you have vertical regulators in specific sectors and horizontal regulators like us or the ICO we are more successful if the vertical regulators do their job and I’m sure they are more successful if we do our job properly.”
“I think we systematically underestimate… the ability of companies to work through whatever behavior or commitments or arrangement are offered to us, so I think these are very important points,” he added, signalling that a higher degree of attention is likely to be applied to tech mergers in Europe as a result of the CMA stepping out from the EU’s competition regulation umbrella.
Also speaking during the same panel, the EDPS warned that across Europe more broadly — i.e. beyond the small but engaged gathering of regulators brought together by CEPR — data protection and competition regulators are far from where they need to be on joint working, implying that the challenge of effectively regulating big tech across the EU is still a pretty Sisyphean one.
It’s true that the Commission is not sitting on its hands in the face of tech giant market power.
At the end of last year it proposed a regime of ex ante regulations for so-called ‘gatekeeper’ platforms, under the Digital Markets Act. But the problem of how to effectively enforce pan-EU laws — when the various agencies involved in oversight are typically decentralized across Member States — is one key complication for the bloc. (The Commission’s answer with the DMA was to suggest putting itself in charge of overseeing gatekeepers but it remains to be seen what enforcement structure EU institutions will agree on.)
Clearly, the need for careful and coordinated joint working across multiple agencies with different legal competencies — if, indeed, that’s really what’s needed to properly address captured digital markets vs structural separation of Google’s search and adtech, for example, and Facebook’s various social products — steps up the EU’s regulatory challenge in digital markets.
“We can say that no effective competition nor protection of the rights in the digital economy can be ensured when the different regulators do not talk to each other and understand each other,” Wiewiórowski warned. “While we are still thinking about the cooperation it looks a little bit like everybody is afraid they will have to trade a little bit of its own possibility to assess.”
“If you think about the classical regulators isn’t it true that at some point we are reaching this border where we know how to work, we know how to behave, we need a little bit of help and a little bit of understanding of the other regulator’s work… What is interesting for me is there is — at the same time — the discussion about splitting of the task of the American regulators joining the ones on the European side. But even the statements of some of the commissioners in the European Union saying about the bigger role the Commission will play in the data protection and solving the enforcement problems of the GDPR show there is no clear understanding what are the differences between these fields.”
One thing is clear: Big tech’s dominance of digital markets won’t be unpicked overnight. But, on both sides of the Atlantic, there are now a bunch of theories on how to do it — and growing appetite to wade in.
The UK’s chief data protection regulator has warned over reckless and inappropriate use of live facial recognition (LFR) in public places.
Publishing an opinion today on the use of this biometric surveillance in public — to set out what is dubbed the “rules of engagement” — the information commissioner, Elizabeth Denham, also noted that a number of investigations already undertaken by her office into planned applications of the tech have found problems in all cases.
“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.
“Uses we’ve seen included addressing public safety concerns and creating biometric profiles to target people with personalised advertising.
“It is telling that none of the organisations involved in our completed investigations were able to fully justify the processing and, of those systems that went live, none were fully compliant with the requirements of data protection law. All of the organisations chose to stop, or not proceed with, the use of LFR.”
“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalised adverts or match your image against known shoplifters as you do your weekly grocery shop,” Denham added.
“In future, there’s the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘Big Data’ systems — LFR is supercharged CCTV.”
The use of biometric technologies to identify individuals remotely sparks major human rights concerns, including around privacy and the risk of discrimination.
Across Europe there are campaigns — such as Reclaim your Face — calling for a ban on biometric mass surveillance.
In another targeted action, back in May, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI, seeking to stop it from operating in Europe altogether. (Some regional police forces have been tapping in — including in Sweden, where the force was fined by the national DPA earlier this year for unlawful use of the tech.)
But while there’s major public opposition to biometric surveillance in Europe, the region’s lawmakers have so far — at best — been fiddling around the edges of the controversial issue.
A pan-EU regulation the European Commission presented in April, which proposes a risk-based framework for applications of artificial intelligence, included only a partial prohibition on law enforcement’s use of biometric surveillance in public places — with wide-ranging exemptions that have drawn plenty of criticism.
There have also been calls for a total ban on the use of technologies like live facial recognition in public from MEPs across the political spectrum. The EU’s chief data protection supervisor has also urged lawmakers to at least temporarily ban the use of biometric surveillance in public.
The EU’s planned AI Regulation won’t apply in the UK, in any case, as the country is now outside the bloc. And it remains to be seen whether the UK government will seek to weaken the national data protection regime.
A recent report it commissioned to examine how the UK could revise its regulatory regime, post-Brexit, has — for example — suggested replacing the UK GDPR with a new “UK framework” — proposing changes to “free up data for innovation and in the public interest”, as it puts it, and advocating for revisions for AI and “growth sectors”. So whether the UK’s data protection regime will be put to the torch in a post-Brexit bonfire of ‘red tape’ is a key concern for rights watchers.
(The Taskforce on Innovation, Growth and Regulatory Reform report advocates, for example, for the complete removal of Article 22 of the GDPR — which gives people rights not to be subject to decisions based solely on automated processing — suggesting it be replaced with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that envisaged as coming from the Information Commissioner’s Office (ICO). But it should also be noted that the government is in the process of hiring Denham’s successor; and the digital minister has said he wants her replacement to take “a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”. So, er, bye-bye fairness, accountability and transparency then?)
For now, those seeking to implement LFR in the UK must comply with provisions in the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (aka, its implementation of the EU GDPR, which was transposed into national law before Brexit), per the ICO opinion — including the data protection principles set out in UK GDPR Article 5: lawfulness, fairness, transparency, purpose limitation, data minimisation, storage limitation, security and accountability.
Controllers must also enable individuals to exercise their rights, the opinion said.
“Organisations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.
“Organisations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”
The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the direction of UK travel on data protection and privacy.
If, for example, the government intends to recruit a new, ‘more pliant’ information commissioner — who will happily rip up the rulebook on data protection and AI, including in areas like biometric surveillance — it will at least be rather awkward for them to do so with an opinion from the prior commissioner on the public record that details the dangers of reckless and inappropriate use of LFR.
Certainly, the next information commissioner won’t be able to say they weren’t given clear warning that biometric data is particularly sensitive and can be used to estimate or infer other characteristics, such as a person’s age, sex, gender or ethnicity.
Or that ‘Great British’ courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can cause this super sensitive data to be harvested without the person in question even being aware it’s happening.
Denham’s opinion also hammers hard on the point about the need for public trust and confidence for any technology to succeed, warning that: “The public must have confidence that its use is lawful, fair, transparent and meets the other standards set out in data protection legislation.”
The ICO has previously published an opinion on the use of LFR by police forces — which she said also sets “a high threshold for its use”. (And a few UK police forces — including the Met in London — have been among the early adopters of facial recognition technology, which has in turn led some into legal hot water on issues like bias.)
Disappointingly, though, for human rights advocates, the ICO opinion shies away from recommending a total ban on the use of biometric surveillance in public by private companies or public organisations — with the commissioner arguing that while there are risks with use of the technology, there could also be instances where it has high utility (such as in the search for a missing child).
“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, saying instead that in her view “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.
Denham added that (current) UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialise or gather”.
“With any new technology, building public trust and confidence in the way people’s information is used is crucial so the benefits derived from the technology can be fully realised,” she reiterated, noting how a lack of trust in the US has led to some cities banning the use of LFR in certain contexts and led to some companies pausing services until rules are clearer.
“Without trust, the benefits the technology may offer are lost,” she also warned.
There is one red line that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of specious ‘innovation’. Because if it tries to, er, ‘liberate’ national data protection rules from core EU principles (of lawfulness, fairness, proportionality, transparency, accountability and so on) — it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up an EU-UK data adequacy arrangement (on which the ink is still drying).
The UK’s data adequacy agreement from the EU is dependent on the UK having essentially equivalent protections for people’s data. Without this coveted data adequacy status, UK companies will immediately face far greater legal hurdles to processing the data of EU citizens (as the US now does, in the wake of the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be suspended altogether…
Obviously such a scenario would be terrible for UK business and ‘innovation’ — even before you consider the wider issue of public trust in technologies and whether the Great British public itself wants to have its privacy rights torched.
Given all this, you really have to wonder whether anyone inside the UK government has thought this ‘regulatory reform’ stuff through. For now, the ICO is at least still capable of thinking for them.
Democratic Senator Kirsten Gillibrand has revived a bill that would establish a new U.S. federal agency to shield Americans from the invasive practices of tech companies operating in their own backyard.
Last year, Gillibrand (D-NY) introduced the Data Protection Act, a legislative proposal that would create an independent agency designed to address modern concerns around privacy and tech that existing government regulators have proven ill-equipped to handle.
“The U.S. needs a new approach to privacy and data protection and it’s Congress’ duty to step forward and seek answers that will give Americans meaningful protection from private companies that value profits over people,” Sen. Gillibrand said.
The revamped bill, which retains its core promise of a new “Data Protection Agency,” is co-sponsored by Ohio Democrat Sherrod Brown and returns to the new Democratic Senate with a few modifications.
In the spirit of all of the tech antitrust regulation chatter going on right now, the 2021 version of the bill would also empower the Data Protection Agency to review any major tech merger involving a data aggregator or other deals that would see the user data of 50,000 people change hands.
Other additions to the bill would establish an office of civil rights to “advance data justice” and allow the agency to evaluate and penalize high-risk data practices, like the use of algorithms, biometric data and harvesting data from children and other vulnerable groups.
Gillibrand calls the notion of updating regulation to address modern tech concerns “critical” — and she’s not alone. Democrats and Republicans seldom find common ground in 2021, but a raft of new bipartisan antitrust bills shows that Congress has at last grasped how important it is to rein in tech’s most powerful companies, before the opportunity to do so is lost altogether.
The Data Protection Act lacks the bipartisan sponsorship enjoyed by the set of new House tech bills, but with interest in taking on big tech at an all-time high, it could attract more support. Of all of the bills targeting the tech industry in the works right now, this one isn’t likely to go anywhere without more bipartisan interest, but that doesn’t mean its ideas aren’t worth considering.
Like some other proposals wending their way through Congress, this bill recognizes that the FTC has failed to meaningfully punish big tech companies for their bad behavior. In Gillibrand’s vision, the Data Protection Agency could rise to modern regulatory challenges where the FTC has failed. In other proposals, the FTC would be bolstered with new enforcement powers or infused with cash that could help the agency’s bite match its bark.
It’s possible that modernizing the tools that federal agencies have at hand won’t be sufficient. Cutting back more than a decade of overgrowth from tech’s data giants won’t be easy, particularly because the stockpile of Americans’ data that made those companies so wealthy is already out in the wild.
A new agency dedicated to wresting control of that data from powerful tech companies could bridge the gap between Europe’s own robust data protections and the absence of federal regulation we’ve seen in the U.S. But until something does bridge it, Silicon Valley’s data hoarders will eagerly fill the power vacuum themselves.