
Google ditches pay-to-play Android search choice auction for free version after EU pressure

By Natasha Lomas

Google is ditching the massively unpopular auction format that underpins a search choice screen it offers in the European Union, the company said today. Eligible search providers will be able to participate for free.

The auction model was Google’s ‘remedy’ of choice — following the 2018 EU $5BN antitrust enforcement against Android — but rivals have always maintained it’s anything but fair, as we’ve reported previously (here, here and here, for example).

The Android choice screen presents users in the region with a selection of search engines to choose as a default at the point of device set-up (or factory reset). The choices offered depend on sealed bids from search engine companies paying Google to win one of three available slots.

Google’s own search engine is a staple ‘choice’ on the screen in every EU market.

The pay-to-play model Google devised is not only loudly hated by smaller search engine players (including those with alternative business models, such as the Ecosia tree-planting search engine), but it has been entirely ineffectual at restoring competitive balance to search market share, so it’s not surprising Google has been forced to ditch it.

The Commission had signalled a change might be coming, with Bloomberg reporting in May remarks by the EU’s competition chief, Margrethe Vestager, that it was “actively working on making” Google’s Android choice screen for search and browser rivals work. So it evidently heard the repeated cries of ‘foul’ and ‘it’s not working, yo!’. And — finally — it acted.

However, framing its own narrative, Google writes that it’s been in “constructive discussions” with EU lawmakers for years about “how to promote even more choice on Android devices, while ensuring that we can continue to invest in, and provide, the Android platform for free for the long term”, as it puts it.

It also seems to be trying to throw some shade/blame back at the EU — writing that it only introduced what it calls a “promotional opportunity” (lol) “in consultation with the Commission”. (Ergo, ‘don’t blame us gov, blame them!’)

In another detail-light paragraph of its blog, Google says it’s now making “some final changes” — including making participation free for “eligible search providers” — after what it describes as “further feedback from the Commission”.

“We will also be increasing the number of search providers shown on the screen. These changes will come into effect from September this year on Android devices,” it adds.

The planned changes raise new questions — such as what criteria it will be using to determine eligibility; and will Google’s criteria be transparent or, like the problematic auction, sealed from view? And how many search engines will be presented to users? More than the current four, that’s clear.

Where Google’s own search engine will appear in the list will also be very interesting to see, as well as the criteria for ranking all the options (marketshare? random allocation?).

Google’s blog is mealy-mouthed on any/all such detail — but the Commission gave us a pretty good glimpse when we asked (see its comment below).

It remains to be seen whether any other devilish dark pattern design details will appear when we see the full implementation.

But it’s worth noting that it’s not in Google’s gift to claim these changes are “final”. EU regulators are responsible for monitoring antitrust compliance — so if fresh complaints flow they will be duty bound to listen and react.

In one response to Google’s auction U-turn, pro-privacy search player DuckDuckGo was already critical — but more on the scope than the detail.

Founder Gabriel Weinberg pointed out that not only is the switch three years too late but Google should also be applying it across all platforms (desktop and Chrome too), as well as making it seamlessly easy for Android users to switch default, rather than gating the choice screen to set-up and/or factory reset (as we’ve reported before).

Google is now doing what it should have done 3yr ago: a free search preference menu on Android in the EU: https://t.co/M9XmB1VuGr

However, it should be on all platforms (e.g., also desktop Chrome), accessible at all times (i.e., not just on factory reset), and in all countries. https://t.co/HcIrE8KJx3

— Gabriel Weinberg (@yegg) June 8, 2021

Another long-time critic of the auction model, tiny not-for-profit Ecosia, was jubilant that its fight against the search behemoth has finally paid off.

Commenting in a statement, CEO Christian Kroll said: “This is a real life David versus Goliath story — and David has won. This is a momentous day, and a real moment of celebration for Ecosia. We’ve campaigned for fairness in the search engine market for several years, and with this, we have something that resembles a level playing field in the market. Search providers now have a chance to compete more fairly in the Android market, based on the appeal of their product, rather than being shut out by monopolistic behaviour.”

The Commission, meanwhile, confirmed to TechCrunch that it acted after a number of competitors raised concerns over the auction model — with a spokeswoman saying it had “discussed with Google means to improve that choice screen to address those concerns”.

“We welcome the changes introduced by Google to the choice screen. Being included on the choice screen will now be free for rival search providers,” she went on. “In addition, more search providers will be included in the choice screen. Therefore, users will have even more opportunities to choose an alternative.”

The Commission also offered a little more detail of how the choice screen will look come fall, saying that “on almost all devices, five search providers will be immediately visible”.

“They will be selected based on their market share in the user’s country and displayed in a randomised order which ensures that Google will not always be the first. Users will be able to scroll down to see up to seven more search providers, bringing the total search providers displayed in the choice screen to 12.”

“These are positive developments for the implementation of the remedy following our Android decision,” the spokeswoman added.
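For illustration only, here is a minimal sketch (in Python, with hypothetical provider names and made-up market-share figures) of the selection logic the Commission describes above: pick the 12 biggest providers by market share in the user's country, randomise their order, and show the first five immediately with the rest available on scroll. It is not Google's actual implementation.

import random

def build_choice_screen(providers, country, total_slots=12, visible_slots=5):
    # Rank providers by market share in the user's country (hypothetical data structure).
    ranked = sorted(providers, key=lambda p: p["share"].get(country, 0.0), reverse=True)
    selected = ranked[:total_slots]
    # Randomised display order, so no provider (including Google) is always shown first.
    random.shuffle(selected)
    # The first five are immediately visible; the remainder appear when the user scrolls.
    return selected[:visible_slots], selected[visible_slots:]

# Example usage with invented figures:
providers = [
    {"name": "Google", "share": {"FR": 0.92}},
    {"name": "Bing", "share": {"FR": 0.03}},
    {"name": "DuckDuckGo", "share": {"FR": 0.01}},
    {"name": "Ecosia", "share": {"FR": 0.01}},
    {"name": "Qwant", "share": {"FR": 0.01}},
    {"name": "Yahoo", "share": {"FR": 0.01}},
]
visible, on_scroll = build_choice_screen(providers, "FR")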

So it will certainly be very interesting indeed to see whether this Commission-reconfigured, much bigger and more open choice screen helps move the regional needle on Google’s search engine market share.

Interesting times indeed!

Croatia’s Gideon Brothers raises $31M for its 3D vision-enabled autonomous warehouse robots

By Mike Butcher

Proving that Central and Eastern Europe remains a powerhouse of hardware engineering matched with software, Gideon Brothers (GB), a Zagreb, Croatia-based robotics and AI startup, has raised a $31 million Series A round led by Koch Disruptive Technologies (KDT), the venture and growth arm of Koch Industries Inc., with participation from DB Schenker, Prologis Ventures and Rite-Hite.

The round also includes participation from several of Gideon Brothers’ existing backers: Taavet Hinrikus (co-founder of TransferWise), Pentland Ventures, Peaksjah, HCVC (Hardware Club), Ivan Topčić, Nenad Bakić and Luca Ascani.

The investment will be used to accelerate the development and commercialization of GB’s AI and 3D vision-based “autonomous mobile robots” or “AMRs”. These perform simple tasks such as transporting, picking up and dropping off products in order to free up humans to perform more valuable tasks.

The company will also expand its operations in the EU and U.S. by opening offices in Munich, Germany and Boston, Massachusetts, respectively.

Gideon Brothers founders. Image Credits: Gideon Brothers

Gideon Brothers makes robots and an accompanying software platform specializing in horizontal and vertical handling processes for logistics, warehousing, manufacturing and retail businesses. For obvious reasons, the need to roboticize supply chains has exploded during the pandemic.

Matija Kopić, CEO of Gideon Brothers, said: “The pandemic has greatly accelerated the adoption of smart automation, and we are ready to meet the unprecedented market demand. The best way to do it is by marrying our proprietary solutions with the largest, most demanding customers out there. Our strategic partners have real challenges that our robots are already solving, and, with us, they’re seizing the incredible opportunity right now to effect robotic-powered change to some of the world’s most innovative organizations.”

He added: “Partnering with these forward-thinking industry leaders will help us expand our global footprint, but we will always stay true to our Croatian roots. That is our superpower. The Croatian startup scene is growing exponentially and we want to unlock further opportunities for our country to become a robotics & AI powerhouse.”

Annant Patel, director at Koch Disruptive Technologies, said: “With more than 300 Koch operations and production units globally, KDT recognizes the unique capabilities of and potential for Gideon Brothers’ technology to substantially transform how businesses can approach warehouse and manufacturing processes through cutting edge AI and 3D AMR technology.”

Xavier Garijo, member of the Board of Management for Contract Logistics, DB Schenker, added: “Our partnership with Gideon Brothers secures our access to best in class robotics and intelligent material handling solutions to serve our customers in the most efficient way.”

GB’s competitors include Seegrid, Teradyne (MiR), Vecna Robotics, Fetch Robotics, AutoGuide Mobile Robots, Geek+ and Otto Motors.

France fines Google $268M for adtech abuses and gets interoperability commitments

By Natasha Lomas

France’s competition watchdog, L’Autorité de la concurrence, has fined Google €220 million (~$268M) in a case related to self-preferencing within the adtech market, conduct the watchdog found constituted an abuse by Google of a dominant position in ad servers for website publishers and mobile apps.

L’Autorité began looking into Google’s adtech business following complaints from a number of French publishers.

Today it said Google had requested a settlement — and is “not disputing the facts of the case” — with the tech giant proposing certain ‘interoperability’ commitments that the regulator has accepted, and which will form a binding part of the decision.

The watchdog called the action a world first in probing Google’s complex algorithmic ad auctions.

Commenting in a statement, L’Autorité’s president, Isabelle de Silva, said: “The decision sanctioning Google has a very special meaning because it is the first decision in the world to look into complex algorithmic processes. Auctions through which online display advertising works. The investigation, carried out particularly quickly, revealed the processes by which Google, relying on its considerable dominant position on ad servers for sites and applications, was favored over its competitors on both ad servers and SSP platforms. These very serious practices penalized competition in the emerging online advertising market, and have enabled Google not only to preserve but also to increase its dominant position. This sanction and these commitments will make it possible to restore a level playing field for all players, and the ability of publishers to make the most of their advertising space. ”

At specific issue is preferential treatment Google granted to its own proprietary technologies — offered under the Google Ad Manager brand — on both the demand and supply sides: via the operation of its DFP ad server (which allows publishers of sites and applications to sell their advertising space), and its sales platform SSP AdX (which organizes the auction process allowing publishers to sell their ‘impressions’ or advertising inventories to advertisers), per the watchdog.

L’Autorité found that Google’s preferential treatment of its adtech harmed competitors and publishers.

Reached for comment, a Google spokeswoman referred us to this blog post discussing the settlement where Maria Gomri, a legal director for Google France, writes that it has “agreed on a set of commitments to make it easier for publishers to make use of data and use our tools with other ad technologies” — before detailing the steps it has pledged to take.

The publishing groups that made the original complaint against Google in France were News Corp Inc., the Le Figaro group and the Rossel La Voix group, although Le Figaro withdrew its referral last November — at the same time as it signed a content-licensing deal with Google, related to Google’s News Showcase product (a vehicle Google has spun up as legislators in different markets around the world have taken steps to force it to pay for some content reuse).

France’s competition watchdog had earlier ordered Google to negotiate with publishers over remuneration for reuse of their content, following the transposing into national law of updated, pan-EU copyright rules — which extend neighbouring rights to publishers’ news snippets. So the adtech giant’s operations remain under scrutiny on that front too.

Google agrees to interoperability changes

Google has agreed to improve the interoperability of Google Ad Manager services with third-party ad server and advertising space sales platform solutions, per L’Autorité, as well as agreeing to end provisions that favor itself.

“The practices in question are particularly serious because they penalized Google’s competitors in the SSP market and the publishers of sites and mobile applications,” it writes in a press release (translated from French with Google Translate). “Among these, the press groups — including those who were [the source] of the referral to the Authority — were affected even though their economic model is also strongly weakened by the decline in sales of paper subscriptions and the decline in associated advertising revenue.”

L’Autorité confirmed it has accepted Google’s commitments — and makes them binding in its decision. The commitments will be mandatory for a three-year period, per the agreement.

The commitments Google has offered appear to speak to some operational details that have emerged via a Texas antitrust lawsuit also targeting Google’s adtech.

Earlier this year, documents surfaced via that suit which appeared to show the tech giant operated a secret program that used data from past bids in its digital ad exchange to allegedly give its own ad-buying system an advantage over competitors, per the WSJ — which reported that the so-called ‘Project Bernanke’ program was not disclosed to publishers who sold ads through Google’s exchange.

In the area of data access, Google has committed to L’Autorité to devise a solution that ensures all buyers which use Google Ad Manager to participate in its ad exchange receive equal access to data from its auctions — “to help them efficiently buy ad space from publishers” — including when publishers use an off-platform technique called ‘Header Bidding’ (which enables publishers to run an auction among multiple ad exchanges but which, as a result of how Google operates, has meant such buyers may be at a data disadvantage versus those participating through Google’s own platform).

Google claims it is “usually not technically possible” for it to identify participants in Header Bidding auctions, and thus that it cannot share data with those buyers. But it’s now committed to address that by working “to create a solution that ensures that all buyers that a publisher works with, including those who participate in Header Bidding, can receive equal access to data related to outcomes from the Ad Manager auction”.

It notes that “in particular” it will be “providing information around the ‘minimum bid to win’ from previous auctions”, going forward — which would address one disadvantageous blind-spot for publishers taking an off-platform route to try to earn more ad revenue.
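To illustrate why that matters, here is a minimal, hypothetical sketch (Python; invented field names and figures, not an actual Ad Manager API) of how a buyer participating via Header Bidding could use 'minimum bid to win' feedback from previous auctions to calibrate its bids once it receives the same data as on-platform buyers.

from statistics import median

def suggest_bid(past_auctions, impression_value, margin=0.05):
    # past_auctions: 'minimum bid to win' figures (CPM) shared from previous auctions.
    if not past_auctions:
        return impression_value
    typical_clearing = median(a["min_bid_to_win"] for a in past_auctions)
    # Bid just above the typical clearing price, but never more than the impression is worth to us.
    return round(min(impression_value, typical_clearing * (1 + margin)), 2)

# Example with made-up numbers:
history = [{"min_bid_to_win": 1.20}, {"min_bid_to_win": 1.35}, {"min_bid_to_win": 1.10}]
print(suggest_bid(history, impression_value=1.50))  # -> 1.26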

Another commitment from Google to the French watchdog is a pledge to increase flexibility for publishers using its Ad Manager product — including by letting them set custom pricing rules for ads that are in sensitive categories and implementing product changes aimed at improving interoperability between Ad Manager and third-party ad servers.

Google also writes that it is “reaffirming” that it won’t limit Ad Manager publishers from negotiating specific terms or pricing directly with other sell-side platforms (SSPs); and says it is committing to continue to provide publishers with controls to include or exclude certain buyers at their discretion when they use its product.

The third batch of commitments focus on transparency — and the opacity of adtech has long been a core criticism of the market, including for the competitive dimension as unclear workings by dominant platforms can be used to shield abusive practices from view. (Indeed, L’Autorité already fined Google $166M back in December 2019 for having what it billed then as “opaque and difficult to understand” rules for its ad platform, Google Ads, and for applying them in “an unfair and random manner.”)

On transparency, Google has pledged not to use data from other SSPs to optimize bids in its own exchange in a way that other SSPs can’t reproduce. It also says it’s reupping a promise not to share any bid from any Ad Manager auction participants with any other auction participant prior to completion of the auction.

“Additionally, we’ll give publishers at least three months’ notice for major changes requiring significant implementation effort that publishers must adopt, unless those are related to security or privacy protections, or are required by law,” it further notes.

The commitments made to L’Autorité will apply to how Google operates its adtech in the French market — but are also set to be applied more widely.

“We will be testing and developing these changes over the coming months before rolling them out more broadly, including some globally,” Gomri added in the blog post.

L’Autorité‘s action comes after years of attention paid to the online advertising market.

Back in 2018 it published a report that delved into a number of competitive advantages leveraged by Facebook and Google, noting how the duopoly’s ad targeting offerings benefit from their leadership positions in their respective markets and the resultant network effects; from their vertical integration model (playing in both publishing and technical intermediation); and from the ‘logged-in’ environments both have developed, requiring users to log in to access ‘free’ services — giving them access to a high volume of sociodemographic and behavioral data to power their ad targeting products, among other competitive advantages.

The UK’s Competition and Markets Authority has also conducted an online ad market study in recent years — findings from which are underpinning ‘pro-competition’ regulatory reform now being targeted at tech giants with ‘strategic market status’, which will in future be subject to an ex ante regime of custom requirements aimed at preemptively preventing market abuse.

The European Commission has, meanwhile, issued multiple antitrust enforcements against Google’s business in recent years — including a $1.7BN fine related to its search ad brokering business, AdSense, in 2019, and a $2.7BN penalty for its price comparison service, Google Shopping, back in 2017, to name two.

More recently, EU regulators have reportedly been further probing Google’s adtech practices. So more interventions could be forthcoming.

However the Commission’s preferred approach of not imposing specific remedies itself — nor obtaining specific commitments, beyond a general requirement not to continue the sanctioned abuse (or any equivalent behavior) — seems to have failed to move the needle, certainly where Google’s market dominance is concerned.

Still, EU lawmakers’ experience with Google antitrust cases has certainly informed a recent pan-EU plan for a set of ex ante rules to apply to digital ‘gatekeepers’ — incoming under the Digital Markets Act, which was presented by Brussels last December.

Google’s Gradient Ventures leads $8.2M Series A for Vault Platform’s misconduct reporting SaaS

By Natasha Lomas

Fixing workplace misconduct reporting is a mission that’s snagged London-based Vault Platform backing from Google’s AI focused fund, Gradient Ventures, which is the lead investor in an $8.2 million Series A that’s being announced today.

Other investors joining the round are Illuminate Financial, along with existing investors including Kindred Capital and Angular Ventures. Its $4.2M seed round was closed back in 2019.

Vault sells a suite of SaaS tools to enterprise-sized or large scale-up companies to support them in proactively managing internal ethics and integrity issues. As well as tools for staff to report issues, data and analytics are baked into the platform — so it can support customers’ wider audit and compliance requirements.

In an interview with TechCrunch, co-founder and CEO Neta Meidav said that, as well as being wholly on board with the overarching mission of upgrading legacy reporting tools, such as hotlines provided to staff, for surfacing conduct-related workplace risks (be that bullying and harassment; racism and sexism; or bribery, corruption and fraud), Gradient Ventures was, as you might expect, interested in the potential for applying AI to further enhance Vault’s SaaS-based reporting tool.

A feature of its current platform, called ‘GoTogether’, consists of an escrow system that allows users to submit misconduct reports which are only released to the relevant internal bodies if they are not the first or only person to have made a report about the same person — the idea being that this can help encourage staff (or outsiders, where open reporting is enabled) to report concerns they may otherwise hesitate to raise, for various reasons.
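A minimal sketch of how such an escrow-style flow could work, purely for illustration (Python; hypothetical class and field names, and an assumed threshold of two matching reports — not Vault's actual implementation):

class GoTogetherStyleEscrow:
    def __init__(self, threshold=2):
        self.threshold = threshold      # number of matching reports required before release
        self.pending = {}               # alleged person -> reports held in escrow

    def submit(self, reporter, alleged_person, details):
        reports = self.pending.setdefault(alleged_person, [])
        reports.append({"reporter": reporter, "details": details})
        if len(reports) >= self.threshold:
            # Release all matched reports together to the relevant internal body.
            released = list(reports)
            self.pending[alleged_person] = []
            return released
        return []                       # held in escrow until another report matches

escrow = GoTogetherStyleEscrow()
escrow.submit("employee_a", "manager_x", "inappropriate comments in a 1:1")
escalated = escrow.submit("employee_b", "manager_x", "similar behaviour in a team meeting")
# 'escalated' now contains both reports; a first, unmatched report stays pending.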

Vault now wants to expand the feature’s capabilities so it can be used to proactively surface problematic conduct that may not just relate to a particular individual but may even affect a whole team or division — by using natural language processing to help spot patterns and potential linkages in the kind of activity being reported.

“Our algorithms today match on an alleged perpetrator’s identity. However many events that people might report on are not related to a specific person — they can be more descriptive,” explains Meidav. “For example if you are experiencing some irregularities in accounting in your department, for example, and you’re suspecting that there is some sort of corruption or fraudulent activity happening.”

“If you think about the greatest [workplace misconduct] disasters and crises that happened in recent years — the Dieselgate story at Volkswagen, what happened in Boeing — the common denominator in all these cases is that there’s been some sort of a serious ethical breach or failure which was observed by several people within the organization in remote parts of the organization. And the dots weren’t connected,” she goes on. “So the capacity we’re currently building and increasing — building upon what we already have with GoTogether — is the ability to connect on these repeated events and be able to connect and understand and read the human input. And connect the dots when repeated events are happening — alerting companies’ boards that there is a certain ‘hot pocket’ that they need to go and investigate.

“That would save companies from great risk, great cost, and essentially could prevent huge loss. Not only financial but reputational, sometimes it’s even loss to human lives… That’s where we’re getting to and what we’re aiming to achieve.”
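As a rough illustration of the kind of linkage Meidav describes, the toy sketch below (Python) flags reports that describe similar issues even when no individual is named, using simple token overlap as a stand-in for real natural language processing. It is a hypothetical example, not Vault's algorithm.

def token_set(text):
    return {word.lower().strip(".,") for word in text.split()}

def link_related_reports(reports, min_overlap=0.3):
    # Compare every pair of reports and flag those whose wording overlaps strongly.
    links = []
    for i in range(len(reports)):
        for j in range(i + 1, len(reports)):
            a, b = token_set(reports[i]), token_set(reports[j])
            overlap = len(a & b) / len(a | b)   # Jaccard similarity of the word sets
            if overlap >= min_overlap:
                links.append((i, j, round(overlap, 2)))
    return links

reports = [
    "irregularities in accounting invoices in the finance department",
    "suspected fraudulent invoices raised by the finance department",
    "harassment complaint about a team offsite",
]
print(link_related_reports(reports))  # the two finance-related reports are linked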

There is the question of how defensible Vault’s GoTogether feature is — how easily it could be copied — given you can’t patent an idea. So baking in AI smarts may be a way to layer added sophistication to try to maintain a competitive edge.

“There’s some very sophisticated, unique technology there in the backend so we are continuing to invest in this side of our technology. And Gradient’s investment and the specific [support] we’re receiving from Google now will only increase that element and that side of our business,” says Meidav when we ask about defensibility.

Commenting on the funding in a statement, Gradient Ventures founder and managing partner, Anna Patterson, added: “Vault tackles an important space with an innovative and timely solution. Vault’s application provides organizations with a data-driven approach to tackling challenges like occupational fraud, bribery or corruption incidents, safety failures and misconduct. Given their impressive team, technology, and customer traction, they are poised to improve the modern workplace.”

The London-based startup was only founded in 2018 — and while it’s most keen to talk about disrupting legacy hotline systems, which offer only a linear and passive conduit for misconduct reporting, there are a number of other startups playing in the same space. Examples include the likes of LA-based AllVoices, YC-backed Whispli, Hootsworth and Spot, to name a few.

Competition seems likely to continue to increase as regulatory requirements around workplace reporting keep stepping up.

The incoming EU Whistleblower Protection Directive is one piece of regulation Vault expects will increase demand for smarter compliance solutions — aka “TrustTech”, as it seeks to badge the category — as it will require companies with more than 250 employees to have a reporting solution in place by the end of December 2021, encouraging European businesses to cast around for tools to help shrink their misconduct-related risk.

Meidav also suggests a platform solution can help bridge gaps between different internal teams that may need to be involved in addressing complaints, as well as helping to speed up internal investigations by offering the ability to chat anonymously with the original reporter.

Meidav also flags the rising attention US regulators are giving to workplace misconduct reporting — noting some recent massive awards by the SEC to external whistleblowers, such as the $28M paid out to a single whistleblower earlier this year (in relation to the Panasonic Avionics consultant corruption case).

She also argues that growing numbers of companies going public (such as via the SPAC trend, where there will have been reduced regulatory scrutiny ahead of the ‘blank check’ IPO) raises reporting requirements generally — meaning, again, more companies will need to have in place a system operated by a third party which allows anonymous and non-anonymous reporting. (And, well, we can only speculate whether companies going public by SPAC may be in greater need of misconduct reporting services vs companies that choose to take a more traditional and scrutinized route to market… )

“Just a few years back I had to convince investors that this category it really is a category — and fast forward to 2021, congratulations! We have a market here. It’s a growing category and there is competition in this space,” says Meidav.

“What truly differentiates Vault is that we did not just focus on digitizing an old legacy process. We focused on leveraging technology to truly empower more misconduct to surface internally and for employees to speak up in ways that weren’t available for them before. GoTogether is truly unique as well as the things that we’re doing on the operational side for a company — such as collaboration.”

She gives an example of how a customer in the oil and gas sector configured the platform to make use of an anonymous chat feature in Vault’s app so they could provide employees with a secure direct-line to company leadership.

“They’re utilizing the anonymous chat that the app enables for people to have a direct line to leadership,” she says. “That’s incredible. That is such a progressive, forward-looking way to be utilizing this tool.”

Vault Platform’s suite of tools includes an employee app and a Resolution Hub for compliance, HR, risk and legal teams (Image Credits: Vault Platform)

Meidav says Vault has around 30 customers at this stage, split between the US and EU — its core regions of focus.

And while its platform is geared towards enterprises, its early customer base includes a fair number of scale-ups — with familiar names like Lemonade, Airbnb, Kavak, G2 and OVO Energy on the list.

Scale-ups may be natural customers for this sort of product, given the huge pressures that can be brought to bear upon company culture as a startup switches to expanding headcount very rapidly, per Meidav.

“They are the early adopters and they are also very much sensitive to events such as these kind of [workplace] scandals as it can impact them greatly… as well as the fact that when a company goes through a hyper growth — and usually you see hyper growth happening in tech companies more than in any other type of sector — hyper growth is at time when you really, as management, as leadership, it’s really important to safeguard your culture,” she suggests.

“Because it changes very, very quickly and these changes can lead to all sorts of things — and it’s really important that leadership is on top of it. So when a company goes through hyper growth it’s an excellent time for them to incorporate a tool such as Vault. As well as the fact that every company that even thinks of an IPO in the coming months or years will do very well to put a tool like Vault in place.”

Expanding Vault’s own team is also on the cards after this Series A close, as it guns for the next phase of growth for its own business. Presumably, though, it’s not short of a misconduct reporting solution.

Facebook’s use of ad data triggers antitrust probes in UK and EU

By Natasha Lomas

Facebook is facing a fresh pair of antitrust probes in Europe.

The UK’s Competition and Markets Authority (CMA) and the EU’s Competition Commission both announced formal investigations into the social media giant’s operations today — with what’s likely to have been co-ordinated timing.

The competition regulators will scrutinize how Facebook uses data from advertising customers and users of its single sign-on tool — specifically looking at whether it uses this data as an unfair lever against competitors in markets such as classified ads.

The pair also said they will seek to work closely together as their independent investigations progress.

With the UK outside the European trading bloc (post-Brexit), the national competition watchdog has a freer rein to pursue investigations that may be similar to or overlap with antitrust probes the EU is also undertaking.

And the two Facebook investigations do appear similar on the surface — with both broadly focused on how Facebook uses advertising data. (Though outcomes could of course differ.)

The danger for Facebook, here, is that a higher dimension of scrutiny will be applied to its business as a result of dual regulatory action — with the opportunity for joint working and cross-referencing of its responses (not to mention a little investigative competition between the UK and the EU’s agencies).

The CMA said it’s concerned that Facebook may have gained an unfair advantage over competitors providing services for online classified ads and online dating through how it gathers and uses certain data.

Facebook plays in both spaces of course, via Facebook Marketplace and Facebook Dating respectively.

In a statement on its action, CMA CEO, Andrea Coscelli, said: “We intend to thoroughly investigate Facebook’s use of data to assess whether its business practices are giving it an unfair advantage in the online dating and classified ad sectors. Any such advantage can make it harder for competing firms to succeed, including new and smaller businesses, and may reduce customer choice.”

The European Commission’s investigation will — similarly — focus on whether Facebook violated the EU’s competition rules by using advertising data gathered from advertisers in order to compete with them in markets where it is active.

Although it only cites classified ads as its example of the neighbouring market of particular concern for its probe.

The EU’s probe has another element, though, as it said it’s also looking at whether Facebook ties its online classified ads service to its social network in breach of the bloc’s competition rules.

In a separate (national) action, Germany’s competition authority opened a similar probe into Facebook tying Oculus to use of a Facebook account at the end of last year. So Facebook now has multiple antitrust probes on its plate in Europe, adding to its woes from the massive antitrust lawsuit filed against it by a coalition of US states on home turf, also back in December 2020.

“When advertising their services on Facebook, companies, which also compete directly with Facebook, may provide it commercially valuable data. Facebook might then use this data in order to compete against the companies which provided it,” the Commission noted in a press release.

“This applies in particular to online classified ads providers, the platforms on which many European consumers buy and sell products. Online classified ads providers advertise their services on Facebook’s social network. At the same time, they compete with Facebook’s own online classified ads service, ‘Facebook Marketplace’.”

The Commission added that a preliminary investigation it already undertook has raised concerns Facebook is distorting the market for online classified ads services. It will now take an in-depth look in order to make a full judgement on whether the social media behemoth is breaking EU competition rules.

Commenting in a statement, EVP Margrethe Vestager, who also heads up competition policy for the bloc, added: “Facebook is used by almost 3 billion people on a monthly basis and almost 7 million firms advertise on Facebook in total. Facebook collects vast troves of data on the activities of users of its social network and beyond, enabling it to target specific customer groups. We will look in detail at whether this data gives Facebook an undue competitive advantage in particular on the online classified ads sector, where people buy and sell goods every day, and where Facebook also competes with companies from which it collects data. In today’s digital economy, data should not be used in ways that distort competition.”

Reached for comment on the latest European antitrust probes, Facebook sent us this statement:

“We are always developing new and better services to meet evolving demand from people who use Facebook. Marketplace and Dating offer people more choices and both products operate in a highly competitive environment with many large incumbents. We will continue to cooperate fully with the investigations to demonstrate that they are without merit.”

Until now, Facebook has been a bit of a blind spot for the Commission’s competition authority — with multiple investigations and enforcements chalked up by the bloc against other tech giants, such as (most notably) Google and Amazon.

But Vestager’s Facebook ‘dry patch’ has now formally come to an end.

The CMA, meanwhile, is working on wider pro-competition regulatory reforms aimed squarely at tech giants like Facebook and Google under a UK plan to clip the wings of the adtech duopoly.

 

In latest Big Tech antitrust push, Germany’s FCO eyes Google News Showcase fine print

By Natasha Lomas

The Bundeskartellamt, Germany’s very active competition authority, isn’t letting the grass grow under new powers it gained this year to tackle Big Tech: The Federal Cartel Office (FCO) has just announced a third proceeding against Google.

The FCO’s latest competition probe looks very interesting, as it’s targeting Google News Showcase — Google’s relatively recently launched product which curates a selection of third-party publishers’ content to appear in story panels on Google News (and other Google properties), content for which the tech giant pays a licensing fee.

Google started cutting content licensing deals with publishers around the world for News Showcase last year, announcing a total pot of $1 billion to fund the arrangements — with Germany one of the first markets where it inked deals.

However, its motivation to pay publishers to license their journalism is hardly pure.

It follows years of bitter accusations from media companies that Google is freeloading off their content. To which the tech giant routinely responded with stonewalling statements — saying it would never pay for content because that’s not how online aggregation works. It also tried to fob off the industry with a digital innovation fund (aka the Google News Initiative), which distributes small grants and offers free workshops and product advice, seeking to frame publishers’ decimated business models as a failure of innovation, leaving Google’s adtech machine scot-free to steamroller on.

Google’s stonewalling-plus-chicken-feeding approach worked to stave off regulatory action for a long time, but eventually enough political pressure built up around the issue of media business models versus the online advertising duopoly that legislators started to make moves to try to address the power imbalance between traditional publishers and intermediating tech giants.

Most infamously in Australia, where lawmakers passed a news media bargaining code earlier this year.

Prior to its passage, both Facebook and Google, the twin targets for that law, warned the move could result in dire consequences — such as a total shutdown of their products, reduced quality or even fees to use their services.

Nothing like that happened, but lawmakers did agree to a last-minute amendment — adding a two-month mediation period to the legislation which allows digital platforms and publishers to strike deals on their own before having to enter into forced arbitration.

Critics say that allows for the two tech giants to continue to set their own terms when dealmaking with publishers, leveraging market muscle to strike deals that may disproportionately benefit Australia’s largest media firms — and doing so without any external oversight and with no guarantees that the resulting content arrangements foster media diversity and plurality or even support quality journalism.

In the EU, lawmakers acted earlier — taking the controversial route of extending copyright to cover snippets of news content back in 2019. (And Monday June 7 is the deadline for Member States to have transposed the rules into national law.)

France was among the first EU countries to bake the provision into national law — and its competition watchdog quickly ordered Google to pay for news reuse back in 2020 after Google tried to wiggle out of the legislation by stopping displaying snippets in the market.

It responded to the competition authority’s order with more obfuscation, though, agreeing earlier this year to pay French publishers for content reuse but also for their participation in News Showcase — bundling required-by-law payments (for news reuse) with content licensing deals of its own devising. And thereby making it difficult to understand the balance of mandatory payments versus commercial arrangements.

The problem with News Showcase is that these licensing arrangements are being done behind closed doors, in many cases ahead of relevant legislation and thus purely on Google’s terms — which means the initiative risks exacerbating concerns about the power imbalance between it and traditional publishers caught in a revenue bind as their business models have been massively disrupted by the switch to digital.

If Google suddenly offers some money for content, plenty of publishers might well jump — regardless of the terms. And perhaps especially because any publishers that hold out against licensing content to Google at the price it likes risk being disadvantaged by reduced visibility for their content, given Google’s dominance of the search market and content discoverability (via its ability to direct traffic to specific media properties, such as based on how prominently News Showcase content is displayed, for example).

The competition implications look clear.

But it’s still impressive that the Bundeskartellamt is spinning up an investigation into News Showcase so quickly.

The FCO said it’s acting on a complaint from Corint Media — looking at whether the announced integration of the Google News Showcase service into Google’s general search function is “likely to constitute self-preferencing or an impediment to the services offered by competing third parties”.

It also said it’s looking at whether contractual conditions include unreasonable terms (“to the detriment of the participating publishers”); and, in particular, “make it disproportionately difficult for them to enforce the ancillary copyright for press publishers introduced by the German Bundestag and Bundesrat in May 2021” — a reference to the transposed neighbouring right for news in the EU copyright reform.

So it will be examining the core issue of whether Google is trying to use News Showcase to undermine the new EU rights publishers gained under the copyright reform.

The FCO also said it wants to look at “how the conditions for access to Google’s News Showcase service are defined”.

Google launched News Showcase in Germany on October 1, 2020, with an initial 20 media companies participating, covering 50 publications, although more have been added since.

Per the FCO, the News Showcase “story panels” were initially integrated in the Google News app but can now also be found in Google News on the desktop. It also notes that Google has said the panels will soon also appear in the general Google search results — a move that will further dial up the competition dynamics around the product, given Google’s massive dominance of the search market in Europe.

Commenting on its proceeding in a statement, Andreas Mundt, president of the Bundeskartellamt, said: “Cooperating with Google can be an attractive option for publishers and other news providers and offer consumers new or improved information services. However, it must be ensured that this will not result in discrimination between individual publishers. In addition, Google’s strong position in providing access to end customers must not lead to a situation where competing services offered by publishers or other news providers are squeezed out of the market. There must be an adequate balance between the rights and obligations of the content providers participating in Google’s programme.”

Google was contacted for comment on the FCO’s action — and it sent us this statement, attributed to spokesperson Kay Oberbeck:

Showcase is one of many ways Google supports journalism, building on products and funds that all publishers can benefit from. Showcase is an international licensing program for news — the selection of partners is based on objective and non-discriminatory criteria, and partner content is not given preference in the ranking of our results. We will cooperate fully with the German Competition Authority and look forward to answering their questions.

The FCO’s scrutiny of Google News Showcase follows hard on the heels of two other Google proceedings it opened last month: one to determine whether or not the tech giant meets the threshold of Germany’s new competition powers for tackling Big Tech — and another examining its data processing practices. Both remain ongoing.

The competition authority has also recently opened a proceeding into Amazon’s market dominance — and is also looking to extend another recent investigation of Facebook’s Oculus business, also by determining whether the social media giant’s business meets the threshold required under the new law.

The amendment to the German Competition Act came into force in January — giving the FCO greater powers to proactively impose conditions on large digital companies that are considered to be of “paramount significance for competition across markets” in order to pre-emptively control the risk of market abuse.

That it’s taking on so many proceedings in parallel against Big Tech shows it’s keen not to waste any time — putting itself in a position to come up, as quickly as possible, with proactive interventions to address competitive problems caused by platform giants just as soon as it determines it can legally do so.

The Bundeskartellamt also has a pioneering case against Facebook’s “superprofiling” on its desk — which links privacy abuse to competition concerns and could drastically limit the tech giant’s ability to profile users. That investigation and case has been ongoing for years but was recently referred to Europe’s top court for an interpretation of key legal questions.

 

Europe wants to go its own way on digital identity

By Natasha Lomas

In its latest ambitious digital policy announcement, the European Union has proposed creating a framework for a “trusted and secure European e-ID” (aka digital identity) — which it said today it wants to be available to all citizens, residents and businesses, to make it easier to use a national digital identity to prove who they are in order to access public sector or commercial services regardless of where they are in the bloc.

The EU does already have a regulation on electronic authentication systems (eIDAS), which entered into force in 2014, but the Commission’s intention with the e-ID proposal is to expand on that by addressing some of its limitations and inadequacies (such as poor uptake and a lack of mobile support).

It also wants the e-ID framework to incorporate digital wallets — meaning the user will be able to choose to download a wallet app to a mobile device where they can store and selectively share electronic documents which might be needed for a specific identity verification transaction, such as when opening a bank account or applying for a loan. Other functions (like e-signing) are also envisaged as being supported by these e-ID digital wallets.
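As a purely illustrative sketch of that selective-sharing idea, the toy Python example below models a wallet that stores attested attributes locally and reveals only the fields a verifier asks for, and only with the holder's consent. The field names and consent step are assumptions for illustration, not part of the EU proposal.

class DigitalIdentityWallet:
    def __init__(self, attributes):
        # Attributes would in practice be issued/attested by a national e-ID scheme.
        self.attributes = attributes

    def present(self, requested_fields, user_consents):
        # Share nothing without the holder's consent; otherwise share only what was asked for.
        if not user_consents:
            return {}
        return {f: self.attributes[f] for f in requested_fields if f in self.attributes}

wallet = DigitalIdentityWallet({
    "name": "Jane Doe",
    "date_of_birth": "1990-01-01",
    "nationality": "FR",
    "address": "12 Rue Exemple, Paris",
})
# A bank opening an account asks only for name and date of birth; nothing else is disclosed.
print(wallet.present(["name", "date_of_birth"], user_consents=True))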

Other examples the Commission gives where it sees a harmonized e-ID coming in handy include renting a car or checking into a hotel. EU lawmakers also suggest full interoperability for authentication of national digital IDs could be helpful for citizens needing to submit a local tax declaration or enrolling in a regional university.

Some Member States do already offer national electronic IDs but there’s a problem with interoperability across borders, per the Commission, which noted today that just 14% of key public service providers across all Member States allow cross-border authentication with an e-Identity system, though it also said cross-border authentications are rising.

A universally accepted ‘e-ID’ could — in theory — help grease digital activity throughout the EU’s single market by making it easier for Europeans to verify their identity and access commercial or publicly provided services when travelling or living outside their home market.

EU lawmakers also seem to believe there’s an opportunity to ‘own’ a strategic piece of the digital puzzle here, if they can create a unifying framework for all European national digital IDs — offering consumers not just a more convenient alternative to carrying around a physical version of their national ID (at least in some situations), and/or other documents they might need to show when applying to access specific services, but what commissioners billed today as a “European choice” — i.e. vs commercial digital ID systems which may not offer the same high-level pledge of a “trusted and secure” ID system that lets the user entirely control who gets to see which bits of their data.

A number of tech giants do of course already offer users the ability to sign in to third party digital services using the same credentials to access their own service. But in most cases doing so means the user is opening a fresh conduit for their personal data to flow back to the data-mining platform giant that controls the credential, letting Facebook (etc) further flesh out what it knows about that user’s Internet activity.

“The new European Digital Identity Wallets will enable all Europeans to access services online without having to use private identification methods or unnecessarily sharing personal data. With this solution they will have full control of the data they share,” is the Commission’s alternative vision for the proposed e-ID framework.

It also suggests the system could create substantial upside for European businesses — by supporting them in offering “a wide range of new services” atop the associated pledge of a “secure and trusted identification service”. And driving public trust in digital services is a key plank of how the Commission approaches digital policymaking — arguing that it’s an essential lever to grow uptake of online services.

However, ‘ambitious’ is a polite word for how viable this e-ID scheme looks.

Aside from the tricky issue of adoption (i.e. actually getting Europeans to A) know about e-ID, and B) actually use it, by also C) getting enough platforms to support it, as well as D) getting providers on board to create the necessary wallets for envisaged functionality to pan out and be as robustly secure as promised), they’ll also — presumably — need to E) convince and/or compel web browsers to integrate e-ID so it can be accessed in a streamlined way.

The alternative (not being baked into browsers’ UIs) would surely make the other adoption steps trickier.

The Commission’s press release is fairly thin on such detail, though — saying only that: “Very large platforms will be required to accept the use of European Digital Identity wallets upon request of the user.”

Nonetheless, a whole chunk of the proposal is given over to discussion of “Qualified certificates for website authentication” — a trusted services provision, also expanding on the approach taken in eIDAS, which the Commission is keen for e-ID to incorporate in order to further boost user trust by offering a certified guarantee of who’s behind a website (although the proposal says it will be voluntary for websites to get certified).

The upshot of this component of the proposal is that web browsers would need to support and display these certificates, in order for the envisaged trust to flow — which sums to a whole lot of highly nuanced web infrastructure work needed to be done by third parties to interoperate with this EU requirement. (Work that browser makers already seem to have expressed serious misgivings about.)

Web browsers will be forced/compelled to accept authentication certificates. This is to guarantee the proof of the website operator identity. What standards should be used here? Will web browsers implement it? pic.twitter.com/sygngNHyQW

— Lukasz Olejnik (@lukOlejnik) June 3, 2021

Another big question-mark thrown up by the Commission’s e-ID plan is how exactly the envisaged certified digital identity wallets would store — and most importantly safeguard — user data. That very much remains to be determined, at this nascent stage.

There’s discussion in the regulation’s recitals, for example, of Member States being encouraged to “set-up jointly sandboxes to test innovative solutions in a controlled and secure environment in particular to improve the functionality, protection of personal data, security and interoperability of the solutions and to inform future updates of technical references and legal requirements”.

And it seems that a range of approaches are being entertained, with recital 11 discussing using biometric authentication for accessing digital wallets (while also noting potential rights risks as well as the need to ensure adequate security):

European Digital Identity Wallets should ensure the highest level of security for the personal data used for authentication irrespective of whether such data is stored locally or on cloud-based solutions, taking into account the different levels of risk. Using biometrics to authenticate is one of the identifications methods providing a high level of confidence, in particular when used in combination with other elements of authentication. Since biometrics represents a unique characteristic of a person, the use of biometrics requires organisational and security measures, commensurate to the risk that such processing may entail to the rights and freedoms of natural persons and in accordance with Regulation 2016/679.

In short, it’s clear that underlying the Commission’s big, huge idea of a unified (and unifying) European e-ID is a complex mass of requirements needed to deliver on the vision of a secure and trusted European digital ID that doesn’t just languish ignored and unused by most web users — some highly technical requirements, others (such as achieving the sought for widespread adoption) no less challenging.

The impediments to success here certainly look daunting.

Nonetheless, lawmakers are ploughing ahead, arguing that the pandemic’s acceleration of digital service adoption has shown the pressing need to address eIDAS’ shortcomings — and deliver on the goal of “effective and user-friendly digital services across the EU”.

Alongside today’s regulatory proposal they’ve put out a Recommendation, inviting Member States to “establish a common toolbox by September 2022 and to start the necessary preparatory work immediately” — with a goal of publishing the agreed toolbox in October 2022 and starting pilot projects (based on the agreed technical framework) sometime thereafter.

“This toolbox should include the technical architecture, standards and guidelines for best practices,” the Commission adds, eliding the large cans of worms being firmly cracked open.

Still, its penciled-in timeframe for mass adoption — of around a decade — does a better job of illustrating the scale of the challenge, with the Commission writing that it wants 80% of citizens to be using an e-ID solution by 2030.

The even longer game the bloc is playing is to try to achieve digital sovereignty so it’s not beholden to foreign-owned tech giants. And an ‘own brand’, autonomously operated European digital identity does certainly align with that strategic goal.

Medium sees more employee exits after CEO publishes ‘culture memo’

By Natasha Mascarenhas

In April, Medium CEO Ev Williams wrote a memo to his staff about the company’s shifting culture in the wake of a challenging year.

“A healthy culture brings out the best in people,” he wrote. “They feel psychologically safe voicing their ideas and engaging in debate to find the best answer to any question — knowing that their coworkers are assuming good intent and giving them the benefit of the doubt because they give that in return.”

A few paragraphs later, Williams wrote that while counterperspectives and unpopular opinions are “always encouraged” to help make decisions, “repeated interactions that are nonconstructive, cast doubt, assume bad intent, make unsubstantiated accusations, or otherwise do not contribute to a positive environment have a massive negative impact on the team and working environment.”

He added: “These behaviors are not tolerated.”

The internal memo, obtained and verified by TechCrunch, was published nearly one month after Medium staff’s unionization attempt failed to pass, and roughly one week after Williams announced a pivot of the company’s editorial ambitions to focus less on in-house content and more on user-generated work.

Medium’s editorial team got voluntary payouts as part of the shift, with VP of Editorial Siobhan O’Connor and the entire staff of GEN Magazine stepping away.

However, several current and former employees told TechCrunch that they believe Medium’s mass exodus is tied more to Williams’ manifesto, dubbed “the culture memo,” than a pivot in editorial focus. Since the memo was published, many non-editorial staffers — who would presumably not be impacted by a shift in content priorities — have left the company, including product managers, several designers and dozens of engineers.

 

Those departing allege that Williams is trying to perform yet another reset of company strategy, at the cost of its most diverse talent. One pull of internal data that includes engineers, editorial staff, the product team, and a portion of its HR and finance team suggests that, of the 241 people who started the year at Medium, some 50% of that pool are now gone. Medium, which has hired employees to fill some vacancies, denied these metrics, stating that it currently has 179 employees.

Medium said that 52% of departures were white, and that one third of the company is non-white and non-Asian. The first engineer that TechCrunch spoke to said that minorities are overrepresented in the departures at the company. They also added that, when they joined Medium, there were three transgender engineers. All have since left.

“A beloved dictator vibe”

In February, a number of Medium employees — led by the editorial staff — announced plans to organize into a union. The unionization effort was eventually defeated after falling short by one vote, a shortfall that some employees think was due to Medium executives pressuring staff to vote against the union.

The month after the unionization effort failed, Medium announced an editorial pivot. The company offered new positions or voluntary payouts for editorial staff. A number of employees left, which is not uncommon in the aftermath of a tense time period such as a failed unionization and the offer of a clear, financially safe route out.

In April, Williams posted the culture memo outlining his view on the company’s purpose and operating principles. In the memo, he writes that “there is no growth without risk-taking and no risk-taking without occasional failure” and that “feedback is a gift, and even tough feedback can and should be delivered with empathy and grace.” The CEO also noted the company’s commitment to diversity, and how adapting to “opportunities or threats is a prerequisite for winning.”

Notably, Medium has gone through a number of editorial strategy changes, dipping in and out of subscriptions, in-house content, and now, leaning on user-generated content and paid commissions.

“Team changes, strategy changes and reorganizations are inevitable. Each person’s adaptivity is a core strength of the company,” the memo reads.

The memo doesn’t explicitly address the unionization attempt, but does talk about how Medium will not tolerate “repeated interactions that are nonconstructive, cast doubt, assume bad intent, make unsubstantiated accusations, or otherwise do not contribute to a positive environment [but] have a massive negative impact on the team and working environment.”

Employees that we spoke to think that Williams’ memo, while internal rather than publicly posted, is reminiscent of statements put out by Coinbase CEO Brian Armstrong and Basecamp CEO Jason Fried, which both banned political discussion at work due to its incendiary or “distracting” nature. While the Medium memo doesn’t wholly ban politics, the first engineer said that the “undertone” of the statement creates a “not safe work environment.” Frustrated employees created a side-Slack to talk about issues at Medium.

In a statement to TechCrunch, Medium said that “many employees said they appreciated the clarity and there were directors and managers involved in shaping it.”

The month of the memo, churn tripled at the company compared to the month prior and was 30 times higher than the January metric, according to an internal data set obtained by TechCrunch.

The second engineer that spoke to TechCrunch left the company last month and said that the memo didn’t have anything “egregious” at first glance.

“It was more of a beloved dictator vibe, of like, your words are vague enough that they’re not enforceable on anything else, and it looks good on paper,” they said. “If you just saw that memo and nothing else, it’s not a Coinbase memo, it’s not a Basecamp memo.”

But, given the timing of the memo, the engineer said their interpretation of Williams’ message was clear.

“[Medium wants] to enforce good vibes and shut down anything that is questioning ‘the mission,’” they said.

Medium’s extreme 

The same engineer thinks that “very few people left because of the editorial pivot.” Instead, the engineer described a history of problematic issues at Medium, with a wave of departures that appear to have been triggered by the memo.

In July 2019, for example, Medium chose to publish a series that included a profile of Trump supporter Joy Villa with the headline “I have never been as persecuted for being Black or Latina as I have been for supporting Trump.”

When the Latinx community at Medium spoke to leadership about discomfort with the headline, they claimed that executives from editorial didn’t do anything about it until it was mentioned in a public Slack channel. One editor asked anyone who had gone through the immigration process or was a part of the Latinx community to get in a room and explain their side, a moment that felt diminishing to this employee. The headline was only changed once employees posted about their qualms in a public Slack channel.

“They think caring is enough,” the employee said. “And that listening is merciful and really caring, and therefore they’re really shocked when that is not enough.”

The third engineer who spoke to TechCrunch joined the company in 2019 because they were looking for a mission-driven company impacting more than just tech. They realized Medium had “deeper issues” during the Black Lives Matter movement last summer.

“There were deeper issues that I just hadn’t heard about because I wasn’t part of them. That just kind of got slid under the rug,” they said, such as the Trump supporter profile. The former employee explained how they learned that HR had ignored a report of an employee saying the N-word during that time, too. Medium said this is false.

“I don’t feel like I needed the memo to really understand their true colors,” they said.

After The Verge and Platformer published a report on Medium’s messy culture and chaotic editorial strategy, the second engineer said that multiple employees who were assumed to be tied to the story were pressured to resign.

“The way I see it, they fought dirty to defeat the union,” the first engineer said. “But it wasn’t a total success because all of these people have decided to leave in the wake of the decision, and that’s the cost. The people who are left basically feel like they have to nod and smile because Medium has made it clear that they don’t want you to bring your full self to work.”

The engineer said that Medium’s culture reckoning is different from Coinbase’s because of the mission-oriented promise of the former.

“Some companies, like Coinbase, have said that ‘we want people who are not going to bring politics and social issues to work,’ so if you join Coinbase, that’s what you are expecting, and that’s fine,” they said. “But Medium specifically recruited people who care about the world, and justice, and believe in the freedom of speech and transparency.”

The engineer plans to officially resign soon and already has interviews lined up.

“It’s a good job market out there for software engineers, so why would I work for a company that is treating their own people unfairly?”

Europe’s cookie consent reckoning is coming

By Natasha Lomas

Cookie pop-ups getting you down? Complaints that the web is ‘unusable’ in Europe because of frustrating and confusing ‘data choices’ notifications that get in the way of what you’re trying to do online certainly aren’t hard to find.

What is hard to find is the ‘reject all’ button that lets you opt out of non-essential cookies which power unpopular stuff like creepy ads. Yet the law says there should be an opt-out clearly offered. So people who complain that EU ‘regulatory bureaucracy’ is the problem are taking aim at the wrong target.

EU law on cookie consent is clear: Web users should be offered a simple, free choice — to accept or reject.

The problem is that most websites simply aren’t compliant. They choose to make a mockery of the law by offering a skewed choice: Typically a super simple opt-in (to hand them all your data) vs a highly confusing, frustrating, tedious opt-out (and sometimes even no reject option at all).

Make no mistake: This is ignoring the law by design. Sites are choosing to try to wear people down so they can keep grabbing their data by only offering the most cynically asymmetrical ‘choice’ possible.

However, since that’s not how cookie consent is supposed to work under EU law, sites that are doing this are opening themselves up to large fines under the General Data Protection Regulation (GDPR) and/or ePrivacy Directive for flouting the rules.

See, for example, these two whopping fines handed to Google and Amazon in France at the back end of last year for dropping tracking cookies without consent…

While those fines were certainly head-turning, we haven’t generally seen much EU enforcement on cookie consent — yet.

This is because data protection agencies have mostly taken a softly-softly approach to bringing sites into compliance. But there are signs enforcement is going to get a lot tougher. For one thing, DPAs have published detailed guidance on what proper cookie compliance looks like — so there are zero excuses for getting it wrong.

Some agencies had also been offering compliance grace periods to allow companies time to make the necessary changes to their cookie consent flows. But it’s now a full three years since the EU’s flagship data protection regime (GDPR) came into application. So, again, there’s no valid excuse to still have a horribly cynical cookie banner. It just means a site is trying its luck by breaking the law.

There is another reason to expect cookie consent enforcement to dial up soon, too: European privacy group noyb is today kicking off a major campaign to clean up the trashfire of non-compliance — with a plan to file up to 10,000 complaints against offenders over the course of this year. And as part of this action it’s offering freebie guidance for offenders to come into compliance.

Today it’s announcing the first batch of 560 complaints already filed against sites, large and small, located all over the EU (33 countries are covered). noyb said the complaints target companies that range from large players like Google and Twitter to local pages “that have relevant visitor numbers”.

“A whole industry of consultants and designers develop crazy click labyrinths to ensure imaginary consent rates. Frustrating people into clicking ‘okay’ is a clear violation of the GDPR’s principles. Under the law, companies must facilitate users to express their choice and design systems fairly. Companies openly admit that only 3% of all users actually want to accept cookies, but more than 90% can be nudged into clicking the ‘agree’ button,” said noyb chair and long-time EU privacy campaigner, Max Schrems, in a statement.

“Instead of giving a simple yes or no option, companies use every trick in the book to manipulate users. We have identified more than fifteen common abuses. The most common issue is that there is simply no ‘reject’ button on the initial page,” he added. “We focus on popular pages in Europe. We estimate that this project can easily reach 10,000 complaints. As we are funded by donations, we provide companies a free and easy settlement option — contrary to law firms. We hope most complaints will quickly be settled and we can soon see banners become more and more privacy friendly.”

To scale its action, noyb developed a tool which automatically parses cookie consent flows to identify compliance problems (such as no opt-out being offered at the top layer; confusing button coloring; or bogus ‘legitimate interest’ opt-ins, to name a few of the many chronicled offences) and automatically creates a draft report which can be emailed to the offender after it’s been reviewed by a member of the not-for-profit’s legal staff.
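
To make that automation a little more concrete, here is a minimal TypeScript sketch (using Puppeteer) of how a crawler could flag one of the most common issues noyb describes, a missing top-layer ‘reject’ option. To be clear, this is not noyb’s actual tool: the keyword lists, selectors and decision logic are illustrative assumptions only, and a real scan would need to cover many more of the offences listed above.

```ts
// Minimal sketch of an automated cookie-banner check, loosely inspired by the
// approach described above. NOT noyb's tool; keywords, selectors and the
// "finding" shape are illustrative assumptions.
import puppeteer from "puppeteer";

const REJECT_KEYWORDS = ["reject all", "decline", "refuse", "only necessary"];
const ACCEPT_KEYWORDS = ["accept all", "agree", "allow all"];

interface ConsentFinding {
  url: string;
  hasAcceptButton: boolean;
  hasRejectButtonOnFirstLayer: boolean;
}

async function scanSite(url: string): Promise<ConsentFinding> {
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle2", timeout: 30_000 });

    // Collect visible labels of clickable elements on the first layer only,
    // since a missing top-layer opt-out is the most commonly flagged issue.
    const labels = await page.$$eval(
      "button, a, [role='button']",
      (els) => els.map((el) => (el.textContent ?? "").trim().toLowerCase())
    );

    const matches = (keywords: string[]) =>
      labels.some((label) => keywords.some((k) => label.includes(k)));

    return {
      url,
      hasAcceptButton: matches(ACCEPT_KEYWORDS),
      hasRejectButtonOnFirstLayer: matches(REJECT_KEYWORDS),
    };
  } finally {
    await browser.close();
  }
}

// A finding with an accept option but no first-layer reject option would be
// queued for human legal review before any draft report goes out.
scanSite("https://example.com").then((finding) => {
  if (finding.hasAcceptButton && !finding.hasRejectButtonOnFirstLayer) {
    console.log(`Potential issue on ${finding.url}: no top-layer reject option`);
  } else {
    console.log(`${finding.url}: no obvious first-layer issue detected`);
  }
});
```

In practice a tool like this would also have to cope with consent banners rendered inside iframes or shadow DOM, and with non-English button labels, which is where much of the real engineering effort would go.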

It’s an innovative, scalable approach to tackling systematically cynical cookie manipulation in a way that could really move the needle and clean up the trashfire of horrible cookie pop-ups.

noyb is even giving offenders a warning first — and a full month to clean up their ways — before it will file an official complaint with their relevant DPA (which could lead to an eye-watering fine).

Its first batch of complaints are focused on the OneTrust consent management platform (CMP), one of the most popular template tools used in the region — and which European privacy researchers have previously shown (cynically) provides its client base with ample options to set non-compliant choices like pre-checked boxes… Talk about taking the biscuit.

A noyb spokeswoman said it’s started with OneTrust because its tool is popular but confirmed the group will expand the action to cover other CMPs in the future.

The first batch of noyb’s cookie consent complaints reveal the rotten depth of dark patterns being deployed — with 81% of the 500+ pages not offering a reject option on the initial page (meaning users have to dig into sub-menus to try to find it); and 73% using “deceptive colors and contrasts” to try to trick users into clicking the ‘accept’ option.

noyb’s assessment of this batch also found that a full 90% did not provide a way to easily withdraw consent as the law requires.

Cookie compliance problems found in the first batch of sites facing complaints (Image credit: noyb)

It’s a snapshot of truly massive enforcement failure. But dodgy cookie consents are now operating on borrowed time.

Asked if it was able to work out how prevalent cookie abuse might be across the EU based on the sites it crawled, noyb’s spokeswoman said it was difficult to determine, owing to technical difficulties encountered through its process, but she said an initial intake of 5,000 websites was whittled down to 3,600 sites to focus on. And of those it was able to determine that 3,300 violated the GDPR.

That still left 300 — as either having technical issues or no violations — but, again, the vast majority (90%) were found to have violations. And with so much rule-breaking going on it really does require a systematic approach to fixing the ‘bogus consent’ problem — so noyb’s use of automation tech is very fitting.

More innovation is also on the way from the not-for-profit — which told us it’s working on an automated system that will allow Europeans to “signal their privacy choices in the background, without annoying cookie banners”.

At the time of writing it couldn’t provide us with more details on how that will work (presumably it will be some kind of browser plug-in) but said it will be publishing more details “in the next weeks” — so hopefully we’ll learn more soon.

A browser plug-in that can automatically detect and select the ‘reject all’ button (even if only from a subset of the most prevalent CMPs) sounds like it could revive the ‘do not track’ dream. At the very least, it would be a powerful weapon to fight back against the scourge of dark patterns in cookie banners and kick non-compliant cookies to digital dust.
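
As a thought experiment on what the simplest version of such a plug-in could look like, here is a hedged TypeScript sketch of a browser-extension content script that hunts for a first-layer ‘reject’ control and clicks it on the user’s behalf. noyb hasn’t published technical details, so the keyword list and heuristics below are assumptions and would only ever catch a subset of banner designs.

```ts
// Hypothetical content script for a browser extension: finds a first-layer
// "reject all"-style control and clicks it. This is not noyb's planned plug-in
// (no details are public); the keywords and timing are illustrative assumptions.
const REJECT_LABELS = ["reject all", "decline all", "refuse", "only necessary"];

function findRejectControl(): HTMLElement | null {
  const candidates = document.querySelectorAll<HTMLElement>(
    "button, a, [role='button']"
  );
  for (const el of candidates) {
    const label = (el.textContent ?? "").trim().toLowerCase();
    if (REJECT_LABELS.some((keyword) => label.includes(keyword))) {
      return el;
    }
  }
  return null;
}

// Give the consent banner a moment to render before looking for the control.
window.addEventListener("load", () => {
  setTimeout(() => {
    const rejectButton = findRejectControl();
    if (rejectButton) {
      rejectButton.click(); // apply the user's saved "reject" preference
    }
  }, 2000);
});
```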

 

EU to review TikTok’s ToS after child safety complaints

By Natasha Lomas

TikTok has a month to respond to concerns raised by European consumer protection agencies earlier this year, EU lawmakers said today.

The Commission has launched what it described as “a formal dialogue” with the video sharing platform over its commercial practices and policy.

Areas of specific concern include hidden marketing, aggressive advertising techniques targeted at children, and certain contractual terms in TikTok’s policies that could be considered misleading and confusing for consumers, per the Commission.

Commenting in a statement, justice commissioner Didier Reynders added: “The current pandemic has further accelerated digitalisation. This has brought new opportunities but it has also created new risks, in particular for vulnerable consumers. In the European Union, it is prohibited to target children and minors with disguised advertising such as banners in videos. The dialogue we are launching today should support TikTok in complying with EU rules to protect consumers.”

The background to this is that back in February the European Consumer Organisation (BEUC) sent the Commission a report calling out a number of TikTok’s policies and practices — including what it said were unfair terms and copyright practices. It also flagged the risk of children being exposed to inappropriate content on the platform, and accused TikTok of misleading data processing and privacy practices.

Complaints were filed around the same time by consumer organisations in 15 EU countries — urging those national authorities to investigate the social media giant’s conduct.

The multi-pronged EU action means TikTok not only has the Commission looking at the detail of its small print but is also facing questions from a network of national consumer protection authorities — which is being co-led by the Swedish Consumer Agency and the Irish Competition and Consumer Protection Commission (which handles privacy issues related to the platform).

Nonetheless, the BEUC queried why the Commission hasn’t yet launched a formal enforcement procedure.

“We hope that the authorities will stick to their guns in this ‘dialogue’ which we understand is not yet a formal launch of an enforcement procedure. It must lead to good results for consumers, tackling all the points that BEUC raised. BEUC also hopes to be consulted before an agreement is reached,” a spokesperson for the organization told us.

Also reached for comment, TikTok sent us this statement on the Commission’s action, attributed to its director of public policy, Caroline Greer: 

“As part of our ongoing engagement with regulators and other external stakeholders over issues such as consumer protection and transparency, we are engaging in a dialogue with the Irish Consumer Protection Commission and the Swedish Consumer Agency and look forward to discussing the measures we’ve already introduced. In addition, we have taken a number of steps to protect our younger users, including making all under-16 accounts private-by-default, and disabling their access to direct messaging. Further, users under 18 cannot buy, send or receive virtual gifts, and we have strict policies prohibiting advertising directly appealing to those under the age of digital consent.”

The company told us it uses age verification for personalized ads — saying users must have verified that they are 13+ to receive these ads; as well as being over the age of digital consent in their respective EU country; and also having consented to receive targeted ads.
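
To make those three conditions concrete, here is a minimal TypeScript sketch of such an eligibility gate. The field names, the consent-age table and the fallback age are illustrative assumptions, not TikTok’s actual systems or API.

```ts
// Illustrative sketch of the three-part gate described above; field names and
// the consent-age values are assumptions, not TikTok's implementation.
const DIGITAL_CONSENT_AGE: Record<string, number> = {
  DE: 16, // EU countries set the age of digital consent between 13 and 16
  FR: 15,
  IT: 14,
  IE: 16,
};

interface UserProfile {
  country: string;            // ISO country code
  verifiedAge: number | null; // null if age has not been verified
  targetedAdsConsent: boolean;
}

function eligibleForPersonalisedAds(user: UserProfile): boolean {
  if (user.verifiedAge === null) return false;                // age must be verified
  if (user.verifiedAge < 13) return false;                    // minimum platform age
  const consentAge = DIGITAL_CONSENT_AGE[user.country] ?? 16; // default to strictest
  if (user.verifiedAge < consentAge) return false;            // below age of digital consent
  return user.targetedAdsConsent;                             // explicit consent required
}
```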

However TikTok’s age verification technology has been criticized as weak before now — and recent emergency child-safety-focused enforcement action by the Italian national data protection agency has led to TikTok having to pledge to strengthen its age verification processes in the country.

The Italian enforcement action also resulted in TikTok removing more than 500,000 accounts suspected of belonging to users aged under 13 earlier this month — raising further questions about whether it can really claim that under-13s aren’t routinely exposed to targeted ads on its platform.

In further background remarks it sent us, TikTok claimed it has clear labelling of sponsored content. But it also noted it’s made some recent changes — such as switching the label it applies on video advertising from ‘sponsored’ to ‘ad’ to make it clearer.

It also said it’s working on a toggle that aims to make it clearer to users when they may be exposed to advertising by other users, by enabling those users to prominently disclose that their content contains advertising.

TikTok said the tool is currently in beta testing in Europe but it said it expects to move to general availability this summer and will also amend its ToS to require users to use this toggle whenever their content contains advertising. (But without adequate enforcement that may just end up as another overlooked and easily abused setting.)

The company recently announced a transparency center in Europe in a move that looks intended to counter some of the concerns being raised about its business in the region, as well as to prepare it for the increased oversight that’s coming down the pipe for all digital platforms operating in the EU — as the bloc works to update its digital rulebook.

 

EU bodies’ use of US cloud services from AWS, Microsoft being probed by bloc’s privacy chief

By Natasha Lomas

Europe’s lead data protection regulator has opened two investigations into EU institutions’ use of cloud services from U.S. cloud giants Amazon and Microsoft, under so-called Cloud II contracts inked earlier between European bodies, institutions and agencies and AWS and Microsoft.

A separate investigation has also been opened into the European Commission’s use of Microsoft Office 365 to assess compliance with earlier recommendations, the European Data Protection Supervisor (EDPS) said today.

Wojciech Wiewiórowski is probing the EU’s use of U.S. cloud services as part of a wider compliance strategy announced last October following a landmark ruling by the Court of Justice (CJEU) — aka, Schrems II — which struck down the EU-US Privacy Shield data transfer agreement and cast doubt upon the viability of alternative data transfer mechanisms in cases where EU users’ personal data is flowing to third countries where it may be at risk from mass surveillance regimes.

In October, the EU’s chief privacy regulator asked the bloc’s institutions to report on their transfers of personal data to non-EU countries. This analysis confirmed that data is flowing to third countries, the EDPS said today. And that it’s flowing to the U.S. in particular — on account of EU bodies’ reliance on large cloud service providers (many of which are U.S.-based).

That’s hardly a surprise. But the next step could be very interesting as the EDPS wants to determine whether those historical contracts (which were signed before the Schrems II ruling) align with the CJEU judgement or not.

Indeed, the EDPS warned today that they may not — which could thus require EU bodies to find alternative cloud service providers in the future (most likely ones located within the EU, to avoid any legal uncertainty). So this investigation could be the start of a regulator-induced migration in the EU away from U.S. cloud giants.

Commenting in a statement, Wiewiórowski said: “Following the outcome of the reporting exercise by the EU institutions and bodies, we identified certain types of contracts that require particular attention and this is why we have decided to launch these two investigations. I am aware that the ‘Cloud II contracts’ were signed in early 2020 before the ‘Schrems II’ judgement and that both Amazon and Microsoft have announced new measures with the aim to align themselves with the judgement. Nevertheless, these announced measures may not be sufficient to ensure full compliance with EU data protection law and hence the need to investigate this properly.”

Amazon and Microsoft have been contacted with questions regarding any special measures they have applied to these Cloud II contracts with EU bodies.

The EDPS said it wants EU institutions to lead by example. And that looks important given how, despite a public warning from the European Data Protection Board (EDPB) last year — saying there would be no regulatory grace period for implementing the implications of the Schrems II judgement — there haven’t been any major data transfer fireworks yet.

The most likely reason for that is a fair amount of head-in-the-sand reaction and/or superficial tweaks made to contracts in the hopes of meeting the legal bar (but which haven’t yet been tested by regulatory scrutiny).

Final guidance from the EDPB is also still pending, although the Board put out detailed advice last fall.

The CJEU ruling made it plain that EU law in this area cannot simply be ignored. So as the bloc’s data regulators start scrutinizing contracts that are taking data out of the EU, some of these arrangements are, inevitably, going to be found wanting — and their associated data flows ordered to stop.

To wit: A long-running complaint against Facebook’s EU-US data transfers — filed by the eponymous Max Schrems, a long-time EU privacy campaigner and lawyer, all the way back in 2013 — is slowly winding toward just such a possibility.

Last fall, following the Schrems II ruling, the Irish regulator gave Facebook a preliminary order to stop moving Europeans’ data over the pond. Facebook sought to challenge that in the Irish courts but lost its attempt to block the proceeding earlier this month. So it could now face a suspension order within months.

How Facebook might respond is anyone’s guess but Schrems suggested to TechCrunch last summer that the company will ultimately need to federate its service, storing EU users’ data inside the EU.

The Schrems II ruling does generally look like it will be good news for EU-based cloud service providers which can position themselves to solve the legal uncertainty issue (even if they aren’t as competitively priced and/or scalable as the dominant US-based cloud giants).

Fixing U.S. surveillance law, meanwhile — so that it gets independent oversight and accessible redress mechanisms for non-citizens in order to no longer be considered a threat to EU people’s data, as the CJEU judges have repeatedly found — is certainly likely to take a lot longer than ‘months’. If indeed the US authorities can ever be convinced of the need to reform their approach.

Still, if EU regulators finally start taking action on Schrems II — by ordering high profile EU-US data transfers to stop — that might help concentrate US policymakers’ minds toward surveillance reform. Otherwise local storage may be the new future normal.

Europe to press the adtech industry to help fight online disinformation

By Natasha Lomas

The European Union plans to beef up its response to online disinformation, with the Commission saying today it will step up efforts to combat harmful but not illegal content — including by pushing for smaller digital services and adtech companies to sign up to voluntary rules aimed at tackling the spread of this type of manipulative and often malicious content.

EU lawmakers pointed to risks such as the threat to public health posed by the spread of harmful disinformation about COVID-19 vaccines as driving the need for tougher action.

Concerns about the impacts of online disinformation on democratic processes are another driver, they said.

A new more expansive code of practice on disinformation is now being prepared — and will, they hope, be finalized in September, to be ready for application at the start of next year.

The Commission’s gear change is a fairly public acceptance that the EU’s voluntary code of practice — an approach Brussels has taken since 2018 — has not worked out as hoped. And, well, we did warn them.

A push to get the adtech industry on board with demonetizing viral disinformation is certainly overdue.

It’s clear the online disinformation problem hasn’t gone away. Some reports have suggested problematic activity — like social media voter manipulation and computational propaganda — has been getting worse in recent years, rather than better.

However, getting visibility into the true scale of the disinformation problem remains a huge challenge, given that those best placed to know (the ad platforms) don’t freely open their systems to external researchers. And that’s something else the Commission would like to change.

Signatories to the EU’s current code of practice on disinformation are:

Google, Facebook, Twitter, Microsoft, TikTok, Mozilla, DOT Europe (former EDiMA), the World Federation of Advertisers (WFA) and its Belgian counterpart, the Union of Belgian Advertisers (UBA); the European Association of Communications Agencies (EACA) and its national members from France, Poland and the Czech Republic — respectively, Association des Agences Conseils en Communication (AACC), Stowarzyszenie Komunikacji Marketingowej/Ad Artis Art Foundation (SAR) and Asociace Komunikacnich Agentur (AKA); the Interactive Advertising Bureau (IAB Europe), Kreativitet & Kommunikation, and Goldbach Audience (Switzerland) AG.

EU lawmakers said they want to broaden participation by getting smaller platforms to join, as well as recruiting all the various players in the adtech space whose tools provide the means for monetizing online disinformation.

Commissioners said they want to see the code covering a “whole range” of actors in the online advertising industry (i.e. rather than the current handful).

It’s certainly notable that the digital advertising industry body Internet Advertising Bureau is not on that list. (We’ve reached out to the IAB Europe to ask if it’s planning to join the code and will update this report with any response.)

In its press release today the Commission also said it wants platforms and adtech players to exchange information on disinformation ads that have been refused by one of them — so there can be a more coordinated response to shut out bad actors.

As for those who are signed up already, the Commission’s report card on their performance was bleak.

Speaking during a press conference, internal market commissioner Thierry Breton said that only one of the five platform signatories to the code has “really” lived up to its commitments — which was presumably a reference to the first five tech giants in the above list (aka: Google, Facebook, Twitter, Microsoft and TikTok).

Breton demurred on doing an explicit name-and-shame of the four others — who he said have not “at all” done what was expected of them — saying it’s not the Commission’s place to do that.

Rather he said people should decide among themselves which of the platform giants that signed up to the code have failed to live up to their commitments. (Signatories since 2018 have pledged to take action to disrupt ad revenues of accounts and websites that spread disinformation; to enhance transparency around political and issue-based ads; tackle fake accounts and online bots; to empower consumers to report disinformation and access different news sources while improving the visibility and discoverability of authoritative content; and to empower the research community so outside experts can help monitor online disinformation through privacy-compliant access to platform data.)

Frankly it’s hard to imagine who from the above list of five tech giants might actually be meeting the Commission’s bar. (Microsoft perhaps, on account of its relatively modest social activity vs the others.)

Safe to say, there’s been a lot more hot air (in the form of selective PR) on the charged topic of disinformation vs hard accountability from the major social platforms over the past three years.

So it’s perhaps no accident that Facebook chose today to puff up its historical efforts to combat what it refers to as “influence operations” — aka “coordinated efforts to manipulate or corrupt public debate for a strategic goal” — by publishing what it couches as a “threat report” detailing what it’s done in this area between 2017 and 2020.

Influence ops refer to online activity that may be being conducted by hostile foreign governments or by malicious agents seeking, in this case, to use Facebook’s ad tools as a mass manipulation tool — perhaps to try to skew an election result or influence the shape of looming regulations. And Facebook’s ‘threat report’ states that the tech giant took down and publicly reported only 150 such operations over the report period.

Yet as we know from Facebook whistleblower Sophie Zhang, the scale of the problem of mass malicious manipulation activity on Facebook’s platform is vast and its response to it is both under-resourced and PR-led. (A memo written by the former Facebook data scientist, covered by BuzzFeed last year, detailed a lack of institutional support for her work and how takedowns of influence operations could almost immediately respawn — without Facebook doing anything.)

NB: If it’s Facebook’s “broader enforcement against deceptive tactics that do not rise to the level of [Coordinated Inauthentic Behavior]” that you’re looking for, rather than efforts against ‘influence operations’, it has a whole other report for that — the Inauthentic Behavior Report! — because of course Facebook gets to mark its own homework when it comes to tackling fake activity, and shapes its own level of transparency since there are no legally binding reporting rules on disinformation.

Legally binding rules on handling online disinformation aren’t in the EU’s pipeline either — but commissioners said today that they wanted a beefed up and “more binding” code.

They do have some levers to pull here via a wider package of digital reforms that’s coming (aka the Digital Services Act).

The DSA will bring in legally binding rules for how platforms handle illegal content and they intend the tougher disinformation code to plug into that (in the form of what they call a “co-regulatory backstop for the measures that will be included in the revised and strengthened Code”).

It still won’t be legally binding but it may earn compliant platforms wider DSA ‘credit’. So it looks like disinformation-muck-spreaders’ arms are set to be twisted in a pincer regulatory move by making sure this stuff is looped into the legally binding DSA.

Still, Brussels maintains that it does not want to legislate around disinformation.

The risk is that a centralized approach might smell like censorship — and Brussels sounds keen to avoid that charge at all costs.

The digital regulation packages the EU has put forward since the 2019 college took up its mandate aim generally to increase transparency, safety and accountability online, its values and transparency commissioner, Vera Jourova, said today.

Breton also said that now is the “right time” to deepen obligations under the disinformation code — with the DSA incoming — and also to give the platforms time to adapt (and involve themselves in discussions on shaping additional obligations).

In another interesting remark he also talked about regulators needing to “be able to audit platforms” — in order to be able to “check what is happening with the algorithms that push these practices”. Though quite how audit powers can be made to fit with a voluntary, non-legally binding code of practice remains to be seen.

Discussing areas where the current code has fallen short Jourova pointed to inconsistencies of application across different EU Member States and languages.

She also said the Commission is keen for the beefed up code to do more to enable and empower users to act when they see something dodgy online — such as by providing users with tools to flag problem content.

Platforms should also provide users with the ability to appeal disinformation content takedowns (to avoid the risk of opinions being incorrectly removed).

The focus for the code would be on tackling false “facts not opinions”, she emphasized, saying the Commission wants platforms to “embed fact-checking into their systems” and for the code to work towards a “decentralized care of facts”.

She went on to say that the current signatories to the code haven’t provided external researchers with the kind of data access the Commission would like to see — to support greater transparency into (and accountability around) the disinformation problem.

The code does require either monthly (for COVID-19 disinformation), six monthly or yearly reports from signatories (depending on the size of the entity) but what’s being provided so far doesn’t add up to a comprehensive picture of disinformation activity and platform reaction, she said.

She also warned that online manipulation tactics are fast evolving and highly innovative — while saying the Commission would nonetheless like to see signatories agree on a set of identifiable “problematic techniques” to help speed up responses.

EU lawmakers will be coming with a specific plan for tackling political ads transparency in November, she noted.

They are also, in parallel, working on how to respond to the threat posed to European democracies by foreign interference cyberops — such as the aforementioned influence operations often found hosted on Facebook’s platform.

The commissioners did not give many details of those plans today but Jourova said it’s “high time to impose costs on perpetrators” — suggesting that some interesting possibilities may be being considered, such as trade sanctions for state-backed disops (although attribution would be one challenge).

Breton said countering foreign influence over the “informational space” is important work to defend the values of European democracy.

He also said the Commission’s anti-disinformation efforts would focus on support for education to help equip citizens with the necessary critical thinking capabilities to navigate the huge quantities of variable quality information that now surrounds them.

 

DuckDuckGo presses the case for true ‘one-click’ search competition on Android

By Natasha Lomas

When antitrust accusations close in on Google the tech giant loves to fire back a riposte that competition is just “one click away“. It’s a disingenuous retort from an online advertising behemoth whose power and profits stem from its expertise in capturing markets by manipulating and monopolizing Internet users’ attention.

Indeed, the entire brand is arguably a dark pattern.

Behold the child-like colors! The friendly babble of syllables! The tempting freebies! The tall talk of missions and moonshots! And tucked quietly beneath that Googley exterior: The adtech giant tracking Internet users en masse to sell their attention. The business model that makes money through mass surveillance and people profiling.

Google’s ‘other bets’ have always been PR pocket change beside its ads profit machine. The fun stuff is simply how Google primes its people data pump.

So what if Google’s infamous ‘one-click competition’ claim were to actually be made true in the arena of Android search engine choice? A market where Google’s activity is being closely monitored by EU competition regulators — after a 2018 antitrust decision.

Three years ago the tech giant was hit with a $5BN penalty and an order to stop using Android (aka its freebie for mobile device makers) to lock in the dominance of its own-brand search engine (and other Google services) on mobile, where its operating system is massively dominant.

It went on to adopt a so-called ‘choice screen’ on Android in the region — which prompts device users to pick a default search engine from a selection of options (Google auctions slots to rivals).

But the choice is more of a one-shot than a dynamic, ongoing possibility to switch the default for Android users — as they are only asked to pick a default at the point of setting up a new device or after a factory reset.

“That means, for all practical purposes, if you want to change your default device search engine again easily, you can’t,” writes DuckDuckGo in its latest blog post pushing for reform of Google’s self-serving Android ‘remedy’.

By DDG’s count it takes 15+ clicks (not one) to switch default search engine on an Android device at any other point (i.e. after initial set up or factory reset). And it says it knows “from experience” that this over-15-clicks method “trips up almost everyone”.

“In other words, one click competition becomes in fact ‘one factory reset away’,” it goes on. “The only reasons we can think of for setting up a preference menu this way are anti-competitive ones.”

The pro-privacy search engine has been banging the drum on this for months, if not years. Nor is it alone in complaining about Google’s remedy. And complaints aren’t limited to how hard it is to switch search engines at any other point after set-up, either.

Notably, Google’s decision to opt for a ‘pay-to-play’ model by auctioning slots on the choice screen has been widely criticized — with multiple search rivals arguing that an auction isn’t fair and does not result in a level playing field for competition (Google’s own search engine always appears as a choice, of course, and it doesn’t have to pay anyone to appear).

Not-for-profit search engine Ecosia, for example, points out that the auction format essentially discriminates against non-profit search engines, undermining the public good they may be trying to do (in its case it uses ad revenue from search to plant trees to try to help reduce global carbon emissions — so money paid to Google to win the auction means less money it can spend planting trees).

DDG has also been a critic of the paid auction model from the start. But with its latest blog post it told TechCrunch it’s trying to make sure the ‘ease of switching’ issue doesn’t get lost in criticism of the auction.

It continues to argue that multiple components need to be reformed if the choice screen is to have the pro-competition effect EU antitrust regulators are seeking.

It’s increasingly clear that the current implementation isn’t working for anyone other than Google — which has been able to maintain its grip on the mobile search market, almost three years after the Commission’s antitrust intervention.

Its share of the search engine market on mobile devices has not declined since 2018. Indeed, as of February it was actually up slightly on the marketshare it had when the antitrust ruling was made, per Statista data.

That can’t be what market rebalancing success looks like.

Previously when we’ve put rivals’ criticisms to the Commission it tends to offer a few stock responses — saying it’s monitoring Google’s implementation and is committed to an effective implementation of the 2018 decision — while avoiding engaging with the substance of the criticisms or specific suggestions to fix Google’s remedy.

The Commission reiterated the same lines when we contacted it about DuckDuckGo’s call for true ‘one-click’ competition on Android via easier default search engine switching.

But there are signs EU regulators may finally be preparing to do something.

Earlier this month Bloomberg reported on comments made by antitrust chief and Commission EVP Margrethe Vestager, who said regulators are “actively working on making” Google’s Android choice screen for search and browser rivals work.

She is also reported to have said that market share “is changing a bit but we’re working on it”.

In additional comments to us, the Commission reiterated that it’s “committed to a full and effective implementation of the decision”, saying: “We are therefore monitoring closely the implementation of the choice screen mechanism.”

“We have been discussing the choice screen mechanism with Google, following relevant feedback from the market, in particular in relation to the presentation and mechanics of the choice screen and to the selection mechanism of rival search providers,” it added.

DuckDuckGo declined to go into detail on any chats it’s having with EU regulators on how to reform the choice screen — saying that it can’t comment on discussions with the Commission. But founder Gabriel Weinberg pointed out other jurisdictions are eyeing how to remedy Google’s dominance, adding that “major countries are actively considering search preference menus right now”.

The US Justice Department, meanwhile, filed its antitrust lawsuit against Google last October. And US states are also challenging the tech giant in court.

“We believe a ‘choice screen’ that only appears once at start up will not meaningfully increase market competition or give consumers the freedom and simplicity they deserve to choose Google alternatives,” Weinberg also told us. “On the other hand, a properly designed preference menu gives users true one-click access to making Google competitors the default search on their device, without having to take the absurd step of factory resetting their phone.”

In its blog post, DDG has some plain words of advice for how regulators can beat Google at its own game and prevent it gaming search competition on Android.

“The sensible approach is to give users an easy pathway to the search preference menu by letting them tap a link from a search engine app or website within the default browser (e.g., Chrome). With that simple tap, the user is whisked directly to the search preference menu,” it writes.

“Not allowing competing search engines to easily guide consumers back to the search preference menu is a pretty big dark pattern because it is requiring users to make an important choice when they often aren’t ready to do so, and then not giving them the option to easily change their mind later while using a competing search engine.”

“So, to anyone considering implementing a search preference menu, or drafting regulations covering search preference menus, please ensure that consumers can access it at any time, especially after a consumer has just chosen to use a competing search engine,” it adds. “Functionality that allows competing search engines to guide consumers directly to the preference menu is necessary for consumer empowerment and search market competition.”

Amazon’s market power to be tested in Germany in push for “early action” over antitrust risks

By Natasha Lomas

Germany’s Federal Cartel Office (FCO) is seeking to make swift use of a new competition tool to target big tech — announcing today that it’s opened a proceeding against ecommerce giant Amazon.

If the FCO confirms that Amazon is of “paramount significance for competition across markets” — as defined by an amendment to the German Competition Act which came into force in January (aka, the GWB Digitalisation Act) — the authority will have greater powers to proactively impose conditions on how it can operate in order to control the risk of market abuse.

Section 19a of the GWB enables the FCO to intervene earlier, and in theory more effectively, against the practices of large digital companies.

The provision gives the authority the power to prohibit digital giants from engaging in anti-competitive practices like self-preferencing; or using tying or bundling strategies intended to penetrate new markets “by way of non-performance based anti-competitive means”; or creating or raising barriers to market entry by processing data relevant for competition.

The FCO already has two other proceedings ongoing against Amazon — one looking at the extent to which Amazon is influencing the pricing of sellers on Amazon Marketplace by means of price control mechanisms and algorithms; and a second examining agreements between Amazon and brand manufacturers to check whether exclusions placed on third-party sellers on Amazon Marketplace constitute a violation of competition rules — but a finding of “paramount significance” would enable the authority to “take early action against and prohibit possible anti-competitive practices by Amazon”, as it puts it.

Amazon has been contacted for comment on the FCO’s latest proceeding.

It’s the second such application by the Bundeskartellamt to determine whether it can apply the new law to a tech giant.

In January the authority sought to extend the scope of an existing abuse proceeding, opened against Facebook in December — related to Facebook tying Oculus use to Facebook accounts — saying it would look at whether the social media giant is subject to the GWB’s “paramount significance” rules, and whether, therefore, its linking of Oculus use to a Facebook account should be assessed on that basis.

Commenting on its latest move against Amazon in a statement, FCO president Andreas Mundt said: “In the past few years we have had to deal with Amazon on several occasions and also obtained far-reaching improvements for sellers on Amazon Marketplace. Two other proceedings are still ongoing. Parallel to these proceedings we are now also applying our extended competences in abuse control.”

“In this particular case we are first of all examining whether Amazon is of paramount significance for competition across markets. An ecosystem which extends across various markets and thus constitutes an almost unchallengeable position of economic power is particularly characteristic in this respect,” he added. “This could apply to Amazon with its online marketplaces and many other, above all digital offers. If we find that the company does have such a market position, we could take early action against and prohibit possible anti-competitive practices by Amazon.”

In January Mundt made stronger comments vis-a-vis Facebook — describing its social networking ecosystem as “particularly characteristic” of the bar set by the new digital law for proactive interventions, and adding that: “In view of Facebook’s strong market presence with the eponymous social network, WhatsApp and Instagram such a position may be deemed to exist.”

The FCO proceeding to confirm whether or not Facebook falls under the law remains ongoing. (It also has a pioneering case against Facebook’s ‘superprofiling’ of users that’s headed for Europe’s top court — which could result in an order to Facebook to stop combining EU users’ data without consent, if judges agree with its approach linking privacy and competition.)

Zooming out, the Bundeskartellamt’s moves to acquire more proactive powers at the national level to tackle big tech foreshadow planned updates to pan-European Union competition law. And specifically the ex ante regime which is set to apply to so-called “digital gatekeepers” in future — under the Digital Markets Act (DMA).

The DMA will mean that Internet intermediaries with major market power must comply with behavioural ‘dos and don’ts’ set by Brussels, risking major penalties if they don’t play by the rules.

In recent years lawmakers across Europe have been looking at how to update competition powers so regulators can respond effectively to digital markets — which are prone to anti-competitive phenomena such as network effects and tipping — while continuing to pursue antitrust investigations against big tech. (The Commission laid out a first set of charges against Amazon in November, for example, relating to its use of third-party merchant data.)

The problem is the painstaking pace of competition investigations into digital businesses vs the blistering speed of these players (and the massive market power they’ve amassed) — hence the push to tool up with more proactive antitrust powers.

Earlier, EU lawmakers also toyed with the idea of a new competition tool for digital markets but quietly dropped the idea — going on to propose their ex ante regime for gatekeeper platforms, under the DMA, at the end of last year. However the proposal is in the process of being debated by the other EU institutions under the bloc’s co-legislative approach — which means it’s still likely years away from being adopted and applied as pan-EU law.

That in turn means Germany’s FCO could have an outsized role in clipping big tech’s wings in the meantime.

In the UK, now outside the bloc — where it too may have an influential role in reforming regional competition rules to rebalance digital market power — the government is also working on a pro-competition regime aimed at big tech.

This year it set up a dedicated unit, the DMU, within the national Competition and Markets Authority, which will be tasked with overseeing a regime that will apply to platforms identified as having “strategic market status” (akin to the German approach of “paramount significance for competition across markets”). And while the UK is taking a similar tack to the EU’s DMA, it has said the domestic regime will not amount to a single set of rules for all gatekeeper-style platforms — rather there will be bespoke provisions per platform deemed to fall under the ex ante regulations.

 

European AI needs strategic leadership, not overregulation

By Annie Siebert
Mark Minevich Contributor
Mark Minevich is president of Going Global Ventures, an adviser at Boston Consulting Group, a digital fellow at IPsoft and a leading global AI expert and digital cognitive strategist and venture capitalist.

The EU Commission recently proposed a new set of stringent rules to regulate AI, citing an urgent need. With the global race to regulate AI officially on, the EU has published a detailed proposal on how AI should be regulated, defining the uses it considers “high-risk” and explicitly banning the use of AI that threatens people’s rights and safety.

We can all agree with the sentiment of Margrethe Vestager, the European Commission executive vice president, when she said that when it comes to “artificial intelligence, trust is a must, not a nice to have,” but is regulation the most effective and efficient way to secure this reality?

The takeaways from the commission are incredibly in-depth, but the ones that make the most sense to me are those that stress regulated AI should aim to increase human well-being. However, regulation should not overly constrain experimentation and development of AI systems.

High-risk AI systems should always have unalterable built-in human oversight and control mechanisms. AI systems intended to interact with people or to generate content, whether high-risk or not, should be subject to specific transparency obligations. In addition, AI-based remote biometric systems in publicly accessible places shall be only authorized by EU or member state law and serve the objective of preventing, detecting or investigating serious crime and terrorism.

Partnership between AI and humanity

The set of laws and legal framework enacted in Europe will have a profound impact on AI regulation around the world, similar to the effects the GDPR regulations created over the past decade. But will these laws assist us in moving away from the EU-wide haphazard regulatory approach toward a singularity of common classification?

In my opinion, this will cripple AI development in the EU while China and the United States leap forward. It would limit the use cases and innovation of artificial intelligence and put the EU in a technologically inferior position globally. In the U.S., AI is being optimized to maximize corporate profitability and efficiency. In China, AI is being optimized to maximize the government’s grip on the population and preserve its power. The overly regulated environment in the EU will lead to complete chaos when regulations from various EU bodies start contradicting one another.

Negative effects on EU entrepreneurship

A lack of investment in AI in the EU is a major factor why the EU is losing the AI race to the U.S. and China. There are currently about 446 million people living in the EU and 331 million people living in the U.S. But in the EU, $2 billion was invested in AI in 2020, while in the U.S., $23.6 billion was invested.

If the EU continues pushing with aggressive regulations and lack of funding, it will enjoy global leadership in AI regulations, but I won’t be surprised if many European entrepreneurs decide to launch their startups in more AI-friendly countries.

In turn, other nations will take advantage of the EU’s push toward strict regulations by fostering innovation and generating a stronger hold on the future of global technology. A recent World Bank report showed the EU launched 38% of investigations into data compliance in 2019, compared to only 12% in North America. With policies this stringent and burdensome to companies, it should be no surprise if innovators and entrepreneurs begin to move to more business-friendly parts of the globe.

Regulation leads to relegation

The regulation proposal suggests fines of up to €20 million, or up to 4% of total annual turnover of the AI provider for noncompliance. If we consider prior EU legislation and subsequent lack of digital innovation, these proposed regulations will cause chronic stagnation of digital innovation and adoption in the EU bloc.
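
For a rough sense of scale, here is a small worked sketch of how a turnover-linked cap of that kind plays out, assuming (as in GDPR-style penalty regimes) that the higher of the flat cap and the percentage figure applies; this is an illustration, not a legal reading of the draft.

```ts
// Illustrative only: assumes the higher of the flat cap and the turnover-based
// cap applies, as in GDPR-style regimes; not a legal reading of the AI proposal.
const FLAT_CAP_EUR = 20_000_000;
const TURNOVER_RATE = 0.04; // 4% of total annual turnover

function maxFine(annualTurnoverEur: number): number {
  return Math.max(FLAT_CAP_EUR, annualTurnoverEur * TURNOVER_RATE);
}

// A provider with €1B in annual turnover: 4% is €40M, above the €20M flat cap.
console.log(maxFine(1_000_000_000)); // 40000000
```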

In short, if these regulations become law, the EU will not become a pioneer but a laggard. The “real” use cases of AI are yet to emerge, uncovering the true potential of AI. The massive bureaucracy for high-risk use cases will undercut any entrepreneurship or bottom-up innovation efforts. With historical markers suggesting the EU is heading toward a recession, now is not the time to stifle innovation.

Put a human face on global AI … and show its value

If AI is to be broadly accepted, we need a human face showing AI helping people solve their problems and challenges. We must highlight engaging stories that are true and showcase the real people behind them. For the population at large to accept the potential of AI, they must see people like themselves benefiting from the goodness of AI.

AI funding means, above all, startup funding. Startups form the bridge from the discovery and development of disruptive technologies to their everyday use by the general public. Europe is already doing a significant amount of planning, but must accelerate.

European venture capital is lagging behind the U.S. model. Fast-growing startups are mostly dependent on American and Asian investors. This requires a rethinking of the investment culture and sensible promotion of a dynamic investment environment; for example, through the targeted relaxation of investment restrictions on the part of institutional investors.

We’re living during the age of “moonshots,” a time when entrepreneurs and scientists are able to go further than ever before. Competing in the next economy requires playing a new innovation game, one whose goal is to boost innovation tenfold.

In order to reach this level, incremental optimizations do not help. The focus needs to pivot to big innovations — moonshots. Taking risk is acceptable and implementation of a large and risky idea should become normal.

To create an EU that is friendly to innovation and entrepreneurs, we must create a collaborative network of AI pioneers to lead the way. Entrepreneurs and data science leaders must use their energies to focus on AI for good to improve the world in the longer term and advocate for deregulation. To accomplish this, we need to set up a global AI pioneers council on AI for good, consisting of participants from leading research institutions, businesses, the public sector and civil society to develop a shared understanding of best practices.

AI is no longer just a tool for optimizing corporate systems and societal infrastructures; its potential reaches much further into solving the various crises facing mankind, from climate change to uncontrolled pandemics. Responsible AI and AI for good application across all the world’s superpowers can address these crises.

The EU cannot afford to be the region of the globe disincentivizing innovation and discouraging entrepreneurship. The EU must move not toward super regulation, but toward strategic leadership of AI based on AI for good. The path of overregulation leads to the depths of stagnation. It is up to the EU to decide what it wants its future to look like.

Facebook loses last-ditch attempt to derail DPC decision on its EU-US data flows

By Natasha Lomas

Facebook has failed in its bid to prevent its lead EU data protection regulator from pushing ahead with a decision on whether to order suspension of its EU-US data flows.

The Irish High Court has just issued a ruling dismissing the company’s challenge to the Irish Data Protection Commission’s (DPC) procedures.

The case has huge potential operational significance for Facebook which may be forced to store European users’ data locally if it’s ordered to stop taking their information to the U.S. for processing.

Last September the Irish data watchdog made a preliminary order warning Facebook it may have to suspend EU-US data flows. Facebook responded by filing for a judicial review and obtaining a stay on the DPC’s procedure. That block is now being lifted.

We understand the involved parties have been given a few days to read the High Court judgement ahead of another hearing on Thursday — when the court is expected to formally lift Facebook’s stay on the DPC’s investigation (and settle the matter of case costs).

The DPC declined to comment on today’s ruling in any detail — or on the timeline for making a decision on Facebook’s EU-US data flows — but deputy commissioner Graham Doyle told us it “welcomes today’s judgment”.

Its preliminary suspension order last fall followed a landmark judgement by Europe’s top court in the summer — when the CJEU struck down a flagship transatlantic agreement on data flows, on the grounds that US mass surveillance is incompatible with the EU’s data protection regime.

The fall-out from the CJEU’s invalidation of Privacy Shield (as well as an earlier ruling striking down its predecessor Safe Harbor) has been ongoing for years — as companies that rely on shifting EU users’ data to the US for processing have had to scramble to find valid legal alternatives.

While the CJEU did not outright ban data transfers out of the EU, it made it crystal clear that data protection agencies must step in and suspend international data flows if they suspect EU data is at risk. And EU-US data flows were clearly flagged as being at risk, given the court simultaneously struck down Privacy Shield.

The problem for some businesses is therefore that there may simply not be a valid legal alternative. And that’s where things look particularly sticky for Facebook, since its service falls under NSA surveillance via Section 702 of the FISA (which is used to authorize mass surveillance programs like Prism).

Facebook lost 100% before Irish High Court: "I refuse all of the reliefs sought by [Facebook Ireland] and dismiss the claims made by it in the proceedings"

➡Judgment (Original) and first statement here: https://t.co/81C7pyCBTd

— Max Schrems 🇪🇺 (@maxschrems) May 14, 2021

So what happens now for Facebook, following the Irish High Court ruling?

As ever in this complex legal saga — which has been going on in various forms since an original 2013 complaint made by European privacy campaigner Max Schrems — there’s still some track left to run.

After this unblocking the DPC will have two enquiries in train: both the original one, related to Schrems’ complaint, and an own-volition enquiry it decided to open last year, when it said it was pausing investigation of Schrems’ original complaint.

Schrems, via his privacy not-for-profit noyb, filed for his own judicial review of the DPC’s proceedings. And the DPC quickly agreed to settle — agreeing in January that it would ‘swiftly’ finalize Schrems’ original complaint. So things were already moving.

The tl;dr of all that is this: The last of the bungs which have been used to delay regulatory action in Ireland over Facebook’s EU-US data flows are finally being extracted — and the DPC must decide on the complaint.

Or, to put it another way, the clock is ticking for Facebook’s EU-US data flows. So expect another wordy blog post from Nick Clegg very soon.

Schrems previously told TechCrunch he expects the DPC to issue a suspension order against Facebook within months — perhaps as soon as this summer (and failing that by fall).

In a statement reacting to the Court ruling today he reiterated that position, saying: “After eight years, the DPC is now required to stop Facebook’s EU-US data transfers, likely before summer. Now we simply have two procedures instead of one.”

When Ireland (finally) decides, that won’t mark the end of the regulatory procedures, though.

A decision by the DPC on Facebook’s transfers would need to go to the other EU DPAs for review — and if there’s disagreement there (as seems highly likely, given what’s happened with draft DPC GDPR decisions) it will trigger a further delay (weeks to months) as the European Data Protection Board seeks consensus.

If a majority of EU DPAs can’t agree, the Board may itself have to cast a deciding vote. So that could extend the timeline around any suspension order. But an end to the process is, at long last, in sight.

And, well, if a critical mass of domestic pressure is ever going to build for pro-privacy reform of U.S. surveillance laws now looks like a really good time…

“We now expect the DPC to issue a decision to stop Facebook’s data transfers before summer,” added Schrems. “This would require Facebook to store most data from Europe locally, to ensure that Facebook USA does not have access to European data. The other option would be for the US to change its surveillance laws.”

Facebook has been contacted for comment on the Irish High Court ruling.

Update: The company has now sent us this statement:

“Today’s ruling was about the process the IDPC followed. The larger issue of how data can move around the world remains of significant importance to thousands of European and American businesses that connect customers, friends, family and employees across the Atlantic. Like other companies, we have followed European rules and rely on Standard Contractual Clauses, and appropriate data safeguards, to provide a global service and connect people, businesses and charities. We look forward to defending our compliance to the IDPC, as their preliminary decision could be damaging not only to Facebook, but also to users and other businesses.”

Google Analytics prepares for life after cookies

By Frederic Lardinois

As consumer behavior and expectations around privacy have shifted — and operating systems and browsers have adapted to this — the age of cookies as a means of tracking user behavior is coming to an end. Few people will bemoan this, but advertisers and marketers rely on having insights into how their efforts translate into sales (and publishers like to know how their content performs, as well). Google is obviously aware of this, and it is now looking to machine learning to ready its tools like Google Analytics for this post-cookie future.

Vidhya Srinivasan, VP/GM, Advertising at Google. Image Credits: Google

Last year, the company brought several machine learning tools to Google Analytics. At the time, the focus was on alerting users to significant changes in their campaign performance, for example. Now, it is taking this a step further by using its machine learning systems to model user behavior when cookies are not available.

It’s hard to overstate the importance of this shift. But according to Vidhya Srinivasan, Google’s VP and GM for Ads Buying, Analytics and Measurement, who joined the company two years ago after a long stint at Amazon (and IBM before that), it’s also the only way to go.

“The principles we outlined to drive our measurement roadmap are based on shifting consumer expectations and ecosystem paradigms. Bottom line: The future is consented. It’s modeled. It’s first-party. So that’s what we’re using as our guide for the next gen of our products and solutions,” she said in her first media interview after joining Google.

It’s still early days and a lot of users may yet consent and opt in to tracking and sharing their data in some form or another. But the early indications are that this will be a minority of users. Unsurprisingly, first-party data and the data Google can gather from users who consent become increasingly valuable in this context.

Because of this, Google is now also making it easier to work with this so-called “consented data” and create better first-party data through improved integrations with tools like the Google Tag Manager.

Last year, Google launched Consent Mode, which helps advertisers manage cookie behavior based on local data-protection laws and user preferences. For advertisers in the EU and in the U.K., Consent Mode allows them to adjust their Google tags based on a user’s choices and soon, Google will launch a direct integration with Tag Manager to make it easier to modify and customize these tags.

How Consent Mode works today. Image Credits: Google
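
For a concrete sense of what Consent Mode does to a site’s tags, here is a minimal sketch, not production code: the consent signals (ad_storage, analytics_storage) and the gtag consent commands come from Google’s public gtag.js documentation, while the banner handler and its name are hypothetical. Signals default to “denied” until the visitor makes a choice, then get updated.

    // Minimal Consent Mode sketch (TypeScript). Assumes the standard gtag.js
    // snippet is already on the page and exposes a global `gtag` function.
    declare function gtag(
      command: "consent",
      action: "default" | "update",
      params: Record<string, "granted" | "denied">
    ): void;

    // Until the visitor answers the consent banner, default both storage
    // signals to "denied" so tags run without setting cookies.
    gtag("consent", "default", {
      ad_storage: "denied",
      analytics_storage: "denied",
    });

    // Hypothetical handler wired to the site's own consent banner: once the
    // visitor opts in, update the signals so full measurement resumes.
    function onUserConsented(): void {
      gtag("consent", "update", {
        ad_storage: "granted",
        analytics_storage: "granted",
      });
    }

The direct Tag Manager integration Google mentions is intended to make this kind of adjustment easier to configure, rather than requiring sites to hand-edit tag code.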

What’s maybe more important, though, is that Consent Mode will now use conversion modeling for users who don’t consent to cookies. Google says this can recover about 70% of ad-click-to-conversion journeys that would otherwise be lost to advertisers.
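
Google hasn’t published the internals of that modeling, but the basic idea can be illustrated with a deliberately simplified, purely hypothetical sketch: estimate conversions for clicks that can’t be observed end to end by applying the conversion rate seen in comparable consented traffic. The real models are far more sophisticated; every name below is invented for illustration.

    // Toy conversion-modeling illustration in TypeScript; not Google's method.
    interface TrafficSlice {
      consentedClicks: number;      // clicks whose journey is fully observable
      consentedConversions: number; // conversions observed among those clicks
      unconsentedClicks: number;    // clicks with no observable conversion path
    }

    function estimateTotalConversions(slice: TrafficSlice): number {
      const observedRate =
        slice.consentedClicks > 0
          ? slice.consentedConversions / slice.consentedClicks
          : 0;
      // Apply the observed rate to the clicks that could not be followed.
      const modelledConversions = observedRate * slice.unconsentedClicks;
      return slice.consentedConversions + modelledConversions;
    }

    // 8,000 consented clicks with 400 conversions plus 2,000 unconsented
    // clicks gives an estimate of ~500 conversions (400 observed + ~100 modelled).
    console.log(estimateTotalConversions({
      consentedClicks: 8000,
      consentedConversions: 400,
      unconsentedClicks: 2000,
    }));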

In addition, Google is also making it easier to bring first-party data (in a privacy-forward way) into Google Analytics to improve measurement and its models.

“Revamping a popular product with a long history is something people are going to have opinions about — we know that. But we felt strongly that we needed Google Analytics to be relevant to changing consumer behavior and ready for a cookie-less world — so that’s what we’re building,” Srinivasan said. “The machine learning that Google has invested in for years — that experience is what we’re putting in action to drive the modeling underlying this tech. We take having credible insights and reporting in the market seriously. We know that doing the work on measurement is critical to market trust. We don’t take the progress we’ve made for granted and we’re looking to continue iterating to ensure scale, but above all we’re prioritizing user trust.”

Europe charges Apple with antitrust breach, citing Spotify App Store complaint

By Natasha Lomas

The European Commission has announced that it’s issued formal antitrust charges against Apple, saying today that its preliminary view is Apple’s app store rules distort competition in the market for music streaming services by raising the costs of competing music streaming app developers.

The Commission began investigating competition concerns related to the iOS App Store (and also Apple Pay) last summer.

“The Commission takes issue with the mandatory use of Apple’s own in-app purchase mechanism imposed on music streaming app developers to distribute their apps via Apple’s App Store,” it wrote today. “The Commission is also concerned that Apple applies certain restrictions on app developers preventing them from informing iPhone and iPad users of alternative, cheaper purchasing possibilities.”

Commenting in a statement, EVP and competition chief Margrethe Vestager, said: “App stores play a central role in today’s digital economy. We can now do our shopping, access news, music or movies via apps instead of visiting websites. Our preliminary finding is that Apple is a gatekeeper to users of iPhones and iPads via the App Store. With Apple Music, Apple also competes with music streaming providers. By setting strict rules on the App store that disadvantage competing music streaming services, Apple deprives users of cheaper music streaming choices and distorts competition. This is done by charging high commission fees on each transaction in the App store for rivals and by forbidding them from informing their customers of alternative subscription options.”

Apple sent us this statement in response:

“Spotify has become the largest music subscription service in the world, and we’re proud for the role we played in that. Spotify does not pay Apple any commission on over 99% of their subscribers, and only pays a 15% commission on those remaining subscribers that they acquired through the App Store. At the core of this case is Spotify’s demand they should be able to advertise alternative deals on their iOS app, a practice that no store in the world allows. Once again, they want all the benefits of the App Store but don’t think they should have to pay anything for that. The Commission’s argument on Spotify’s behalf is the opposite of fair competition.”

Vestager is due to hold a press conference shortly — so stay tuned for updates.

This story is developing… 

A number of complaints against Apple’s practices have been lodged with the EU’s competition division in recent years — including by music streaming service Spotify; video games maker Epic Games; and messaging platform Telegram, to name a few of the complainants who have gone public (and been among the most vocal).

The main objection is over the (up to 30%) cut Apple takes on sales made through third parties’ apps — which critics rail against as an ‘Apple tax’ — as well as how it can mandate that developers do not inform users how to circumvent its in-app payment infrastructure, i.e. by signing up for subscriptions via their own website instead of through the App Store. Other complaints include that Apple does not allow third party app stores on iOS.

Apple, meanwhile, has argued that its App Store does not constitute a monopoly. iOS’ global market share of mobile devices is a little over 10% vs Google’s rival Android OS — which is running on the lion’s share of the world’s mobile hardware. But monopoly status depends on how a market is defined by regulators (and if you’re looking at the market for iOS apps then Apple has no competitors).

The iPhone maker also likes to point out that the vast majority of third party apps pay it no commission (as they don’t monetize via in-app payments). And it argues that the restrictions it places on native apps are necessary to protect iOS users from threats to their security and privacy.

Last summer the European Commission said its App Store probe was focused on Apple’s mandatory requirement that app developers use its proprietary in-app purchase system, as well as restrictions applied on the ability of developers to inform iPhone and iPad users of alternative cheaper purchasing possibilities outside of apps.

It also said it was investigating Apple Pay: Looking at the T&Cs and other conditions Apple imposes for integrating its payment solution into others’ apps and websites on iPhones and iPads, and also on limitations it imposes on others’ access to the NFC (contactless payment) functionality on iPhones for payments in stores.

The EU’s antitrust regulator also said then that it was probing allegations of “refusals of access” to Apple Pay.

In March this year the UK also joined the Apple App Store antitrust investigation fray — announcing a formal investigation into whether it has a dominant position and if it imposes unfair or anti-competitive terms on developers using its app store.

US lawmakers have, meanwhile, also been dialling up attention on app stores, plural — and on competition in digital markets more generally — calling in both Apple and Google for questioning over how they operate their respective mobile app marketplaces in recent years.

Last month, for example, the two tech giants’ representatives were pressed on whether their app stores share data with their product development teams — with lawmakers digging into complaints against Apple especially that Cupertino frequently copies others’ apps, ‘sherlocking’ their businesses by releasing native copycats (as the practice has been nicknamed).

Back in July 2020 the House Antitrust Subcommittee took testimony from Apple CEO Tim Cook himself — and went on, in a hefty report on competition in digital markets, to accuse Apple of leveraging its control of iOS and the App Store to “create and enforce barriers to competition and discriminate against and exclude rivals while preferencing its own offerings”.

“Apple also uses its power to exploit app developers through misappropriation of competitively sensitive information and to charge app developers supra-competitive prices within the App Store,” the report went on. “Apple has maintained its dominance due to the presence of network effects, high barriers to entry, and high switching costs in the mobile operating system market.”

The report did not single Apple out — also blasting Google-owner Alphabet, Amazon and Facebook for abusing their market power. And the Justice Department went on to file suit against Google later the same month. So, over in the U.S., the stage is being set for further actions against big tech. Although what, if any, federal charges Apple could face remains to be seen.

At the same time, a number of state-level tech regulation efforts are brewing around big tech and antitrust — including a push in Arizona to relieve developers from Apple and Google’s hefty cut of app store profits.

An antitrust bill introduced by Republican Josh Hawley earlier this month, meanwhile, takes aim at acquisitions, proposing an outright block on big tech’s ability to carry out mergers and acquisitions.

Although that bill looks unlikely to succeed, a flurry of antitrust reform bills is set to be introduced as U.S. lawmakers on both sides of the aisle grapple with how to cut big tech down to a competition-friendly size.

In Europe lawmakers are already putting down draft laws with the same overarching goal.

In the EU the Commission has proposed an ex ante regime to prevent big tech from abusing its market power, with the Digital Markets Act set to impose conditions on intermediating platforms who are considered ‘gatekeepers’ to others’ market access.

In the UK, which now sits outside the bloc, the government is also drafting new laws in response to tech giants’ market power — saying it will create a ‘pro-competition’ regime that will apply to platforms with so-called ‘strategic market status’ — but instead of a set list of requirements it wants to target specific measures per platform.

Click Studios asks customers to stop tweeting about its Passwordstate data breach

By Zack Whittaker

Australian security software house Click Studios has told customers not to post emails sent by the company about its data breach, in which malicious hackers pushed a tainted update to its flagship enterprise password manager, Passwordstate, in a bid to steal customer passwords.

Last week, the company told customers to “commence resetting all passwords” stored in its flagship password manager after the hackers pushed the malicious update to customers over a 28-hour window between April 20-22. The update was designed to contact the attackers’ servers and retrieve malware that would steal the password manager’s contents and send them back to the attackers.

In an email to customers, Click Studios did not say how the attackers compromised the password manager’s update feature, but included a link to a security fix.

But news of the breach only became public after Danish cybersecurity firm CSIS Group published a blog post with details of the attack hours after Click Studios emailed its customers.

Click Studios claims Passwordstate is used by “more than 29,000 customers,” including in the Fortune 500, government, banking, defense and aerospace, and most major industries.

In an update on its website, Click Studios said in a Wednesday advisory that customers are “requested not to post Click Studios correspondence on Social Media.” The email adds: “It is expected that the bad actor is actively monitoring Social Media, looking for information they can use to their advantage, for related attacks.”

“It is expected the bad actor is actively monitoring social media for information on the compromise and exploit. It is important customers do not post information on Social Media that can be used by the bad actor. This has happened with phishing emails being sent that replicate Click Studios email content,” the company said.

Beyond a handful of advisories published since the breach was discovered, the company has declined to comment or respond to questions.

It’s also not clear if the company has disclosed the breach to U.S. and EU authorities where the company has customers, but where data breach notification rules obligate companies to disclose incidents. Companies can be fined up to 4% of their annual global revenue for falling foul of Europe’s GDPR rules.

Click Studios chief executive Mark Sandford has not responded to repeated requests (from TechCrunch) for comment. Instead, TechCrunch received the same canned autoresponse from the company’s support email saying that the company’s staff are “focused only on assisting customers technically.”

TechCrunch emailed Sandford again on Thursday for comment on the latest advisory, but did not hear back.

EU adopts rules on one-hour takedowns for terrorist content

By Natasha Lomas

The European Parliament approved a new law on terrorist content takedowns yesterday, paving the way for one-hour removals to become the legal standard across the EU.

The regulation “addressing the dissemination of terrorist content online” will come into force shortly after publication in the EU’s Official Journal — and start applying 12 months after that.

The incoming regime means providers serving users in the region must act on terrorist content removal notices from Member State authorities within one hour of receipt, or else provide an explanation why they have been unable to do so.

There are exceptions for educational, research, artistic and journalistic work — with lawmakers aiming to target terrorism propaganda being spread on online platforms like social media sites.

The types of content they want speedily removed under this regime include material that incites, solicits or contributes to terrorist offences; provides instructions for such offences; or solicits people to participate in a terrorist group.

Material posted online that provides guidance on how to make and use explosives, firearms or other weapons for terrorist purposes is also in scope.

However, concerns have been raised over the impact on online freedom of expression, including the possibility that platforms will turn to content filters to shrink their risk, given the tight turnaround times required for removals.

The law does not put a general obligation on platforms to monitor or filter content but it does push service providers to prevent the spread of proscribed content — saying they must take steps to prevent propagation.

It is left up to service providers exactly how they do that, and while there is no legal obligation to use automated tools, it seems likely filters will be what larger providers reach for, with the risk of unjustified, speech-chilling takedowns following fast.

Another concern is how exactly terrorist content is being defined under the law — with civil rights groups warning that authoritarian governments within Europe might seek to use it to go after critics based elsewhere in the region.

The law does include transparency obligations — meaning providers must publicly report information about content identification and takedown actions annually.

On the sanctions side, Member States are responsible for adopting rules on penalties but the regulation sets a top level of fines for repeatedly failing to comply with provisions at up to 4% of global annual turnover.

EU lawmakers proposed the new rules back in 2018, when concern was riding high over the spread of ISIS content online.

Platforms were pressed to abide by an informal one-hour takedown rule in March of the same year. But within months the Commission followed up with a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

Negotiations over the proposal have seen MEPs and Member States (via the Council) tweak its provisions. MEPs, for example, pushed for a requirement that the competent authority contact companies that have never received a removal order a little in advance of issuing the first one, to provide them with information on procedures and deadlines, so they are not caught entirely on the hop.

The impact on smaller content providers has continued to be a concern for critics, though.

The Council adopted its final position in March. The approval by the Parliament yesterday concludes the co-legislative process.

Commenting in a statement, MEP Patryk Jaki, the rapporteur for the legislation, said: “Terrorists recruit, share propaganda and coordinate attacks on the internet. Today we have established effective mechanisms allowing member states to remove terrorist content within a maximum of one hour all around the European Union. I strongly believe that what we achieved is a good outcome, which balances security and freedom of speech and expression on the internet, protects legal content and access to information for every citizen in the EU, while fighting terrorism through cooperation and trust between states.”
