Facebook is announcing some new capabilities for video advertisers on Facebook and Instagram, as well as new numbers about the potential audience that those ads might reach.
Numbers first: The company says that 2 billion people now watch videos eligible for in-stream ads each month. It also says that 70 percent of in-stream ads are watched to completion, and that its studies show advertisers who added a Facebook In-Stream campaign to ad purchases that already included News Feed and Stories saw a median 1.5x increase in ad recall.
When discussing the news with Carolyn Everson, the vice president of Facebook’s global business group, I wondered whether traditional advertisers are comfortable with the company’s metrics. (Back in 2016, the company had to admit that due to an error, it had been inflating video view times, and is still facing criticism about how it handled the situation.)
Everson said Facebook is aiming to be “very specific” with its numbers. She also noted that the company only places in-stream ads in videos that are three minutes or longer, with the ad only playing after a viewer has watched at least 45 seconds (or more, depending on the video).
“I do believe that we are going to be very competitive and consistent with the marketplace,” she said. “Everyone measures these things a little bit differently, but these are numbers that people are going to be very excited about.”
Image Credits: Facebook
On the product side, the company is starting a global test of In-Stream Video Topics, which will allow advertisers to target their ads not just by audience, but also by the topic of a given video. In a blog post, Facebook says the initial targeting will include “over 20 Video Topics, like Sports, and over 700 sub-topics such as Baseball, Basketball, Golf, or Swimming.”
Everson said the company will use machine learning technology to classify eligible videos, as well as to ensure that they meet Facebook’s brand safety guidelines.
In addition, Facebook is announcing that it will start testing ads in its short-form Instagram Reels format, initially in India, Brazil, Germany and Australia. These ads can be up to 30 seconds long, and users can interact with them in the same ways they interact with organic Reels content (liking, sharing, skipping).
Facebook sticker ads
And Facebook is testing the sticker ads that it announced last month, which will allow brands to create custom stickers, which creators can then include in their Facebook Stories.
Looking at all the announcements together, Everson (who joined Facebook in 2011) said, “Frankly, for the last 10 years, I’ve been so excited for the moment where we are absolutely ready for prime time in our discussions of online video solutions for marketers. With our news that we are announcing today, we have more than arrived.”
One never knows how a confirmation hearing will go these days, especially one for a young outsider nominated to an important position despite challenging the status quo and big business. Lina Khan, just such a person up for the position of FTC Commissioner, had a surprisingly pleasant time of it during today’s Senate Commerce Committee confirmation hearing — possibly because her iconoclastic approach to antitrust makes for good politics these days.
Khan, an associate professor of law at Columbia, is best known in the tech community for her incisive essay “Amazon’s Antitrust Paradox,” which laid out the failings of regulatory doctrine that have allowed the retail giant to progressively dominate more and more markets. (She also recently contributed to a House report on tech policy.)
When it was published, in 2017, the feeling that Amazon had begun to abuse its position was, though commonplace in some circles, not really popular in the Capitol. But the growing sense that laissez-faire or insufficient regulations have created monsters in Amazon, Google, and Facebook (to start) has led to a rare bipartisan agreement that we must find some way, any way will do, of putting these upstart corporations back in their place.
This in turn led to a sense of shared purpose and camaraderie in the confirmation hearing, which was a triple header: Khan joined Bill Nelson, nominated to lead NASA, and Leslie Kiernan, who would join the Commerce Department as General Counsel, for a really nice little three-hour chat.
Khan is one of several in the Biden administration who signal a new approach to taking on Big Tech and other businesses that have gotten out of hand, and the questions posed to her by Senators from both sides of the aisle seemed genuine and got genuinely satisfactory answers from a confident Khan.
She deftly avoided a few attempts to bait her — including one involving Section 230; wrong Commission, Senator — and her answers primarily reaffirmed her professional opinion that the FTC should be better informed and more preemptive in its approach to regulating these secretive, powerful corporations.
Here are a few snippets representative of the questioning and indicative of her positions on a few major issues (answers lightly edited for clarity):
On the FTC getting involved in the fight between Google, Facebook, and news providers:
“Everything needs to be on the table. Obviously local journalism is in crisis, and I think the current COVID moment has really underscored the deep democratic emergency that results when we don’t have reliable sources of local news.”
She also cited the increasing concentration of ad markets and the arbitrary nature of, for example, algorithm changes that can have wide-ranging effects on entire industries.
On Clarence Thomas’s troubling suggestion that social media companies should be considered “common carriers”:
“I think it prompted a lot of interesting discussion,” she said, very diplomatically. “In the Amazon article, I identified two potential pathways forward when thinking about these dominant digital platforms. One is enforcing competition laws and ensuring that these markets are competitive.” (i.e. using antitrust rules)
“The other is, if we instead recognize that perhaps there are certain economies of scale, network externalities that will lead these markets to stay dominated by a very few number of companies, then we need to apply a different set of rules. We have a long legal tradition of thinking about what types of checks can be applied when there’s a lot of concentration and common carriage is one of those tools.”
“I should clarify that some of these firms are now integrated in so many markets that you may reach for a different set of tools depending on which specific market you’re looking at.”
(This was a very polite way of saying common carriage and existing antitrust rules are totally unsuitable for the job.)
On potentially reviewing past mergers the FTC approved:
“The resources of the commission have not really kept pace with the increasing size of the economy, as well as the increasing size and complexity of the deals the commission is reviewing.”
“There was an assumption that digital markets in particular are fast moving so we don’t need to be concerned about potential concentration in the markets, because any exercise of power will get disciplined by entry and new competition. Now of course we know that in the markets you actually have significant network externalities in ways that make them more sticky. In hindsight there’s a growing sense that those merger reviews were a missed opportunity.”
(Here Senator Blackburn (R-TN) in one of the few negative moments fretted about Khan’s “lack of experience in coming to that position” before asking about a spectrum plan — wrong Commission, Senator.)
On the difficulty of enforcing something like an order against Facebook:
“One of the challenges is the deep information asymmetry that exists between some of these firms and enforcers and regulators. I think it’s clear that in some instances the agencies have been a little slow to catch up to the underlying business realities and the empirical realities of how these markets work. So at the very least ensuring the agencies are doing everything they can to keep pace is gonna be important.”
“In social media we have these black box algorithms, proprietary algorithms that can sometimes make it difficult to know what’s really going on. The FTC needs to be using its information gathering capacities to mitigate some of these gaps.”
On extending protections for children and other vulnerable groups online:
“Some of these dangers are heightened given some of the ways in which the pandemic has rendered families and children especially dependent on some of these [education] technologies. So I think we need to be especially vigilant here. The previous rules should be the floor, not the ceiling.”
Overall there was little partisan bickering and a lot of feeling from both sides that Khan was, if not technically experienced at the job (not rare with a coveted position like FTC Commissioner), about as competent a nominee as anyone could ask for. Not only that but her highly considered and fairly assertive positions on matters of antitrust and competition could help put Amazon and Google, already in the regulatory doghouse, on the defensive for once.
Facebook’s self-styled and handpicked ‘Oversight Board’ will make a decision on whether or not to overturn an indefinite suspension of the account of former president Donald Trump within “weeks”, it said in a brief update statement on the matter today.
The high profile case appears to have attracted major public interest, with the FOB tweeting that it’s received more than 9,000 responses so far to its earlier request for public feedback.
It added that its commitment to “carefully reviewing all comments,” following an earlier extension of the feedback deadline, is responsible for extending the case timeline.
The Board’s statement adds that it will provide more information “soon”.
(2/2): The Board’s commitment to carefully reviewing all comments has extended the case timeline, in line with the Board’s bylaws. We will share more information soon.
— Oversight Board (@OversightBoard) April 16, 2021
Trump’s indefinite suspension from Facebook and Instagram was announced by Facebook founder Mark Zuckerberg on January 7, after the then-president of the U.S. incited his followers to riot at the nation’s Capitol — an insurrection that led to chaotic and violent scenes and a number of deaths as his supporters clashed with police.
However Facebook quickly referred the decision to the FOB for review — opening up the possibility that the ban could be overturned in short order as Facebook has said it will be bound by the case review decisions issued by the Board.
After the FOB accepted the case for review it initially said it would issue a decision within 90 days of January 21 — a deadline that would have fallen next Wednesday.
However it now looks like the high profile, high stakes call on Trump’s social media fate could be pushed into next month.
It’s a familiar development in Facebook-land. Delay has been a long-time feature of the tech giant’s crisis PR response in the face of a long history of scandals and bad publicity attached to how it operates its platform. So the tech giant is unlikely to be uncomfortable with the FOB taking its time to make a call on Trump’s suspension.
After all, devising and configuring the bespoke case review body — as its proprietary parody of genuine civic oversight — is a process that has taken Facebook years already.
In related FOB news this week, Facebook announced that users can now request the board review its decisions not to remove content — expanding the Board’s potential cases to include reviews of ‘keep ups’ (not just content takedowns).
This report was updated with a correction: The FOB previously extended the deadline for case submissions; it has not done so again, as we originally stated.
Facebook is to be sued in Europe over the major leak of user data that dates back to 2019 but which only came to light recently after information on 533M+ accounts was found posted for free download on a hacker forum.
Today Digital Rights Ireland (DRI) announced it’s commencing a “mass action” to sue Facebook, citing the right to monetary compensation for breaches of personal data that’s set out in the European Union’s General Data Protection Regulation (GDPR).
Article 82 of the GDPR provides for a ‘right to compensation and liability’ for those affected by violations of the law. Since the regulation came into force, in May 2018, related civil litigation has been on the rise in the region.
The Ireland-based digital rights group is urging Facebook users who live in the European Union or European Economic Area to check whether their data was breached — via the haveibeenpwned website (which lets you check by email address or mobile number) — and sign up to join the case if so.
Information leaked via the breach includes Facebook IDs, locations, mobile phone numbers, email addresses, relationship status and employer.
Facebook has been contacted for comment on the litigation.
The tech giant’s European headquarters is located in Ireland — and earlier this week the national data watchdog opened an investigation, under EU and Irish data protection laws.
A mechanism in the GDPR for simplifying investigation of cross-border cases means Ireland’s Data Protection Commission (DPC) is Facebook’s lead data regulator in the EU. However it has been criticized over its handling of and approach to GDPR complaints and investigations — including the length of time it’s taking to issue decisions on major cross-border cases. And this is particularly true of Facebook.
With the three-year anniversary of the GDPR fast approaching, the DPC has multiple open investigations into various aspects of Facebook’s business but has yet to issue a single decision against the company.
(The closest it’s come is a preliminary suspension order issued last year, in relation to Facebook’s EU to US data transfers. However that complaint long predates GDPR; and Facebook immediately filed to block the order via the courts. A resolution is expected later this year after the litigant filed his own judicial review of the DPC’s processes).
Since May 2018 the EU’s data protection regime has — at least on paper — baked in fines of up to 4% of a company’s global annual turnover for the most serious violations.
Again, though, the sole GDPR fine issued to date by the DPC against a tech giant (Twitter) is very far off that theoretical maximum. Last December the regulator announced a €450k (~$547k) sanction against Twitter — well under 0.1% of the company’s full-year revenue.
That penalty was also for a data breach — but one which, unlike the Facebook leak, Twitter had publicly disclosed when it found it in 2019. So Facebook’s failure to disclose the vulnerability it discovered and claimed to fix by September 2019 — which has now led to the leak of data on 533M accounts — suggests it should face a higher sanction from the DPC than Twitter received.
However, even if Facebook ends up with a more substantial GDPR penalty for this breach, the watchdog’s caseload backlog and plodding procedural pace make it hard to envisage a swift resolution to an investigation that’s only a few days old.
Judging by past performance it’ll be years before the DPC decides on this 2019 Facebook leak — which likely explains why the DRI sees value in instigating class-action style litigation in parallel to the regulatory investigation.
“Compensation is not the only thing that makes this mass action worth joining. It is important to send a message to large data controllers that they must comply with the law and that there is a cost to them if they do not,” DRI writes on its website.
It also submitted a complaint about the Facebook breach to the DPC earlier this month, writing then that it was “also consulting with its legal advisors on other options including a mass action for damages in the Irish Courts”.
It’s clear that the GDPR enforcement gap is creating a growing opportunity for litigation funders in Europe to step in and take a punt on suing for data-related damages — with a number of other mass actions announced last year.
In the case of DRI, its focus is evidently on ensuring that digital rights are upheld. But it told RTE it believes that compensation claims which force tech giants to pay money to users whose privacy rights have been violated are the best way to make them legally compliant.
Facebook, meanwhile, has sought to play down the breach it failed to disclose — claiming it’s ‘old data’ — a deflection that ignores the fact that dates of birth don’t change (nor do most people routinely change their mobile number or email address).
Plenty of the ‘old’ data exposed in this latest massive Facebook data leak will be very handy for spammers and fraudsters to target Facebook users — and also now for litigators to target Facebook for data-related damages.
Pakistan has temporarily blocked several social media services in the South Asian nation, according to users and a notice reviewed by TechCrunch.
In an order titled “Complete Blocking of Social Media Platforms,” the Pakistani government ordered the Pakistan Telecommunication Authority to block social media platforms including Twitter, Facebook, WhatsApp, YouTube, and Telegram from 11am to 3pm (6am to 10am GMT) Friday.
The move comes as Pakistan looks to crack down on a violent terrorist group and prevent troublemakers from disrupting Friday prayer congregations following days of violent protests.
Earlier this week Pakistan banned the Islamist group Tehrik-i-Labaik Pakistan after arresting its leader, which prompted protests, according to local media reports.
An entrepreneur based in Pakistan told TechCrunch that even though the order is supposed to expire at 3pm local time, similar past moves by the government suggest the disruption will likely last longer.
Though Pakistan, like its neighbor India, has temporarily cut off phone call access in the past, this is the first time Islamabad has issued a blanket ban on social media in the country.
Pakistan has explored ways to assume more control over content on digital services operating in the country in recent years. Some activists said the country was taking extreme measures without much explanation.
What kind of national emergency we are dealing with that govt banned entire social media temporarily? These arbitrary decisions of blocking and banning have never done any good instead opened ways to blanket bans.
— Nighat Dad (@nighatdad) April 16, 2021
Subscription pricing is landing on Facebook’s Oculus Store, giving VR developers another way to monetize content on Facebook’s Oculus Quest headset.
Developers will be allowed to add premium subscriptions to paid or free apps, with Facebook presumably taking its standard percentage fee at the same time. Oculus and the developers on its platform have been riding the success of the company’s recent Quest 2 headset; Facebook hasn’t detailed sales numbers, but has noted that the months-old $299 headset has already outsold every other Oculus headset to date.
Subscription pricing is an unsurprising development, but it signals that some developers believe they have a loyal enough group of subscribers to bring in sizable bits of recurring revenue. Facebook shipped the first Oculus Rift just over five years ago, and it’s been a zig-zagging path to finding early consumer success in that time. A big challenge for Facebook has been building a dynamic developer ecosystem that offers something engaging to users while ensuring that VR devs can operate sustainably.
At launch, there are already a few developers debuting subscriptions for a number of different app types, spanning exercise, meditation, social, productivity and DJing. In addition to subscriptions, the new monetization path also allows developers to let users try out paid apps on a free trial basis.
The central question is how many Quest users use their devices enough to justify a number of monthly subscriptions. But for developers looking to monetize their hardcore users, this is another utility they likely felt was missing from the Oculus Store.
Facebook announced this morning it will begin testing a new experience for discovering businesses in its News Feed in the U.S. When the test is live, users can tap on topics they’re interested in underneath posts and ads in their News Feed in order to explore related content from businesses. The change comes at a time when Facebook has been arguing that Apple’s App Tracking Transparency update will hurt its small business customers — a claim many have dismissed as misleading, but one that has nevertheless led some mom-and-pop shops to express concern about the impact on their ad targeting capabilities. This new test is an example of how easily Facebook can tweak its News Feed to build out more data on its users, if needed.
The company suggests users may see the change under posts and ads from businesses selling beauty products, fitness or clothing, among other things.
The idea here is that Facebook would direct users to related businesses through a News Feed feature, when they take a specific action to discover related content. This, in turn, could help Facebook create a new set of data on its users, in terms of which users clicked to see more, and what sort of businesses they engaged with, among other things. Over time, it could turn this feature into an ad unit, if desired, where businesses could pay for higher placement.
“People already discover businesses while scrolling through News Feed, and this will make it easier to discover and consider new businesses they might not have found on their own,” the company noted in a brief announcement.
Facebook didn’t detail its further plans for the test, but said that as it learns from how users interact with the feature, it will expand the experience to more people and businesses.
Image Credits: Facebook
Along with news of the test, Facebook said it will roll out more tools for business owners this month, including the ability to create, publish and schedule Stories to both Facebook and Instagram; make changes and edits to Scheduled Posts; and soon, create and manage Facebook Photos and Albums from Facebook’s Business Suite. It will also soon add the ability to create and save Facebook and Instagram posts as drafts from the Business Suite mobile app.
Related to the businesses updates, Facebook updated features across ad products focused on connecting businesses with customer leads, including Lead Ads, Call Ads, and Click to Messenger Lead Generations.
Facebook earlier this year announced a new Facebook Page experience that gave businesses the ability to engage on the social network with their business profile for things like posting, commenting and liking, and access to their own, dedicated News Feed. It also removed the Like button in favor of focusing on Followers.
It is not a coincidence that Facebook is touting its tools for small businesses at a time when there’s concern — much of it loudly shouted by Facebook itself — that its platform could be less useful to small business owners in the near future, as ad targeting becomes less precise when users vote ‘no’ to tracking in Facebook’s iOS app.
Instagram today will begin a new test around hiding Like counts on users’ posts, following its experiments in this area which first began in 2019. This time, however, Instagram is not enabling or disabling the feature for more users. Instead, it will begin to explore a new option where users get to decide what works best for them — either choosing to see the Like counts on others’ posts, or not. Users will also be able to turn off Like counts on their own posts, if they choose. Facebook additionally confirmed it will begin to test a similar experience on its own social network.
Instagram says tests involving Like counts were deprioritized after Covid-19 hit, as the company focused on other efforts needed to support its community. (Except for that brief period this March where Instagram accidentally hid Likes for more users due to a bug.)
The company says it’s now revisiting the feedback it collected from users during the tests and found a wide range of opinions. Originally, the idea with hiding Like counts was about reducing the anxiety and embarrassment that surrounds posting content on the social network. That is, people would stress over whether their post would receive enough Likes to be deemed “popular.” This problem was particularly difficult for Instagram’s younger users, who care much more about what their peers think — so much so that they would take down posts that didn’t receive “enough” Likes.
In addition, the removal of Likes helped reduce the sort of herd mentality that drives people to like things that are already popular, as opposed to judging the content for themselves.
But during tests, not everyone agreed the removal of Likes was a change for the better. Some people said they still wanted to see Like counts so they could track what was trending and popular. The argument for keeping Likes was more prevalent among the influencer community, where creators used the metric in order to communicate their value to partners, like brands and advertisers. Here, lower engagement rates on posts could directly translate to lower earnings for these creators.
Both arguments for and against Likes have merit, which is why Instagram’s latest test will put the choice back into users’ own hands.
This new test will be enabled for a small percentage of users globally on Instagram, the company says.
If you’ve been opted in, you’ll find a new option to hide the Likes from within the app’s Settings. This will prevent you from seeing Likes on other people’s posts as you scroll through your Instagram Feed. As a creator, you’ll be able to hide Likes on a per-post basis via the three-dot “…” menu at the top. Even if Likes are disabled publicly, creators are still able to view Like counts and other engagements through analytics, just as they did before.
The tests on Facebook, which has also been testing Like count removals for some time, have not yet begun. Facebook tells TechCrunch those will roll out in the weeks ahead.
Making Like counts a choice may initially seem like it could help address everyone’s needs. But in reality, if the wider influencer community chooses to keep using Likes as a currency that translates to popularity and job opportunities, then other users will continue to do the same.
Ultimately, communities themselves have to decide what sort of tone they want to set, preferably from the outset — before you’ve attracted millions of users who will be angry when you later try to change course.
There’s also a question as to whether social media users are really hungry for a “Like-free” safer space. For years we’ve seen startups — Minutiae, Vero, Dayflash, Oggl, and now newcomers like the troubled Dispo and the under-the-radar Herd — build an “anti-Instagram” of sorts by dropping one or more Instagram features, like algorithmic feeds, Likes and other engagement mechanisms. But Instagram has yet to fail because of an anti-Instagram rival. If anything is a threat, it’s a new type of social network entirely, like TikTok — where, it should be noted, getting Likes and engagement is still very important for creator success.
Instagram didn’t say how long the new tests would last or if and when the features would roll out more broadly.
“We’re testing this on Instagram to start, but we’re also exploring a similar experience for Facebook. We will learn from this new small test and have more to share soon,” a Facebook company spokesperson said.
Facebook’s lead data supervisor in the European Union has opened an investigation into whether the tech giant violated data protection rules vis-a-vis the leak of data reported last week.
Here’s the Irish Data Protection Commission’s statement:
“The Data Protection Commission (DPC) today launched an own-volition inquiry pursuant to section 110 of the Data Protection Act 2018 in relation to multiple international media reports, which highlighted that a collated dataset of Facebook user personal data had been made available on the internet. This dataset was reported to contain personal data relating to approximately 533 million Facebook users worldwide. The DPC engaged with Facebook Ireland in relation to this reported issue, raising queries in relation to GDPR compliance to which Facebook Ireland furnished a number of responses.
The DPC, having considered the information provided by Facebook Ireland regarding this matter to date, is of the opinion that one or more provisions of the GDPR and/or the Data Protection Act 2018 may have been, and/or are being, infringed in relation to Facebook Users’ personal data.
Accordingly, the Commission considers it appropriate to determine whether Facebook Ireland has complied with its obligations, as data controller, in connection with the processing of personal data of its users by means of the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer features of its service, or whether any provision(s) of the GDPR and/or the Data Protection Act 2018 have been, and/or are being, infringed by Facebook in this respect.”
The move comes after the European Commission intervened to apply pressure on Ireland’s data protection commissioner. Justice commissioner Didier Reynders tweeted on Monday that he had spoken with Helen Dixon about the Facebook data leak.
“The Commission continues to follow this case closely and is committed to supporting national authorities,” he added, going on to urge Facebook to “cooperate actively and swiftly to shed light on the identified issues”.
Facebook has been contacted for comment.
Today I spoke with Helen Dixon @DPCIreland about the #FacebookLeak. The Commission continues to follow this case closely and is committed to supporting national authorities. We also call on @Facebook to cooperate actively and swiftly to shed light on the identified issues.
— Didier Reynders (@dreynders) April 12, 2021
A spokeswoman for the Commission confirmed the virtual meeting between Reynders and Dixon, saying: “Dixon informed the Commissioner about the issues at stake and the different tracks of work to clarify the situation.
“They both urge Facebook to cooperate swiftly and to share the necessary information. It is crucial to shed light on this leak that has affected millions of European citizens.”
“It is up to the Irish data protection authority to assess this case. The Commission remains available if support is needed. The situation will also have to be further analyzed for the future. Lessons should be learned,” she added.
The revelation that a vulnerability in Facebook’s platform enabled unidentified ‘malicious actors’ to extract the personal data (including email addresses, mobile phone numbers and more) of more than 500 million Facebook accounts up until September 2019 — when Facebook claims it fixed the issue — only emerged in the wake of the data being found for free download on a hacker forum earlier this month.
All 533,000,000 Facebook records were just leaked for free.
This means that if you have a Facebook account, it is extremely likely the phone number used for the account was leaked.
— Alon Gal (Under the Breach) (@UnderTheBreach) April 3, 2021
Despite the European Union’s data protection framework (the GDPR) baking in a regime of data breach notifications — with the risk of hefty fines for compliance failure — Facebook did not inform its lead EU data supervisor when it found and fixed the issue. Ireland’s Data Protection Commission (DPC) was left to find out in the press, like everyone else.
Nor has Facebook individually informed the 533M+ users that their information was taken without their knowledge or consent, saying last week it has no plans to do so — despite the heightened risk for affected users of spam and phishing attacks.
Privacy experts have, meanwhile, been swift to point out that the company has still not faced any regulatory sanction under the GDPR — with a number of investigations ongoing into various Facebook businesses and practices and no decisions yet issued in those cases by Ireland’s DPC.
Last month the European Parliament adopted a resolution on the implementation of the GDPR which expressed “great concern” over the functioning of the mechanism — raising particular concern over the Irish data protection authority by writing that it “generally closes most cases with a settlement instead of a sanction and that cases referred to Ireland in 2018 have not even reached the stage of a draft decision pursuant to Article 60(3) of the GDPR”.
The latest Facebook data scandal further amps up the pressure on the DPC — providing further succour to critics of the GDPR who argue the regulation is unworkable under the current foot-dragging enforcement structure, given the major bottlenecks in Ireland (and Luxembourg) where many tech giants choose to locate regional HQ.
— Max Schrems (@maxschrems) April 10, 2021
On Thursday Reynders made his concern over Ireland’s response to the Facebook data leak public, tweeting to say the Commission had been in contact with the DPC.
He does have reason to be personally concerned. Earlier last week Politico reported that Reynders’ own digits had been among the cache of leaked data, along with those of the Luxembourg prime minister Xavier Bettel — and “dozens of EU officials”. However the problem of weak GDPR enforcement affects everyone across the bloc — some 446M people whose rights are not being uniformly and vigorously upheld.
“A strong enforcement of GDPR is of key importance,” Reynders also remarked on Twitter, urging Facebook to “fully cooperate with Irish authorities”.
Last week Italy’s data protection commission also called on Facebook to immediately offer a service for Italian users to check whether they had been affected by the breach. But Facebook made no public acknowledgment or response to the call. Under the GDPR’s one-stop-shop mechanism the tech giant can limit its regulatory exposure by dealing directly only with its lead EU data supervisor in Ireland.
A two-year Commission review of how the data protection regime is functioning, which reported last summer, already drew attention to problems with patchy enforcement. So a lack of progress on unblocking GDPR bottlenecks is a growing problem for the Commission — which is in the midst of proposing a package of additional digital regulations. That makes the enforcement point a very pressing one, as EU lawmakers are being asked how new digital rules will be upheld when existing ones keep being trampled on.
It’s certainly notable that the EU’s executive has proposed a different, centralized enforcement structure for incoming pan-EU legislation targeted at digital services and tech giants — though getting agreement from all the EU’s institutions and elected representatives on how to reshape platform oversight looks challenging.
And in the meantime the data leaks continue: Motherboard reported Friday on another alarming leak of Facebook data it found being made accessible via a bot on the Telegram messaging platform, which gives out the names and phone numbers of users who have liked a Facebook page (in exchange for a fee, unless the page has had fewer than 100 likes).
The publication said this data appears to be separate from the 533M+ scraped dataset — after it ran checks against the larger dataset via the breach advice site Have I Been Pwned. It also asked Alon Gal, the person who discovered the aforementioned leaked Facebook dataset being offered for free download online, to compare data obtained via the bot, and he did not find any matches.
We contacted Facebook about the source of this leaked data and will update this report with any response.
In his tweet about the 500M+ Facebook data leak last week, Reynders made reference to the European Data Protection Board (EDPB), a steering body comprised of representatives from Member State data protection agencies, which works to ensure consistent application of the GDPR.
However the body does not lead on GDPR enforcement — so it’s not clear why he would invoke it. Optics is one possibility, if he was trying to encourage a perception that the EU has vigorous and uniform enforcement structures where people’s data is concerned.
“Under the GDPR, enforcement and the investigation of potential violations lies with the national supervisory authorities. The EDPB does not have investigative powers per se and is not involved in investigations at the national level. As such, the EDPB cannot comment on the processing activities of specific companies,” an EDPB spokeswoman told us when we enquired about Reynders’ remarks.
But she also noted the Commission attends plenary meetings of the EDPB — adding it’s possible there will be an exchange of views among members about the Facebook leak case in the future, as attending supervisory authorities “regularly exchange information on cases at the national level”.
European Union lawmakers who are drawing up rules for applying artificial intelligence are considering fines of up to 4% of global annual turnover (or €20M, if greater) for a set of prohibited use-cases, according to a leaked draft of the AI regulation — reported earlier by Politico — that’s expected to be officially unveiled next week.
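To make the penalty structure described above concrete, the cap (4% of global annual turnover, or €20M if that is greater) can be sketched as a simple calculation — the turnover figures below are hypothetical, not from the draft:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on a fine for a prohibited AI practice, per the leaked draft:
    4% of global annual turnover, or EUR 20 million, whichever is greater."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# Hypothetical examples:
# EUR 100M turnover  -> the EUR 20M floor applies (4% would only be EUR 4M)
# EUR 10B turnover   -> the 4% rate applies, capping the fine at EUR 400M
print(max_fine_eur(100_000_000))     # 20000000.0
print(max_fine_eur(10_000_000_000))  # 400000000.0
```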
The plan to regulate AI has been on the cards for a while. Back in February 2020 the European Commission published a white paper, sketching plans for regulating so-called “high risk” applications of artificial intelligence.
At the time EU lawmakers were toying with a sectoral focus — envisaging certain sectors like energy and recruitment as vectors for risk. However that approach appears to have been rethought, per the leaked draft — which does not limit discussion of AI risk to particular industries or sectors.
Instead, the focus is on compliance requirements for high risk AI applications, wherever they may occur (weapons/military uses are specifically excluded, however, as such use-cases fall outside the EU treaties). Although it’s not abundantly clear from this draft exactly how ‘high risk’ will be defined.
The overarching goal for the Commission here is to boost public trust in AI, via a system of compliance checks and balances steeped in “EU values” in order to encourage uptake of so-called “trustworthy” and “human-centric” AI. So even makers of AI applications not considered to be ‘high risk’ will still be encouraged to adopt codes of conduct — “to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems”, as the Commission puts it.
Another chunk of the regulation deals with measures to support AI development in the bloc — pushing Member States to establish regulatory sandboxing schemes in which startups and SMEs can be prioritized for support to develop and test AI systems before bringing them to market.
Competent authorities “shall be empowered to exercise their discretionary powers and levers of proportionality in relation to artificial intelligence projects of entities participating the sandbox, while fully preserving authorities’ supervisory and corrective powers,” the draft notes.
Under the planned rules, those intending to apply artificial intelligence will need to determine whether a particular use-case is ‘high risk’ and thus whether they need to conduct a mandatory, pre-market compliance assessment or not.
“The classification of an AI system as high-risk should be based on its intended purpose — which should refer to the use for which an AI system is intended, including the specific context and conditions of use — and be determined in two steps by considering whether it may cause certain harms and, if so, the severity of the possible harm and the probability of occurrence,” runs one recital in the draft.
“A classification of an AI system as high-risk for the purpose of this Regulation may not necessarily mean that the system as such or the product as a whole would necessarily be considered as ‘high-risk’ under the criteria of the sectoral legislation,” the text also specifies.
Examples of “harms” associated with high-risk AI systems are listed in the draft as including: “the injury or death of a person, damage of property, systemic adverse impacts for society at large, significant disruptions to the provision of essential services for the ordinary conduct of critical economic and societal activities, adverse impact on financial, educational or professional opportunities of persons, adverse impact on the access to public services and any form of public assistance, and adverse impact on [European] fundamental rights.”
Several examples of high risk applications are also discussed — including recruitment systems; systems that provide access to educational or vocational training institutions; emergency service dispatch systems; creditworthiness assessment; systems involved in determining taxpayer-funded benefits allocation; decision-making systems applied around the prevention, detection and prosecution of crime; and decision-making systems used to assist judges.
So long as compliance requirements — such as establishing a risk management system and carrying out post-market surveillance, including via a quality management system — are met, such systems would not be barred from the EU market under the legislative plan.
Other requirements include security measures and a stipulation that the AI achieve consistent accuracy in performance — with an obligation to report “any serious incidents or any malfunctioning of the AI system which constitutes a breach of obligations” to an oversight authority no later than 15 days after becoming aware of it.
“High-risk AI systems may be placed on the Union market or otherwise put into service subject to compliance with mandatory requirements,” the text notes.
“Mandatory requirements concerning high-risk AI systems placed or otherwise put into service on the Union market should be complied with taking into account the intended purpose of the AI system and according to the risk management system to be established by the provider.
“Among other things, risk control management measures identified by the provider should be based on due consideration of the effects and possible interactions resulting from the combined application of the mandatory requirements and take into account the generally acknowledged state of the art, also including as reflected in relevant harmonised standards or common specifications.”
Certain AI “practices” are listed as prohibited under Article 4 of the planned law, per this leaked draft — including (commercial) applications of mass surveillance systems and general purpose social scoring systems which could lead to discrimination.
AI systems that are designed to manipulate human behavior, decisions or opinions to a detrimental end (such as via dark pattern design UIs), are also listed as prohibited under Article 4; as are systems that use personal data to generate predictions in order to (detrimentally) target the vulnerabilities of persons or groups of people.
A casual reader might assume the regulation is proposing to ban, at a stroke, practices like behavioral advertising based on people tracking — aka the business models of companies like Facebook and Google. However that assumes adtech giants will accept that their tools have a detrimental impact on users.
On the contrary, their regulatory circumvention strategy is based on claiming the polar opposite; hence Facebook’s talk of “relevant” ads. So the text (as written) looks like it will be a recipe for (yet) more long-drawn out legal battles to try to make EU law stick vs the self-interested interpretations of tech giants.
The rationale for the prohibited practices is summed up in an earlier recital of the draft — which states: “It should be acknowledged that artificial intelligence can enable new manipulative, addictive, social control and indiscriminate surveillance practices that are particularly harmful and should be prohibited as contravening the Union values of respect for human dignity, freedom, democracy, the rule of law and respect for human rights.”
It’s notable that the Commission has avoided proposing a ban on the use of facial recognition in public places — as it had apparently been considering, per a leaked draft early last year, before the subsequent White Paper steered away from a ban.
In the leaked draft “remote biometric identification” in public places is singled out for “stricter conformity assessment procedures through the involvement of a notified body” — aka an “authorisation procedure that addresses the specific risks implied by the use of the technology” and includes a mandatory data protection impact assessment — vs most other applications of high risk AIs (which are allowed to meet requirements via self-assessment).
“Furthermore the authorising authority should consider in its assessment the likelihood and severity of harm caused by inaccuracies of a system used for a given purpose, in particular with regard to age, ethnicity, sex or disabilities,” runs the draft. “It should further consider the societal impact, considering in particular democratic and civic participation, as well as the methodology, necessity and proportionality for the inclusion of persons in the reference database.”
AI systems “that may primarily lead to adverse implications for personal safety” are also required to undergo this higher bar of regulatory involvement as part of the compliance process.
Conformity assessment is also envisaged as an ongoing obligation for high risk AIs, with the draft noting: “It is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes.”
“For AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out) changes to the algorithm and performance which have not been pre-determined and assessed at the moment of the conformity assessment shall result in a new conformity assessment of the AI system,” it adds.
The carrot for compliant businesses is to get to display a ‘CE’ mark to help them win the trust of users and friction-free access across the bloc’s single market.
“High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the Union,” the text notes, adding that: “Member States should not create obstacles to the placing on the market or putting into service of AI systems that comply with the requirements laid down in this Regulation.”
As well as seeking to outlaw some practices and establish a system of pan-EU rules for bringing ‘high risk’ AI systems to market safely — with providers expected to make (mostly self) assessments and fulfil compliance obligations (such as around the quality of the data-sets used to train the model; record-keeping/documentation; human oversight; transparency; accuracy) prior to launching such a product into the market and conduct ongoing post-market surveillance — the proposed regulation seeks to shrink the risk of AI being used to trick people.
It does this by suggesting “harmonised transparency rules” for AI systems intended to interact with natural persons (aka voice AIs/chat bots etc); and for AI systems used to generate or manipulate image, audio or video content (aka deepfakes).
“Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems,” runs the text.
“In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a reasonable person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.
“This labelling obligation should not apply where the use of such content is necessary for the purposes of safeguarding public security or for the exercise of a legitimate right or freedom of a person such as for satire, parody or freedom of arts and sciences and subject to appropriate safeguards for the rights and freedoms of third parties.”
While the proposed AI regime hasn’t yet been officially unveiled by the Commission — so details could still change before next week — a major question mark looms over how a whole new layer of compliance around specific applications of (often complex) artificial intelligence can be effectively overseen, and any violations enforced, especially given ongoing weaknesses in the enforcement of the EU’s data protection regime (which began being applied back in 2018).
So while providers of high risk AIs are required to take responsibility for putting their system/s on the market (and therefore for compliance with all the various stipulations, which also include registering high risk AI systems in an EU database the Commission intends to maintain), the proposal leaves enforcement in the hands of Member States — who will be responsible for designating one or more national competent authorities to supervise application of the oversight regime.
We’ve seen how this story plays out with the General Data Protection Regulation. The Commission itself has conceded GDPR enforcement is not consistently or vigorously applied across the bloc — so a major question is how these fledgling AI rules will avoid the same forum-shopping fate?
“Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation,” runs the draft.
The Commission does add a caveat — about potentially stepping in in the event that Member State enforcement doesn’t deliver. But there’s no near term prospect of a different approach to enforcement, suggesting the same old pitfalls will likely appear.
“Since the objective of this Regulation, namely creating the conditions for an ecosystem of trust regarding the placing on the market, putting into service and use of artificial intelligence in the Union, cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures, in accordance with the principle of subsidiarity as set out in Article 5 of the Treaty on European Union,” is the Commission’s back-stop for future enforcement failure.
The oversight plan for AI includes setting up a mirror entity akin to the GDPR’s European Data Protection Board — to be called the European Artificial Intelligence Board — which will similarly support application of the regulation by issuing relevant recommendations and opinions for EU lawmakers, such as around the list of prohibited AI practices and high-risk systems.
After a relatively quiet couple of months from Oculus on the software front, Facebook’s VR unit is sharing some details on new functionality coming to its Quest 2 standalone headset.
The features, which include wireless Oculus Link support, “Infinite Office” functionality and upcoming 120Hz support, will be rolling out in the Quest 2’s upcoming v28 software update. There’s no exact word on when that update is coming, but the language in the blog post seems to intimate that the rollout is imminent.
The big addition here is a wireless version of Oculus Link which will allow Quest 2 users to stream content from their PCs directly to their standalone headsets, enabling more graphics-intensive titles that were previously only available on the now pretty much defunct Rift platform. Air Link is a feature that will enable users to ditch the tethered experience of Oculus Link, though many users have been relying on third-party software to do this already, utilizing Virtual Desktop.
It appears this upgrade is only coming to Quest 2 users in a new experimental mode, not to owners of the original Quest headset. Users will need to update the Oculus software on both their Quest 2 and PC to the v28 version in order to use this feature.
Accompanying the release of Air Link in this update are new features coming to “Infinite Office,” a VR office experience that aims to bring your keyboard and mouse into VR and allow users to engage with desktop-style software. Facebook debuted it back at its VR-focused Facebook Connect conference, but hasn’t said much about it since.
Today’s updates include added keyboard support that not only allows users to link their device but also see it inside VR. This support is limited to a single model from a single manufacturer (the Logitech K830), but Facebook says it will be adding support for other keyboards down the road. Users with this keyboard will be able to see outlines of their hands as well as a rendering of the keyboard in its real position, enabling users to (theoretically) type accurately. Infinite Office will also allow users to designate where their real-world desk is, a feature that will likely help users orient themselves. Even with a keyboard, it seems there’s not much users can do at the moment beyond accessing the Oculus Browser.
Lastly, Oculus is allowing developers to try out 120Hz support for their titles. Facebook says that there isn’t actually anything available at that refresh rate yet, not even system software, but the support is here for developers in an experimental fashion.
Oculus says the new software update will be rolling out “gradually” to users.
Facebook confirmed it’s testing a video speed-dating app called Sparked, after the app’s website was spotted by The Verge. Unlike dating app giants such as Tinder, Sparked users don’t swipe on people they like or direct message others. Instead, they cycle through a series of short video dates during an event to make connections with others. The product itself is being developed by Facebook’s internal R&D group, the NPE Team, but had not been officially announced.
“Sparked is an early experiment by New Product Experimentation,” a spokesperson for Facebook’s NPE Team confirmed to TechCrunch. “We’re exploring how video-first speed dating can help people find love online.”
They also characterized the app as undergoing a “small, external beta test” designed to generate insights about how video dating could work, in order to improve people’s experiences with Facebook products. The app is not currently live on app stores, only the web.
Sparked is, however, preparing to test the experience at a Chicago Date Night event on Wednesday, The Verge’s report noted.
Image Credits: Facebook
During the sign-up process, Sparked tells users to “be kind,” “keep this a safe space,” and “show up.” A walkthrough of how the app works explains that participants will meet face to face during a series of 4-minute video dates, which they can then follow up with a 10-minute date if all goes well. They can additionally choose to exchange contact info, like phone numbers, emails, or Instagram handles.
Facebook, of course, already offers a dating app product, Facebook Dating.
That experience, which takes place inside Facebook itself, first launched in 2018 outside the U.S., and then arrived in the U.S. the following year. In the early days of the pandemic, Facebook announced it would roll out a sort of virtual dating experience that leveraged Messenger for video chats — a move that came at a time when many other dating apps in the market also turned to video to serve users under lockdowns. These video experiences could potentially compete with Sparked, unless the new product’s goal is to become another option inside Facebook Dating itself.
Image Credits: Facebook
Despite the potential reach, Facebook’s success in the dating market is not guaranteed, some analysts have warned. People don’t think of Facebook as a place to go meet partners, and the dating product today is still separated from the main Facebook app for privacy purposes. That means it can’t fully leverage Facebook’s network effects to gain traction, as users in this case may not want their friends and family to know about their dating plans.
Facebook’s competition in dating is fierce, too. Even the pandemic didn’t slow down the dating app giants, like Match Group or newly IPO’d Bumble. Tinder’s direct revenues increased 18% year-over-year to $1.4 billion in 2020, Match Group reported, for instance. Direct revenues from the company’s non-Tinder brands collectively increased 16%. And Bumble topped its revenue estimates in its first quarter as a public company, pulling in $165.6 million in the fourth quarter.
Image Credits: Facebook
Facebook, on the other hand, has remained fairly quiet about its dating efforts. Though the company cited over 1.5 billion matches in the 20 countries where it’s live, a “match” doesn’t indicate a successful pairing — in fact, that sort of result may not be measured. But it’s early days for the product, which only rolled out to European markets this past fall.
The NPE Team’s experiment in speed dating could ultimately help to inform Facebook of what sort of new experiences a dating app user may want to use, and how.
The company didn’t say if or when Sparked would roll out more broadly.
Facebook’s self-styled ‘Oversight Board’ (FOB) has announced an operational change that looks intended to respond to criticism of the limits of the self-regulatory content-moderation decision review body: It says it’s started accepting requests from users to review decisions to leave content up on Facebook and Instagram.
The move expands the FOB’s remit beyond reviewing (and mostly reversing) content takedowns — an arbitrary limit that critics said aligns it with the economic incentives of its parent entity, given that Facebook’s business benefits from increased engagement with content (and outrageous content drives clicks and makes eyeballs stick).
“So far, users have been able to appeal content to the Board which they think should be restored to Facebook or Instagram. Now, users can also appeal content to the Board which they think should be removed from Facebook or Instagram,” the FOB writes, adding that it will “use its independent judgment to decide what to leave up and what to take down”.
“Our decisions will be binding on Facebook,” it adds.
The ability to request an appeal on content Facebook wouldn’t take down has been added across all markets, per Facebook. But the tech giant said it will take some “weeks” for all users to get access as it said it’s rolling out the feature “in waves to ensure stability of the product experience”.
While the FOB can now get individual pieces of content taken down from Facebook/Instagram — i.e. if the Board believes it’s justified in reversing an earlier decision by the company not to remove content — it cannot make Facebook adopt any associated suggestions vis-a-vis its content moderation policies generally.
That’s because Facebook has never said it will be bound by the FOB’s policy recommendations; only by the final decision made per review.
That in turn limits the FOB’s ability to influence the shape of the tech giant’s approach to speech policing. And indeed the whole effort remains inextricably bound to Facebook which devised and structured the FOB — writing the Board’s charter and bylaws, and hand picking the first cohort of members. The company thus continues to exert inescapable pull on the strings linking its self-regulatory vehicle to its lucrative people-profiling and ad-targeting empire.
The FOB getting the ability to review content ‘keep ups’ (if we can call them that) is also essentially irrelevant when you consider the ocean of content Facebook has ensured the Board won’t have any say in moderating — because its limited resources/man-power mean it can only ever consider a fantastically tiny subset of cases referred to it for review.
For an oversight body to provide a meaningful limit on Facebook’s power it would need to have considerably more meaty (i.e. legal) powers; be able to freely range across all aspects of Facebook’s business (not just review user generated content); and be truly independent of the adtech mothership — as well as having meaningful powers of enforcement and sanction.
So, in other words, it needs to be a public body, functioning in the public interest.
Instead, while Facebook applies its army of in house lawyers to fight actual democratic regulatory oversight and compliance, it has splashed out to fashion this bespoke bureaucracy that can align with its speech interests — handpicking a handful of external experts to pay to perform a content review cameo in its crisis PR drama.
Unsurprisingly, then, the FOB has mostly moved the needle in a speech-maximizing direction so far — while expressing some frustration at the limited deck of cards Facebook has dealt it.
Most notably, the Board still has a decision pending on whether to reverse Facebook’s indefinite ban on former US president Donald Trump. If it reverses that decision, Facebook users won’t have any recourse to appeal the restoration of Trump’s account.
The only available route would, presumably, be for users to report future Trump content to Facebook for violating its policies — and if Facebook refuses to take that stuff down, users could try to request a FOB review. But, again, there’s no guarantee the FOB will accept any such review requests. (Indeed, if the Board chooses to reinstate Trump, that may make it harder for it to accept requests to review Trump content, at least in the short term, in the interests of keeping a diverse case file.)
To request a FOB review of a piece of content that’s been left up, a user of Facebook/Instagram first has to report the content to Facebook/Instagram.
If the company decides to keep the content up, Facebook says the reporting person will receive an Oversight Board Reference ID (a ten-character string that begins with ‘FB’) in their Support Inbox — which they can use to appeal its ‘no takedown’ decision to the Oversight Board.
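Based on Facebook’s description, a Reference ID is a ten-character string beginning with ‘FB’. A minimal sanity check might look like the sketch below — note that the assumption that the remaining eight characters are alphanumeric is mine, not something Facebook has specified:

```python
import re

# Hypothetical validator: ten characters total, starting with "FB".
# The alphanumeric constraint on the trailing eight characters is an assumption.
REFERENCE_ID_PATTERN = re.compile(r"^FB[A-Za-z0-9]{8}$")

def looks_like_reference_id(candidate: str) -> bool:
    """Return True if the string matches the assumed Reference ID shape."""
    return bool(REFERENCE_ID_PATTERN.match(candidate))

print(looks_like_reference_id("FB12345678"))  # True
print(looks_like_reference_id("FB123"))       # False (too short)
```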
There are several hoops to jump through to make an appeal: Following on-screen instructions Facebook says the user will be taken to the Oversight Board website where they need to log in with the account to which the reference ID was issued.
They will then be asked to provide responses to a number of questions about their reasons for reporting the content (to “help the board understand why you think Facebook made the wrong decision”).
Once an appeal has been submitted, the Oversight Board will decide whether or not to review it. The board only selects a certain number of “eligible appeals” to review; and Facebook has not disclosed the proportion of requests the Board accepts for review vs submissions it receives — per case or on aggregate. So how much chance of submission success any user has for any given piece of content is an unknown (and probably unknowable) quantity.
Users who have submitted an appeal against content that was left up can check the status of their appeal via the FOB’s website — again by logging in and using the reference ID.
A further limitation is time, as Facebook notes there’s a time limit on appealing decisions to the FOB:
“Bear in mind that there is a time limit on appealing decisions to the Oversight Board. Once the window to appeal a decision has expired, you will no longer be able to submit it,” it writes in its Help Center, without specifying how long users have to get their appeal in.
Ethical investing remains something of a confusing maze, with a great deal of 'greenwashing' going on. A new UK startup is hoping to fix that with the launch of an app and platform for retail investors.
Clim8 Invest has raised $8 million from 7pc Ventures (early backers of Oculus, acquired by Facebook), British Business Bank Future Fund and a number of technology entrepreneurs and executives including Marcus Exall (Monese), Marcus Mosen (N26), Paul Willmott (Lego Digital, McKinsey), Doug Scott (Redbrain), Matt Wilkins (Thought Machine), Andrew Cocker (Skyscanner), Steve Thomson (Redbrain), Monica Kalia (Neyber, Goldman Sachs), Doug Monro (Adzuna) and Erik Nygard (Limejump).
Consumers will be able to invest in companies and supply chains that are focused on tackling climate change. It will be competing with similar startups in the space such as London-based Tickr (backed by $3m from Ada Ventures), Helios in Paris, and Yova in Zurich.
Duncan Grierson, CEO of Clim8 said in a statement: “We are launching at an exciting time for sustainable investing. 2020 was an exceptional year for environmentally-focused investment offerings, as investors looked harder at climate-related opportunities. Sustainable investments have continued to outperform markets since the beginning of the Covid-19 Crisis and we believe this will continue.”
Grierson has 20 years of experience in the green space and was a winner of the EY Entrepreneur of Year Cleantech award.
The startup will take advantage of new, stricter EU rules around the disclosure requirements for sustainable investment funds. Users can choose between a stocks and shares ISA (up to £20k) and a taxable general investment account.
Facebook has removed 16,000 groups that were trading fake reviews on its platform after another intervention by the UK’s Competition and Markets Authority (CMA), the regulator said today.
The CMA has been leaning on tech giants to prevent their platforms being used as thriving marketplaces for selling fake reviews since it began investigating the issue in 2018 — pressuring both eBay and Facebook to act against fake review sellers back in 2019.
The two companies pledged to do more to tackle the insidious trade last year, after coming under further pressure from the regulator — which found that Facebook-owned Instagram was also a thriving hub of fake review trades.
The latest intervention by the CMA looks considerably more substantial than last year's action, when Facebook removed a mere 188 groups and disabled 24 user accounts. Although it's not clear how many accounts the tech giant has banned and/or suspended this time, it has removed orders of magnitude more groups. (We've asked.)
Facebook was contacted with questions but it did not answer what we asked directly, sending us this statement instead:
“We have engaged extensively with the CMA to address this issue. Fraudulent and deceptive activity is not allowed on our platforms, including offering or trading fake reviews. Our safety and security teams are continually working to help prevent these practices.”
Ever since the CMA began raising the issue of fake review trading, Facebook has been repeatedly criticised for not doing enough to clean up its platforms, plural.
Today the regulator said the social media giant has made further changes to the systems it uses for “identifying, removing and preventing the trading of fake and/or misleading reviews on its platforms to ensure it is fulfilling its previous commitments”.
It’s not clear why it’s taken Facebook well over a year — and a number of high profile interventions — to dial up action against the trade in fake reviews. But the company suggested that the resources it has available to tackle the problem had been strained as a result of the COVID-19 pandemic and associated impacts, such as home working. (Facebook’s full year revenue increased in 2020 but so too did its expenses.)
According to the CMA, the changes Facebook has made to its systems for combating the trade in fake reviews include suspending or banning users who repeatedly create or facilitate groups trading in fake and/or misleading reviews.
Again it’s not clear why Facebook would not have already been suspending or banning repeat offenders — at least, not if it was actually taking good faith action to genuinely quash the problem, rather than seeing if it could get away with doing the bare minimum.
Commenting in a statement, Andrea Coscelli, chief executive of the CMA, essentially makes that point, saying: “Facebook has a duty to do all it can to stop the trading of such content on its platforms. After we intervened again, the company made significant changes — but it is disappointing it has taken them over a year to fix these issues.”
“We will continue to keep a close eye on Facebook, including its Instagram business. Should we find it is failing to honour its commitments, we will not hesitate to take further action,” Coscelli added.
A quick search on Facebook's platform for UK groups trading in fake reviews appears to return fewer obviously dubious results than when we've checked in on this problem in 2019 and 2020. The results that were returned did include a number of private groups, however, so it was not immediately possible to verify what content is being solicited from members.
We did also find a number of Facebook groups offering Amazon reviews intended for other European markets, such as France and Spain (and in one public group aimed at Amazon Spain we found someone offering a “fee” via PayPal for a review; see below screengrab) — suggesting Facebook isn’t applying the same level of attention to tackling fake reviews that are being traded by users in markets where it’s faced fewer regulatory pokes than it has in the UK.
Cybercriminals have taken out a number of Facebook ads masquerading as a Clubhouse app for PC users in order to target unsuspecting victims with malware, TechCrunch has learned.
TechCrunch was alerted Wednesday to Facebook ads tied to several Facebook pages impersonating Clubhouse, the drop-in audio chat app only available on iPhones. Clicking on the ad would open a fake Clubhouse website, including a mocked-up screenshot of what the non-existent PC app looks like, with a download link to the malicious app.
When opened, the malicious app tries to communicate with a command and control server to obtain instructions on what to do next. One sandbox analysis of the malware showed the malicious app tried to infect the isolated machine with ransomware.
But overnight, the fake Clubhouse websites — which were hosted in Russia — went offline. In doing so, the malware also stopped working. Guardicore’s Amit Serper, who tested the malware in a sandbox on Thursday, said the malware received an error from the server and did nothing more.
The fake website was set up to look like Clubhouse’s real website, but featuring a malicious PC app. (Image: TechCrunch)
It’s not uncommon for cybercriminals to tailor their malware campaigns to piggyback off the successes of wildly popular apps. Clubhouse reportedly topped more than 8 million global downloads to date despite an invite-only launch. That high demand prompted a scramble to reverse-engineer the app to build bootleg versions of it to evade Clubhouse’s gated walls, but also government censors where the app is blocked.
Each of the Facebook pages impersonating Clubhouse had only a handful of likes but was still active at the time of publication. When reached, Facebook wouldn't say how many account owners had clicked on the ads pointing to the fake Clubhouse websites.
At least nine ads were placed this week between Tuesday and Thursday. Several of the ads said Clubhouse “is now available for PC,” while another featured a photo of co-founders Paul Davidson and Rohan Seth. Clubhouse did not return a request for comment.
Are Facebook, Instagram and WhatsApp down for you right now? Us too! And lots and lots of other people too, it seems.
We’re getting reports left and right of outages across the three Facebook properties, with no indication so far as to the cause. It’s all down so hard that Facebook’s own server status page won’t even load to explain what’s up. Some of the respective mobile apps appear to load, but are just loading cached data; refresh or try to pull in a new page, and things probably won’t load correctly.
When Facebook on the web does load, it’s largely throwing the following error message:
This outage comes just a few weeks after one that took out Instagram and WhatsApp in March.
(Update, 3:19 PM: It appears things are coming back online, about an hour after the outage first began.)
Sophie Zhou Novati worked as a senior engineer at Facebook and then Nextdoor, where she struggled to hire great engineers for her team.
Frustrated, she decided to try training engineers to meet her team’s hiring standards by mentoring at a local coding bootcamp. After two and a half years of mentoring on nights and weekends, Novati decided to turn her passion into a career.
She and her husband, Michael, founded Formation with a couple of goals in mind. For one, they wanted to offer personalized training to help people not just learn to code, but to become “exceptional” software engineers. Sophie was also struck by the diversity of the people she witnessed going through coding bootcamps, but she realized that those graduates weren’t getting access to the same opportunities that students from traditional universities do.
With Formation, her goal is to personalize the training experience via a remote fellowship program that combines automated instruction with access to a “network of top tier mentors” from companies such as Facebook and Google. After one year in beta, Formation is unveiling its Engineering Fellowship, where every fellow gets a “personalized training plan tailored to their unique career ambitions.” So far, it’s placed just over 30 people in engineering roles at companies such as Facebook, Microsoft and Lyft with an average starting salary of $120,000.
Formation aims to offer an experience beyond bootcamps, which Sophie argues “have gotten too big, too fast, churning hundreds or thousands of students through fixed curriculums without individualized attention.”
The startup attracted the attention of Andreessen Horowitz, which just led its $4 million seed round. Designer Fund, Combine, Lachy Groom, Slow Ventures and engineers from Airbnb, Notion, Rippling and others also participated in the financing.
“The first thing that really struck me about this community is just how diverse it is. Forty-four percent of graduates are reporting that they identify as nonmale, and the percentage of Black and Latinx graduates is nearly double the national average at traditional universities,” Sophie told TechCrunch. “But the problem is that only about 55% of bootcamp grads are getting a job as a software engineer, and of the ones that do, their median salary is only about $65,000. At the same time, companies everywhere are just desperately looking for ways to diversify their talent pool.”
Instead of having students follow a fixed curriculum, Formation leverages adaptive learning technology to build a personalized training plan tailored to each student’s specific skillset and career goals. The platform continuously assesses their skills and adapts their roadmap, according to Sophie.
About half of the people participating in Formation’s program are current engineers already working in the industry in some capacity.
Connie Chan, general partner at Andreessen Horowitz, said she’s been examining the edtech space for a while, including companies building new tools for teaching and upleveling coding skills.
Formation stood out to her as the “only true tech-based and scalable solution that optimizes each student’s mastery of important skills.” Its ability to dynamically change based on a student’s performance in particular was compelling.
“The founder-product fit is also super clear — Sophie brings her own best-in-class engineering experience to Formation, as well as her long-time passion for mentoring,” Chan wrote via email.
Facebook’s internal R&D group, NPE Team, has today launched its latest experiment, Hotline, into public beta testing. The web-based application could be described as a mashup of Instagram Live and Clubhouse, as it allows creators to speak to an audience who can then ask questions through either text or audio. However, unlike Clubhouse, creators can opt to turn their cameras on for the event, instead of being audio-only.
Real estate investor Nick Huber is the first to publicly try out the product with a livestream today that began at 10 AM PT (1 PM ET). Huber represents the sort of creator Facebook wants to work with for Hotline, Facebook told us, which is someone who helps people expand their professional skills or their finances. In Huber’s case, he’s talking about investing in industrial real estate as a second income stream.
At Facebook, Hotline is being led by Eric Hazzard, who joined Facebook when it acquired his app tbh, a positivity-focused Q&A app that grew to 2.5 million daily active users in nine weeks and saw over 1 billion poll answers before exiting. With Hotline, Hazzard is once again developing a product in the Q&A space.
But this time, the new app is taking inspiration from an up-and-coming social network, Clubhouse. In fact, Hotline’s user interface will look familiar to anyone who’s already used Clubhouse, Twitter Spaces or any of the other audio-only social networks, when it’s viewed on mobile. At the top on mobile (or to the left side on desktop), there’s a speaker section where the event host is featured in a round profile icon or live video stream. Below (or to the side on desktop) are the event’s listeners.
But there are also several differences between Hotline and existing apps, like Clubhouse.
Image Credits: Facebook
For starters, the app currently has users sign in with Twitter, then verify their identity via SMS.
The listeners’ section, for example, is divided up between those who are just watching the event, as represented by their profile icons, and those who are asking questions. At the top of this section, you’re presented with the lists of questions that users have asked, which others can upvote or downvote accordingly. The creator can then look to this section to find out which questions to answer next and can pull listeners onto the stage area with them for a conversation.
As the questions are asked, users can react with emoji including clapping hands, fire, heart, laughter, surprise and thumbs up.
Image Credits: Hotline
Hosts have full control over the experience, and can remove inappropriate questions from the queue or remove people from their Hotline session. For the initial tests, Facebook employees will moderate events and remove anyone that violates Facebook’s Community Standards, Terms of Service, Data Policy or the NPE Team’s Supplemental Terms.
Another notable difference between Hotline and Clubhouse is that Hotline events are recorded.
Today, Clubhouse favors more casual chats where people understand there’s no transcript or recording taking place (unless indicated by the host in the room’s title). This, Clubhouse believes, allows participants to speak more freely and with less fear. But Hotline automatically produces recordings. After the event, the host will receive two recordings of the session — one as an mp3 and another as an mp4. The creator can then upload these to other networks, like YouTube or Facebook, edit them into short-form content for apps like TikTok or turn the audio recording into a podcast, or anything else.
At launch, anyone can join a Hotline for free and there’s no limit on audience size, though this could change as the experiment progresses.
Despite the similarities with Clubhouse, Hotline has a different vibe because of its use of video, text-based questions, upvoting and because it’s recorded. This makes it feel less like a casual hangout and more like a professional event where an expert is leading a session and inviting an audience to ask questions.
Hotline is now one of several apps that Facebook’s NPE team has launched in the creator space to experiment with different ideas around audio and video. The group is continuing to test a creator app called Super, similar to Cameo, which is web-based and entirely video. It also previously tested an audio-only calling app, CatchUp, which shut down last year, as well as another Q&A product known as Venue, which is more of a Twitter-like companion for live events. More recently, it has launched TikTok-esque video apps Collab and BARS, which focused on collaborative music and raps, respectively.
Over time, the goal of NPE projects isn’t necessarily to stand them up on their own as individual apps — though that could happen, if they gained enough traction. More broadly, the learnings from the tests and experiments can help inform future Facebook product development, as it builds out new features for existing products, like Messenger Rooms or Facebook Live, among other things.
Facebook didn’t make an official announcement about Hotline’s launch, but offered a statement about today’s test.
“With Hotline, we’re hoping to understand how interactive, live multimedia Q&As can help people learn from experts in areas like professional skills, just as it helps those experts build their businesses,” a spokesperson said. “New Product Experimentation has been testing multimedia products like CatchUp, Venue, Collab, and BARS, and we’re encouraged to see the formats continue to help people connect and build community,” they added.
Hotline isn’t Facebook’s only attempt to challenge Clubhouse. The company is also in the process of developing a Clubhouse rival within the Messenger Rooms product experience, Facebook recently confirmed.
The question of whether Facebook will face any regulatory sanction over the latest massive historical platform privacy fail to come to light remains unclear. But the timeline of the incident looks increasingly awkward for the tech giant.
While it initially sought to play down the data breach revelations published by Business Insider at the weekend by suggesting that information like people’s birth dates and phone numbers was “old”, in a blog post late yesterday the tech giant finally revealed that the data in question had in fact been scraped from its platform by malicious actors “in 2019” and “prior to September 2019”.
That new detail about the timing of this incident raises the issue of compliance with Europe’s General Data Protection Regulation (GDPR) — which came into application in May 2018.
Under the EU regulation, data controllers can face fines of up to 2% of their global annual turnover for failures to notify breaches, and up to 4% of annual turnover for more serious compliance violations.
The European framework looks important because Facebook indemnified itself against historical privacy issues in the US when it settled with the FTC for $5BN back in July 2019 — although that does still mean there’s a period of several months (June to September 2019) which could fall outside that settlement.
Not only is @Facebook past the indemnification period of the FTC settlement (June 12 2019), they also may have violated the terms of the settlement requiring them to report breaches of covered information (ht @JustinBrookman ) https://t.co/182LEf4rNO pic.twitter.com/utCnQ4USHI
— ashkan soltani (@ashk4n) April 7, 2021
Yesterday, in its own statement responding to the breach revelations, Facebook's lead data supervisor in the EU said the provenance of the newly published dataset wasn't entirely clear. The dataset "seems to comprise the original 2018 (pre-GDPR) dataset", the regulator wrote — referring to an earlier breach incident Facebook disclosed in 2018, which related to a vulnerability in its phone lookup functionality that it said occurred between June 2017 and April 2018 — but it added that the newly published dataset looked to have been "combined with additional records, which may be from a later period".
Facebook followed up the Irish Data Protection Commission (DPC)’s statement by confirming that suspicion — admitting that the data had been extracted from its platform in 2019, up until September of that year.
Another new detail that emerged in Facebook's blog post yesterday was the fact that users' data was scraped not via the aforementioned phone lookup vulnerability but via another method altogether: a contact importer tool vulnerability.
This route allowed an unknown number of “malicious actors” to use software to imitate Facebook’s app and upload large sets of phone numbers to see which ones matched Facebook users.
In this way a spammer, for example, could upload a database of potential phone numbers and link them not only to names but to other data like birth date, email address and location — all the better to phish you with.
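To make the mechanics concrete, here is a deliberately tiny, hypothetical sketch (this is illustrative toy code, not Facebook's actual API or data): a contact-matching endpoint that answers arbitrarily large batches of phone numbers lets an attacker sweep millions of candidate numbers and link hits to profile data in bulk, while even a simple batch cap forces the attack into many small, more detectable requests.

```python
# Toy user database (invented data): phone number -> profile fields an
# attacker could harvest by matching numbers against accounts.
USERS = {
    "+15551234": {"name": "Alice", "dob": "1990-01-01"},
    "+15555678": {"name": "Bob", "dob": "1985-06-15"},
}

MAX_BATCH = 100  # illustrative per-request cap on imported contacts

def naive_match(numbers):
    """Unthrottled matcher: returns every hit, however large the batch.

    This is the enumeration-friendly behaviour described in the article:
    one request can link an entire swept number range to identities.
    """
    return {n: USERS[n] for n in numbers if n in USERS}

def capped_match(numbers):
    """Matcher with a batch cap: oversized requests are rejected outright."""
    if len(numbers) > MAX_BATCH:
        raise ValueError("batch too large")
    return {n: USERS[n] for n in numbers if n in USERS}

# An attacker sweeping a 10,000-number candidate range succeeds against
# the naive endpoint in a single call...
sweep = ["+1555%04d" % i for i in range(10000)]
hits = naive_match(sweep)  # both toy accounts are linked to their numbers
```

Real defences also involve rate limiting, anomaly detection and proof that uploaded contacts belong to a genuine address book, but the batch-size contrast captures why an unthrottled importer is so attractive to scrapers.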
In its PR response to the breach, Facebook quickly claimed it had fixed this vulnerability in August 2019. But, again, that timing places the incident squarely in the period of GDPR being active.
As a reminder, Europe’s data protection framework bakes in a data breach notification regime that requires data controllers to notify a relevant supervisory authority if they believe a loss of personal data is likely to constitute a risk to users’ rights and freedoms — and to do so without undue delay (ideally within 72 hours of becoming aware of it).
Yet Facebook made no disclosure at all of this incident to the DPC. Indeed, the regulator made it clear yesterday that it had to proactively seek information from Facebook in the wake of BI’s report. That’s the opposite of how EU lawmakers intended the regulation to function.
Data breaches, meanwhile, are broadly defined under the GDPR. It could mean personal data being lost or stolen and/or accessed by unauthorized third parties. It can also relate to deliberate or accidental action or inaction by a data controller which exposes personal data.
Legal risk attached to the breach likely explains why Facebook has studiously avoided describing this latest data protection failure, in which the personal information of more than half a billion users was posted for free download on an online forum, as a ‘breach’.
And, indeed, why it’s sought to downplay the significance of the leaked information — dubbing people’s personal information “old data”. (Even as few people regularly change their mobile numbers, email address, full names and biographical information and so on, and no one (legally) gets a new birth date… )
Its blog post instead refers to data being scraped; and to scraping being “a common tactic that often relies on automated software to lift public information from the internet that can end up being distributed in online forums” — tacitly implying that the personal information leaked via its contact importer tool was somehow public.
The self-serving suggestion being peddled here by Facebook is that hundreds of millions of users had both published sensitive stuff like their mobile phone numbers on their Facebook profiles and left default settings on their accounts — thereby making this personal information ‘publicly available for scraping/no longer private/uncovered by data protection legislation’.
This is an argument as obviously absurd as it is viciously hostile to people's rights and privacy. It's also an argument that EU data protection regulators must quickly and definitively reject or be complicit in allowing Facebook to (ab)use its market power to torch the very fundamental rights that regulators' sole purpose is to defend and uphold.
Even if some Facebook users affected by this breach had their information exposed via the contact importer tool because they had not changed Facebook's privacy-hostile defaults, that still raises key questions of GDPR compliance — because the regulation also requires data controllers to adequately secure personal data and apply privacy by design and default.
Facebook allowing hundreds of millions of accounts to have their info freely pillaged by spammers (or whoever) doesn’t sound like good security or default privacy.
In short, it’s the Cambridge Analytica scandal all over again.
Facebook is trying to get away with continuing to be terrible at privacy and data protection because it's been so terrible at it in the past — and likely feels confident in keeping on with this tactic because it's faced relatively little regulatory sanction for an endless parade of data scandals. (A one-time $5BN FTC fine for a company that turns over $85BN+ in annual revenue is just another business expense.)
We asked Facebook why it failed to notify the DPC about this 2019 breach back in 2019, when it realized people’s information was once again being maliciously extracted from its platform — or, indeed, why it hasn’t bothered to tell affected Facebook users themselves — but the company declined to comment beyond what it said yesterday.
Then it told us it would not be commenting on its communications with regulators.
Under the GDPR, if a breach poses a high risk to users' rights and freedoms a data controller is required to notify affected individuals — with the rationale being that prompt notification of a threat can help people take steps to protect themselves from the risks of their data being breached, such as fraud and ID theft.
Yesterday Facebook also said it does not have plans to notify users either.
Perhaps the company’s trademark ‘thumbs up’ symbol would be more aptly expressed as a middle finger raised at everyone else.