Discovering and drilling for the minerals that industry and the technology sector depend on remains incredibly important as existing mines become depleted. If the mining industry can't get more efficient at finding these deposits, more unnecessary, harmful drilling and exploration takes place. Applying AI to this problem would seem like a no-brainer for the environment.
Joining this field is Earth AI, a mineral targeting startup that is using AI to predict the location of new ore bodies more cheaply, more quickly and (it claims) with more precision than previous methods.
It has now closed a funding round of "up to" $2.5 million from Gagarin Capital, a VC firm specializing in AI, and Y Combinator, as part of the latter's latest cohort announced this week. Previously, Earth AI had raised $1.7 million across two seed rounds from the Australian VCs AirTree Ventures and Blackbird Ventures, as well as angel investors.
The startup uses machine learning techniques on global data, including remote sensing, radiometry, geophysical and geochemical datasets, to learn the data signatures related to industrial metal deposits (from gold, copper, and lead to rare earth elements), train a neural network, and predict where high-value mineral prospects will be.
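The workflow described above, learning the data signatures of known deposits and scoring new prospects against them, can be illustrated with a toy sketch. This is not Earth AI's actual model; the feature names (radiometric potassium, magnetic anomaly, copper ppm) and all values are invented, and a simple nearest-centroid score stands in for a trained neural network:

```python
# Toy mineral-prospectivity sketch (NOT Earth AI's model). Each map cell is a
# hypothetical feature vector: (radiometric_k, magnetic_anomaly, cu_ppm).
from math import dist

known_deposits = [(2.1, 310.0, 95.0), (1.9, 280.0, 120.0)]  # positive examples
barren_cells = [(0.4, 20.0, 8.0), (0.6, 35.0, 12.0)]        # negative examples

def deposit_score(cell):
    """Score in (0, 1): the closer a cell is to the deposit centroid
    relative to the barren centroid, the higher the score."""
    centroid = lambda rows: tuple(sum(col) / len(rows) for col in zip(*rows))
    d_pos = dist(cell, centroid(known_deposits))
    d_neg = dist(cell, centroid(barren_cells))
    return d_neg / (d_pos + d_neg)

# Rank unexplored cells so field teams visit the most promising first.
candidates = [(2.0, 300.0, 100.0), (0.5, 25.0, 10.0)]
ranked = sorted(candidates, key=deposit_score, reverse=True)
```

In a real pipeline the features would come from stacked raster layers (remote sensing, radiometry, geophysics, geochemistry) and the scoring function from a model trained on many labeled deposits, but the ranking-of-prospects output is the same idea.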
In particular, it was used to discover a deposit of vanadium, which is used to build vanadium redox batteries for large industrial applications. Finding these deposits faster using AI means the planet will benefit from battery technology sooner.
In 2018, Earth AI field-tested remote unexplored areas and claims to have generated a 50X better success rate than traditional exploration methods, while spending on average $11,000 per prospect discovery. In Australia, for instance, companies often spend several million dollars to arrive at the same result.
Jared Friedman, a Y Combinator partner, commented in a statement: "The possibility of discovering new mineral deposits with AI is a fascinating and thought-provoking idea. Earth AI has the potential not just to become an incredibly profitable company, but to reduce the cost of the metals we need to build our civilization, and that has huge implications for the world."
"Earth AI is taking a novel approach to a large and important industry — and that approach is already showing tremendous promise," said Mikhail Taver, partner at Gagarin Capital.
Earth AI was founded by Roman Tesyluk, a geoscientist with eight years of mineral exploration and academic experience. Prior to starting Earth AI, he was a PhD candidate at the University of Sydney, Australia, and obtained a Master's degree in Geology from Ivan Franko University, Ukraine. "EARTH AI has huge ambitions, and this funding round will supercharge us towards reaching our milestones," he said.
This latest investment from Gagarin Capital joins a line of other investments it has made in AI-based products and services, including YC companies such as Wallarm, Gosu.AI and CureSkin. Gagarin's exits include MSQRD (acquired by Facebook) and AIMatter (acquired by Google).
Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.
Last week, I talked about how Netflix might have some rough times ahead as Disney barrels towards it.
There is plenty to be said about the potential of smart glasses. I write about them at length for TechCrunch and I've talked to a lot of founders doing cool stuff. That being said, I don't have any idea what Snap is doing with the introduction of a third generation of its Spectacles video sunglasses.
The first-gen Spectacles were a marketing smash hit, but their sales proved to be a major failure for the company, which bet big and seemingly walked away with a landfill's worth of the glasses.
Snap's latest version of Spectacles was announced in Vogue this week. They are much more expensive at $380, and their main feature is a pair of cameras that capture images with depth, which can lead to these cute little 3D boomerangs. On one hand, it's nice to see the company showing perseverance in a tough market; on the other, it's kind of funny to see it push the same rock up the hill again.
Snap is having an awesome 2019 after a laughably bad 2018; the stock has recovered from record lows and is trading in its IPO price wheelhouse. It seems ripe for something new and exciting, not beautiful yet iterative.
The $150 Spectacles 2 are still for sale, though they look quite dated at this point. Spectacles 3 seem to be geared entirely toward women, and I'm sure Snap made that call after seeing the active users of previous generations. But given the write-down it took on the first generation, something tells me that Snap's continued experimentation here is borne of some stubbornness from Spiegel and the higher-ups, who want the Snap brand to live in a high-fashion world and want to be at the forefront of an AR industry that seems to have already moved on to different things.
On to the rest of the week’s news.
Here are a few big news items from big companies, with green links to all the sweet, sweet added context:
How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:
Adam Neumann (WeWork) at TechCrunch Disrupt NY 2017
Our premium subscription service had another week of interesting deep dives. My colleague Danny Crichton wrote about the “tech” conundrum that is WeWork and the questions that are still unanswered after the company filed documents this week to go public.
…How is margin changing at its older locations? How is margin changing as it opens up in places like India, with very different costs and revenues? How do those margins change over time as a property matures? WeWork spills serious amounts of ink saying that these numbers do get better … without seemingly being willing to actually offer up the numbers themselves…
Here are some of our other top reads this week for premium subscribers. This week, we published a major deep dive into the world’s next music unicorn and we dug deep into marketplace startups.
Sign up for more newsletters in your inbox (including this one) here.
The phrase “pull yourself up by your own bootstraps” was originally meant sarcastically.
It's not actually physically possible to do (especially while wearing Allbirds and having just fallen off a Bird scooter in downtown San Francisco), but I should get to my point.
This week, Ken Cuccinelli, the acting director of United States Citizenship and Immigration Services, repeatedly referred to the notion of bootstraps in announcing shifts in immigration policy, even going so far as to change the words to Emma Lazarus's famous poem "The New Colossus": no longer "give me your tired, your poor, your huddled masses yearning to breathe free," but "give me your tired and your poor who can stand on their own two feet, and who will not become a public charge."
We’ve come to expect “alternative facts” from this administration, but who could have foreseen alternative poems?
Still, the concept of ‘bootstrapping’ is far from limited to the rhetorical territory of the welfare state and social safety net. It’s also a favorite term of art in Silicon Valley tech and venture capital circles: see for example this excellent (and scary) recent piece by my editor Danny Crichton, in which young VC firms attempt to overcome a lack of the startup capital that is essential to their business model by creating, as perhaps an even more essential feature of their model, impossible working conditions for most everyone involved. Often with predictably disastrous results.
It is in this context of unrealistic expectations about people’s labor, that I want to introduce my most recent interviewee in this series of in-depth conversations about ethics and technology.
Mary L. Gray is a Fellow at Harvard University’s Berkman Klein Center for Internet and Society and a Senior Researcher at Microsoft Research. One of the world’s leading experts in the emerging field of ethics in AI, Mary is also an anthropologist who maintains a faculty position at Indiana University. With her co-author Siddharth Suri (a computer scientist), Gray coined the term “ghost work,” as in the title of their extraordinarily important 2019 book, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.
Ghost Work is a name for a rising new category of employment that involves people scheduling, managing, shipping, billing, etc. “through some combination of an application programming interface, APIs, the internet and maybe a sprinkle of artificial intelligence,” Gray told me earlier this summer. But what really distinguishes ghost work (and makes Mary’s scholarship around it so important) is the way it is presented and sold to the end consumer as artificial intelligence and the magic of computation.
In other words, just as we have long enjoyed telling ourselves that it’s possible to hoist ourselves up in life without help from anyone else (I like to think anyone who talks seriously about “bootstrapping” should be legally required to rephrase as “raising oneself from infancy”), we now attempt to convince ourselves and others that it’s possible, at scale, to get computers and robots to do work that only humans can actually do.
Ghost Work’s purpose, as I understand it, is to elevate the value of what the computers are doing (a minority of the work) and make us forget, as much as possible, about the actual messy human beings contributing to the services we use. Well, except for the founders, and maybe the occasional COO.
But if working people are supposed to be ghosts, then when they speak up or otherwise make themselves visible, they are “haunting” us. And maybe it can be haunting to be reminded that you didn’t “bootstrap” yourself to billions or even to hundreds of thousands of dollars of net worth.
Sure, you worked hard. Sure, your circumstances may well have stunk. Most people’s do.
But none of us rise without help, without cooperation, without goodwill, both from those who look and think like us and those who do not. Not to mention dumb luck, even if only our incredible good fortune of being born with a relatively healthy mind and body, in a position to learn and grow, here on this planet, fourteen billion years or so after the Big Bang.
I’ll now turn to the conversation I recently had with Gray, which turned out to be surprisingly more hopeful than perhaps this introduction has made it seem.
Greg Epstein: One of the most central and least understood features of ghost work is the way it revolves around people constantly making themselves available to do it.
Mary Gray: Yes, [What Siddharth Suri and I call ghost work] values having a supply of people available, literally on demand. Their contributions are collective contributions.
It’s not one person you’re hiring to take you to the airport every day, or to confirm the identity of the driver, or to clean that data set. Unless we’re valuing that availability of a person, to participate in the moment of need, it can quickly slip into ghost work conditions.
US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.
Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.
The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.
Exactly who the witnesses in front of the grand committee will be has yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that's been seeking answers about reining in online disinformation will certainly make any invitations that get extended to senior executives at US-based tech giants much harder to ignore.
Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.
“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”
“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”
The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.
Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.
A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.
Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy Kevin Chan and global head of policy Neil Potts as stand-ins.
Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they'll be required to appear before the committee the next time they set foot in the country.
Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee's enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.
While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.
In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world. As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.
“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”
Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.
Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.
If you have ever worked at any sizable company, the word "IT" probably doesn't conjure up many warm feelings. If you're working for an old, traditional enterprise company, you probably don't expect anything else. If you're working for a modern tech company, though, chances are your expectations are a bit higher. And once you're at the scale of a company like Facebook, a lot of the third-party services that work for smaller companies simply don't work anymore.
To discuss how Facebook thinks about its IT strategy and why it now builds most of its IT tools in-house, I sat down with the company's CIO, Atish Banerjea, at its Menlo Park headquarters.
Before joining Facebook in 2016 to head up what it now calls its “Enterprise Engineering” organization, Banerjea was the CIO or CTO at companies like NBCUniversal, Dex One and Pearson.
"If you think about Facebook 10 years ago, we were very much a traditional IT shop at that point," he told me. "We were responsible for just core IT services, responsible for compliance and responsible for change management. But basically, if you think about the trajectory of the company, we were probably about 2,000 employees around the end of 2010. But at the end of last year, we were close to 37,000 employees."
Traditionally, IT organizations rely on third-party tools and software, but as Facebook grew to this current size, many third-party solutions simply weren’t able to scale with it. At that point, the team decided to take matters into its own hands and go from being a traditional IT organization to one that could build tools in-house. Today, the company is pretty much self-sufficient when it comes to running its IT operations, but getting to this point took a while.
“We had to pretty much reinvent ourselves into a true engineering product organization and went to a full ‘build’ mindset,” said Banerjea. That’s not something every organization is obviously able to do, but, as Banerjea joked, one of the reasons why this works at Facebook “is because we can — we have that benefit of the talent pool that is here at Facebook.”
The company then took this talent and basically replicated the kind of team it would field on the customer side to build out its IT tools, with engineers, designers, product managers, content strategists and researchers. "We also made the decision at that point that we will hold the same bar and we will hold the same standards so that the products we create internally will be as world-class as the products we're rolling out externally."
One of the tools that wasn’t up to Facebook’s scaling challenges was video conferencing. The company was using a third-party tool for that, but that just wasn’t working anymore. In 2018, Facebook was consuming about 20 million conference minutes per month. In 2019, the company is now at 40 million per month.
Besides the obvious scaling challenge, Facebook is also doing this to be able to offer its employees custom software that fits their workflows. It’s one thing to adapt existing third-party tools, after all, and another to build custom tools to support a company’s business processes.
Banerjea told me that creating this new structure was a relatively easy sell inside the company. Every transformation comes with its own challenges, though. For Facebook’s Enterprise Engineering team, that included having to recruit new skill sets into the organization. The first few months of this process were painful, Banerjea admitted, as the company had to up-level the skills of many existing employees and shed a significant number of contractors. “There are certain areas where we really felt that we had to have Facebook DNA in order to make sure that we were actually building things the right way,” he explained.
Facebook’s structure creates an additional challenge for the team. When you’re joining Facebook as a new employee, you have plenty of teams to choose from, after all, and if you have the choice of working on Instagram or WhatsApp or the core Facebook app — all of which touch millions of people — working on internal tools with fewer than 40,000 users doesn’t sound all that exciting.
“When young kids who come straight from college and they come into Facebook, they don’t know any better. So they think this is how the world is,” Banerjea said. “But when we have experienced people come in who have worked at other companies, the first thing I hear is ‘oh my goodness, we’ve never seen internal tools of this caliber before.’ The way we recruit, the way we do performance management, the way we do learning and development — every facet of how that employee works has been touched in terms of their life cycle here.”
Facebook first started building these internal tools around 2012, though it wasn’t until Banerjea joined in 2016 that it rebranded the organization and set up today’s structure. He also noted that some of those original tools were good, but not up to the caliber employees would expect from the company.
“The really big change that we went through was up-leveling our building skills to really become at the same caliber as if we were to build those products for an external customer. We want to have the same experience for people internally.”
The company went as far as replacing and rebuilding the commercial Enterprise Resource Planning (ERP) system it had been using for years. If there’s one thing that big companies rely on, it’s their ERP systems, given they often handle everything from finance and HR to supply chain management and manufacturing. That’s basically what all of their backend tools rely on (and what companies like SAP, Oracle and others charge a lot of money for). “In that 2016/2017 time frame, we realized that that was not a very good strategy,” Banerjea said. In Facebook’s case, the old ERP handled the inventory management for its data centers, among many other things. When that old system went down, the company couldn’t ship parts to its data centers.
“So what we started doing was we started peeling off all the business logic from our backend ERP and we started rewriting it ourselves on our own platform,” he explained. “Today, for our ERP, the backend is just the database, but all the business logic, all of the functionality is actually all custom written by us on our own platform. So we’ve completely rewritten our ERP, so to speak.”
In practice, all of this means that ideally, Facebook’s employees face far less friction when they join the company, for example, or when they need to replace a broken laptop, get a new phone to test features or simply order a new screen for their desk.
One classic use case is onboarding, where new employees get their company laptop, mobile phones and access to all of their systems, for example. At Facebook, that's also the start of a six-week bootcamp that gets new engineers up to speed with how things work at Facebook. Back in 2016, when new classes tended to still have fewer than 200 new employees, that was still mostly a manual task. Today, with far more incoming employees, the Enterprise Engineering team has automated most of that — and that includes managing the supply chain that ensures the laptops and phones for these new employees are actually available.
But the team also built the backend that powers the company’s more traditional IT help desks, where employees can walk up and get their issues fixed (and passwords reset).
To talk more about how Facebook handles the logistics of that, I sat down with Koshambi Shah, who heads up the company’s Enterprise Supply Chain organization, which pretty much handles every piece of hardware and software the company delivers and deploys to its employees around the world (and that global nature of the company brings its own challenges and additional complexity). The team, which has fewer than 30 people, is made up of employees with experience in manufacturing, retail and consumer supply chains.
Typically, enterprises offer their employees a minimal set of choices when it comes to the laptops and phones they issue, and the operating systems that can run on them tend to be limited. Facebook's engineers have to be able to test new features on a wide range of devices and operating systems. There are, after all, still users on the iPhone 4s or BlackBerry that the company wants to support. To do this, Shah's organization actually makes thousands of SKUs available to employees and is able to deliver 98% of them within three days or less. It's not just sending a laptop via FedEx, though. "We do the budgeting, the financial planning, the forecasting, the supply/demand balancing," Shah said. "We do the asset management. We make sure the asset — what is needed, when it's needed, where it's needed — is there consistently."
In many large companies, every asset request is second-guessed. Facebook, on the other hand, places a lot of trust in its employees, it seems. There's a self-service portal, the Enterprise Store, that allows employees to easily request phones, laptops, chargers (which get lost a lot) and other accessories as needed, without having to wait for approval (though if you request a laptop every week, somebody will surely want to have a word with you). Everything is obviously tracked in detail, but the overall experience is closer to shopping at an online retailer than using an enterprise asset management system. The Enterprise Store will tell you where a device is available, for example, so you can pick it up yourself (but you can always have it delivered to your desk, too, because this is, after all, a Silicon Valley company).
For accessories, Facebook also offers self-service vending machines, and employees can walk up to the help desk.
The company also recently introduced an Amazon Locker-style setup that allows employees to check out devices as needed. At these smart lockers, employees simply have to scan their badge, choose a device and, once the appropriate door has opened, pick up the phone, tablet, laptop or VR devices they were looking for and move on. Once they are done with it, they can come back and check the device back in. No questions asked. “We trust that people make the right decision for the good of the company,” Shah said. For laptops and other accessories, the company does show the employee the price of those items, though, so it’s clear how much a certain request costs the company. “We empower you with the data for you to make the best decision for your company.”
Talking about cost, Shah told me the Supply Chain organization tracks a number of metrics. One of those is obviously cost. “We do give back about 4% year-over-year, that’s our commitment back to the businesses in terms of the efficiencies we build for every user we support. So we measure ourselves in terms of cost per supported user. And we give back 4% on an annualized basis in the efficiencies.”
Unsurprisingly, the company has by now gathered enough data about employee requests (Shah said the team fulfills about half a million transactions per year) that it can use machine learning to understand trends and be proactive about replacing devices, for example.
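The idea of mining request data to get ahead of device failures can be sketched simply. This is a hypothetical illustration, not Facebook's system: the device models, fleet sizes and threshold below are all invented, and a plain per-model ticket-rate check stands in for whatever models the team actually uses:

```python
# Hypothetical sketch: flag device models for proactive review by comparing
# observed failure-ticket rates against an assumed acceptable baseline.
from collections import Counter

# Invented example data: one entry per help-desk ticket, keyed by device model.
tickets = ["laptop-a", "laptop-a", "laptop-b", "laptop-a", "phone-c"]
fleet_size = {"laptop-a": 100, "laptop-b": 400, "phone-c": 300}
BASELINE = 0.02  # assumed acceptable tickets-per-device rate

def models_to_review(tickets, fleet_size, baseline=BASELINE):
    """Return models whose ticket rate exceeds the baseline, sorted by name."""
    rate = {model: n / fleet_size[model] for model, n in Counter(tickets).items()}
    return sorted(model for model, r in rate.items() if r > baseline)
```

With half a million transactions a year, a real system would look at trends over time (and likely use an actual learned model rather than a fixed threshold), but the payoff is the same: replace a failing model across the fleet before the tickets pile up.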
Facebook's Enterprise Engineering group doesn't just support internal customers, though. Another interesting aspect of the group is that it also runs the company's internal and external events, including the likes of F8, the company's annual developer conference. To do this, the company built out conference rooms that can seat thousands of people, with all of the logistics that go with that.
The company also showed me one of its newest meeting rooms where there are dozens of microphones and speakers hanging from the ceiling that make it easier for everybody in the room to participate in a meeting and be heard by everybody else. That’s part of what the organization’s “New Builds” team is responsible for, and something that’s possible because the company also takes a very hands-on approach to building and managing its offices.
Facebook also runs a number of small studios in its Menlo Park and New York offices, where both employees and the occasional external VIP can host Facebook Live videos.
Indeed, live video, it seems, is one of the cornerstones of how Facebook employees collaborate and help employees who work from home. Typically, you’d just use the camera on your laptop or maybe a webcam connected to your desktop to do so. But because Facebook actually produces its own camera system with the consumer-oriented Portal, Banerjea’s team decided to use that.
“What we have done is we have actually re-engineered the Portal,” he told me. “We have connected with all of our video conferencing systems in the rooms. So if I have a Portal at home, I can dial into my video conferencing platform and have a conference call just like I’m sitting in any other conference room here in Facebook. And all that software, all the engineering on the portal, that has been done by our teams — some in partnership with our production teams, but a lot of it has been done with Enterprise Engineering.”
Unsurprisingly, there are also groups that manage some of the core infrastructure and security for the company’s internal tools and networks. All of those tools run in the same data centers as Facebook’s consumer-facing applications, though they are obviously sandboxed and isolated from them.
It’s one thing to build all of these tools for internal use, but now, the company is also starting to think about how it can bring some of these tools it built for internal use to some of its external customers. You may not think of Facebook as an enterprise company, but with its Workplace collaboration tool, it has an enterprise service that it sells externally, too. Last year, for the first time, Workplace added a new feature that was incubated inside of Enterprise Engineering. That feature was a version of Facebook’s public Safety Check that the Enterprise Engineering team had originally adapted to the company’s own internal use.
“Many of these things that we are building for Facebook, because we are now very close partners with our Workplace team — they are in the enterprise software business and we are the enterprise software group for Facebook — and many [features] we are building for Facebook are of interest to Workplace customers.”
As Workplace hit the market, Banerjea ended up talking to the CIOs of potential users, including the likes of Delta Air Lines, about how Facebook itself used Workplace internally. But as companies started to adopt Workplace, they realized that they needed integrations with existing third-party services like ERP platforms and Salesforce. Those companies then asked Facebook if it could build those integrations or work with partners to make them available. But at the same time, those customers got exposed to some of the tools that Facebook itself was building internally.
“Safety Check was the first one,” Banerjea said. “We are actually working on three more products this year.” He wouldn’t say what these are, of course, but there is clearly a pipeline of tools that Facebook has built for internal use that it is now looking to commercialize. That’s pretty unusual for any IT organization, which, after all, tends to only focus on internal customers. I don’t expect Facebook to pivot to an enterprise software company anytime soon, but initiatives like this are clearly important to the company and, in some ways, to the morale of the team.
This creates a bit of friction, too, though, given that the Enterprise Engineering group’s mission is to build internal tools for Facebook. “We are now figuring out the deployment model,” Banerjea said. Who, for example, is going to support the external tools the team built? Is it the Enterprise Engineering group or the Workplace team?
Chances are, then, that Facebook will bring some of the tools it built for internal use to more enterprises in the long run. That definitely puts a different spin on the idea of the consumerization of enterprise tech. Clearly, not every company operates at the scale of Facebook and needs to build its own tools, and even some companies that could benefit from doing so don’t have the resources for it. For Facebook, though, the move seems to have paid off, and the tools I saw while talking to the team definitely looked more user-friendly than any off-the-shelf enterprise tools I’ve seen at other large companies.
If you use Instagram and have noticed a bunch of strangers watching your Stories in recent months — accounts that don’t follow you and seem to be Russian — well, you’re not alone.
Nor are you being primed for a Russian disinformation campaign. At least, probably not. But you’re right to smell a fake.
TechCrunch’s very own director of events, Leslie Hitchcock, flagged the issue to us — complaining of “eerie” views on her Instagram Stories in the last couple of months from random Russian accounts, some seemingly genuine (such as artists with several thousand followers) and others simply “weird” looking.
A thread on Reddit also poses the existential question: “Why do Russian Models (that don’t follow me) keep watching my Instagram stories?” (The answer to which is: Not for the reason you hope.)
Instagram told us it is aware of the issue and is working on a fix.
It also said this inauthentic activity is not related to misinformation campaigns but is rather a new growth hacking tactic, in which accounts pay third parties to try to boost their profiles via fake likes, followers and comments. In this case, that means generating inauthentic activity by watching the Instagram Stories of people they have no real interest in, in the hopes that this will help them pass as real accounts and net them more followers.
Eerie is spot on. Some of these growth hackers probably have banks of phones set up where Instagram Stories are ‘watched’ without being watched. (Which obviously isn’t going to please any advertisers paying to inject ads into Stories… )
A UK social media agency called Hydrogen also noticed the issue back in June — blogging then that: “Mass viewing of Instagram Stories is the new buying followers of 2019”, i.e. as a consequence of the Facebook-owned social network cracking down on bots and paid-for followers on the platform.
So, tl;dr, squashing fakes is a perpetual game of whack-a-mole. Let’s call it Zuckerberg’s bane.
“Our research has found that several small social media agencies are using this as a technique to seem like they are interacting with the public,” Hydrogen also wrote, before going on to offer sage advice that: “This is not a good way to build a community, and we believe that Instagram will begin cracking down on this soon.”
Instagram confirmed to us it is attempting to crack down — saying it’s working to try to get rid of this latest eyeball-faking flavor of inauthentic activity. (We paraphrase.)
It also said that, in the coming months, it will introduce new measures to reduce such activity — specifically from Stories — but without saying exactly what these will be.
We also asked about the Russian element but Instagram was unable to provide any intelligence on why a big proportion of the fake Stories views seem to be coming from Russia (without any love). So that remains a bit of a mystery.
What can you do right now to prevent your Instagram Stories from being repurposed as a virtue-less signalling machine for sucking up naive eyeballs?
Switching your profile to private is the only way to thwart the growth hackers, for now.
Albeit, that means you’re limiting who you can reach on the Instagram platform as well as who can reach you.
When we suggested to Hitchcock she switch her account to private she responded with a shrug, saying: “I like to engage with brands.”
A section in the policy on how the company uses personal data now reads (emphasis ours):
Our processing of personal data for these purposes includes both automated and manual (human) methods of processing. Our automated methods often are related to and supported by our manual methods. For example, our automated methods include artificial intelligence (AI), which we think of as a set of technologies that enable computers to perceive, learn, reason, and assist in decision-making to solve problems in ways that are similar to what people do. To build, train, and improve the accuracy of our automated methods of processing (including AI), we manually review some of the predictions and inferences produced by the automated methods against the underlying data from which the predictions and inferences were made. For example, we manually review short snippets of a small sampling of voice data we have taken steps to de-identify to improve our speech services, such as recognition and translation.
Multiple tech giants’ use of human workers to review users’ audio across a number of products involving AI has grabbed headlines in recent weeks after journalists exposed a practice that had not been clearly conveyed to users in terms and conditions — despite European privacy law requiring clarity about how people’s data is used.
Such workers are typically employed to improve the performance of AI systems by verifying translations and speech in different accents. But, again, this human review component within AI systems has generally been buried rather than transparently disclosed.
Earlier this month a German privacy watchdog told Google it intended to use EU privacy law to order it to halt human reviews of audio captured by its Google Assistant AI in Europe — after the press had obtained leaked audio snippets and been able to re-identify some of the people in the recordings.
On learning of the regulator’s planned intervention Google suspended reviews.
Apple also announced it was suspending human reviews of Siri snippets globally, again after a newspaper reported that its contractors could access audio and routinely heard sensitive stuff.
Facebook also said it was pausing human reviews of a speech-to-text AI feature offered in its Messenger app — again after concerns had been raised by journalists.
So far Apple, Google and Facebook have suspended or partially suspended human reviews in response to media disclosures and/or regulatory attention.
Meanwhile, the lead privacy regulator for all three, Ireland’s DPC, has started asking questions.
Microsoft told Motherboard it is not suspending human reviews at this stage.
Users of Microsoft’s voice assistant can delete recordings, but such deletions require action from the user and would be needed on a rolling basis for as long as the product continues being used. So it’s not the same as a full and blanket opt-out.
We’ve asked Microsoft whether it intends to offer Skype or Cortana users an opt out of their recordings being reviewed by humans.
The company told Motherboard it will “continue to examine further steps we might be able to take”.
U.S. stock markets plummeted today as recession fears continue to grow.
Yesterday’s good news about a reprieve on tariffs for U.S. consumer imports was undone by increasing concerns over economic indicators pointing to a potential global recession coming within the next year.
The Dow Jones Industrial Average dropped more than 800 points on Wednesday — its largest decline of the year — while the S&P 500 fell by 85 points and the tech-heavy Nasdaq dropped 240 points.
The downturn in the markets came a day after the Dow closed up 373 points after the U.S. Trade Representative announced a delay in many of the import taxes the Trump administration planned to impose on Chinese goods.
In the U.S., the trigger was news that the yield on 10-year U.S. Treasury notes had dipped below the yield on two-year notes. An inverted yield curve like this is an indicator that investors think a country’s short-term economic prospects are worse than its long-term outlook, which is why yields climb higher for short-term debt than for long-term debt.
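The inversion signal described above boils down to a simple comparison. A minimal sketch in Python (the yields below are hypothetical placeholders, not the actual figures from that day):

```python
# Sketch of the yield-curve inversion check described above.
# These yields are hypothetical placeholders, not the actual August 2019 figures.
yields = {"2y": 1.63, "10y": 1.62}  # percent (hypothetical)

def is_inverted(two_year: float, ten_year: float) -> bool:
    """The curve is 'inverted' when short-term debt yields more than long-term debt."""
    return ten_year < two_year

# The "2s10s spread" watched by markets is just the 10-year yield minus the 2-year yield;
# a negative spread means inversion.
spread = yields["10y"] - yields["2y"]
print(f"2s10s spread: {spread:+.2f} pp; inverted: {is_inverted(yields['2y'], yields['10y'])}")
```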
China’s industrial and retail sectors both slowed significantly in July. Industrial production, including manufacturing, mining and utilities, grew by 4.8% in July (a steep decline from 6.3% growth in June). Meanwhile, retail sales in the country slowed to 7.6%, down from 9.8% in June.
Germany also posted declines over the summer months, indicating that its economy had contracted by 0.1% in the three months leading to June.
Globally, the protracted trade war between the U.S. and China is weighing on economies, as are concerns about what a hard Brexit would mean for economies in the European Union.
The stocks of Alphabet, Amazon, Apple, Facebook, Microsoft, Netflix and Salesforce were all off by somewhere between 2.5% and 4.5% in today’s trading.
New research into how European consumers interact with the cookie consent mechanisms which have proliferated since a major update to the bloc’s online privacy rules last year casts an unflattering light on widespread manipulation of a system that’s supposed to protect consumer rights.
As Europe’s General Data Protection Regulation (GDPR) came into force in May 2018, bringing in a tough new regime of fines for non-compliance, websites responded by popping up legal disclaimers which signpost visitor tracking activities. Some of these cookie notices even ask for consent to track you.
But many don’t — even now, more than a year later.
The study, which looked at how consumers interact with different designs of cookie pop-ups and how various design choices can nudge and influence people’s privacy choices, also suggests consumers are suffering a degree of confusion about how cookies function, as well as being generally mistrustful of the term ‘cookie’ itself. (With such baked in tricks, who can blame them?)
The researchers conclude that if consent to drop cookies were being collected in a way that’s compliant with the EU’s existing privacy laws, only a tiny fraction of consumers would agree to be tracked.
The paper, which we’ve reviewed in draft ahead of publication, is co-authored by academics at Ruhr-University Bochum, Germany, and the University of Michigan in the US — and entitled: (Un)informed Consent: Studying GDPR Consent Notices in the Field.
The researchers ran a number of studies, gathering ~5,000 cookie notices from screengrabs of leading websites to compile a snapshot (derived from a random sub-sample of 1,000) of the different cookie consent mechanisms in play, in order to paint a picture of current implementations.
They also worked with a German ecommerce website over a period of four months to study how more than 82,000 unique visitors to the site interacted with various cookie consent designs, which the researchers tweaked in order to explore how different defaults and design choices affected individuals’ privacy choices.
Their industry snapshot of cookie consent notices found that the majority are placed at the bottom of the screen (58%); not blocking the interaction with the website (93%); and offering no options other than a confirmation button that does not do anything (86%). So no choice at all then.
A majority also try to nudge users towards consenting (57%) — such as by using ‘dark pattern’ techniques like using a color to highlight the ‘agree’ button (which if clicked accepts privacy-unfriendly defaults) vs displaying a much less visible link to ‘more options’ so that pro-privacy choices are buried off screen.
The GDPR updated the EU’s long-standing digital privacy framework, with key additions including tightening the rules around consent as a legal basis for processing people’s data — which the regulation says must be specific (purpose limited), informed and freely given for consent to be valid.
Even so, since May last year there has been a proliferation of cookie ‘consent’ mechanisms popping up or sliding atop websites that still don’t offer EU visitors the necessary privacy choices, per the research.
“Given the legal requirements for explicit, informed consent, it is obvious that the vast majority of cookie consent notices are not compliant with European privacy law,” the researchers argue.
“Our results show that a reasonable amount of users are willing to engage with consent notices, especially those who want to opt out or do not want to opt in. Unfortunately, current implementations do not respect this and the large majority offers no meaningful choice.”
The researchers also record a large differential in interaction rates with consent notices — of between 5 and 55% — generated by tweaking positions, options, and presets on cookie notices.
This is where consent gets manipulated — to flip visitors’ preference for privacy.
“The results show that nudges and pre-selection had a high impact on user decisions, confirming previous work,” the researchers write. “It also shows that the GDPR requirement of privacy by default should be enforced to make sure that consent notices collect explicit consent.”
Here’s a section from the paper discussing what they describe as “the strong impact of nudges and pre-selections”:
Overall the effect size between nudging (as a binary factor) and choice was CV=0.50. For example, in the rather simple case of notices that only asked users to confirm that they will be tracked, more users clicked the “Accept” button in the nudge condition, where it was highlighted (50.8% on mobile, 26.9% on desktop), than in the non-nudging condition where “Accept” was displayed as a text link (39.2% m, 21.1% d). The effect was most visible for the category-and vendor-based notices, where all checkboxes were pre-selected in the nudging condition, while they were not in the privacy-by-default version. On the one hand, the pre-selected versions led around 30% of mobile users and 10% of desktop users to accept all third parties. On the other hand, only a small fraction (< 0.1%) allowed all third parties when given the opt-in choice and around 1 to 4 percent allowed one or more third parties (labeled “other” in 4). None of the visitors with a desktop allowed all categories. Interestingly, the number of non-interacting users was highest on average for the vendor-based condition, although it took up the largest part of any screen since it offered six options to choose from.
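The “CV” in the quoted passage is Cramér’s V, a standard effect-size measure for the association between two categorical variables (here, nudging condition vs. user choice). As a rough illustration only, here is how it can be computed from a contingency table. The counts below are hypothetical, constructed from the quoted percentages rather than taken from the study’s raw data, so the resulting value will not match the paper’s 0.50:

```python
import math

def cramers_v(table):
    """Cramér's V from a contingency table given as a list of rows of counts."""
    rows, cols = len(table), len(table[0])
    n = sum(sum(r) for r in table)
    row_tot = [sum(r) for r in table]
    col_tot = [sum(table[i][j] for i in range(rows)) for j in range(cols)]
    # Pearson chi-squared statistic against the independence expectation
    chi2 = 0.0
    for i in range(rows):
        for j in range(cols):
            expected = row_tot[i] * col_tot[j] / n
            chi2 += (table[i][j] - expected) ** 2 / expected
    return math.sqrt(chi2 / (n * (min(rows, cols) - 1)))

# Hypothetical 2x2 counts per 1,000 users, loosely based on the quoted mobile
# figures: nudged users accepting 50.8% vs. non-nudged accepting 39.2%.
table = [[508, 492],   # nudged: accepted / did not accept
         [392, 608]]   # non-nudged: accepted / did not accept
print(round(cramers_v(table), 3))  # → 0.117
```

On these illustrative counts the association is small; the study’s larger CV=0.50 reflects its full dataset across all notice designs.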
The key implication is that just 0.1% of site visitors would freely choose to enable all cookie categories/vendors — i.e. when not being forced to do so by a lack of choice or via nudging with manipulative dark patterns (such as pre-selections).
That rises only a fraction, to between 1% and 4%, for users who would enable some cookie categories in the same privacy-by-default scenario.
“Our results… indicate that the privacy-by-default and purpose-based consent requirements put forth by the GDPR would require websites to use consent notices that would actually lead to less than 0.1% of active consent for the use of third parties,” they write in conclusion.
They do flag some limitations with the study, pointing out that the dataset that produced the 0.1% figure is biased, given that the nationality of visitors is not generally representative of public Internet users and that the data comes from a single retail site. But they supplemented their findings with data from Cookiebot, a company that provides cookie notices as a SaaS, whose data indicated a higher ‘accept all’ click rate, though still only a marginally higher one: just 5.6%.
Hence the conclusion that if European web users were given an honest and genuine choice over whether or not they get tracked around the Internet, the overwhelming majority would choose to protect their privacy by rejecting tracking cookies.
This is an important finding because GDPR is unambiguous in stating that if an Internet service is relying on consent as a legal basis to process visitors’ personal data it must obtain consent before processing data (so before a tracking cookie is dropped) — and that consent must be specific, informed and freely given.
Yet, as the study confirms, it really doesn’t take much clicking around the regional Internet to find a gaslighting cookie notice that pops up with a mocking message saying by using this website you’re consenting to your data being processed how the site sees fit — with just a single ‘Ok’ button to affirm your lack of say in the matter.
It’s also all too common to see sites that nudge visitors towards a big brightly colored ‘click here’ button to accept data processing — squirrelling any opt outs into complex sub-menus that can sometimes require hundreds of individual clicks to deny consent per vendor.
You can even find websites that gate their content entirely unless or until a user clicks ‘accept’ — aka a cookie wall. (A practice that has recently attracted regulatory intervention.)
Nor can the current mess of cookie notices be blamed on a lack of specific guidance on what a valid and therefore legal cookie consent looks like. At least not any more. Here, for example, is a myth-busting blog which the UK’s Information Commissioner’s Office (ICO) published last month that’s pretty clear on what can and can’t be done with cookies.
For instance on cookie walls the ICO writes: “Using a blanket approach such as this is unlikely to represent valid consent. Statements such as ‘by continuing to use this website you are agreeing to cookies’ is not valid consent under the higher GDPR standard.” (The regulator goes into more detailed advice here.)
While France’s data watchdog, the CNIL, also published its own detailed guidance last month — if you prefer to digest cookie guidance in the language of love and diplomacy.
(Those of you reading TechCrunch back in January 2018 may also remember this sage plain english advice from our GDPR explainer: “Consent requirements for processing personal data are also considerably strengthened under GDPR — meaning lengthy, inscrutable, pre-ticked T&Cs are likely to be unworkable.” So don’t say we didn’t warn you.)
Nor are Europe’s data protection watchdogs lacking in complaints about improper applications of ‘consent’ to justify processing people’s data.
Indeed, ‘forced consent’ was the substance of a series of linked complaints by the pro-privacy NGO noyb, which targeted T&Cs used by Facebook, WhatsApp, Instagram and Google Android immediately after GDPR started being applied in May last year.
While not cookie notice specific, this set of complaints speaks to the same underlying principle — i.e. that EU users must be provided with a specific, informed and free choice when asked to consent to their data being processed. Otherwise the ‘consent’ isn’t valid.
So far Google is the only company to be hit with a penalty as a result of that first wave of consent-related GDPR complaints; France’s data watchdog issued it a $57M fine in January.
But the Irish DPC confirmed to us that three of the 11 open investigations it has into Facebook and its subsidiaries were opened after noyb’s consent-related complaints. (“Each of these investigations are at an advanced stage and we can’t comment any further as these investigations are ongoing,” a spokeswoman told us. So, er, watch that space.)
The problem, where EU cookie consent compliance is concerned, looks to be both a failure of enforcement and a lack of regulatory alignment — the latter as a consequence of the ePrivacy Directive (which most directly concerns cookies) still not being updated, generating confusion (if not outright conflict) with the shiny new GDPR.
However the ICO’s advice on cookies directly addresses claimed inconsistencies between ePrivacy and GDPR, stating plainly that Recital 25 of the former (which states: “Access to specific website content may be made conditional on the well-informed acceptance of a cookie or similar device, if it is used for a legitimate purpose”) does not, in fact, sanction gating your entire website behind an ‘accept or leave’ cookie wall.
Here’s what the ICO says on Recital 25 of the ePrivacy Directive:
So no cookie wall; and no partial walls that force a user to agree to ad targeting in order to access the content.
It’s worth pointing out that other types of privacy-friendly online advertising are available with which to monetize visits to a website. (And research suggests targeted ads offer only a tiny premium over non-targeted ads, even as publishers choosing a privacy-hostile ads path must now factor the costs of data protection compliance into their calculations, as well as the cost and risk of massive GDPR fines if their security fails or they’re found to have violated the law.)
Negotiations to replace the now very long-in-the-tooth ePrivacy Directive — with an up-to-date ePrivacy Regulation which properly takes account of the proliferation of Internet messaging and all the ad tracking techs that have sprung up in the interim — are the subject of very intense lobbying, including from the adtech industry desperate to keep a hold of cookie data. But EU privacy law is clear.
“[Cookie consent]’s definitely broken (and has been for a while). But the GDPR is only partly to blame, it was not intended to fix this specific problem. The uncertainty of the current situation is caused [by] the delay of the ePrivacy regulation that was put on hold (thanks to lobbying),” says Martin Degeling, one of the research paper’s co-authors, when we suggest European Internet users are being subjected to a lot of ‘consent theatre’ (i.e. noisy yet non-compliant cookie notices) that in turn causes knock-on problems of consumer mistrust and consent fatigue, working against the core aims of the EU’s data protection framework.
“Consent fatigue and mistrust is definitely a problem,” he agrees. “Users that have experienced that clicking ‘decline’ will likely prevent them from using a site are likely to click ‘accept’ on any other site just because of one bad experience and regardless of what they actually want (which is in most cases: not be tracked).”
“We don’t have strong statistical evidence for that but users reported this in the survey,” he adds, citing a poll the researchers also ran asking site visitors about their privacy choices and general views on cookies.
Degeling says he and his co-authors are in favor of a consent mechanism that would enable web users to specify their choice at the browser level, rather than the current mess and chaos of perpetual, confusing and often non-compliant per-site pop-ups, although he points out some caveats.
“DNT [Do Not Track] is probably also not GDPR compliant as it only knows one purpose. Nevertheless something similar would be great,” he tells us. “But I’m not sure if shifting the responsibility to browser vendors to design an interface through which they can obtain consent will lead to the best results for users — the interfaces that we see now, e.g. with regard to cookies, are not a good solution either.
“And the conflict of interest for Google with Chrome are obvious.”
The EU’s unfortunate regulatory snafu around privacy — in that it now has one modernized, world-class privacy regulation butting up against an outdated directive (whose progress keeps being blocked by vested interests intent on being able to continue steamrollering consumer privacy) — likely goes some way to explaining why Member States’ data watchdogs have generally been loath, so far, to show their teeth where the specific issue of cookie consent is concerned.
At least for an initial period the hope among data protection agencies (DPAs) was likely that ePrivacy would be updated and so they should wait and see.
They have also undoubtedly been providing data processors with time to get their data houses and cookie consents in order. But the frictionless interregnum while GDPR was allowed to ‘bed in’ looks unlikely to last much longer.
Firstly because a law that’s not enforced isn’t worth the paper it’s written on (and EU fundamental rights are a lot older than the GDPR). Secondly, with the ePrivacy update still blocked DPAs have demonstrated they’re not just going to sit on their hands and watch privacy rights be rolled back — hence them putting out guidance that clarifies what GDPR means for cookies. They’re drawing lines in the sand, rather than waiting for ePrivacy to do it (which also guards against the latter being used by lobbyists as a vehicle to try to attack and water down GDPR).
And, thirdly, Europe’s political institutions and policymakers have been dining out on the geopolitical attention their shiny privacy framework (GDPR) has attained.
Much has been made at the highest levels in Europe of US counterparts being caught on the hop by ongoing tech privacy and security scandals, with EU policymakers savoring the schadenfreude of watching American lawmakers forced to ask publicly whether it’s time for the US to have its own GDPR.
With its extraterritorial scope, GDPR was always intended to stamp Europe’s rule-making prowess on the global map. EU lawmakers will feel they can comfortably check that box.
However, they are also aware the world is watching closely and critically, which makes enforcement a key piece that must slot into place too. They need the GDPR to work on paper and be seen to be working in practice.
So the current cookie mess is a problematic signal which risks signposting regulatory failure — and that simply isn’t sustainable.
A spokesperson for the European Commission told us it cannot comment on specific research but said: “The protection of personal data is a fundamental right in the European Union and a topic the Juncker commission takes very seriously.”
“The GDPR strengthens the rights of individuals to be in control of the processing of personal data, it reinforces the transparency requirements in particular on the information that is crucial for the individual to make a choice, so that consent is given freely, specific and informed,” the spokesperson added.
“Cookies, insofar as they are used to identify users, qualify as personal data and are therefore subject to the GDPR. Companies do have a right to process their users’ data as long as they receive consent or if they have a legitimate interest.”
All of which suggests that the movement, when it comes, must come from a reforming adtech industry.
With robust privacy regulation in place the writing is now on the wall for unfettered tracking of Internet users for the kind of high velocity, real-time trading of people’s eyeballs that the ad industry engineered for itself when no one knew what was being done with people’s data.
GDPR has already brought greater transparency. Once Europeans are no longer forced to trade away their privacy it’s clear they’ll vote with their clicks not to be ad-stalked around the Internet too.
The current chaos of non-compliant cookie notices is thus a signpost pointing at an underlying privacy lag — and likely also the last gasp signage of digital business models well past their sell-by-date.
The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.
The White House has prepared an executive order called “Protecting Americans from Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.
Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos or articles on their platforms.
The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.
Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.
In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.
As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.
Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest.
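For a rough sense of scale, the reported one-eighth threshold works out as follows (assuming a ~328 million 2019 U.S. population estimate, a figure the article itself does not give):

```python
# Hypothetical back-of-the-envelope for the "one-eighth of the U.S. population"
# threshold, using an assumed ~328M 2019 population (not stated in the article).
us_population = 328_000_000  # assumption
threshold_monthly_users = us_population // 8
print(threshold_monthly_users)  # → 41000000
```

That is, a company would need roughly 41 million monthly U.S. users to fall under the order, a bar all of the named companies comfortably clear.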
At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos or articles that are uploaded from individual users or third parties.
The Trump administration isn’t alone among politicians in Washington in focusing on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.
The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.
The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.
The FTC and FCC had not responded to a request for comment at the time of publication.
Sometimes it does seem the entire tech industry could use someone to talk to, like a good therapist or social worker. That might sound like an insult, but I mean it mostly earnestly: I am a chaplain who has spent 15 years talking with students, faculty, and other leaders at Harvard (and more recently MIT as well), mostly nonreligious and skeptical people like me, about their struggles to figure out what it means to build a meaningful career and a satisfying life, in a world full of insecurity, instability, and divisiveness of every kind.
In related news, I recently took a year-long paid sabbatical from my work at Harvard and MIT, to spend 2019-20 investigating the ethics of technology and business (including by writing this column at TechCrunch). I doubt it will shock you to hear I’ve encountered a lot of amoral behavior in tech, thus far.
A less expected and perhaps more profound finding, however, has been what the introspective founder Priyag Narula of LeadGenius tweeted at me recently: that behind the hubris and Machiavellianism one can find in tech companies is a constant struggle with anxiety and an abiding feeling of inadequacy among tech leaders.
In tech, just like at places like Harvard and MIT, people are stressed. They’re hurting, whether or not they even realize it.
So when Harvard’s Berkman Klein Center for Internet and Society recently posted an article whose headline began, “Why AI Needs Social Workers…”… it caught my eye.
The article, it turns out, was written by Desmond Patton, a public interest technologist and a pioneer in the use of social media and artificial intelligence in the study of gun violence. Patton is the founding director of SAFElab and an associate professor of Social Work, Sociology and Data Science at Columbia University.
A trained social worker and decorated social work scholar, Patton has also become a big name in AI circles in recent years. If Big Tech ever decided to hire a Chief Social Work Officer, he’d be a sought-after candidate.
It further turns out that Patton’s expertise — in online violence & its relationship to violent acts in the real world — has been all too “hot” a topic this past week, with mass murderers in both El Paso, Texas and Dayton, Ohio having been deeply immersed in online worlds of hatred which seemingly helped lead to their violent acts.
Fortunately, we have Patton to help us understand all of these issues. Here is my conversation with him: on violence and trauma in tech on and offline, and how social workers could help; on deadly hip-hop beefs and “Internet Banging” (a term Patton coined); hiring formerly gang-involved youth as “domain experts” to improve AI; how to think about the likely growing phenomenon of white supremacists live-streaming barbaric acts; and on the economics of inclusion across tech.
Greg Epstein: How did you end up working in both social work and tech?
Desmond Patton: At the heart of my work is an interest in root causes of community-based violence, so I’ve always identified as a social worker that does violence-based research. [At the University of Chicago] my dissertation focused on how young African American men navigated violence in their community on the west side of the city while remaining active in their school environment.
[From that work] I learned more about the role of social media in their lives. This was around 2011, 2012, and one of the things that kept coming through in interviews with these young men was how social media was an important tool for navigating both safe and unsafe locations, but also an environment that allowed them to project a multitude of selves. To be a school self, to be a community self, to be who they really wanted to be, to try out new identities.
Facebook is facing exposure to billions of dollars in potential damages as a federal appeals court on Thursday rejected Facebook’s arguments to halt a class action lawsuit claiming it illegally collected and stored the biometric data of millions of users.
The class action lawsuit has been working its way through the courts since 2015, when Illinois Facebook users sued the company for alleged violations of the state’s Biometric Information Privacy Act by automatically collecting and identifying people in photographs posted to the service.
Now, thanks to a unanimous decision from the 9th U.S. Circuit Court of Appeals in San Francisco, the lawsuit can proceed.
The most significant language from the decision from the circuit court seems to be this:
We conclude that the development of a face template using facial-recognition technology without consent (as alleged here) invades an individual’s private affairs and concrete interests. Similar conduct is actionable at common law.
The American Civil Liberties Union came out in favor of the court’s ruling.
“This decision is a strong recognition of the dangers of unfettered use of face surveillance technology,” said Nathan Freed Wessler, staff attorney with the ACLU Speech, Privacy, and Technology Project, in a statement. “The capability to instantaneously identify and track people based on their faces raises chilling potential for privacy violations at an unprecedented scale. Both corporations and the government are now on notice that this technology poses unique risks to people’s privacy and safety.”
As April Glaser noted in Slate, Facebook already may have the world’s largest database of faces, and that’s something that should concern regulators and privacy advocates.
“Facebook wants to be able to certify identity in a variety of areas of life just as it has been trying to corner the market on identity verification on the web,” Siva Vaidhyanathan told Slate in an interview. “The payoff for Facebook is to have a bigger and broader sense of everybody’s preferences, both individually and collectively. That helps it not only target ads but target and develop services, too.”
That could apply to facial recognition technologies as well. Facebook, thankfully, doesn’t sell its facial recognition data to other people, but it does allow companies to use its data to target certain populations. It also allows people to use its information for research and to develop new services that could target Facebook’s billion-strong population of users.
As our own Josh Constine noted in an article about the company’s planned cryptocurrency wallet, the developer community poses as much of a risk to how Facebook’s products and services are used and abused as Facebook itself.
Facebook has said that it plans to appeal the decision. “We have always disclosed our use of face recognition technology and that people can turn it on or off at any time,” a spokesman said in an email to Reuters.
Now, the lawsuit will return for a possible trial to U.S. District Judge James Donato in San Francisco, who approved the class action last April.
Under the privacy law in Illinois, negligent violations could be subject to damages of up to $1,000, and intentional violations of privacy are subject to up to $5,000 in penalties. For the potential 7 million Facebook users who could be included in the lawsuit, those figures could amount to real money.
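A back-of-the-envelope sketch of that exposure, assuming (purely hypothetically) that every one of the roughly 7 million potential class members recovered for a single violation each:

```python
# Hypothetical statutory exposure under Illinois' BIPA, using the
# per-violation figures from the article. Real recovery would depend
# on class certification, trial outcome, and per-user violation counts.
CLASS_SIZE = 7_000_000
NEGLIGENT = 1_000    # dollars per negligent violation
INTENTIONAL = 5_000  # dollars per intentional violation

low = CLASS_SIZE * NEGLIGENT
high = CLASS_SIZE * INTENTIONAL
print(f"${low:,} to ${high:,}")  # $7,000,000,000 to $35,000,000,000
```

Even at the low end, that is consistent with the "billions of dollars in potential damages" the lawsuit exposes Facebook to.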
“BIPA’s innovative protections for biometric information are now enforceable in federal court,” added Rebecca Glenberg, senior staff attorney at the ACLU of Illinois. “If a corporation violates a statute by taking your personal information without your consent, you do not have to wait until your data is stolen or misused to go to court. As our General Assembly understood when it enacted BIPA, a strong enforcement mechanism is crucial to hold companies accountable when they violate our privacy laws. Corporations that misuse Illinoisans’ sensitive biometric data now do so at their own peril.”
These civil damages could come on top of the penalty Facebook has agreed to pay the U.S. government for violating its agreement with the Federal Trade Commission over its handling of private user data: a $5 billion payout, one of the single largest penalties levied against a U.S. technology company, though still subject to approval by the Justice Department.
Welcome to this transcribed edition of The Operators. TechCrunch is beginning to publish podcasts from industry experts, with transcriptions available for Extra Crunch members so you can read the conversation wherever you are.
The Operators features insiders from companies like Airbnb, Brex, Docsend, Facebook, Google, Lyft, Carta, Slack, Uber, and WeWork sharing their stories and tips on how to break into fields like marketing and product management. They also share best practices for entrepreneurs on how to hire and manage experts from domains outside their own.
Airbnb, one of the most valuable private tech companies in the world, has millions of hosts who trust strangers (guests) to come into their homes and hundreds of millions of guests who trust strangers (hosts) to provide a roof over their head. Carta, a $1 Billion+ company formerly known as eShares, is the leading provider of cap table management and valuation software, with thousands of customers and almost a million individual shareholders as users. Customers and users entrust Carta to manage their investments, a very serious responsibility requiring trust and security.
In this episode, Andy and Jared share with Neil how companies like Airbnb, Carta, and LinkedIn think about customer service, how to get into and succeed in the field and in tech generally, and how founders should think about hiring and managing customer support. With their experiences at two of tech’s most trusted companies, Airbnb and Carta, this episode is packed with broad perspectives and deep insights.
Neil Devani and Tim Hsia created The Operators after seeing and hearing too many heady, philosophical podcasts about the future of tech, and not enough attention on the practical day-to-day work that makes it all happen.
Tim is the CEO & Founder of Media Mobilize, a media company and ad network, and a Venture Partner at Digital Garage. Tim is an early-stage investor in Workflow (acquired by Apple), Lime, FabFitFun, Oh My Green, Morning Brew, Girls Night In, The Hustle, Bright Cellars, and others.
Neil is an early-stage investor based in San Francisco with a focus on companies building stuff people need, solutions to very hard problems. Companies he’s invested in include Andela, Clearbit, Kudi, Recursion Pharmaceuticals, Solugen, and Vicarious Surgical.
If you’re interested in starting or accelerating your career in customer service, or in how to hire and manage this function, you can’t miss this episode!
In Episode 5, we’re talking about customer service. Neil interviews Andy Yasutake, Airbnb’s Global Product Director of Customer and Community Support Platform Products, and Jared Thomas, Carta’s Head of Enterprise Relationship Management.
Neil Devani: Hello and welcome to the Operators, where we talk to entrepreneurs and executives from leading technology companies like Google, Facebook, Airbnb, and Carta about how to break into a new field, how to build a successful career, and how to hire and manage talent beyond your own expertise. We skip over the lofty prognostications from venture capitalists and storytime with founders to dig into the nuts and bolts of how it all works here from the people doing the real day to day work, the people who make it all happen, the people who know what it really takes. The Operators.
Today we are talking to two experts in customer service, one with hundreds of millions of individual paying customers and the other being the industry standard for managing equity investments. I’m your host, Neil Devani, and we’re coming to you today from Digital Garage in downtown San Francisco.
Joining me is Jared Thomas, head of Enterprise Relationship Management at Carta, a $1 billion-plus company after a recent round of financing led by Andreessen Horowitz. Carta, formerly known as eShares, is the leading provider of cap table management and valuation software with thousands of customers and almost a million individual shareholders as users. Customers and users trust Carta to manage their investments, a very serious responsibility requiring trust and security.
Also joining us is Andy Yasutake, the Global Product Director of Customer and Community Support Platform Products at Airbnb, one of the most valuable private tech startups today. Airbnb has millions of hosts who are trusting strangers to come into their homes and hundreds of millions of guests who are trusting someone to provide a roof over their head. The number of cases and types of cases that Andy and his team have to think about and manage boggle the mind. Jared and Andy, thank you for joining us.
Andy Yasutake: Thank you for having us.
Jared Thomas: Thank you so much.
Devani: To start, Andy, can you share your background and how you got to where you are today?
Yasutake: Sure. I’m originally from southern California. I was born and raised in LA. I went to USC for undergrad, University of Southern California, and I actually studied psychology and information systems.
Late-90s, the dot-com boom was going on, I’d always been kind of interested in tech, so I went into management consulting at Andersen Consulting, which became Accenture, and was in consulting for over 10 years. I always worked on large systems implementations of technology projects around customers: customer service, sales transformation, anything around CRM as kind of a foundation. It was always very technical, but I really loved the psychology part of it, the people side.
And so I was always on multiple consulting projects, and one of the consulting projects was actually here in the Bay Area. I eventually moved up here 10 years ago and joined eBay, where I was the director of product for the customer services organization as well. I was there for five years.
I left for LinkedIn, another rocket ship that was growing, where I was the senior director of technology solutions and operations; I had all of the business-enabling functions as well as the technology. Now I’ve been at Airbnb for about four months, so I’m back to my biggest passion: products in the customer support and community experience and customer service world.
Hyp3r, an apparently trusted marketing partner of Facebook and Instagram, has been secretly collecting and storing location and other data on millions of users, against the policies of the social networks, Business Insider reported today. It’s hard to see how it could have done this for years without intervention unless the platforms were either ignorant or complicit.
After BI informed Instagram, the company confirmed that Hyp3r (styled HYP3R) had violated its policies and has now been removed from the platform. In a statement to TechCrunch, a Facebook spokesperson confirmed the report, saying:
HYP3R’s actions were not sanctioned and violate our policies. As a result, we’ve removed them from our platform. We’ve also made a product change that should help prevent other companies from scraping public location pages in this way.
The company started several years ago as a platform via which advertisers could target users attending a given event, like a baseball game or concert. Originally it used Instagram’s official API to hoover up data, the kind of data-gathering that unsavory firms in tech have been doing for years, most infamously Cambridge Analytica.
The idea of getting an ad because you’re at a ball game isn’t so scary, but if the company maintains a persistent record not just of your exact locations, but objects in your photos and types of places you visit, in order to combine that with other demographics and build a detailed shadow profile… well, that’s a little scary. And so Hyp3r’s business model evolved.
Unfortunately for Hyp3r, the API was severely restricted in early 2018, limiting its access to location and user data. Although we heard reports that this led to layoffs at the company around the time, the company seems to have survived (and raised millions shortly afterwards) not by adapting its business model, but by sneaking around the apparently quite minimal barriers Instagram put in place to prevent location data from being scraped.
Some of this was done by taking advantage of Instagram’s Location pages, which would serve up public accounts visiting them to anyone who asked, logged in or not. (This was one of the features turned off today by Instagram.)
According to BI’s report, Hyp3r built tools to circumvent limitations on both location collection and saving of personal accounts’ stories — content meant to disappear after 24 hours. If a user posted anything at one of thousands of locations and regions monitored by Hyp3r, their data would be sucked up and added to their shadow profile.
To be clear, it only collected information from public stories and accounts. Naturally these people opted out of a certain amount of privacy by choosing a public account, but as the Cambridge Analytica case and others have shown, no one expects or should have to expect that their data is being secretly and systematically assembled into a personal profile by a company they’ve never heard of.
Facebook and Instagram, however, had definitely heard of Hyp3r. In fact, Hyp3r could until today be found in the official Facebook Marketing Partners directory, a curated list of companies it recommends for various tasks and services that advertisers might need.
And Hyp3r has been quite clear about what it is doing, though not about the methods by which it is doing it. It wasn’t a secret that the company was building profiles based around tracking locations and brands — that was presumably what Facebook listed it for. It was only when this report surfaced that Hyp3r had its Facebook Marketing Partner privileges rescinded.
It’s unclear how Hyp3r could exist as a privileged member of Facebook’s stable of recommended companies and simultaneously be in such blatant violation of its policies. If these partners receive even cursory reviews of their products and methods, wouldn’t it have been obvious to any informed auditor that there was no legitimate source for the location and other data that Hyp3r was collecting? Wouldn’t it have been obvious that it was engaging in Automated Data Collection, which is specifically prohibited without Facebook’s permission?
I’ve asked Facebook for more detail on how and when its Marketing Partners are reviewed, and how this seemingly fundamental violation of the prohibition against automated data collection could have gone undetected for so long.
Zendesk has always been all about customer service. Last spring it purchased Smooch to move more deeply into messaging app integration. Today, the company announced it was integrating WhatsApp, the popular messaging tool, into the Zendesk customer service toolkit.
Smooch was an early participant in the WhatsApp Business API program. What that does in practice, says Warren Levitan, who came over as part of the Smooch deal, is to provide a direct WhatsApp phone number for businesses using Zendesk. Given how many people, especially in Asia and Latin America, use WhatsApp as a primary channel for communication, this is a big deal.
“The WhatsApp Business API Connector is now fully integrated into Zendesk support. It will allow any Zendesk support customer to be up and running with a new WhatsApp number quicker than ever before, allowing them to connect to the 1.5 billion WhatsApp users worldwide, communicating with them on their channel of choice,” Levitan explained.
Levitan says the entire WhatsApp interaction experience is now fully integrated into the same Zendesk interface that customer service reps are used to using. WhatsApp simply becomes another channel for them.
“They can access WhatsApp conversations from within the same workspace and agent desktop, where they handle all of their other conversations. From an agent perspective, there are no new tools, no new workflows, no new reporting. And that’s what really allows them to get up and running quickly,” he said.
Customers may click or touch a button to dial the WhatsApp number, or they may use a QR code, which is a popular way of accessing WhatsApp customer service. As an example, Levitan says Four Seasons hotels prints a QR code on room key cards, and if customers want to access customer service, they can simply scan the code and the number dials automatically.
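Such a QR code typically just encodes a WhatsApp "click-to-chat" URL, which WhatsApp documents under the wa.me domain. A minimal sketch of building one (the phone number and message here are made-up examples, not Four Seasons' actual number):

```python
# Sketch of a WhatsApp "click-to-chat" link of the kind a customer
# service QR code would typically encode. wa.me is WhatsApp's
# documented click-to-chat domain.
from urllib.parse import quote

def whatsapp_link(phone_e164: str, text: str = "") -> str:
    """Build a wa.me link from an E.164 number like '+14155550123'."""
    digits = phone_e164.lstrip("+")  # wa.me takes the number without '+'
    if not digits.isdigit():
        raise ValueError("expected an E.164 number, e.g. '+14155550123'")
    url = f"https://wa.me/{digits}"
    if text:
        url += "?text=" + quote(text, safe="")  # optional pre-filled message
    return url

print(whatsapp_link("+14155550123", "Hi, I need help with my room"))
```

Scanning a QR code containing that URL opens a chat with the business's number, with the message pre-filled.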
Zendesk has been able to get 1,000 businesses up and running as part of the early access program, but now it really wants to scale that and allow many more businesses to participate. Up until now, Facebook has taken a controlled approach to on-boarding, approving each brand’s number before allowing it on the platform. Zendesk has been working to streamline that.
“We’ve worked tightly with Facebook (the owner of WhatsApp), so that we can have an integrated brand approval and on-boarding/activation to get their number lit up. We can now launch customers at scale, and have them up and running in days, whereas before it was more typically a multi-week process,” Levitan said.
For now, when a person connects to customer service via WhatsApp, it’s only via text messaging; there is no voice connection, and no plans for any for the time being, according to Levitan. Zendesk-WhatsApp integration is available starting today worldwide.
Facebook has filed lawsuits against two app developers accused of generating fraudulent revenue using the social media giant’s advertising platform.
The company announced the legal action in a blog post Tuesday.
“The developers made apps available on the Google Play store to infect their users’ phones with malware,” said Jessica Romero, director of platform enforcement and litigation. “The malware created fake user clicks on Facebook ads that appeared on the users’ phones, giving the impression that the users had clicked on the ads.”
The scheme uses a technique known as click injection, which relies on apps fraudulently generating ad clicks without the user’s knowledge to artificially inflate the amount of ad revenue. It’s a problem previously noted by security researchers. Often, developers create junk or easy-to-make apps which get downloaded millions of times, while in the background they’re clicking on invisible ads without the user’s knowledge.
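One common way anti-fraud teams screen for click injection (this is a general industry heuristic, not a method the article attributes to Facebook) is to examine click-to-install time: injected clicks are fired by malware while the download is already underway, so the attributed click lands implausibly close to the install event. A minimal sketch, with an illustrative threshold:

```python
# Hypothetical click-to-install-time (CTIT) screen for click injection.
# Legitimate installs usually take a while after the ad click; injected
# clicks arrive only seconds before the install is reported.
from dataclasses import dataclass

@dataclass
class Attribution:
    click_ts: float    # unix seconds of the attributed ad click
    install_ts: float  # unix seconds of the reported install

def looks_injected(a: Attribution, min_plausible_ctit: float = 10.0) -> bool:
    """Flag attributions whose click-to-install gap is suspiciously short."""
    ctit = a.install_ts - a.click_ts
    return 0 <= ctit < min_plausible_ctit

print(looks_injected(Attribution(click_ts=1000.0, install_ts=1002.0)))  # True
print(looks_injected(Attribution(click_ts=1000.0, install_ts=1300.0)))  # False
```

In practice, fraud teams look at the whole distribution of these gaps per app rather than a single fixed cutoff.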
Facebook said in this case two developers, LionMobi, based in Hong Kong, and JediMobi, based in Singapore, generated “unearned payouts” from the social media giant’s advertisement system.
By our count, the app developers have seen more than 207 million installs to date. The apps remain on Google’s app store. Google did not immediately comment.
Facebook said it refunded impacted advertisers.
A Facebook spokesperson did not immediately respond to a request for comment.
In June, both Facebook and eBay were warned by the UK’s Competition and Markets Authority (CMA) that they needed to do more to tackle the sale of fake product reviews. On eBay, sellers were offering batches of five-star product reviews in exchange for cash, while Facebook’s platform was found hosting multiple groups where members solicited writers of fake reviews in exchange for free products or cash (or both).
A follow-up look at the two platforms by Which? has found a “significant improvement” in the number of eBay listings selling five-star reviews — with the group saying it found just one listing selling five-star reviews after the CMA’s intervention.
But little appears to have been done to prevent Facebook groups trading in fake reviews — with Which? finding dozens of Facebook groups that it said “continue to encourage incentivised reviews on a huge scale”.
Here’s a sample ad we found doing a ten-second search of Facebook groups… (one of a few we saw that specify they’re after US reviewers)
Which? says it found more than 55,000 new posts across just nine Facebook groups trading fake reviews in July, which it said were generating hundreds “or even thousands” of posts per day.
It points out the true figure is likely to be higher because Facebook caps the number of posts it quantifies at 10,000 (and three of the ten groups had hit that ceiling).
Which? also found Facebook groups trading fake reviews that had sharply increased their membership over a 30-day period, adding that it was “disconcertingly easy to find dozens of suspicious-looking groups in minutes”.
We also found a quick search of Facebook’s platform instantly serves a selection of groups soliciting product reviews…
Which? says it looked in detail at ten groups (it doesn’t name them), all of which contained the word ‘Amazon’ in their group name. It found that all of them had seen their membership rise over a 30-day period, with some seeing big spikes in members.
“One Facebook group tripled its membership over a 30-day period, while another (which was first started in April 2018) saw member numbers double to more than 5,000,” it writes. “One group had more than 10,000 members after 4,300 people joined it in a month — a 75% increase, despite the group existing since April 2017.”
Which? speculates that the surge in Facebook group members could be a direct result of eBay cracking down on fake reviews sellers on its own platform.
“In total, the 10 [Facebook] groups had a staggering 105,669 members on 1 August, compared with a membership of 85,647 just 30 days prior to that — representing an increase of nearly 19%,” it adds.
Across the ten groups it says there were more than 3,500 new posts promoting incentivised reviews in a single day. Which? also notes that Facebook’s algorithm regularly recommended similar groups to those that appeared to be trading in fake reviews on the ‘suggested for you’ page.
It also says it found admins of groups it joined listing alternative groups to join in case the original is shut down.
Commenting in a statement, Natalie Hitchins, Which?’s head of products and services, said: “Our latest findings demonstrate that Facebook has systematically failed to take action while its platform continues to be plagued with fake review groups generating thousands of posts a day.
“It is deeply concerning that the company continues to leave customers exposed to poor-quality or unsafe products boosted by misleading and disingenuous reviews. Facebook must immediately take steps to not only address the groups that are reported to it, but also proactively identify and shut down other groups, and put measures in place to prevent more from appearing in the future.”
“The CMA must now consider enforcement action to ensure that more is being done to protect people from being misled online. Which? will be monitoring the situation closely and piling on the pressure to banish these fake review groups,” she added.
Responding to Which?’s findings in a statement, CMA senior director George Lusty said: “It is unacceptable that Facebook groups promoting fake reviews seem to be reappearing. Facebook must take effective steps to deal with this problem by quickly removing the material and stop it from resurfacing.”
“This is just the start – we’ll be doing more to tackle fake and misleading online reviews,” he added. “Lots of us rely on reviews when shopping online to decide what to buy. It is important that people are able to trust they are genuine, rather than something someone has been paid to write.”
In a statement, Facebook said it has removed nine out of the ten groups Which? reported to it and claimed to be “investigating the remaining group”.
“We don’t allow people to use Facebook to facilitate or encourage false reviews,” it added. “We continue to improve our tools to proactively prevent this kind of abuse, including investing in technology and increasing the size of our safety and security team to 30,000.”
All U.S. stock markets were down severely today, and tech stocks were hit especially hard, as China retaliated against increasing U.S. tariffs by halting imports of U.S. agricultural goods and finally acceded to market pressures by letting the yuan slide in value against the dollar.
At one point, the Dow was down nearly 900 points before staging a late afternoon rally to close off by roughly 760 points. The Nasdaq, the marketplace which is home to a number of technology stocks, saw its value drop by 3.4%, or 277.10 points.
Shares of Alphabet (the parent company of Google), Amazon, Apple, Facebook, Microsoft, Netflix and Twitter were all down for the day. Indeed, as CNBC reported, the biggest tech stocks — Microsoft, Amazon, Apple, Facebook and Alphabet — lost a combined $162 billion in market value.
Declines came as China allowed its currency to fall below what was once considered to be a red-line in the country’s currency peg against the dollar. That means that Chinese goods start to look more attractive globally as their prices decline in relation to the dollar. It could also trigger a wave of currency devaluations and protectionist measures across the globe — further putting downward pressure on global economic growth.
Stocks also continued to feel the pinch from the possibility that President Donald Trump would make good on his threat to impose new tariffs on goods from China beginning September 1, 2019. Those tariffs are expected to take a bite out of everyday consumer goods and clothing, which adversely affects tech companies.
The big concern for these tech companies is the looming threat of that tariff expansion from the U.S. If those tariffs go into effect it would have significant consequences in these companies’ home market.
“Assuming smartphones, tablets, smart watches, and computer systems are not categorically excluded from the final $300B tranche, we expect there will be material impact to Apple hardware product earnings,” analysts from Cowen & Co. wrote in a note quoted by CNBC.
For the first time in more than half a decade, Facebook wants to inform you that it owns Instagram, the hyper-popular rival social networking app it acquired for a $1BN steal back in 2012.
Ditto messaging platform WhatsApp — which Mark Zuckerberg splurged $19BN on a couple of years later to keep feeding eyeballs into his growth engine.
Facebook is adding its own brand name alongside the other two — in the following format: ‘Instagram from Facebook’; ‘WhatsApp from Facebook.’
The cheap-perfume-style rebranding was first reported by The Information, which cites three people familiar with the matter who said employees of the two apps were recently notified internally of the plan.
“The move to add Facebook’s name to the apps has been met with surprise and confusion internally, reflecting the autonomy that the units have operated under,” it said. It also reported that CEO Mark Zuckerberg has been frustrated that Facebook doesn’t get more credit for the growth of Instagram and WhatsApp.
So it sounds like Facebook may be hoping for a little reverse osmosis brand-washing — aka leveraging the popularity of its cleaner social apps to detoxify the scandal-hit mothership.
Not that Facebook is saying anything like that publicly, of course.
In a statement to The Information confirming the rebranding it explained it thus: “We want to be clearer about the products and services that are part of Facebook.”
The rebranding also comes at a time when Facebook is facing at least two antitrust investigations on its home turf — where calls for Facebook and other big tech giants to be broken up are now a regular feature of the campaign trail…
We can only surmise the legal advice Facebook must be receiving vis-a-vis what it should do to try to shut down break-up arguments that could deprive it of its pair of golden growth geese.
Arguments such as the fact most Instagram (and WhatsApp) users don’t even know they’re using a Facebook-owned app. Hence, as things stand, it would be pretty difficult for Facebook’s lawyers to successfully argue Instagram and WhatsApp users would be harmed if the apps were cut free by a break-up order.
But now — with the clumsy ‘from Facebook’ construction — Facebook can at least try to make a case that users are in a knowing relationship with Facebook in which they willingly, even if not lovingly, place their eyeballs in Zuckerberg’s bucket.
In which case Facebook is not telling you the Instagram user that it owns Instagram for your benefit. Not even slightly.
Note, for example, the use of the comparative adjective “clearer” in Facebook’s statement to explain its intent for the rebranding — rather than a simple statement: ‘we want to be clear’.
It’s definitely not saying it’s going to individually broadcast its ownership of Instagram and WhatsApp to each and every user on those networks. More like it’s going to try to creep the Facebook brand in. Which is far more in corporate character.
At the time of writing, a five-day-old update of Instagram’s iOS app already features the new construction, although it looks far more dark pattern than splashy rebrand, with just the faintest whisker of grey text at the base of the screen to disclose that you’re about to be sucked into the Facebook empire (vs a giant blue ‘Create new account’ button winking to be tapped up top… )
Here’s the landing screen — with the new branding. Blink and you’ll miss it…
So not full disclosure then. More like just an easily overlooked dab of the legal stuff — to try to manage antitrust risk vs the risk of Facebook brand toxicity poisoning the (cleaner) wells of Instagram and WhatsApp.
There are signs the company is experimenting in some extremely dilute cross-brand-washing too.
The iOS app description for Instagram includes the new branding — tagged to an ad-style slogan that gushes: “Bringing you closer to the people and things you love.” But, frankly, who reads app descriptions?
Up until pretty recently, both Instagram and WhatsApp had a degree of independence from their rapacious corporate parent — granted brand and operational independence under the original acquisition terms and leadership of their original founders.
Zuckerberg lieutenants and/or long-time Facebookers are now running both app businesses. The takeover is complete.
Facebook is also busy working on entangling the backends of its three networks — under a claimed ‘pivot to privacy’ which it announced earlier this year.
This also appears intended to try to put regulators off by making breaking up Facebook much harder than it would be if you could just split it along existing app lines. Theories of user harm potentially get more complicated if you can demonstrate cross-platform chatter.
The accompanying 3,000+ word screed from Zuckerberg introduced the singular notion of “the Facebook network”; aka one pool for users to splash in, three differently colored slides to funnel you in there.
“In a few years, I expect future versions of Messenger and WhatsApp to become the main ways people communicate on the Facebook network,” he wrote. “If this evolution is successful, interacting with your friends and family across the Facebook network will become a fundamentally more private experience.”
The ‘from Facebook’ rebranding thus looks like just a little light covering fire for the really grand dodge Facebook is hoping to pull off as the break-up bullet speeds down the pipe: aka entangling its core businesses at the infrastructure level.
From three networks to one massive Facebook-owned user data pool.
One network to rule them all, one network to find them,
One network to bring them all, and in the regulatory darkness bind them.
Grab popcorn. As Internet fights go this one deserves your full attention — because the fight is over your attention. Your eyeballs and the creepy ads that trade data on you to try to swivel ’em.
In the blue corner, the Interactive Advertising Bureau’s CEO, Randall Rothenberg, who has been taking to Twitter increasingly loudly in recent days to savage Europe’s privacy framework, the GDPR, and bleat dire warnings about California’s Consumer Privacy Act (CCPA) — including amplifying studies he claims show “the negative impact” on publishers.
Exhibit A, tweeted August 1:
NB: The IAB is a mixed membership industry organization which combines advertisers, brands, publishers, data brokers* and adtech platform tech giants — including the dominant adtech duopoly, Google and Facebook, who take home ~60% of digital ad spend. The only entity capable of putting a dent in the duopoly, Amazon, is also in the club. Its membership reflects the sprawling interests attached to the online ad industry, and, well, the personal data that currently feeds it (your eyeballs again!), although some members clearly have pots more money to spend on lobbying against digital privacy regs than others.
In what now looks to have been a deleted tweet last month, Rothenberg publicly professed himself proud to have Facebook as a member of his ‘publisher defence’ club. Though, admittedly, per the above tweet, he’s also worried about brands and retailers getting “killed”. He doesn’t need to worry about Google and Facebook’s demise because that would just be ridiculous.
Now, in the — I wish I could call it ‘red top’ corner, except these newspaper guys are anything but tabloid — we find premium publishers biting back at Rothenberg’s attempts to trash-talk online privacy legislation.
Here’s the New York Times‘ data governance & privacy guy, Robin Berjon, demolishing Rothenberg via the exquisite medium of quote-tweet…
One of the primary reasons we need the #GDPR and #CCPA (and more) today is because the @iab, under @r2rothenberg's leadership, has been given 20 years to self-regulate and has used the time to do [checks notes] nothing whatsoever.https://t.co/hBS9d671LU
— Robin Berjon (@robinberjon) August 1, 2019
I’m going to quote Berjon in full because every single tweet packs a beautifully articulated punch:
Next time Facebook talks about how it can self-regulate its access to data I suggest you cc that entire thread.
Also chipping in on Twitter to champion Berjon’s view about the IAB’s leadership vacuum in cleaning up the creepy online ad complex, is Aram Zucker-Scharff, aka the ad engineering director at — checks notes — The Washington Post.
His punch is more of a jab — but one that’s no less painful for the IAB’s current leadership.
“I say this rarely, but this is a must read,” he writes, in a quote tweet pointing to Berjon’s entire thread.
I say this rarely, but this is a must read, Thread: https://t.co/FxKmT9bp7r
— Aram Zucker-Scharff (@Chronotope) August 2, 2019
Another top tier publisher’s commercial chief also told us in confidence that they “totally agree with Robin” — although they didn’t want to go on the record today.
In an interesting twist to this ‘mixed member online ad industry association vs people who work with ads and data at actual publishers’ slugfest, Rothenberg replied to Berjon’s thread, literally thanking him for the absolute battering.
“Yes, thank you – that’s exactly where we’re at & why these pieces are important!” he tweeted, presumably still dazed and confused from all the body blows he’d just taken. “@iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations, @robinberjon.”
Yes, thank you – that’s exactly where we’re at & why these pieces are important! @iab supports the competitiveness of the hundreds of small publishers, retailers, and brands in our global membership. We appreciate the recognition and your explorations, @robinberjon & @Bershidsky https://t.co/WDxrWIyHXd
— Randall Rothenberg (@r2rothenberg) August 2, 2019
Rothenberg also took the time to thank Bloomberg columnist, Leonid Bershidsky, who’d chipped into the thread to point out that the article Rothenberg had furiously retweeted actually says the GDPR “should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong”.
Who is Bershidsky? Er, just the author of the article Rothenberg tried to nega-spin. So… uh… owned.
May I point out that the piece that's cited here (mine) says the GDPR should be enforced more rigorously against big companies, not that the GDPR itself is bad or wrong?
— Leonid Bershidsky (@Bershidsky) August 1, 2019
But there’s more! Berjon tweeted a response to Rothenberg’s thanks for what the latter tortuously referred to as “your explorations” — I mean, the mind just boggles as to what he was thinking to come up with that euphemism — thanking him for reversing his position on the GDPR, and for ending his prior leadership vacuum on supporting robustly enforced online privacy laws.
“It’s great to hear that you’re now supporting strong GDPR enforcement,” he writes. “It’s indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?”
It's great to hear that you're now supporting strong GDPR enforcement. It's indeed what most helps the smaller players. A good next step to this conversation would be an @iab statement asking to transpose the GDPR to US federal law. Want to start drafting something?
— Robin Berjon (@robinberjon) August 2, 2019
We’ve asked the IAB if, in light of Rothenberg’s tweet, it now wishes to share a public statement in support of transposing the GDPR into US law. We’ll be sure to update this post if it says anything at all.
We’ve also screengrabbed the vinegar strokes of this epic fight — as an insurance policy against any further instances of the IAB hitting the tweet delete button. (Plus, I mean, you might want to print it out and get it framed.)
Some light related reading can be found here: