2019 has been a breakout year for podcasting. According to Edison Research’s Infinite Dial report, more than half of Americans have now listened to a podcast, and an estimated 32% listen monthly (up from 26% last year). This is the largest yearly increase since this data started being tracked in 2008. Podcast creation also continues to grow, with more than 700,000 podcasts and 29 million podcast episodes, up 27% from last year.
Thanks to this growing listener base, big companies are finally starting to pay attention to the space — Spotify plans to spend $500 million on acquisitions this year, and already acquired content studio Gimlet, tech platform Anchor, and true crime network Parcast for a combined $400 million. In the past week, Google added playable podcasts to search results, Spotify released an analytics dashboard for podcasters and Pandora launched a tool for podcasters to submit their shows.
We’ve been going to Podcast Movement, the largest annual industry conference, for three years, and have watched the conference grow along with the industry — reaching 3,000 attendees in 2019. Given the increased buzz around the space, we were expecting this year’s conference to have a new level of energy and professionalism, and we weren’t disappointed. We’ve summarized five top takeaways from the conference, from why podcast ads are hard to scale to why so many celebrities are launching their own shows.
We’ve officially entered the age of celebrity podcasters. After early successes like “WTF with Marc Maron” (2009), Alec Baldwin’s “Here’s The Thing” (2011) and Anna Faris’ “Unqualified” (2015), top talent is flooding into the space. In 2017, 15% of Apple’s top 20 most-downloaded podcasts of the year were hosted by celebrities or influencers — this jumped to 32% of the top 25 in 2018. And of all the new shows that launched in 2018, 48% of the top 25 were celebrity-hosted.
Though podcasts are undermonetized compared to other forms of media, talent agents now consider them to be an important part of a well-rounded content strategy. Dan Ferris from CAA tells his clients to think of podcasting as a way of connecting with fans that is “much more intimate than social media.” Podcasts also help celebrities find a new audience. Ben Davis from WME said that while his client David Dobrik has a smaller audience on his podcast than on YouTube (1.5 million downloads per episode versus 6 million views per video), the podcast helps him reach a new group of listeners who stumble upon his show on the Apple Podcast charts.
While some podcast veterans grumble about the rise of celebrity talk shows, famous podcasters are good for the industry as a whole. Advertisers are drawn to the space by the opportunity to access A-list talent at lower prices. One recent example is Endeavor Audio’s fiction show “Blackout,” which starred Rami Malek, fresh off an Oscar win. Endeavor’s head of sales, Charlie Emerson, said brands might have to sign a “seven or eight-figure deal” to advertise alongside Malek’s content in other forms of media. Other podcasters also benefit from new listeners brought into the medium by their favorite stars — a Westwood One survey in fall 2018 found that 60% of podcast listeners report discovering shows via social media, where celebrities and influencers have huge existing audiences to push content to.
Paid listening apps represent a fairly small percentage of podcast listenership, with production platform Anchor estimating that Apple Podcasts and Spotify control more than 70% of listenership. A venture-backed company called Luminary is trying to change this — it raised $100 million to launch a “Netflix for podcasts” this spring. Consumers pay $7.99/month to access Luminary-exclusive shows alongside podcasts that are free on other apps. Because podcasts have RSS feeds, distributors like Luminary can easily grab free content and put it behind a paywall. The platform, not the creator, benefits from this monetization.
Within days of Luminary’s launch, prominent podcasters and media companies (The New York Times, Gimlet and more) requested their shows be removed from the app. It’s interesting to note that YouTube has a similar premium plan — for $11.99/month, users can access and download ad-free videos. Unlike Luminary, however, YouTube pays creators a cut of the revenue from these subscriptions based on how frequently their content is viewed.
Unsurprisingly, creator sentiment is more positive toward platforms like Spotify and Pandora. Though these companies do make money from premium subscribers who listen to podcasts, creators can choose whether or not to submit their shows. And podcasters benefit from making their shows discoverable to the existing user base of these platforms, which already dominate “earshare.” Spotify alone has 232 million MAUs, which dwarfs the 90 million people in the U.S. who listen to a podcast monthly.
Podcast ad revenue has been scaling quickly, with $480 million in spend last year and a projected $680 million this year. Over the past four years, ad revenue has scaled at a 65% CAGR, and this growth is expected to continue. So far, the podcast ad market has largely been driven by D2C brands — you’ve probably heard hundreds of Casper, Blue Apron and Madison Reed ads. However, bigger brands are also starting to enter podcasting (Geico, Capital One and Progressive made the top 10 list for June 2019) due to the growing audience scale and increased precision around targeting and attribution.
While many attendees were excited by the massive growth in ad revenue, others worried that it may kill what makes podcasting special. They’re particularly concerned that podcasts may go the way of online video, with annoying, generic, low CPM ads. Podcast hosts typically read their own ads, and are often true fans of the product — they share personal stories instead of reciting brand talking points. This results in premium CPMs compared to most digital media — AdvertiseCast’s 2019 survey found an average CPM of $18 for a 30-second podcast ad and $25 for a 60-second ad, more than 2x the average CPM on other digital platforms.
While these ads are effective, they’re time-consuming and expensive to produce. Big brands interested in podcast ads often expect to reuse radio spots — they aren’t used to the process of crafting and approving a host-read ad that may only reach 10,000 listeners. Podcasters, meanwhile, value their trust with listeners and don’t want to spam them with loud, unoriginal radio ads. The tension between maintaining the quality of ads while scaling quantity was an underlying theme of most monetization discussions, and industry veterans disagree on how it will play out.
Despite the growth in ad revenue and relatively high CPMs, the industry is significantly undermonetized. Using data from Nielsen, IAB and Edison, we calculated that podcasts monetize through advertisements at only $0.01 per listener hour — less than one-tenth the rate of radio. Podcast monetization per listener hour has increased over the past year, up 25% by our calculations, but still substantially lags all other forms of media.
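The per-listener-hour figure is easy to sanity-check with a back-of-the-envelope calculation. The revenue and audience figures below come from the numbers cited in this piece; the weekly listening time is an assumption of ours, not the actual Nielsen/IAB/Edison input:

```python
# Rough check of ad revenue per podcast listener hour.
# hours_per_week is an assumed figure, not the data the article used.

ad_revenue_2018 = 480_000_000    # U.S. podcast ad spend last year, USD
monthly_listeners = 90_000_000   # U.S. monthly podcast listeners
hours_per_week = 6.5             # assumed average weekly listening time

annual_listener_hours = monthly_listeners * hours_per_week * 52
revenue_per_hour = ad_revenue_2018 / annual_listener_hours
print(f"${revenue_per_hour:.3f} per listener hour")  # ≈ $0.016
```

Even with generous assumptions, the result lands at one or two cents per listener hour, the same order of magnitude as the estimate above.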
Why are podcasts so undermonetized? Unlike many other forms of media, the dominant distribution platform (Apple Podcasts) has no ad marketplace. Creators have historically had to approach brands themselves or sign with podcast networks to construct custom ad deals, and the “long tail” of podcasters were unable to monetize. This is finally changing. Anchor, which reported in January that it powers 40% of new podcasts, has an ad marketplace that has doubled the number of podcasts running ads. Other popular platforms like RadioPublic have launched programs for small podcasters to opt in to ad placements.
The second major hurdle in monetization is attribution. Podcasts have historically monetized through direct response campaigns — a podcaster provides a special URL or promo code for listeners to use when making a purchase. However, many people listen to podcasts when exercising or driving, and can’t write down the promo code or visit the URL immediately. These listeners might remember the product and make a purchase later, but the podcaster won’t get the attribution. Thomas Mancusi of Audioboom estimated that this happens in 50-60% of purchases driven by podcast ads.
Startups are trying to bring better adtech into podcasting to fix this issue. Chartable is one example — the company installs trackers to match a listener’s IP address with a purchaser’s IP address, allowing podcasters to claim attribution for listeners who don’t use their URL or promo code. Chartable currently runs on 10,000 shows, and the early results are so promising that ad agencies expect to see higher CPMs and significantly more spend in the space.
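The matching idea is straightforward: compare the IP addresses that downloaded an episode against the IP addresses that later made a purchase. The sketch below is only an illustration of that idea with hypothetical data, not Chartable's actual (and certainly more sophisticated) implementation:

```python
# Toy sketch of IP-based podcast ad attribution. All names and data are
# hypothetical; real systems must also handle shared/dynamic IPs, time
# windows, and privacy constraints.

def attribute_purchases(download_log, purchase_log):
    """Return order IDs whose purchaser IP also appears in the download log.

    download_log: iterable of (ip, episode_id) tuples
    purchase_log: iterable of (ip, order_id) tuples
    """
    listener_ips = {ip for ip, _ in download_log}
    return [order for ip, order in purchase_log if ip in listener_ips]

downloads = [("203.0.113.5", "ep42"), ("198.51.100.7", "ep42")]
purchases = [("203.0.113.5", "order-1"), ("192.0.2.9", "order-2")]
print(attribute_purchases(downloads, purchases))  # ['order-1']
```

This is what lets a podcaster claim credit for the listener who heard an ad in the car and bought the product hours later without ever typing the promo code.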
As podcasting grows, the listener base is diversifying. Edison Research looked into data on “rookie” listeners (listening for six months or less) and “veteran” listeners (listening for 3+ years), and found significant demographic differences. Only 37% of veterans are female, compared to 53% of rookies. While the plurality of veterans (43%) are age 35-54, 54% of rookies are age 12-34. Rookies are also 1.6x more likely to say they most often listen to podcasts on Spotify, Pandora or SoundCloud (43% versus 27% of veterans). And social media is an important way that rookies discover podcasts — 52% have found a podcast from video and 46% from audio on social media, compared to 41% and 37% for veterans.
These new listeners will have a profound impact on the future of podcasting, in both the type of content produced and the way it’s distributed. Industry experts are already noting significant new demand for female-hosted podcasts, as well as audio dramas that appeal to young people looking for a fast-paced, suspenseful story. They’re advising podcasters to share clips of their content on social media, and to leverage broader listening platforms like YouTube and SoundCloud for distribution.
International markets also represent an enormous opportunity for growth. Most podcast listeners today live in the U.S. or China, but content producers are starting to see significant demand elsewhere. Castbox’s Valentina Kaledina said that many fans abroad have resorted to listening in their non-native language, with the top 100 shows in each country comprising a mix of English and local language. Adonde Media’s Martina Castro, who recently conducted the first listener survey on Spanish-language podcast fans, said that 53% of the survey’s 2,100 respondents reported listening to podcasts in English — and only 20% of them use Apple Podcasts.
Larger podcast producers are beginning to translate shows for non-English-speaking markets. Wondery CEO Hernan Lopez announced at the conference that the company’s hit show Dr. Death is now available in seven languages. Lopez noted that it was an expensive process, and he doesn’t expect the shows to generate profit in the near future. However, he believes that Wondery will eventually see a significant return from investing in the development of new podcast markets — and if they do, other podcast companies will likely follow in their footsteps.
Twitter’s ongoing, long-term efforts to make conversations easier to follow and engage with on its platform is getting a boost with the company’s latest acquihire. The company has picked up the team behind Lightwell, a startup that had built a set of developer tools to build interactive, narrative apps, for an undisclosed sum. Lightwell’s founder and CEO, Suzanne Xie, is becoming a director of product leading Twitter’s Conversations initiative, with the rest of her small four-person team joining her on the conversations project.
(Sidenote: Sara Haider, who had been leading the charge on rethinking the design of Conversations on Twitter, most recently through the release of twttr, Twitter’s newish prototyping app, announced that she would be moving on to a new project at the company after a short break. I understand twttr will continue to be used to openly test conversation tweaks and other potential changes to how the app works.)
The Lightwell/Twitter news was announced late yesterday both by Lightwell itself and Twitter’s VP of product Keith Coleman. A Twitter spokesperson also confirmed the deal to TechCrunch in a short statement today: “We are excited to welcome Suzanne and her team to Twitter to help drive forward the important work we are doing to serve the public conversation,” he said. Interestingly, Twitter seems to be on a product hiring push. Other recent product hires Coleman noted include Angela Wise and Tom Hauburger; coincidentally, both joined from autonomous vehicle companies, Waymo and Voyage, respectively.
To be clear, this is more acqui-hire than hire: only the Lightwell team (of what looks like three people) is joining Twitter. The Lightwell product will no longer be developed, but it is not going away, either. Xie noted in a separate Medium post that apps that have already been built (or plan to be built) on the platform will continue to work. It will also now be free to use.
Lightwell originally started life in 2012 as Hullabalu, as one of the many companies producing original-content interactive children’s stories for smartphones and tablets. In a sea of children-focused storybook apps, Hullabalu’s stories stood out not just because of the distinctive cast of characters that the startup had created, but for how the narratives were presented: part book, part interactive game, the stories engaged children and moved narratives along by getting the users to touch and drag elements across the screen.
After some years, Hullabalu saw an opportunity to package its technology and make it available as a platform for all developers, to be used not just by other creators of children’s content, but advertisers and more. It seems the company shifted at that time to make Lightwell its main focus.
The Hullabalu apps remained live on the App Store, even when the company moved on to focus on Lightwell. However, they hadn’t been updated in two years. Xie says they will remain as is.
In its startup life, the company went through Y Combinator and Techstars, and picked up some $6.5 million in funding (per Crunchbase) from investors that included Joanne Wilson, SV Angel, Vayner, Spark Labs, Great Oak, Scout Ventures and more.
If turning Hullabalu into Lightwell was a pivot, then the exit to Twitter can be considered yet another interesting shift in how talent and expertise optimized for one end can be repurposed to meet another.
One of Twitter’s biggest challenges over the years has been trying to create a way to make conversations (also narratives of a kind) easy to follow — both for those who are power users, and for those who are not and might otherwise easily be put off from using the product.
The crux of the problem has been that Twitter’s DNA is about real-time rivers of chatter that flow in one single feed, while conversations by their nature linger around a specific topic and become hard to follow when there are too many people talking. Trying to build a way to fit the two concepts together has foxed the company for a long time now.
At its best, bringing in a new team from the outside will potentially give Twitter a fresh perspective on how to approach conversations on the platform, and the fact that Lightwell has been thinking about creative ways to present narratives gives them some cred as a group that might come up with completely new concepts for presenting conversations.
At a time when it seems that the conversation around Conversations had somewhat stagnated, it’s good to see a new chapter opening up.
The new policy was announced just hours after the company identified an information operation involving hundreds of accounts linked to China as part of an effort to “sow political discord” around events in Hong Kong after weeks of protests in the region. Over the weekend more than 1 million Hong Kong residents took to the streets to protest what they see as an encroachment by the mainland Chinese government over their rights.
State-funded media enterprises that do not operate independently of the governments that finance them will no longer be allowed to advertise on the platform, Twitter said in a statement. That leaves a big exception for editorially independent outlets like the Associated Press, the British Broadcasting Corp., the Public Broadcasting Service and National Public Radio, according to BBC reporter Dave Lee.
The affected accounts will be able to keep using Twitter, but won’t have access to the company’s advertising products.
“We believe that there is a difference between engaging in conversation with accounts you choose to follow and the content you see from advertisers in your Twitter experience which may be from accounts you’re not currently following. We have policies for both but we have higher standards for our advertisers,” Twitter said in its statement.
The policy applies to news media outlets that are financially or editorially controlled by the state, Twitter said. The company said it will make its policy determinations on the basis of media freedom and independence, including editorial control over articles and video, the financial ownership of the publication, the influence or interference governments may exert over editors, broadcasters and journalists, and political pressure or control over the production and distribution process.
Twitter said the advertising rules wouldn’t apply to entities that are focused on entertainment, sports or travel, but if there’s news in the mix, the company will block advertising access.
Affected outlets have 30 days before they lose access to Twitter’s ad products, and the company is halting all existing campaigns.
State media has long been a source of disinformation and was cited as part of the Russian campaign to influence the 2016 election. Indeed, Twitter has booted state-financed news organizations before. In October 2017, the company banned Russia Today and Sputnik from advertising on its platform (although a representative from RT claimed that Twitter encouraged it to advertise ahead of the election).
Facebook is expanding its data abuse bug bounty to Instagram.
The social media giant, which owns Instagram, first rolled out its data abuse bounty in the wake of the Cambridge Analytica scandal, which saw tens of millions of Facebook profiles scraped to help swing undecided voters in favor of the Trump campaign during the U.S. presidential election in 2016.
The idea was that security researchers and platform users alike could report instances of third-party apps or companies that were scraping, collecting and selling Facebook data for other purposes, such as to create voter profiles or build vast marketing lists.
Instagram wasn’t immune either. Just this month Instagram booted a “trusted” marketing partner off its platform after it was caught scraping stories, locations and other data points on millions of users, forcing Instagram to make product changes to prevent future scraping efforts. That came after two other incidents earlier this year: a security researcher found 14 million scraped Instagram profiles sitting on an exposed database, without a password, for anyone to access; and another company scraped the profile data, including email addresses and phone numbers, of Instagram influencers.
Last year Instagram also choked developers’ access as the company tried to rebuild its privacy image in the aftermath of the Cambridge Analytica scandal.
Dan Gurfinkel, security engineering manager at Instagram, said its new and expanded data abuse bug bounty aims to “encourage” security researchers to report potential abuse.
Instagram said it’s also inviting a select group of trusted security researchers to find flaws in its Checkout service ahead of its international rollout, who will also be eligible for bounty payouts.
Hey. This is Week-in-Review, where I give a heavy amount of analysis and/or rambling thoughts on one story while scouring the rest of the hundreds of stories that emerged on TechCrunch this week to surface my favorites for your reading pleasure.
Last week, I talked about how Netflix might have some rough times ahead as Disney barrels towards it.
There is plenty to be said about the potential of smart glasses. I write about them at length for TechCrunch and I’ve talked to a lot of founders doing cool stuff. That being said, I don’t have any idea what Snap is doing with the introduction of a third-generation of its Spectacles video sunglasses.
While the first-gen Spectacles were a marketing smash hit, their sales proved to be a major failure for the company, which bet big and seemingly walked away with a landfill’s worth of the glasses.
Snap’s latest version of Spectacles was announced in Vogue this week. They are much more expensive at $380, and their main feature is two cameras that capture images with depth, which can lead to cute little 3D boomerangs. On one hand, it’s nice to see the company showing perseverance in a tough market; on the other, it’s kind of funny to see them push the same rock up the hill again.
Snap is having an awesome 2019 after a laughably bad 2018; the stock has recovered from record lows and is trading in its IPO price wheelhouse. It seems like they’re ripe for something new and exciting, not beautiful yet iterative.
The $150 Spectacles 2 are still for sale, though they look quite dated at this point. Spectacles 3 seem to be geared entirely toward women, and I’m sure they made that call after seeing the active users of previous generations. But given the write-down Snap took on the first generation, something tells me that its continued experimentation here is borne out of some stubbornness from Spiegel and the higher-ups, who want the Snap brand to live in a high-fashion world and to be at the forefront of an AR industry that seems to have already moved on to different things.
On to the rest of the week’s news.
Here are a few big news items from big companies, with green links to all the sweet, sweet added context:
How did the top tech companies screw up this week? This clearly needs its own section, in order of badness:
Adam Neumann (WeWork) at TechCrunch Disrupt NY 2017
Our premium subscription service had another week of interesting deep dives. My colleague Danny Crichton wrote about the “tech” conundrum that is WeWork and the questions that are still unanswered after the company filed documents this week to go public.
…How is margin changing at its older locations? How is margin changing as it opens up in places like India, with very different costs and revenues? How do those margins change over time as a property matures? WeWork spills serious amounts of ink saying that these numbers do get better … without seemingly being willing to actually offer up the numbers themselves…
Here are some of our other top reads this week for premium subscribers. This week, we published a major deep dive into the world’s next music unicorn and we dug deep into marketplace startups.
Sign up for more newsletters in your inbox (including this one) here.
The phrase “pull yourself up by your own bootstraps” was originally meant sarcastically.
It’s not actually physically possible to do — especially while wearing Allbirds and having just fallen off a Bird scooter in downtown San Francisco, but I should get to my point.
This week, Ken Cuccinelli, the acting director of U.S. Citizenship and Immigration Services, repeatedly referred to the notion of bootstraps in announcing shifts in immigration policy, even going so far as to change the words to Emma Lazarus’s famous poem “The New Colossus:” no longer “give me your tired, your poor, your huddled masses yearning to breathe free,” but “give me your tired and your poor who can stand on their own two feet, and who will not become a public charge.”
We’ve come to expect “alternative facts” from this administration, but who could have foreseen alternative poems?
Still, the concept of ‘bootstrapping’ is far from limited to the rhetorical territory of the welfare state and social safety net. It’s also a favorite term of art in Silicon Valley tech and venture capital circles: see for example this excellent (and scary) recent piece by my editor Danny Crichton, in which young VC firms attempt to overcome a lack of the startup capital that is essential to their business model by creating, as perhaps an even more essential feature of their model, impossible working conditions for most everyone involved. Often with predictably disastrous results.
It is in this context of unrealistic expectations about people’s labor, that I want to introduce my most recent interviewee in this series of in-depth conversations about ethics and technology.
Mary L. Gray is a Fellow at Harvard University’s Berkman Klein Center for Internet and Society and a Senior Researcher at Microsoft Research. One of the world’s leading experts in the emerging field of ethics in AI, Mary is also an anthropologist who maintains a faculty position at Indiana University. With her co-author Siddharth Suri (a computer scientist), Gray coined the term “ghost work,” as in the title of their extraordinarily important 2019 book, Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass.
Ghost Work is a name for a rising new category of employment that involves people scheduling, managing, shipping, billing, etc. “through some combination of an application programming interface, APIs, the internet and maybe a sprinkle of artificial intelligence,” Gray told me earlier this summer. But what really distinguishes ghost work (and makes Mary’s scholarship around it so important) is the way it is presented and sold to the end consumer as artificial intelligence and the magic of computation.
In other words, just as we have long enjoyed telling ourselves that it’s possible to hoist ourselves up in life without help from anyone else (I like to think anyone who talks seriously about “bootstrapping” should be legally required to rephrase as “raising oneself from infancy”), we now attempt to convince ourselves and others that it’s possible, at scale, to get computers and robots to do work that only humans can actually do.
Ghost Work’s purpose, as I understand it, is to elevate the value of what the computers are doing (a minority of the work) and make us forget, as much as possible, about the actual messy human beings contributing to the services we use. Well, except for the founders, and maybe the occasional COO.
But if working people are supposed to be ghosts, then when they speak up or otherwise make themselves visible, they are “haunting” us. And maybe it can be haunting to be reminded that you didn’t “bootstrap” yourself to billions or even to hundreds of thousands of dollars of net worth.
Sure, you worked hard. Sure, your circumstances may well have stunk. Most people’s do.
But none of us rise without help, without cooperation, without goodwill, both from those who look and think like us and those who do not. Not to mention dumb luck, even if only our incredible good fortune of being born with a relatively healthy mind and body, in a position to learn and grow, here on this planet, fourteen billion years or so after the Big Bang.
I’ll now turn to the conversation I recently had with Gray, which turned out to be surprisingly more hopeful than perhaps this introduction has made it seem.
Greg Epstein: One of the most central and least understood features of ghost work is the way it revolves around people constantly making themselves available to do it.
Mary Gray: Yes, [What Siddharth Suri and I call ghost work] values having a supply of people available, literally on demand. Their contributions are collective contributions.
It’s not one person you’re hiring to take you to the airport every day, or to confirm the identity of the driver, or to clean that data set. Unless we’re valuing that availability of a person, to participate in the moment of need, it can quickly slip into ghost work conditions.
Twitter is testing a new way to filter unwanted messages from your Direct Message inbox. Today, Twitter allows users to set their Direct Message inbox as being open to receiving messages from anyone, but this can invite a lot of unwanted messages, including abuse. While one solution is to adjust your settings so only those you follow can send you private messages, that doesn’t work for everyone. Some people — like reporters, for example — want to have an open inbox in order to have private conversations and receive tips.
This new experiment will test a filter that will move unwanted messages, including those with offensive content or spam, to a separate tab.
Unwanted messages aren’t fun. So we’re testing a filter in your DM requests to keep those out of sight, out of mind. pic.twitter.com/Sg5idjdeVv
— Twitter Support (@TwitterSupport) August 15, 2019
Instead of lumping all your messages into a single view, the Message Requests section will include the messages from people you don’t follow, and below that, you’ll find a way to access these newly filtered messages.
Users would have to click on the “Show” button to even read these, which protects them from having to face the stream of unwanted content that can pour in at times when the inbox is left open.
And even upon viewing this list of filtered messages, all the content itself isn’t immediately visible.
In the case that Twitter identifies content that’s potentially offensive, the message preview will say the message is hidden because it may contain offensive content. That way, users can decide if they want to open the message itself or just click the delete button to trash it.
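Taken together, the behavior described above amounts to three-way routing: messages from people you follow land in the inbox, other messages go to Requests, and those flagged as spam or potentially offensive are tucked into the hidden filtered list. The sketch below just illustrates that flow; the flagging function is a hypothetical stand-in for Twitter's actual, unpublished classifiers:

```python
# Illustrative routing of an incoming DM into one of three buckets.
# looks_offensive is a hypothetical placeholder for a real spam/abuse
# classifier.

def route_message(sender, text, following, looks_offensive):
    if sender in following:
        return "inbox"        # people you follow go straight to the inbox
    if looks_offensive(text):
        return "filtered"     # hidden behind the "Show" button
    return "requests"         # everyone else lands in Message Requests

following = {"alice"}
spammy = lambda text: "win a prize" in text.lower()
print(route_message("alice", "lunch?", following, spammy))           # inbox
print(route_message("bob", "hi there", following, spammy))           # requests
print(route_message("mallory", "WIN A PRIZE!!", following, spammy))  # filtered
```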
The change could allow Direct Messages to become a more useful tool for those who prefer an open inbox, as well as an additional means of clamping down on online abuse.
It’s also similar to how Facebook Messenger handles requests — those from people you aren’t friends with are relocated to a separate Message Requests area. And those that are spammy or more questionable are in a hard-to-find Filtered section below that.
It’s not clear why a feature like this really requires a “test,” however — arguably, most people would want junk and abuse filtered out. And those who, for some reason, did not could just toggle a setting to turn off the filter.
Instead, this feels like another example of Twitter’s slow pace when it comes to making changes to clamp down on abuse. Facebook Messenger has been filtering messages in this way since late 2017. Twitter should just launch a change like this, instead of “testing” it.
The idea of hiding — instead of entirely deleting — unwanted content is something Twitter has been testing in other areas, too. Last month, for example, it began piloting a new “Hide Replies” feature in Canada, which allows users to hide unwanted replies to their tweets so they’re not visible to everyone. The tweets aren’t deleted, but rather placed behind an extra click — similar to this Direct Message change.
Twitter is updating its Direct Message system in other ways, too.
At a press conference this week, Twitter announced several changes coming to its platform, including a way to follow topics, plus a search tool for the Direct Message inbox, as well as support for iOS Live Photos as GIFs, the ability to reorder photos and more.
Dozens of Android adware apps disguised as photo taking and editing apps have been caught serving ads that would take over users’ screens as part of a fraudulent money-making scheme.
Security firm Trend Micro said it found 85 individual apps downloaded more than eight million times from Google Play — all of which have since been removed from the app store.
More often than not, adware apps will run on a user’s device and silently serve and click ads in the background, without the user’s knowledge, to generate ad revenue. But these apps were particularly brazen and sneaky, one of the researchers said.
“It isn’t your run-of-the-mill adware family,” said Ecular Xu, a mobile threat response engineer at Trend Micro. “Apart from displaying advertisements that are difficult to close, it employs unique techniques to evade detection through user behavior and time-based triggers.”
The researchers discovered that the apps would keep a record of when they were installed and sit dormant for around half an hour. After the delay, the app would hide its icon and create a shortcut on the user’s home screen, the security firm said. That, they say, helped protect the app from being deleted if the user decided to drag and drop the shortcut to the ‘uninstall’ section of the screen.
When the device was unlocked, the app displayed ads on the user’s home screen. The code also checks to make sure it doesn’t show the same ad too frequently, the researchers said.
“These ads are shown in full screen,” said Xu. “Users are forced to view the whole duration of the ad before being able to close it or go back to the app itself.”
Worse, the ads could be remotely configured by the fraudster, allowing them to be displayed more frequently than the default five-minute interval.
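The delayed-trigger behavior Trend Micro describes — record the install time, lie low for about half an hour, then cap how often full-screen ads appear at a remotely adjustable interval — can be sketched in a few lines. This is a simplified illustration only; the class and method names below are invented for the sketch and are not taken from the actual malware code.

```python
class AdSchedulerSketch:
    """Toy model of the time-based triggers described in the report:
    sit dormant after install to evade quick review, then throttle
    full-screen ads to a (remotely configurable) minimum interval."""

    def __init__(self, install_time, dormancy_s=30 * 60, ad_interval_s=5 * 60):
        self.install_time = install_time   # recorded at install time
        self.dormancy_s = dormancy_s       # ~half-hour sleep before acting
        self.ad_interval_s = ad_interval_s # default five-minute gap between ads
        self.last_ad_time = None

    def set_remote_interval(self, seconds):
        # The fraudster's server can shorten the gap between ads.
        self.ad_interval_s = seconds

    def should_show_ad(self, now):
        # Still within the dormancy window: do nothing suspicious.
        if now - self.install_time < self.dormancy_s:
            return False
        # Throttle: don't show ads more often than the configured interval.
        if self.last_ad_time is not None and now - self.last_ad_time < self.ad_interval_s:
            return False
        self.last_ad_time = now
        return True
```

The dormancy check is what makes this kind of adware hard to catch in a quick manual review: in the first half hour after install, the app behaves normally.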
Trend Micro provided a list of the apps — including Super Selfie Camera, Cos Camera, Pop Camera, and One Stroke Line Puzzle — all of which had a million downloads each.
Users about to install the apps had a dead giveaway: most of the apps had appalling reviews, many with as many one-star ratings as five-star ones, with users complaining about the deluge of pop-up ads.
Google does not typically comment on app removals beyond acknowledging their removal from Google Play.
Is there room for another social media platform? ShareChat, a four-year-old social network in India that serves tens of millions of people in regional languages, just answered that question with a $100 million financing round led by global giant Twitter.
Besides Twitter, TrustBridge Partners and existing investors Shunwei Capital, Lightspeed Venture Partners, SAIF Capital, India Quotient and Morningside Venture Capital also participated in ShareChat’s Series D round.
The new round, which pushes ShareChat’s all-time raise to $224 million, valued the firm at about $650 million, a person familiar with the matter told TechCrunch. ShareChat declined to comment on the valuation.
[Screenshot of ShareChat’s home page on the web]
“Twitter and ShareChat are aligned on the broader purpose of serving the public conversation, helping the world learn faster and solve common challenges. This investment will help ShareChat grow and provide the company’s management team access to Twitter’s executives as thought partners,” said Manish Maheshwari, managing director of Twitter India, in a prepared statement.
ShareChat serves 60 million users each month in 15 regional languages, Ankush Sachdeva, co-founder and CEO of the firm, told TechCrunch in an interview. The platform currently does not support English, and has no plans to change that, Sachdeva said.
That choice is what has driven users to ShareChat, he explained. In its early days, the platform experimented with supporting English. Most users chose English as their preferred language, but this led to another interesting development: their engagement with the app dropped significantly.
“For some reason, everyone wanted to converse in English. There was an inherent bias to pick English even when they did not know it.” (Only about 10% of India’s 1.3 billion people speak English. Hindi, a regional language, on the other hand, is spoken by about half a billion people, according to official government figures.)
So ShareChat pulled support for English. Today, an average user spends 22 minutes in the app each day, Sachdeva said. That early decision to remove English is just one of many things that have shaped ShareChat into what it is today and fueled its growth.
In 2014, Sachdeva and two of his friends — Bhanu Singh and Farid Ahsan, all of whom met at the prestigious IIT Kanpur — got the idea of building a debate platform after looking at the kinds of discussions people were having in Facebook groups.
They identified that cricket and movie stars were popular conversation topics, so they created WhatsApp groups and aggressively posted links to those groups on Facebook to attract users.
They then built chatbots to let users discover different genres of jokes, phone recommendations and food recipes, among other things. But they soon realized that users weren’t interested in most of these offerings.
“Nobody cared about our smartphone recommendations. All they wanted was to download wallpapers, ringtones, copy jokes and move on. They just wanted content.”
So in 2015, Sachdeva and company moved on from chatbots and created an app where users could easily produce, discover and share content in the languages they understand. (Today, user-generated content is one of the platform’s key attractions, with about 15% of its user base actively producing content.)
A year later, ShareChat, like tens of thousands of other businesses, was in for a pleasant surprise. India’s richest man, Mukesh Ambani, launched his new telecom network, Reliance Jio, which offered users vast amounts of mobile data at little to no charge for an extended period of time.
This immediately changed how millions of people in the country, who had once counted every megabyte they consumed online, interacted with the internet. On ShareChat, people quickly moved from sharing jokes and other messages in text format to images, and then videos.
That momentum continues today. ShareChat now plans to give users more incentives — including money — and tools to produce content on the platform to drive engagement. “There remains a huge hunger for content in vernacular languages,” Sachdeva said.
Speaking of money, ShareChat has experimented with ads on the app and its site, but revenue generation isn’t currently its primary focus, Sachdeva said. “We’re in the Series D now so there is obviously an obligation we have to our investors to make money. But we all believe that we need to focus on growth at this stage,” he said.
ShareChat also has many users in Bangladesh, Nepal and the Middle East, where many users speak Indian regional languages. But the startup currently plans to focus largely on expanding its user base in India.
It will use the new capital to strengthen the technology infrastructure and hire more tech talent. Sachdeva said ShareChat is looking to open an office in San Francisco to hire local engineers there.
A handful of local and global giants have emerged in India in recent years to cater to people in small cities and villages, who are just getting online. Pratilipi, a storytelling platform, has amassed more than 5 million users, for instance. It recently raised $15 million to expand its user base and help users strike deals with content studios.
Perhaps no other app poses a bigger challenge to ShareChat than TikTok, an app where users share short-form videos. TikTok, owned by ByteDance, one of the world’s most valuable startups, has over 120 million users in India and hosts content in many Indian languages.
But the app, with its ever-growing ambitions, also tends to land itself in hot water in India every few weeks, touching all sensitive corners of the country. On that front, ShareChat has an advantage. Over the years, it has emerged as an outlier in strongly supporting laws proposed by the Indian government that seek to make social apps more accountable for the content that circulates on their platforms.
US legislator David Cicilline will be joining the next meeting of the International Grand Committee on Disinformation and ‘Fake News’, it has been announced. The meeting will be held in Dublin on November 7.
Chair of the committee, the Irish Fine Gael politician Hildegarde Naughton, announced Cicilline’s inclusion today.
The congressman — who is chairman of the US House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee — will attend as an “ex officio member” which will allow him to question witnesses, she added.
Exactly who the witnesses in front of the grand committee will be is yet to be confirmed. But the inclusion of a US legislator in the ranks of a non-US committee that’s been seeking answers about reining in online disinformation will certainly make any invitations extended to senior executives at US-based tech giants much harder to ignore.
Naughton points out that the addition of American legislators also means the International Grand Committee represents ~730 million citizens — and “their right to online privacy and security”.
“The Dublin meeting will be really significant in that it will be the first time that US legislators will participate,” she said in a statement. “As all the major social media/tech giants were founded and are headquartered in the United States it is very welcome that Congressman Cicilline has agreed to participate. His own Committee is presently conducting investigations into Facebook, Google, Amazon and Apple and so his attendance will greatly enhance our deliberations.”
“Greater regulation of social media and tech giants is fast becoming a priority for many countries throughout the world,” she added. “The International Grand Committee is a gathering of international parliamentarians who have a particular responsibility in this area. We will coordinate actions to tackle online election interference, ‘fake news’, and harmful online communications, amongst other issues while at the same time respecting freedom of speech.”
The international committee met for its first session in London last November — when it was forced to empty-chair Facebook founder Mark Zuckerberg who had declined to attend in person, sending UK policy VP Richard Allan in his stead.
Lawmakers from nine countries spent several hours taking Allan to task over Facebook’s lack of accountability for problems generated by the content it distributes and amplifies, raising myriad examples of ongoing failure to tackle the democracy-denting, society-damaging disinformation — from election interference to hate speech whipping up genocide.
A second meeting of the grand committee was held earlier this year in Canada — taking place over three days in May.
Again Zuckerberg failed to show. Facebook COO Sheryl Sandberg also gave international legislators zero facetime, with the company opting to send local head of policy, Kevin Chan, and global head of policy, Neil Potts, as stand ins.
Lawmakers were not amused. Canadian MPs voted to serve Zuckerberg and Sandberg with an open summons — meaning they’ll be required to appear before the committee the next time they set foot in the country.
Parliamentarians in the UK also issued a summons for Zuckerberg last year after repeat snubs to testify to the Digital, Culture, Media and Sport committee’s enquiry into fake news — a decision that essentially gave birth to the international grand committee, as legislators in multiple jurisdictions united around a common cause of trying to find ways to hold social media giants to account.
While it’s not clear who the grand committee will invite to the next session, Facebook’s founder seems highly unlikely to have dropped off their list. And this time Zuckerberg and Sandberg may find it harder to turn down an invite to Dublin, given the committee’s ranks will include a homegrown lawmaker.
In a statement on joining the next meeting, Cicilline said: “We are living in a critical moment for privacy rights and competition online, both in the United States and around the world. As people become increasingly connected by what seem to be free technology platforms, many remain unaware of the costs they are actually paying.
“The Internet has also become concentrated, less open, and growingly hostile to innovation. This is a problem that transcends borders, and it requires multinational cooperation to craft solutions that foster competition and safeguard privacy online. I look forward to joining the International Grand Committee as part of its historic effort to identify problems in digital markets and chart a path forward that leads to a better online experience for everyone.”
Multiple tech giants (including Facebook) have their international headquarters in Ireland — making the committee’s choice of location for their next meeting a strategic one. Should any tech CEOs thus choose to snub an invite to testify to the committee they might find themselves being served with an open summons to testify by Irish parliamentarians — and not being able to set foot in a country where their international HQ is located would be more than a reputational irritant.
Ireland’s privacy regulator is also sitting on a stack of open investigations against tech giants — again with Facebook and Facebook owned companies producing the fattest file (some 11 investigations). But there are plenty of privacy and security concerns to go around, with the DPC’s current case file also touching tech giants including Apple, Google, LinkedIn and Twitter.
If you have ever worked at any sizable company, the word “IT” probably doesn’t conjure up many warm feelings. If you’re working for an old, traditional enterprise company, you probably don’t expect anything else, though. If you’re working for a modern tech company, though, chances are your expectations are a bit higher. And once you’re at the scale of a company like Facebook, a lot of the third-party services that work for smaller companies simply don’t work anymore.
To discuss how Facebook thinks about its IT strategy and why it now builds most of its IT tools in-house, I sat down with the company’s CIO, Atish Banerjea, at its Menlo Park headquarters.
Before joining Facebook in 2016 to head up what it now calls its “Enterprise Engineering” organization, Banerjea was the CIO or CTO at companies like NBCUniversal, Dex One and Pearson.
“If you think about Facebook 10 years ago, we were very much a traditional IT shop at that point,” he told me. “We were responsible for just core IT services, responsible for compliance and responsible for change management. But basically, if you think about the trajectory of the company, we were probably about 2,000 employees around the end of 2010. But at the end of last year, we were close to 37,000 employees.”
Traditionally, IT organizations rely on third-party tools and software, but as Facebook grew to this current size, many third-party solutions simply weren’t able to scale with it. At that point, the team decided to take matters into its own hands and go from being a traditional IT organization to one that could build tools in-house. Today, the company is pretty much self-sufficient when it comes to running its IT operations, but getting to this point took a while.
“We had to pretty much reinvent ourselves into a true engineering product organization and went to a full ‘build’ mindset,” said Banerjea. That’s not something every organization is obviously able to do, but, as Banerjea joked, one of the reasons why this works at Facebook “is because we can — we have that benefit of the talent pool that is here at Facebook.”
The company then took this talent and basically replicated the kind of team it would build on the customer side, with engineers, designers, product managers, content strategists and researchers. “We also made the decision at that point that we will hold the same bar and we will hold the same standards so that the products we create internally will be as world-class as the products we’re rolling out externally.”
One of the tools that wasn’t up to Facebook’s scaling challenges was video conferencing. The company was using a third-party tool for that, but that just wasn’t working anymore. In 2018, Facebook was consuming about 20 million conference minutes per month. In 2019, the company is now at 40 million per month.
Besides the obvious scaling challenge, Facebook is also doing this to be able to offer its employees custom software that fits their workflows. It’s one thing to adapt existing third-party tools, after all, and another to build custom tools to support a company’s business processes.
Banerjea told me that creating this new structure was a relatively easy sell inside the company. Every transformation comes with its own challenges, though. For Facebook’s Enterprise Engineering team, that included having to recruit new skill sets into the organization. The first few months of this process were painful, Banerjea admitted, as the company had to up-level the skills of many existing employees and shed a significant number of contractors. “There are certain areas where we really felt that we had to have Facebook DNA in order to make sure that we were actually building things the right way,” he explained.
Facebook’s structure creates an additional challenge for the team. When you’re joining Facebook as a new employee, you have plenty of teams to choose from, after all, and if you have the choice of working on Instagram or WhatsApp or the core Facebook app — all of which touch millions of people — working on internal tools with fewer than 40,000 users doesn’t sound all that exciting.
“When young kids who come straight from college and they come into Facebook, they don’t know any better. So they think this is how the world is,” Banerjea said. “But when we have experienced people come in who have worked at other companies, the first thing I hear is ‘oh my goodness, we’ve never seen internal tools of this caliber before.’ The way we recruit, the way we do performance management, the way we do learning and development — every facet of how that employee works has been touched in terms of their life cycle here.”
Facebook first started building these internal tools around 2012, though it wasn’t until Banerjea joined in 2016 that it rebranded the organization and set up today’s structure. He also noted that some of those original tools were good, but not up to the caliber employees would expect from the company.
“The really big change that we went through was up-leveling our building skills to really become at the same caliber as if we were to build those products for an external customer. We want to have the same experience for people internally.”
The company went as far as replacing and rebuilding the commercial Enterprise Resource Planning (ERP) system it had been using for years. If there’s one thing that big companies rely on, it’s their ERP systems, given they often handle everything from finance and HR to supply chain management and manufacturing. That’s basically what all of their backend tools rely on (and what companies like SAP, Oracle and others charge a lot of money for). “In that 2016/2017 time frame, we realized that that was not a very good strategy,” Banerjea said. In Facebook’s case, the old ERP handled the inventory management for its data centers, among many other things. When that old system went down, the company couldn’t ship parts to its data centers.
“So what we started doing was we started peeling off all the business logic from our backend ERP and we started rewriting it ourselves on our own platform,” he explained. “Today, for our ERP, the backend is just the database, but all the business logic, all of the functionality is actually all custom written by us on our own platform. So we’ve completely rewritten our ERP, so to speak.”
In practice, all of this means that ideally, Facebook’s employees face far less friction when they join the company, for example, or when they need to replace a broken laptop, get a new phone to test features or simply order a new screen for their desk.
One classic use case is onboarding, where new employees get their company laptop, mobile phones and access to all of their systems. At Facebook, that’s also the start of a six-week bootcamp that gets new engineers up to speed with how things work there. Back in 2016, when new classes still tended to have fewer than 200 new employees, that was mostly a manual task. Today, with far more incoming employees, the Enterprise Engineering team has automated most of it — including managing the supply chain that ensures the laptops and phones for these new employees are actually available.
But the team also built the backend that powers the company’s more traditional IT help desks, where employees can walk up and get their issues fixed (and passwords reset).
To talk more about how Facebook handles the logistics of that, I sat down with Koshambi Shah, who heads up the company’s Enterprise Supply Chain organization, which pretty much handles every piece of hardware and software the company delivers and deploys to its employees around the world (and that global nature of the company brings its own challenges and additional complexity). The team, which has fewer than 30 people, is made up of employees with experience in manufacturing, retail and consumer supply chains.
Typically, enterprises offer a minimal set of choices when it comes to the laptops and phones they issue to employees, and the operating systems that can run on them tend to be limited. Facebook’s engineers have to be able to test new features on a wide range of devices and operating systems. There are, after all, still users on the iPhone 4s or BlackBerry that the company wants to support. To do this, Shah’s organization actually makes thousands of SKUs available to employees and is able to deliver 98% of them within three days or less. It’s not just sending a laptop via FedEx, though. “We do the budgeting, the financial planning, the forecasting, the supply/demand balancing,” Shah said. “We do the asset management. We make sure the asset — what is needed, when it’s needed, where it’s needed — is there consistently.”
In many large companies, every asset request is second-guessed. Facebook, on the other hand, places a lot of trust in its employees, it seems. There’s a self-service portal, the Enterprise Store, that allows employees to easily request phones, laptops, chargers (which get lost a lot) and other accessories as needed, without having to wait for approval (though if you request a laptop every week, somebody will surely want to have a word with you). Everything is obviously tracked in detail, but the overall experience is closer to shopping at an online retailer than using an enterprise asset management system. The Enterprise Store will tell you where a device is available, for example, so you can pick it up yourself (but you can always have it delivered to your desk, too, because this is, after all, a Silicon Valley company).
For accessories, Facebook also offers self-service vending machines, and employees can walk up to the help desk.
The company also recently introduced an Amazon Locker-style setup that allows employees to check out devices as needed. At these smart lockers, employees simply have to scan their badge, choose a device and, once the appropriate door has opened, pick up the phone, tablet, laptop or VR devices they were looking for and move on. Once they are done with it, they can come back and check the device back in. No questions asked. “We trust that people make the right decision for the good of the company,” Shah said. For laptops and other accessories, the company does show the employee the price of those items, though, so it’s clear how much a certain request costs the company. “We empower you with the data for you to make the best decision for your company.”
Talking about cost, Shah told me the Supply Chain organization tracks a number of metrics. One of those is obviously cost. “We do give back about 4% year-over-year, that’s our commitment back to the businesses in terms of the efficiencies we build for every user we support. So we measure ourselves in terms of cost per supported user. And we give back 4% on an annualized basis in the efficiencies.”
Unsurprisingly, the company has by now gathered enough data about employee requests (Shah said the team fulfills about half a million transactions per year) that it can use machine learning to understand trends and be proactive about replacing devices, for example.
Facebook’s Enterprise Engineering group doesn’t just support internal customers, though. It also runs the company’s internal and external events, including the likes of F8, Facebook’s annual developer conference. To do this, the company built out conference rooms that can seat thousands of people, with all of the logistics that go with that.
The company also showed me one of its newest meeting rooms where there are dozens of microphones and speakers hanging from the ceiling that make it easier for everybody in the room to participate in a meeting and be heard by everybody else. That’s part of what the organization’s “New Builds” team is responsible for, and something that’s possible because the company also takes a very hands-on approach to building and managing its offices.
Facebook also runs a number of small studios in its Menlo Park and New York offices, where both employees and the occasional external VIP can host Facebook Live videos.
Indeed, live video, it seems, is one of the cornerstones of how Facebook employees collaborate and help employees who work from home. Typically, you’d just use the camera on your laptop or maybe a webcam connected to your desktop to do so. But because Facebook actually produces its own camera system with the consumer-oriented Portal, Banerjea’s team decided to use that.
“What we have done is we have actually re-engineered the Portal,” he told me. “We have connected with all of our video conferencing systems in the rooms. So if I have a Portal at home, I can dial into my video conferencing platform and have a conference call just like I’m sitting in any other conference room here in Facebook. And all that software, all the engineering on the portal, that has been done by our teams — some in partnership with our production teams, but a lot of it has been done with Enterprise Engineering.”
Unsurprisingly, there are also groups that manage some of the core infrastructure and security for the company’s internal tools and networks. All of those tools run in the same data centers as Facebook’s consumer-facing applications, though they are obviously sandboxed and isolated from them.
It’s one thing to build all of these tools for internal use, but now, the company is also starting to think about how it can bring some of these tools it built for internal use to some of its external customers. You may not think of Facebook as an enterprise company, but with its Workplace collaboration tool, it has an enterprise service that it sells externally, too. Last year, for the first time, Workplace added a new feature that was incubated inside of Enterprise Engineering. That feature was a version of Facebook’s public Safety Check that the Enterprise Engineering team had originally adapted to the company’s own internal use.
“Many of these things that we are building for Facebook, because we are now very close partners with our Workplace team — they are in the enterprise software business and we are the enterprise software group for Facebook — and many [features] we are building for Facebook are of interest to Workplace customers.”
As Workplace hit the market, Banerjea ended up talking to the CIOs of potential users, including the likes of Delta Air Lines, about how Facebook itself used Workplace internally. But as companies started to adopt Workplace, they realized that they needed integrations with existing third-party services like ERP platforms and Salesforce. Those companies then asked Facebook if it could build those integrations or work with partners to make them available. But at the same time, those customers got exposed to some of the tools that Facebook itself was building internally.
“Safety Check was the first one,” Banerjea said. “We are actually working on three more products this year.” He wouldn’t say what these are, of course, but there is clearly a pipeline of tools that Facebook has built for internal use that it is now looking to commercialize. That’s pretty unusual for any IT organization, which, after all, tends to only focus on internal customers. I don’t expect Facebook to pivot to an enterprise software company anytime soon, but initiatives like this are clearly important to the company and, in some ways, to the morale of the team.
This creates a bit of friction, too, though, given that the Enterprise Engineering group’s mission is to build internal tools for Facebook. “We are now figuring out the deployment model,” Banerjea said. Who, for example, is going to support the external tools the team built? Is it the Enterprise Engineering group or the Workplace team?
Chances are then, that Facebook will bring some of the tools it built for internal use to more enterprises in the long run. That definitely puts a different spin on the idea of the consumerization of enterprise tech. Clearly, not every company operates at the scale of Facebook and needs to build its own tools — and even some companies that could benefit from it don’t have the resources to do so. For Facebook, though, that move seems to have paid off and the tools I saw while talking to the team definitely looked more user-friendly than any off-the-shelf enterprise tools I’ve seen at other large companies.
This week, Automattic revealed it has signed all the paperwork to acquire Tumblr from Verizon, including its full staff of 200. Tumblr has undergone quite a journey since its headline-grabbing acquisition by Marissa Mayer’s Yahoo in 2013 for $1.1 billion, but after six years of neglect, this latest move is its first real fresh start since it stopped being an independent company. Now, it’s in the hands of Matt Mullenweg, the only founder of a major tech company who has repeatedly demonstrated a talent for measured responses, moderation and a willingness to forgo reckless explosive growth in favor of getting things ‘just right.’
There’s never been a better acquisition for all parties involved, or at least one in which every party should walk away feeling they got exactly what they needed out of the deal. Yes, that’s in spite of the reported $3 million-ish asking price.
Verizon Media acquired Tumblr through a deal made to buy Yahoo, under a previous media unit strategy and leadership team. Verizon Media has no stake in the company, and so headlines talking about the bath it apparently took relative to the original $1.1 billion acquisition price are either willfully ignorant or just plain dumb.
Six years after another company made that bad deal for a company it clearly didn’t have the right business focus to correctly operate, Verizon made a good one to recoup some money.
If you use Instagram and have noticed a bunch of strangers watching your Stories in recent months — accounts that don’t follow you and seem to be Russian — well, you’re not alone.
Nor are you being primed for a Russian disinformation campaign. At least, probably not. But you’re right to smell a fake.
TechCrunch’s very own director of events, Leslie Hitchcock, flagged the issue to us — complaining of “eerie” views on her Instagram Stories in the last couple of months from random Russian accounts, some seemingly genuine (such as artists with several thousand followers) and others simply “weird” looking.
A thread on Reddit also poses the existential question: “Why do Russian Models (that don’t follow me) keep watching my Instagram stories?” (The answer to which is: Not for the reason you hope.)
Instagram told us it is aware of the issue and is working on a fix.
It also said this inauthentic activity is not related to misinformation campaigns, but is rather a new growth hacking tactic in which accounts pay third parties to boost their profiles with fake likes, followers and comments — in this case, by watching the Instagram Stories of people they have no real interest in, in the hope that this makes the accounts look real and nets them more followers.
Eerie is spot on. Some of these growth hackers probably have banks of phones set up where Instagram Stories are ‘watched’ without being watched. (Which obviously isn’t going to please any advertisers paying to inject ads into Stories… )
A UK social media agency called Hydrogen also noticed the issue back in June — blogging then that: “Mass viewing of Instagram Stories is the new buying followers of 2019”, i.e. as a consequence of the Facebook-owned social network cracking down on bots and paid-for followers on the platform.
So, tl;dr, squashing fakes is a perpetual game of whack-a-mole. Let’s call it Zuckerberg’s bane.
“Our research has found that several small social media agencies are using this as a technique to seem like they are interacting with the public,” Hydrogen also wrote, before going on to offer sage advice that: “This is not a good way to build a community, and we believe that Instagram will begin cracking down on this soon.”
Instagram confirmed to us it is attempting to crack down — saying it’s working to try to get rid of this latest eyeball-faking flavor of inauthentic activity. (We paraphrase.)
It also said that, in the coming months, it will introduce new measures to reduce such activity — specifically from Stories — but without saying exactly what these will be.
We also asked about the Russian element but Instagram was unable to provide any intelligence on why a big proportion of the fake Stories views seem to be coming from Russia (without any love). So that remains a bit of a mystery.
What can you do right now to prevent your Instagram Stories from being repurposed as a virtue-less signalling machine for sucking up naive eyeballs?
Switching your profile to private is the only way to thwart the growth hackers, for now.
Albeit, that means you’re limiting who you can reach on the Instagram platform as well as who can reach you.
When we suggested to Hitchcock she switch her account to private she responded with a shrug, saying: “I like to engage with brands.”
At a conference on the future challenges of intelligence organizations held in 2018, former Director of National Intelligence Dan Coats argued that the transformation of the American intelligence community must be a revolution rather than an evolution. The community must be innovative and flexible, capable of rapidly adopting innovative technologies wherever they may arise.
Intelligence communities across the Western world are now at a crossroads: The growing proliferation of technologies, including artificial intelligence, Big Data, robotics, the Internet of Things, and blockchain, changes the rules of the game. The proliferation of these technologies – most of which are civilian – could create data breaches and lead to backdoor threats for intelligence agencies. Furthermore, since they are affordable and ubiquitous, they could be used for malicious purposes.
The technological breakthroughs of recent years have led intelligence organizations to challenge the accepted truths that have historically shaped their endeavors. The hierarchical, compartmentalized, industrial structure of these organizations is now changing, revolving primarily around the integration of new technologies with traditional intelligence work and the redefinition of the role of the humans in the intelligence process.
Take for example Open-Source Intelligence (OSINT) – a concept created by the intelligence community to describe information that is unclassified and accessible to the general public. Traditionally, this kind of information was inferior compared to classified information; and as a result, the investments in OSINT technologies were substantially lower compared to other types of technologies and sources. This is changing now; agencies are now realizing that OSINT is easy to acquire and more beneficial, compared to other – more challenging – types of information.
Yet this understanding has been slow to trickle down, as the use of OSINT by intelligence organizations still involves cumbersome processes, including slow and complex integration of unclassified and classified IT environments. It isn’t surprising, therefore, that intelligence executives – for example the Head of the State Department’s Intelligence Arm or the nominee to become the Director of the National Reconnaissance Office – recently argued that one of the community’s greatest challenges is the quick and efficient integration of OSINT into its operations.
Indeed, technological innovations have always been central to the intelligence profession. But when it came to processing, analyzing, interpreting, and acting on intelligence, human ability – with all its limitations – has always been considered unquestionably superior. There is no question that the proliferation of data and data sources necessitates a better system of prioritization and analysis. But who should have supremacy: humans or machines?
A man crosses the Central Intelligence Agency (CIA) seal in the lobby of CIA Headquarters in Langley, Virginia, on August 14, 2008. (Photo: SAUL LOEB/AFP/Getty Images)
Big data comes for the spy business
The discourse is tempestuous. Intelligence veterans claim that there is no substitute for human judgment. They argue that artificial intelligence will never be capable of comprehending the full spectrum of considerations in strategic decision-making, and that it cannot evaluate abstract issues in the interpretation of human behavior. Machines can collect data and perhaps identify patterns, but they will never succeed in interpreting reality as do humans. Others also warn of the ethical implications of relying on machines for life-or-death situations, such as a decision to go to war.
In contrast, techno-optimists claim that human superiority, which defined intelligence activities over the last century, is already bowing to technological superiority. While humans are still significant, their role is no longer exclusive, and perhaps not even the most important in the process. How can the average intelligence officer cope with the ceaseless volumes of information that the modern world produces?
From 1995 to 2016, the amount of reading required of an average US intelligence researcher, covering a low-priority country, grew from 20,000 to 200,000 words per day. And that is just the beginning. According to forecasts, the volume of digital data that humanity will produce in 2025 will be ten times greater than is produced today. Some argue this volume can only be processed – and even analyzed – by computers.
Of course, the most ardent advocates for integration of machines into intelligence work are not removing human involvement entirely; even the most skeptical do not doubt the need to integrate artificial intelligence into intelligence activities. The debate centers on the question of who will help whom: machines in aid of humans or humans in aid of machines.
Most insiders agree that the key to moving intelligence communities into the 21st century lies in breaking down inter- and intra-organizational walls, including between the services within the national security establishment; between the public sector, the private sector, and academia; and between intelligence services of different countries.
It isn’t surprising therefore that the push toward technological innovation is a part of the current intelligence revolution. The national security establishment already recognizes that the private sector and academia are the main drivers of technological innovation.
Alexander Karp, chief executive officer and co-founder of Palantir Technologies Inc., at the Allen & Co. Media and Technology Conference in Sun Valley, Idaho, on July 7, 2016. Photographer: David Paul Morris/Bloomberg via Getty Images
Private services and national intelligence
In the United States there is dynamic cooperation between these bodies and the security community, including venture capital funds jointly owned by the government and private companies.
Take In-Q-Tel – a venture capital fund established 20 years ago to identify and invest in companies that develop innovative technology which serves the national security of the United States, thus positioning the American intelligence community at the forefront of technological development. The fund is an independent corporation, which is not subordinate to any government agency, but it maintains constant coordination with the CIA, and the US government is the main investor.
Its most successful endeavor, which has grown into a multi-billion-dollar company (though a somewhat controversial one), is Palantir, a data-integration and knowledge management provider. But there are copious other startups and more established companies, ranging from sophisticated chemical detection (e.g. 908devices), automated language translation (e.g. Lilt) and digital imagery (e.g. Immersive Wisdom) to sensor technology (e.g. Echodyne), predictive analytics (e.g. Tamr) and cybersecurity (e.g. Interset).
Actually, a significant part of intelligence work is already being done by such companies, small and big. Companies like Hexagon, Nice, Splunk, Cisco and NEC offer intelligence and law enforcement agencies a full suite of platforms and services, including various analytical solutions such as video analytics, identity analytics, and social media analytics. These platforms help agencies to obtain insights and make predictions from the collected and historic data, by using real-time data stream analytics and machine learning. A one-stop intelligence shop, if you will.
Another example of government and non-government collaboration is the Intelligence Advanced Research Projects Activity (IARPA) – an organization which reports to the Director of National Intelligence (DNI). Established in 2006, IARPA finances advanced research relevant to the American intelligence community, with a focus on cooperation between academic institutions and the private sector, in a broad range of technological and social sciences fields. With a relatively small annual operational budget of around $3bn, the fund gives priority to multi-year development projects that meet the concrete needs of the intelligence community. The majority of the studies supported by the fund are unclassified and open to public scrutiny, at least until the stage of implementation by intelligence agencies.
Challenging government hegemony in the intelligence industry
These are all exciting opportunities; however, the future holds several challenges for intelligence agencies:
First, intelligence communities lose their primacy over collecting, processing and disseminating data. Until recently, the organizations’ raison d’être was, first and foremost, to obtain information about the enemy, before said enemy could disguise that information.
Today, however, a lot of information is available, and a plethora of off-the-shelf tools (some of which are free) allow all parties, including individuals, to collect, process and analyze vast amounts of data. Just look at IBM’s i2 Analyst’s Notebook, which gives analysts, for just a few thousand dollars, multidimensional visual analysis capabilities so they can quickly uncover hidden connections and patterns in data. Until recently, such capabilities belonged only to governmental organizations.
A second challenge for intelligence organizations lies in the nature of the information itself and its many different formats, as well as in the collection and processing systems, which are usually separate and lacking standardization. As a result, it is difficult to merge all of the available information into a single product. For this reason, intelligence organizations are developing concepts and structures which emphasize cooperation and decentralization.
The private market offers a variety of tools for merging information; ranging from simple off-the-shelf solutions, to sophisticated tools that enable complex organizational processes. Some of the tools can be purchased and quickly implemented – for example, data and knowledge sharing and management platforms – while others are developed by the organizations themselves to meet their specific needs.
The third challenge relates to the change in the principle of intelligence prioritization. In the past, the collection of information about a given target required a specific decision to do so and dedicated resources to be allocated for that purpose, generally at the expense of allocation of resources to a different target. But in this era of infinite quantities of information, almost unlimited access to information, advanced data storage capabilities and the ability to manipulate data, intelligence organizations can now collect and store information on a massive scale, without the need to immediately process it – rather, it may be processed as required.
This development leads to other challenges, including: the need to pinpoint the relevant information when required; to process the information quickly; to identify patterns and draw conclusions from mountains of data; and to make the knowledge produced accessible to the consumer. It is therefore not surprising that most of the technological advancements in the intelligence field respond to these challenges, bringing together technologies such as big data with artificial intelligence, advanced information storage capabilities and advanced graphical presentation of information, usually in real time.
Lastly, intelligence organizations were built and operate according to concepts developed at the peak of the industrial era, which championed the assembly-line principle – at once linear and cyclical. The linear model of the intelligence cycle – collection, processing, research, distribution and feedback from the consumer – has become less relevant. In this new era, the boundaries between the various intelligence functions, and between intelligence organizations and their ecosystem, are increasingly blurred.
The brave new world of intelligence
A new order of intelligence work is therefore required, and intelligence organizations are currently in the midst of a redefinition process. Traditional divisions – e.g. between collection and research; between internal security organizations and positive intelligence; and between the public and private sectors – are all becoming obsolete. This is not another attempt to carry out structural reforms: there is a sense of epistemological rupture which requires a redefinition of the discipline, of the relationships that intelligence organizations have with their environments – from decision makers to the general public – and of the structures and conceptions underpinning them.
And of course, there are even wider concerns: legislators need to create a legal framework that incorporates data-based assessments, takes the predictive aspects of these technologies into account, and still protects the privacy and security rights of individual citizens in nation-states that respect those concepts.
Despite the recognition of the profound changes taking place around them, today’s intelligence institutions are still built and operated in the spirit of Cold War conceptions. In a sense, intelligence organizations have not internalized the complexity that characterizes the present time – a complexity which requires abandoning the dichotomous (inside and outside) perception of the intelligence establishment, as well as the understanding of the intelligence enterprise and government bodies as having a monopoly on knowledge; concepts that have become obsolete in an age of decentralization, networking and increasing prosperity.
Although some doubt the ability of intelligence organizations to transform and adapt themselves to the challenges of the future, there is no doubt that they must do so in this era in which speed and relevance will determine who prevails.
Telegram, a popular instant messaging app, has introduced a new feature to give group admins on the app better control over how members engage, the latest in a series of interesting features it has rolled out in recent months to expand its appeal.
The feature, dubbed Slow Mode, allows a group administrator to dictate how often a member can send a message in the group. If a group enables it, members who have sent a message will have to wait from 30 seconds to as long as an hour before they can say something again in that group.
The messaging platform, which had more than 200 million monthly active users as of early 2018, said the new feature was aimed at making conversations in groups “more orderly” and raising the “value of each individual message.” It suggested admins “keep [the feature] on permanently, or toggle as necessary to throttle rush hour traffic.”
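Mechanically, Slow Mode is a per-user cooldown on messages. Here is a minimal, hypothetical sketch of how such a throttle might work (the class and method names are illustrative assumptions for this article, not Telegram’s actual implementation):

```python
import time


class SlowModeGroup:
    """Hypothetical sketch of a Slow Mode throttle: each member must wait
    `cooldown_seconds` between messages. Not Telegram's actual code."""

    def __init__(self, cooldown_seconds=30):
        self.cooldown = cooldown_seconds
        self.last_sent = {}  # user_id -> timestamp of that user's last accepted message

    def try_send(self, user_id, now=None):
        """Return True if the user's message is accepted, False if throttled."""
        now = time.time() if now is None else now
        last = self.last_sent.get(user_id)
        if last is not None and now - last < self.cooldown:
            return False  # still within the cooldown window
        self.last_sent[user_id] = now
        return True


group = SlowModeGroup(cooldown_seconds=30)
print(group.try_send("alice", now=0))   # True: first message is accepted
print(group.try_send("alice", now=10))  # False: only 10s into a 30s cooldown
print(group.try_send("alice", now=45))  # True: cooldown has elapsed
```

The cooldown applies per member, so one chatty user being throttled doesn’t block anyone else, which matches the stated goal of raising the value of each individual message rather than muting the group.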
As tech platforms including WhatsApp grapple with containing the spread of misinformation on their messaging services, the new addition from Telegram, which has largely remained immune to any similar controversies, illustrates how proactively it works on adding features to control the flow of information on its platform.
In comparison, WhatsApp has enforced limits on how often a user could forward a text message and is using machine learning techniques to weed out fraudulent users during the sign up procedure itself.
Shivnath Thukral, Director of Public Policy for Facebook in India and South Asia, said at a conference this month that virality of content has dropped by 25% to 30% on WhatsApp since the messaging platform introduced limits on forwards.
Telegram isn’t marketing the “Slow Mode” as a way to tackle the spread of false information, though. Instead, it says the feature would give users more “peace of mind.” Indeed, unlike WhatsApp, which allows up to 256 users to be part of a group, up to a whopping 200,000 users can join a Telegram group.
this new Telegram groups feature is so interesting pic.twitter.com/763mHGmZ0u
— freia lobo (@freialobo) August 10, 2019
On a similar tone, Telegram has also added an option that will enable users to send a message without invoking a sound notification at the recipient’s end. “Simply hold the Send button to have any message or media delivered without sound,” the app maker said. “Your recipient will get a notification as usual, but their phone won’t make a sound – even if they forgot to enable the Do Not Disturb mode.”
Telegram has also introduced a range of other small features, such as the ability for group owners to add custom titles for admins. Videos on the app now display thumbnail previews when a user scrubs through them, making it easier for them to find the right moment. As on YouTube, users on Telegram can now share a video that jumps directly to a certain timestamp. Users can also animate their emojis now — if they are into that sort of thing.
In June, Telegram introduced a number of location-flavored features to allow users to quickly exchange contact details without needing to type in digits.
The White House is contemplating issuing an executive order that would widen its attack on the operations of social media companies.
The White House has prepared an executive order called “Protecting Americans from Online Censorship” that would give the Federal Communications Commission oversight of how Facebook, Twitter and other tech companies monitor and manage their social networks, according to a CNN report.
Under the order, which has not yet been announced and could be revised, the FCC would be tasked with developing new regulations that would determine when and how social media companies filter posts, videos or articles on their platforms.
The draft order also calls for the Federal Trade Commission to take those new policies into account when investigating or filing lawsuits against technology companies, according to the CNN report.
Social media censorship has been a perennial talking point for President Donald Trump and his administration. In May, the White House set up a tip line for people to provide evidence of social media censorship and a systemic bias against conservative media.
In the executive order, the White House says it received more than 15,000 complaints about censorship by the technology platforms. The order also includes an offer to share the complaints with the Federal Trade Commission.
As part of the order, the Federal Trade Commission would be required to open a public complaint docket and coordinate with the Federal Communications Commission on investigations of how technology companies curate their platforms — and whether that curation is politically agnostic.
Under the proposed rule, any company whose monthly user base includes more than one-eighth of the U.S. population would be subject to oversight by the regulatory agencies. A roster of companies subject to the new scrutiny would include Facebook, Google, Instagram, Twitter, Snap and Pinterest .
At issue is how broadly or narrowly companies are protected under the Communications Decency Act, which was part of the Telecommunications Act of 1996. Social media companies use the Act to shield against liability for the posts, videos or articles that are uploaded from individual users or third parties.
The Trump administration isn’t the only political force in Washington focused on the laws that shield social media platforms from legal liability. House Speaker Nancy Pelosi took technology companies to task earlier this year in an interview with Recode.
The criticisms may come from different sides of the political spectrum, but their focus on the ways in which tech companies could use Section 230 of the Act is the same.
The White House’s executive order would ask the FCC to disqualify social media companies from immunity if they remove or limit the dissemination of posts without first notifying the user or third party that posted the material, or if the decision from the companies is deemed anti-competitive or unfair.
The FTC and FCC had not responded to a request for comment at the time of publication.
Sometimes it does seem the entire tech industry could use someone to talk to, like a good therapist or social worker. That might sound like an insult, but I mean it mostly earnestly: I am a chaplain who has spent 15 years talking with students, faculty, and other leaders at Harvard (and more recently MIT as well), mostly nonreligious and skeptical people like me, about their struggles to figure out what it means to build a meaningful career and a satisfying life, in a world full of insecurity, instability, and divisiveness of every kind.
In related news, I recently took a year-long paid sabbatical from my work at Harvard and MIT, to spend 2019-20 investigating the ethics of technology and business (including by writing this column at TechCrunch). I doubt it will shock you to hear I’ve encountered a lot of amoral behavior in tech, thus far.
A less expected and perhaps more profound finding, however, has been what the introspective founder Priyag Narula of LeadGenius tweeted at me recently: that behind the hubris and Machiavellianism one can find in tech companies is a constant struggle with anxiety and an abiding feeling of inadequacy among tech leaders.
In tech, just like at places like Harvard and MIT, people are stressed. They’re hurting, whether or not they even realize it.
So when Harvard’s Berkman Klein Center for Internet and Society recently posted an article whose headline began, “Why AI Needs Social Workers…”… it caught my eye.
The article, it turns out, was written by Columbia University Professor Desmond Patton. Patton is a Public Interest Technologist and a pioneer in the use of social media and artificial intelligence in the study of gun violence, and he is the founding Director of Columbia’s SAFElab and Associate Professor of Social Work, Sociology and Data Science at Columbia University.
A trained social worker and decorated social work scholar, Patton has also become a big name in AI circles in recent years. If Big Tech ever decided to hire a Chief Social Work Officer, he’d be a sought-after candidate.
It further turns out that Patton’s expertise — in online violence & its relationship to violent acts in the real world — has been all too “hot” a topic this past week, with mass murderers in both El Paso, Texas and Dayton, Ohio having been deeply immersed in online worlds of hatred which seemingly helped lead to their violent acts.
Fortunately, we have Patton to help us understand all of these issues. Here is my conversation with him: on violence and trauma in tech on and offline, and how social workers could help; on deadly hip-hop beefs and “Internet Banging” (a term Patton coined); hiring formerly gang-involved youth as “domain experts” to improve AI; how to think about the likely growing phenomenon of white supremacists live-streaming barbaric acts; and on the economics of inclusion across tech.
Greg Epstein: How did you end up working in both social work and tech?
Desmond Patton: At the heart of my work is an interest in root causes of community-based violence, so I’ve always identified as a social worker that does violence-based research. [At the University of Chicago] my dissertation focused on how young African American men navigated violence in their community on the west side of the city while remaining active in their school environment.
[From that work] I learned more about the role of social media in their lives. This was around 2011, 2012, and one of the things that kept coming through in interviews with these young men was how social media was an important tool for navigating both safe and unsafe locations, but also an environment that allowed them to project a multitude of selves. To be a school self, to be a community self, to be who they really wanted to be, to try out new identities.