EU adopts rules on one-hour takedowns for terrorist content

By Natasha Lomas

The European Parliament approved a new law on terrorist content takedowns yesterday, paving the way for one-hour removals to become the legal standard across the EU.

The regulation “addressing the dissemination of terrorist content online” will come into force shortly after publication in the EU’s Official Journal — and start applying 12 months after that.

The incoming regime means providers serving users in the region must act on terrorist content removal notices from Member State authorities within one hour of receipt, or else provide an explanation of why they have been unable to do so.

There are exceptions for educational, research, artistic and journalistic work — with lawmakers aiming to target terrorism propaganda being spread on online platforms like social media sites.

The types of content they want speedily removed under this regime include material that incites, solicits or contributes to terrorist offences; provides instructions for such offences; or solicits people to participate in a terrorist group.

Material posted online that provides guidance on how to make and use explosives, firearms or other weapons for terrorist purposes is also in scope.

However, concerns have been raised over the impact on online freedom of expression — including that platforms may turn to content filters to shrink their risk, given the tight turnaround times required for removals.

The law does not put a general obligation on platforms to monitor or filter content but it does push service providers to prevent the spread of proscribed content — saying they must take steps to prevent propagation.

It is left up to service providers how exactly they do that, and while there’s no legal obligation to use automated tools, it seems likely filters will be what larger providers reach for — with the risk of unjustified, speech-chilling takedowns fast following.

Another concern is how exactly terrorist content is being defined under the law — with civil rights groups warning that authoritarian governments within Europe might seek to use it to go after critics based elsewhere in the region.

The law does include transparency obligations — meaning providers must publicly report information about content identification and takedown actions annually.

On the sanctions side, Member States are responsible for adopting rules on penalties but the regulation sets a top level of fines for repeatedly failing to comply with provisions at up to 4% of global annual turnover.

EU lawmakers proposed the new rules back in 2018, when concern was riding high over the spread of ISIS content online.

Platforms were pressed to abide by an informal one-hour takedown rule in March of the same year. But within months the Commission came out with a more expansive proposal for a regulation aimed at “preventing the dissemination of terrorist content online”.

Negotiations over the proposal have seen MEPs and Member States (via the Council) tweaking provisions — with the former, for example, pushing for a provision that requires a competent authority to contact companies that have never received a removal order a little in advance of issuing their first one — to provide them with information on procedures and deadlines — so they’re not caught entirely on the hop.

The impact on smaller content providers has continued to be a concern for critics, though.

The Council adopted its final position in March. The approval by the Parliament yesterday concludes the co-legislative process.

Commenting in a statement, MEP Patryk Jaki, the rapporteur for the legislation, said: “Terrorists recruit, share propaganda and coordinate attacks on the internet. Today we have established effective mechanisms allowing member states to remove terrorist content within a maximum of one hour all around the European Union. I strongly believe that what we achieved is a good outcome, which balances security and freedom of speech and expression on the internet, protects legal content and access to information for every citizen in the EU, while fighting terrorism through cooperation and trust between states.”

For startups choosing a platform, a decision looms: Build or buy?

By Annie Siebert
TX Zhuo Contributor
TX Zhuo is the managing partner of Fika Ventures, focusing on fintech, enterprise software and marketplace opportunities.
Colton Pace Contributor
Colton Pace is an investor at Fika Ventures. He previously held roles investing at Vulcan Capital and Madrona Venture Labs.

Everyone warns you not to build on top of someone else’s platform.

When I first started in VC more than 10 years ago, I was told never to invest in a company building on top of another company’s platform. Dependence on a platform makes you susceptible to failure and caps the return on your investment because you have no control over API access, pricing changes and end-customer data, among other legitimate concerns.

I am sure many of you recall Facebook shutting down its API access back in 2015, or the uproar Apple caused when it decided to change the commission it was charging app developers in 2020.

Put simply, founders can no longer avoid the decision around platform dependency.

Salesforce in many ways paved the way for large enterprise platform companies, being the first dedicated SaaS company to surpass $10 billion in annual revenue supported by its open application development marketplace. Salesforce’s success has given rise to dominant platforms in other verticals, and for founders starting companies, there is no avoiding that platform decision these days.

Some points to consider:

  • Over 4,000 fintech companies, including several unicorns, have built their platforms on top of Plaid.
  • Recruiters may complain about the cost, but 95% still utilize LinkedIn.
  • More than 20,000 companies trust Segment to be their system of record for customer data.
  • Shopify powers over 1 million businesses across the globe.
  • Epic has the medical records of nearly 50% of the U.S. population.

What does this mean for founders who decide to build on top of another platform?

Increase speed to market

PostScript, an SMS/MMS marketing platform for commerce brands, built its platform on Shopify, giving it immediate access to over 1 million brands and a direct customer acquisition funnel. That has allowed PostScript to capture 3,500 of its own customers and successfully close a $35 million Series B in March 2021.

Ability to focus on core functionality

Varo, one of the fastest-growing neobanks, started in 2015 with the principle that a bank could put customers’ interests first and be profitable. But in order to deliver on its mission, it needed to understand where its customers were spending their money. By partnering with Plaid, Varo enabled more than 176,000 of its users to connect their Varo account to outside apps and services, allowing Varo to focus on its core mission to provide more relevant financial products and services.
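
To make that concrete, here is a minimal sketch of this kind of account-linking flow, written in TypeScript against Plaid’s Node SDK. The client name, product list and sandbox credentials are illustrative assumptions, not details of Varo’s actual integration: the server mints a short-lived Link token, the user authenticates with their bank through Plaid’s Link UI, and the server can later exchange the resulting public token for a long-lived access token.

```typescript
import {
  Configuration,
  CountryCode,
  PlaidApi,
  PlaidEnvironments,
  Products,
} from "plaid";

// Configure a sandbox Plaid client (credentials come from the environment).
const client = new PlaidApi(
  new Configuration({
    basePath: PlaidEnvironments.sandbox,
    baseOptions: {
      headers: {
        "PLAID-CLIENT-ID": process.env.PLAID_CLIENT_ID ?? "",
        "PLAID-SECRET": process.env.PLAID_SECRET ?? "",
      },
    },
  })
);

// Mint a short-lived Link token for one user; the client-side Link UI
// exchanges it for a public token, which the server can then swap for a
// long-lived access token to read accounts and transactions.
async function createLinkToken(userId: string): Promise<string> {
  const response = await client.linkTokenCreate({
    user: { client_user_id: userId },
    client_name: "Example Neobank", // hypothetical, not Varo
    products: [Products.Auth, Products.Transactions],
    country_codes: [CountryCode.Us],
    language: "en",
  });
  return response.data.link_token;
}
```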

Gain credibility by association

Pearpop raises $16 million from The Chainsmokers, Alexis Ohanian, Amy Schumer, Kevin Hart, Mark Cuban, Marshmello, and Snoop Dogg

By Jonathan Shieber

Pearpop, the marketplace for social collaborations between the teeming hordes of musicians, craftspeople, chefs, clowns, diarists, dancers, artists, actors, acrobats, aspiring celebrities and actual celebrities, has raised $16 million in funding that includes what seems like half of Hollywood, along with Alexis Ohanian’s Seven Seven Six venture firm and Bessemer Venture Partners.

The funding was actually split between a $6 million seed round (co-led by Ashton Kutcher and Guy Oseary’s Sound Ventures and Slow Ventures, with participation from Atelier Ventures and Chapter One Ventures) and a $10 million additional investment led by Ohanian’s Seven Seven Six with participation from Bessemer.

TechCrunch first covered pearpop last year and there’s no denying that the startup is on to something. It basically takes Cameo’s celebrity marketplace for private shout-outs and makes it public, allowing social media personalities to boost their followings by paying more popular personalities to shout out, duet, or comment on their posts.

“I’ve invested in pearpop because it’s been on my mind for a while that the creator economy has resulted in a lot of not equitable outcomes for creators, where I talked about the missing middle class of the creator economy,” said Li Jin, the founder of Atelier Ventures and author of a critical piece on creator economics, “The creator economy needs a middle class“.

“When I saw pearpop I felt like there was a really big potential for pearpop to be one of the creators of the creative middle class. They’ve introduced this mechanism by which larger creators can help smaller creators, and everyone has something of value to offer to everyone else in the ecosystem.”

Jin discovered pearpop through the TechCrunch piece, she said. “You wrote that article and then I reached out to the team,” said Jin.

The idea was so appealing that it brought in a slew of musicians, athletes, actors and entertainers, including Abel Makkonen (The Weeknd), Amy Schumer, The Chainsmokers, Diddy, Gary Vaynerchuk, Griffin Johnson, Josh Richards, Kevin Durant (Thirty 5 Ventures), Kevin Hart (HartBeat Ventures), Mark Cuban, Marshmello, Moe Shalizi, Michael Gruen (Animal Capital), MrBeast (Night Media Ventures), Rich Miner (Android co-founder) and Snoop Dogg.

“Pearpop has the potential to benefit all social media platforms by delivering new users and engagement, while simultaneously leveling the playing field of opportunity for creators,” said Alexis Ohanian, Founder, Seven Seven Six, in a statement. “The company has created a revolutionary new marketplace model that is set to completely reimagine how we think of social media monetization. As both a social media founder and an investor, I’m excited for what’s to come with pearpop.”

Already Heidi Klum, Loren Gray, Snoop Dogg, and Tony Hawk have gotten paid to appear in social media posts from aspiring auteurs on the social media platform TikTok.

Using the platform is relatively simple. A social media user (for now, that means just TikTok) submits a post from their own feed and requests that another social media user interact with it in some way — either commenting, posting a video in response, or adding a sound. If the request seems okay, or “on brand,” the person who accepts it performs the prescribed action.

Pearpop takes a 25% cut of all transactions, with the social media user who performs the task getting the other 75%.
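
Modeled as data, that flow is simple. Below is a hedged sketch in TypeScript of a collaboration request and the 25/75 settlement described above; the types and the settle() helper are hypothetical illustrations, not pearpop’s actual data model or API.

```typescript
// Hypothetical model of a pearpop collaboration request; illustrative only.
type Action = "comment" | "duet" | "sound";

interface CollabRequest {
  requesterId: string; // user paying to have their post boosted
  performerId: string; // more popular account performing the action
  postUrl: string;     // the TikTok post to be interacted with
  action: Action;
  priceUsd: number;    // set by the performer ($5 to $10,000, per the article)
}

const PLATFORM_CUT = 0.25; // pearpop keeps 25% of every transaction

// Split a transaction: 25% to the platform, 75% to the performer.
function settle(req: CollabRequest): { platformFee: number; performerPayout: number } {
  const platformFee = req.priceUsd * PLATFORM_CUT;
  return { platformFee, performerPayout: req.priceUsd - platformFee };
}

// A $200 duet request pays the performer $150 and pearpop $50.
console.log(
  settle({
    requesterId: "aspiring-creator",   // hypothetical user
    performerId: "popular-creator",    // hypothetical user
    postUrl: "https://www.tiktok.com/@aspiring-creator/video/123", // hypothetical
    action: "duet",
    priceUsd: 200,
  })
);
```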

The company wouldn’t comment on revenue numbers, except to say that it’s on track to bring in seven figures this year.

Users on the platform set their prices and determine which kinds of services they’re willing to provide to boost the social media posts of their contractors.

Prices range anywhere from $5 to $10,000, depending on the size of a user’s following and the type of request being made. Right now, the most requested personality on the marketplace is the TikTok star Anna Banana.

These kinds of transactions do have an impact. The company said that personalities on the platform have been able to increase their follower counts with the service. For instance, Leah Svoboda went from 20K to 141K followers after a pearpop duet with Anna Shumate.

If this all makes you feel like you’ve tripped and fallen through a Black Mirror into a dystopian hellscape where everything and every interaction is a commodity to be mined for money, well… that’s life.

“What I appreciate most about pearpop is the control it gives me as a creator,” said Anna Shumate, TikTok influencer @annabananaxdddd. “The platform allows me to post what I want and when I want. My followers still love my content because it’s authentic and true to me, which is what sets pearpop apart from all of the other opportunities on social media.”

Talent agencies, too, see the draw. Early adopters include Talent X, Get Engaged, Next Step Talent and The Fuel Injector, which has added its entire roster of talent (including Kody Antle, Brooke Monk and Harry Raftus) to pearpop, the company said.

“The initial concept came out of an obvious gap within the space: no marketplace existed for creators of all sizes to monetize through simple, authentic collaborations that are mutually beneficial,” said Cole Mason, co-founder & CEO, pearpop.  “It soon became clear that this was a product that people had been waiting for, as thousands of people rely on our platform today to gain full control of their social capital for the first time starting with TikTok.”

Nigeria’s SEC warns investment platforms to stop trading ‘unregistered’ foreign securities

By Tage Kene-Okafor

A circular released today by Nigeria’s capital market regulator, the SEC, suggests that investment platforms providing access to foreign securities might be treading on dangerous ground.

According to the regulator, these platforms are trading foreign securities not registered in the country and have been warned to stop doing so. Capital market operators in partnership with them have also been warned to desist from providing brokerage services for foreign securities.

Over the past three years, Robinhood-esque platforms like Bamboo, Trove, Chaka and Rise have sprung forth in the Nigerian fintech space. They offer Nigerians access to stocks, bonds and other securities in both local and international markets. These platforms have grown in popularity among the middle class and provide a haven to protect earnings from naira devaluations.

That said, there’s a vast difference in how they operate when compared to Robinhood. In addition to being a trading app, Robinhood offers online brokerage services (introducing and clearing) as well as zero-commission trading. Nigerian investment platforms do not, and while any trading platform can get a brokerage license in the U.S., it can be a Herculean task to obtain one in Nigeria. This is where capital market operators (local and foreign brokerage firms in this case) come into play, forming strategic partnerships with these companies so Nigerians can access both local and foreign fractional securities.

After a series of regulatory onslaughts from different government bodies on tech startups last year, the SEC followed suit in December. It singled out Chaka, one of the platforms, and accused it of selling and advertising stocks. The regulator’s definition of the alleged offence was that Chaka “engaged in investment activities, including providing a platform for purchasing shares in foreign companies such as Google, Amazon, and Alibaba, outside the Commission’s regulatory purview and without requisite registration.”

The company’s CEO, Tosin Osibodu, denied any wrongdoing, and since the turn of the year, little had been heard from the SEC or Chaka on the matter until the release of today’s circular. Unsurprisingly, the regulator continued from where it left off, only this time all investment platforms including brokerage firms — not just Chaka — are involved. The SEC’s directive is to stop selling, issuing or offering for sale any foreign securities not listed on an exchange registered in Nigeria.

What this inherently means is that, from now on, investment platforms will have their work cut out and might only be able to offer individuals access to local stocks and securities. This affects the business models of these startups. And the core value they provide, which is to help Nigerians store monetary value and hedge against naira devaluation, is under threat of being wiped out.

Here’s the information released by the regulator as seen on its website:

The attention of the Securities and Exchange Commission (the Commission) has been drawn to the existence of several providers of online investment and trading platforms which purportedly facilitate direct access of the investing public in the Federal Republic of Nigeria to securities of foreign companies listed on Securities Exchanges registered in other jurisdictions. These platforms also claim to be operating in partnership with Capital Market operators (CMOs) registered with the Commission.

The Commission categorically states that by the provisions of Sections 67-70 of the Investments and Securities Act (ISA), 2007 and Rules 414 & 415 of the SEC Rules and Regulations, only foreign securities listed on any Exchange registered in Nigeria may be issued, sold or offered for sale or subscription to the Nigerian public. Accordingly, CMOs who work in concert with the referenced online platforms are hereby notified of the Commission’s position and advised to desist henceforth.

The Commission enjoins the investing public to seek clarification as may be required via its established channels of communication on investment products advertised through conventional or online mediums.

This is a developing story. More to follow…

Pinterest announces $500K Creator Fund, ‘Creator Code’ content policy, moderation tools and more

By Sarah Perez

Pinterest today hosted an event focused on its creator community, where the company announced a series of updates including the launch of a $500,000 Creator Fund, a new content policy called the Creator Code, as well as new moderation tools, among other things. With the changes, the company says its goal is to ensure the platform continues to be an “inclusive, positive and inspiring place.” The new content guidelines put that into more specific terms, requiring Pinterest creators to fact-check content, practice inclusion, be kind, and ensure any call to action they make via the site doesn’t cause harm.

Creators will be required to agree to and sign the code during the publishing process for Story Pins, where they tap a button that says “I agree” to statements that include “Be Kind,” “Check my facts,” “Be aware of triggers,” “Practice inclusion,” and “Do Not Harm.”

The code will be enforced the same way Pinterest applies its other content policies today: through a combination of machine learning and human review, Pinterest tells us. However, the site’s algorithm will be designed to reward positive content and block harmful content, like anti-vaccination sentiments. This could have a larger impact on what sort of content is shared on Pinterest than a pop-up agreement with simple statements will.
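
As a rough illustration of how such a hybrid pipeline can fit together, here is a sketch in TypeScript. The score names, thresholds and verdicts are invented for illustration, not Pinterest’s actual system: content a model scores as confidently harmful is blocked, uncertain cases are routed to human review, and confidently positive content is rewarded in ranking.

```typescript
// Illustrative hybrid moderation pipeline; names and thresholds are invented.
type Verdict = "block" | "human_review" | "boost" | "allow";

interface ModelScores {
  harmful: number;  // 0..1 probability from an ML classifier
  positive: number; // 0..1 "positive/inspiring" score
}

const BLOCK_THRESHOLD = 0.9;  // confidently harmful: block outright
const REVIEW_THRESHOLD = 0.5; // uncertain: route to a human moderator
const BOOST_THRESHOLD = 0.8;  // confidently positive: reward in ranking

function moderate(scores: ModelScores): Verdict {
  if (scores.harmful >= BLOCK_THRESHOLD) return "block";
  if (scores.harmful >= REVIEW_THRESHOLD) return "human_review";
  if (scores.positive >= BOOST_THRESHOLD) return "boost";
  return "allow";
}

// e.g. an anti-vaccination post scored harmful=0.95 never surfaces, while a
// borderline post is escalated to a human reviewer.
console.log(moderate({ harmful: 0.95, positive: 0.05 })); // "block"
console.log(moderate({ harmful: 0.6, positive: 0.3 }));   // "human_review"
```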

The Creator Code itself is not yet live, but will roll out to creators to sign and adopt in the weeks ahead, Pinterest says.

Pinterest today also introduced several new creator tools focused on a similar goal: making Pinterest a more positive, safe experience for all.

It’s launching comment moderation tools that will allow creators to remove and filter comments on their content, as well as tools that will allow them to feature up to three comments in the comment feed to highlight positive feedback. New spam prevention tools will help to clear out some of the unwanted comments, too, by leveraging machine learning technology to detect and remove bad comments.

Also new are “positivity reminders,” which will pop up asking Pinterest users to reconsider before posting potentially offensive comments. The notification will push users to go back and edit their comment, but doesn’t prevent them from posting.

Related to these efforts, Pinterest announced the launch of its first-ever Creator Fund at today’s event. The fund is specifically focused on elevating creators from underrepresented communities in the United States, and will offer a combination of creative strategy consulting and compensation, including budget for content creation and ad credits. At least 50% of the fund’s recipients will be from underrepresented groups, Pinterest says.

The company tells us it’s initially committed to giving creators $500,000 in cash and media throughout 2021.

“For the first participants of the program, we worked with eight emerging creators across fashion, photography, food and travel, and will be identifying ten more creators in the next few months for the next cohort,” noted Creator Inclusion Lead Alexandra Nikolajev.

“We’re on a journey to build a globally inclusive platform where Pinners and Creators around the world can discover ideas that feel personalized, relevant and reflective of who they are,” Nikolajev said.

Pinterest has been working to rebuild its image in the wake of last year’s allegations of a host of internal issues, including unfair pay, racism, retaliation, and sexism, which conflicted with its outside image of being one of the “nicer” places to work in tech. Despite this fallout — which included a lawsuit, employee walkout, petitions, and more —  the issues that had been raised weren’t always reflected in Pinterest’s product.

The company had previously launched inclusive features like “skin tone ranges” to help those shopping for beauty products find matches for their skin tone. It also allowed retailers and brands to identify themselves as members of an underrepresented group, which gave their content the ability to appear in more places across Pinterest’s platform, like the Today tab, Shopping Spotlights and The Pinterest Shop, for instance.

Evan Sharp, Pinterest’s co-founder and Chief Design and Creative Officer, referenced the company’s image as “a positive place” at today’s event.

“We’ve been building Pinterest for 11 years, and ever since our users routinely tell us that Pinterest is the ‘last positive corner of the internet.’ In that time, we’ve also learned that you need to design positivity into online platforms as deliberately as you design negativity out,” Sharp said. “The Creator Code is a human-centric way for Creators to understand how to be successful on Pinterest while using their voice to keep Pinterest positive and inclusive,” he added.

Today, Pinterest serves over 450 million users worldwide, but it faces growing competition for creators: from large platforms like Facebook, Instagram and YouTube, from newcomers like TikTok, and from those inching into the creator community with funds of their own, like Snapchat, which is paying creators for Spotlight content, and Clubhouse, which is now funding creators’ shows. The increased competition for creator interest has left Pinterest needing an incentive program of its own.

To kick off its announcement, Pinterest’s Head of Content and Creator Partnerships, Aya Kanai, interviewed television personality Jonathan Van Ness (Queer Eye) at today’s virtual event, where they talked about the need for positivity and inclusivity on social media. Other event participants included creators Peter Som, Alison Cayne, Onyi Moss, Oyin Edogi and Jomely Breton, the latter two of whom spoke about putting the Creator Fund to use for themselves.

UK’s Digital Markets Unit starts work on pro-competition reforms

By Natasha Lomas

A new UK public body tasked with helping regulate the most powerful companies in the digital sector launched today, with the aim of ensuring competition thrives online and that consumers of digital services have more choice and control over their data.

The Digital Markets Unit (DMU), which was announced in November last year — following a number of market reviews and studies examining concerns about the concentration of digital market power — does not yet have statutory powers itself but the government has said it will consult on the design of the new “pro-competition regime” this year and legislate to put the DMU on a statutory footing as soon as parliamentary time allows.

Concerns about the market power of adtech giants Facebook and Google are key drivers for the regulatory development.

💻 Our new Digital Markets Unit, launched today, will help make sure tech giants can’t exploit their market dominance to crowd out competition and stifle innovation online.

Find out more: https://t.co/PCBCYwuA3o pic.twitter.com/Ybvn81uuBK

— Competition & Markets Authority (@CMAgovUK) April 7, 2021

As a first job, the unit will look at how codes of conduct could work to govern the relationship between digital platforms and third parties such as small businesses which rely on them to advertise or use their services to reach customers — to feed into future digital legislation.

The role of powerful intermediary online gatekeepers is also being targeted by lawmakers in the European Union who proposed legislation at the end of last year which similarly aims to create a regulatory framework that can ensure fair dealing between platform giants and the smaller entities which do business under their terms.

The UK government said today that the DMU will take a sector neutral approach in examining the role of platforms across a range of digital markets, with a view to promoting competition.

The unit has been asked to work with the comms watchdog Ofcom, which the government named last year as its pick for regulating social media platforms under planned legislation due to be introduced this year (aka, the Online Safety Bill as it’s now called).

While that forthcoming legislation is intended to regulate a very wide range of online harms which may affect consumers — from bullying and hate speech to child sexual exploitation and other speech-related issues (raising plenty of controversy, and specific concerns about associated implications for privacy and security) — the focus for the DMU is on business impacts and consumer controls which may also have implications for competition in digital markets.

As part of its first work program, the government said the secretary of state for digital has asked the DMU to work with Ofcom to look specifically at how a code would govern the relationships between platforms and content providers such as news publishers — “including to ensure they are as fair and reasonable as possible”, as its press release puts it.

This suggests the DMU will be taking a considered look at recent legislation passed in Australia — which makes it mandatory for platforms to negotiate with news publishers to pay for reuse of their content.

Earlier this year, the head of the UK’s Competition and Markets Authority (CMA), which the DMU will sit within, told the BBC that Australia’s approach of having a backstop of mandatory arbitration if commercial negotiations between tech giants and publishers fail is a “sensible” approach.

The DMU will also work closely with the CMA’s enforcement division — which currently has a number of open investigations into tech giants, including considering complaints against Apple and Google; and an in-depth probe of Facebook’s Giphy acquisition.

Other UK regulators the government says the DMU will work closely with include the data protection watchdog (the ICO) and the Financial Conduct Authority.

It also said the unit will coordinate with international partners, given digital competition is an issue that’s naturally global in nature — adding that it’s already discussing its approach through bilateral engagement and as part of its G7 presidency.

“The Digital Secretary will host a meeting of digital and tech ministers in April as he seeks to build consensus for coordination on better information sharing and joining up regulatory and policy approaches,” it added.

The DMU will be led by Will Hayter, who takes up an interim head post in early May following a stint at the Cabinet Office working on Brexit transition policy. Prior to that he worked for several years at the CMA and also Ofcom, among other roles in regulatory policy.

Amazon acquires Indian retail startup Perpule

By Manish Singh

Amazon has acquired a startup in India that is helping offline stores go online, the e-commerce group’s latest attempt to make inroads in the world’s second most populous nation, where brick-and-mortar stores continue to drive more than 95% of sales.

The American e-commerce group said on Tuesday evening that it has acquired Perpule, a four-year-old startup. A regulatory filing showed Amazon Technologies paid $14.7 million to acquire the Indian startup in an all-cash deal. The company is expected to spend an additional $5 million or so to compensate Perpule’s employees.

Perpule, which had raised $6.36 million (per insight platform Tracxn), offers a mobile payments device (point of sale machine) to offline retailers to help them accept digital payments and also establish presence on various mini app stores including those run by Paytm, PhonePe, and Google Pay in India.

“Perpule has built an innovative cloud-based POS offering that enables offline stores in India to better manage their inventory, checkout process, and overall customer experience,” an Amazon spokesperson said in a statement.

“We are excited to have the Perpule team join us to focus on providing growth opportunities for businesses of all sizes in India while raising the bar of the shopping experience for Indian customers.”

Founded in late 2016, the Indian startup first focused on helping customers avoid queues at super chains such as Shoppers Stop, Spar Hypermarket, Big Bazaar, and More. But that product, Perpule’s Abhinav Pathak said in a recent interview, wasn’t scaling, which is when the company pivoted.

The startup — which counts Prime Venture Partners, Kalaari Capital, and Raghunandan G (founder of neobank Zolve) among its investors — has further expanded in recent years, launching products like StoreSE, which enables a business to support group ordering.

Last year, it also expanded geographically, bringing its offerings to Southeast Asian markets including Indonesia, Malaysia, Thailand, Singapore, and Vietnam.

Amazon has aggressively engaged with physical stores in India in recent years, using their vast presence in the nation to expand its delivery network and warehouses and even just relying on their inventory to drive sales.

The company’s push into physical retail comes as Flipkart and Reliance Jio Platforms (which is backed by Facebook and Google, and last year raised over $20 billion) also race to capture this market. The acquisition of Perpule comes less than a week after Google backed DotPe, a startup that offers several similar products.

These neighborhood stores offer all kinds of items, are family-run, and pay low wages and little to no rent. Because they are ubiquitous — there are more than 30 million neighborhood stores in India, according to industry estimates — no retail giant can offer faster delivery. And on top of that, their economics are often better than those of most of their digital counterparts.

The next era of moderation will be verified

By Annie Siebert
Rick Song Contributor
Rick Song is co-founder and CEO of Persona.

Since the dawn of the internet, knowing (or, perhaps more accurately, not knowing) who is on the other side of the screen has been one of the biggest mysteries and thrills. In the early days of social media and online forums, anonymous usernames were the norm and meant you could pretend to be whoever you wanted to be.

As exciting and liberating as this freedom was, the problems quickly became apparent — predators of all kinds have used this cloak of anonymity to prey upon unsuspecting victims, harass anyone they dislike or disagree with, and spread misinformation without consequence.

For years, the conversation around moderation has been focused on two key pillars. First, what rules to write: What content is deemed acceptable or forbidden, how do we define these terms, and who makes the final call on the gray areas? And second, how to enforce them: How can we leverage both humans and AI to find and flag inappropriate or even illegal content?

While these continue to be important elements to any moderation strategy, this approach only flags bad actors after an offense. There is another equally critical tool in our arsenal that isn’t getting the attention it deserves: verification.

Most people think of verification as the “blue checkmark” — a badge of honor bestowed upon the elite and celebrities among us. However, verification is becoming an increasingly important tool in moderation efforts to combat nefarious issues like harassment and hate speech.

That blue checkmark is more than just a signal showing who’s important — it also confirms that a person is who they say they are, which is an incredibly powerful means to hold people accountable for their actions.

One of the biggest challenges that social media platforms face today is the explosion of fake accounts, with the Brad Pitt impersonator on Clubhouse being one of the more recent examples. Bots and sock puppets spread lies and misinformation like wildfire, and they propagate more quickly than moderators can ban them.

This is why Instagram began implementing new verification measures last year to combat this exact issue. By verifying users’ real identities, Instagram said it “will be able to better understand when accounts are attempting to mislead their followers, hold them accountable, and keep our community safe.”

The urgency to implement verification is also bigger than just stopping the spread of questionable content. It can also help companies ensure they’re staying on the right side of the law.

Following an exposé revealing illegal content was being uploaded to Pornhub’s site, the company banned posts from nonverified users and deleted all content uploaded from unverified sources (more than 80% of the videos hosted on its platform). It has since implemented new measures to verify its users to prevent this kind of issue from infiltrating its systems again in the future.

Companies of all kinds should be looking at this case as a cautionary tale — if there had been verification from the beginning, the systems would have been in a much better place to identify bad actors and keep them out.

However, it’s important to remember that verification is not a single tactic, but rather a collection of solutions that must be used dynamically in concert to be effective. Bad actors are savvy and continually updating their methods to circumvent systems. Using a single-point solution to verify users — such as through a photo ID — might sound sufficient on its face, but it’s relatively easy for a motivated fraudster to overcome.

At Persona, we’ve detected increasingly sophisticated fraud attempts ranging from using celebrity photos and data to create accounts to intricate photoshopping of IDs and even using deepfakes to mimic a live selfie.

That’s why it’s critical for verification systems to take multiple signals into account when verifying users, including actively collected customer information (like a photo ID), passive signals (their IP address or browser fingerprint), and third-party data sources (like phone and email risk lists). By combining multiple data points, a valid but stolen ID won’t pass through the gates because signals like location or behavioral patterns will raise a red flag that this user’s identity is likely fraudulent or at the very least warrants further investigation.
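
As a back-of-the-envelope sketch of what combining those signals might look like, consider the following TypeScript; the signal names, weights and thresholds are invented for illustration and are not Persona’s actual scoring model.

```typescript
// Illustrative multi-signal verification check; weights and thresholds are
// assumptions, not Persona's real model.
interface VerificationSignals {
  idDocumentValid: boolean;                // active: government ID passed inspection
  selfieMatchesId: boolean;                // active: liveness/selfie check
  ipCountry: string;                       // passive: geolocation from IP address
  idCountry: string;                       // country printed on the document
  fingerprintSeenOnBannedAccount: boolean; // passive: browser fingerprint reuse
  emailOnRiskList: boolean;                // third-party: email risk database
  phoneOnRiskList: boolean;                // third-party: phone risk database
}

function assessRisk(s: VerificationSignals): "pass" | "review" | "reject" {
  let risk = 0;
  if (!s.idDocumentValid) risk += 3;
  if (!s.selfieMatchesId) risk += 2;
  if (s.ipCountry !== s.idCountry) risk += 1;      // stolen-but-valid IDs often trip this
  if (s.fingerprintSeenOnBannedAccount) risk += 2; // catches banned users returning
  if (s.emailOnRiskList) risk += 1;
  if (s.phoneOnRiskList) risk += 1;

  if (risk >= 4) return "reject";
  if (risk >= 2) return "review"; // warrants further investigation
  return "pass";
}
```

The point is not any particular weighting, but that no single signal is decisive: a valid but stolen ID still fails when the location and fingerprint signals disagree with it.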

This kind of holistic verification system will enable social and user-generated-content platforms to not only deter and flag bad actors but also prevent them from repeatedly entering your platform under new usernames and emails, a common tactic of trolls and account abusers who have previously been banned.

Beyond individual account abusers, a multisignal approach can help manage an arguably bigger problem for social media platforms: coordinated disinformation campaigns. Any issue involving groups of bad actors is like battling the multiheaded Hydra — you cut off one head only to have two more grow back in its place.

Yet killing the beast is possible when you have a comprehensive verification system that can help surface groups of bad actors based on shared properties (e.g., location). While these groups will continue to look for new ways in, multifaceted verification that is tailored for the end user can help keep them from running rampant.
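
A minimal sketch of that surfacing step in TypeScript, with hypothetical account fields and an assumed cluster-size cutoff:

```typescript
// Group accounts by a shared property and keep clusters large enough to look
// coordinated rather than coincidental. Field names are hypothetical.
interface Account {
  id: string;
  ipAddress: string;
  signupRegion: string;
}

function clustersBy<K extends keyof Account>(
  accounts: Account[],
  key: K,
  minSize = 5
): Map<Account[K], Account[]> {
  const groups = new Map<Account[K], Account[]>();
  for (const account of accounts) {
    const value = account[key];
    const bucket = groups.get(value) ?? [];
    bucket.push(account);
    groups.set(value, bucket);
  }
  for (const [value, members] of groups) {
    if (members.length < minSize) groups.delete(value); // drop small clusters
  }
  return groups;
}

// e.g. forty accounts registered from one IP address surface as one cluster
// for an investigator to review, instead of forty seemingly unrelated bans:
// const suspicious = clustersBy(allAccounts, "ipAddress");
```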

Historically, identity verification systems like Jumio or Trulioo were designed for specific industries, like financial services. But we’re starting to see the rise in demand for industry-agnostic solutions like Persona to keep up with these new and emerging use cases for verification. Nearly every industry that operates online can benefit from verification, even ones like social media, where there isn’t necessarily a financial transaction to protect.

It’s not a question of if verification will become a part of the solution for challenges like moderation, but rather a question of when. The technology and tools exist today, and it’s up to social media platforms to decide that it’s time to make this a priority.

Customer data platform ActionIQ extends its latest funding round to $100M

By Anthony Ha

ActionIQ, which helps companies use their customer data to deliver personalized experiences, is announcing that it has extended its Series C funding, bringing the round to a total size of $100 million.

That number includes the $32 million that ActionIQ announced in January of last year. Founder and CEO Tasso Argyros said the company is framing this as an extension rather than a separate round because it comes from existing investors — including March Capital — and because ActionIQ still has most of that $32 million in the bank.

Argyros told me that there were two connected reasons to raise additional money now. For one thing, ActionIQ has seen 100% year-over-year revenue growth, allowing it to increase its valuation by more than 250%. (The company isn’t disclosing the actual valuation.) That growth has also meant that ActionIQ is getting “a lot more ambitious” in its plans for product development and customer growth.

“We raised more money because we can, and because we need to,” Argyros said.

The company continues to develop the core platform, for example by introducing more support for real-time data and analysis. But Argyros suggested that the biggest change has been in the broader market for customer data platforms, with companies like Morgan Stanley, The Hartford, Albertsons, JCPenney and GoPro signing on with ActionIQ in the past year.

Some of these enterprises, he said, “normally would not work with a cutting-edge technology company like us, but because of the pandemic, they’re willing to take some risk and really invest in their customer base and their customer experience.”

Argyros also argued that as regulators and large platforms restrict the ways that businesses can buy and sell third-party data, platforms like ActionIQ, focusing on the first-party data that companies collect for their own use, will become increasingly important. And he said that ActionIQ’s growth comes as the big marketing clouds have “failed” — either announcing products that have yet to launch or launching products that don’t match ActionIQ’s capabilities.

Companies that were already using ActionIQ include The New York Times. In fact, the funding announcement includes a statement from The Times’ senior vice president of data and insights, Shane Murray, declaring that the newspaper is using ActionIQ to deliver “hundreds of billions of personalized customer experiences” across “mail, in-app, site, and paid media.”

ActionIQ has now raised around $145 million total, according to Crunchbase.

Big Tech companies cannot be trusted to self-regulate: We need Congress to act

By Annie Siebert
Arisha Hatch Contributor
Arisha Hatch is Vice President and Chief of Campaigns at Color Of Change.

It’s been two months since Donald Trump was kicked off of social media following the violent insurrection on Capitol Hill in January. While the constant barrage of hate-fueled commentary and disinformation from the former president has come to a halt, we must stay vigilant.

Now is the time to think about how to prevent Trump, his allies and other bad actors from fomenting extremism in the future. It’s time to figure out how we as a society address the misinformation, conspiracy theories and lies that threaten our democracy by destroying our information infrastructure.

As vice president at Color Of Change, my team and I have had countless meetings with leaders of multi-billion-dollar tech companies like Facebook, Twitter and Google, where we had to consistently flag hateful, racist content and disinformation on their platforms. We’ve also raised demands supported by millions of our members to adequately address these systemic issues — calls that are too often met with a lack of urgency and sense of responsibility to keep users and Black communities safe.

The violent insurrection by white nationalists and far-right extremists in our nation’s capital was absolutely fueled and enabled by tech companies who had years to address hate speech and disinformation that proliferated on their social media platforms. Many social media companies relinquished their platforms to far-right extremists, white supremacists and domestic terrorists long ago, and it will take more than an attempted coup to hold them fully accountable for their complicity in the erosion of our democracy — and to ensure it can’t happen again.

To restore our systems of knowledge-sharing and eliminate white nationalist organizing online, Big Tech must move beyond its typical reactive and shallow approach to addressing the harm they cause to our communities and our democracy. But it’s more clear than ever that the federal government must step in to ensure tech giants act.

After six years leading corporate accountability campaigns and engaging with Big Tech leaders, I can definitively say it’s evident that social media companies do have the power, resources and tools to enforce policies that protect our democracy and our communities. However, leaders at these tech giants have demonstrated time and time again that they will choose not to implement and enforce adequate measures to stem the dangerous misinformation, targeted hate and white nationalist organizing on their platforms if it means sacrificing maximum profit and growth.

And they use their massive PR teams to create an illusion that they’re sufficiently addressing these issues. For example, social media companies like Facebook continue to follow a reactive formula of announcing disparate policy changes in response to whatever public relations disaster they’re fending off at the moment. Before the insurrection, the company’s leaders failed to heed the warnings of advocates like Color Of Change about the dangers of white supremacists, far-right conspiracists and racist militias using their platforms to organize, recruit and incite violence. They did not ban Trump, implement stronger content moderation policies or change algorithms to stop the spread of misinformation-superspreader Facebook groups — as we had been recommending for years.

These threats were apparent long before the attack on Capitol Hill. They were obvious as Color Of Change and our allies propelled the #StopHateForProfit campaign last summer, when over 1,000 advertisers pulled millions in ad revenues from the platform. They were obvious when Facebook finally agreed to conduct a civil rights audit in 2018 after pressure from our organization and our members. They were obvious even before the deadly white nationalist demonstration in Charlottesville in 2017.

Only after significant damage had already been done did social media companies take action and concede to some of our most pressing demands, including the call to ban Trump’s accounts, implement disclaimers on voter fraud claims, and move aggressively to remove COVID misinformation as well as posts inciting violence at the polls amid the 2020 election. But even now, these companies continue to shirk full responsibility by, for example, using self-created entities like the Facebook Oversight Board — an illegitimate substitute for adequate policy enforcement — as PR cover while the fate of recent decisions, such as the suspension of Trump’s account, hangs in the balance.

Facebook, Twitter, YouTube and many other Big Tech companies kick into action when their profits, self-interests and reputation are threatened, but always after the damage has been done because their business models are built solely around maximizing engagement. The more polarized content is, the more engagement it gets; the more comments it elicits or times it’s shared, the more of our attention they command and can sell to advertisers. Big Tech leaders have demonstrated they neither have the willpower nor the ability to proactively and successfully self-regulate, and that’s why Congress must immediately intervene.

Congress should enact and enforce federal regulations to rein in the outsized power of Big Tech behemoths, and our lawmakers must create policies that translate to real-life changes in our everyday lives — policies that protect Black and other marginalized communities both online and offline.

We need stronger antitrust enforcement laws to break up Big Tech monopolies that evade corporate accountability and impact Black businesses and workers; comprehensive privacy and algorithmic discrimination legislation to ensure that profits from our data aren’t being used to fuel our exploitation; expanded broadband access to close the digital divide for Black and low-income communities; restored net neutrality so that internet service providers can’t charge differently based on content or equipment; and disinformation and content moderation reform that makes clear Section 230 does not exempt platforms from complying with civil rights laws.

We’ve already seen some progress following pressure from activists and advocacy groups including Color Of Change. Last year alone, Big Tech companies like Zoom hired chief diversity officers; Google took action to block the Proud Boys website and online store; and major social media platforms like TikTok adopted better, stronger policies on banning hateful content.

But we’re not going to applaud billion-dollar tech companies for doing what they should and could have already done to address the years of misinformation, hate and violence fueled by social media platforms. We’re not going to wait for the next PR stunt or blanket statement to come out or until Facebook decides whether or not to reinstate Trump’s accounts — and we’re not going to stand idly by until more lives are lost.

The federal government and regulatory powers need to hold Big Tech accountable to their commitments by immediately enacting policy change. Our nation’s leaders have a responsibility to protect us from the harms Big Tech is enabling on our democracy and our communities — to regulate social media platforms and change the dangerous incentives in the digital economy. Without federal intervention, tech companies are on pace to repeat history.
