FreshRSS


YouTube Music cracks down on rampant chart manipulation with new pay-for-play ban

By Sarah Perez

YouTube will no longer allow paid views and advertising to influence its YouTube Music Charts, the company announced this morning. Instead, it will calculate its rankings based only on view counts coming from organic plays. In addition, it’s changing its methodology for reporting on 24-hour record debuts to also only count views from organic sources, including direct links to the video, search results, Watch Next and Trending — but not video advertising.

The changes come about after multiple reports examined how music labels were spending aggressively on video advertising in order to juice the views of their artists’ newly debuted songs.

One report by Rolling Stone detailed how the practice worked, with regard to YouTube’s TrueView ads. This form of advertising lets the advertiser, such as the artist or the label, play a shortened version of a music video as an advertisement in front of other videos. Under some conditions — like if a YouTube user interacts with the video or watches it for a certain amount of time — the view would count toward the video’s overall view count.

Bloomberg had also reported on the curious case of Indian rapper Badshah, whose video “Paagal” broke records with 75 million views in a single day — topping a prior record set by Korean boy band BTS. Initially, there were rumors that the label, Sony Music, had used server farms and bots to accomplish this. It later turned out to be paid advertising, which Badshah confessed to on Instagram.

But this was not an uncommon practice — Taylor Swift, Blackpink and many others had done the same, the report said. Badshah had just taken it much further.

The report also said YouTube was considering revising its system, as a result.

Today, YouTube is officially announcing those changes.

“YouTube Music Charts have become an indispensable source for the industry and the most accurate place for measuring the popularity of music listening behavior happening on the world’s largest music platform,” the company explained in a blog post. “In an effort to provide more transparency to the industry and align with the policies of official charting companies such as Billboard and Nielsen, we are no longer counting paid advertising views on YouTube in the YouTube Music Charts calculation. Artists will now be ranked based on view counts from organic plays,” the post read.

The changes impact the 24-hour debuts, plus all of YouTube Music’s other charts, including those focused on what’s rising, trending and popular, both locally and globally.

Though advertising and non-organic views will no longer contribute to the view count for the purpose of YouTube’s Music Chart rankings, the company says these changes will not impact YouTube’s existing 24-hour record debut holders. That means Badshah and others can continue to tout their “records,” tainted as those claims may now be.

The changes won’t likely mean the end of this sort of music video advertising, however. Ads still remain a great way for users to be exposed to new music which can, in turn, boost organic views as links get clicked, shared, and embedded elsewhere around the web, for example. But it could have a dampening impact on the pay-for-play business and the size of the ad spend.

“Staying true to YouTube’s overall mission of giving everyone a voice and showing them the world, we want to celebrate all artist achievements on YouTube as determined by their global fans. It’s the artists and fans that have made YouTube the best and most accurate measure of the world’s listening tastes, and we intend on keeping it that way,” said YouTube.

Apple tweaks App Store rule changes for children’s apps and sign-in services

By Matthew Panzarino

Originally announced in June, changes to Apple’s App Store policies on its Sign in with Apple service and the rules around children’s app categories are being tweaked. New apps must comply right away with the tweaked terms, but existing apps will have until early 2020 to comply with the new rules.

The changes announced at Apple’s developer conference in the summer were significant, and raised concerns among developers that the rules could handicap their ability to do business in a universe that, frankly, offers tough alternatives to ad-based revenue for children’s apps.

In a short interview with TechCrunch, Apple’s Phil Schiller said that they had spent time with developers, analytics companies and advertising services to hear what they had to say about the proposals and have made some updates.

The changes are garnering some strong statements of support from advocacy groups and advertising providers for children’s apps that were pre-briefed on the tweaks. The changes will show up as of this morning in Apple’s developer guidelines.

“As we got closer to implementation we spent more time with developers, analytics companies and advertising companies,” said Schiller. “Some of them are really forward thinking and have good ideas and are trying to be leaders in this space too.”

With their feedback, Schiller said, they’ve updated the guidelines to allow them to be more applicable to a broader number of scenarios. The goal, he said, was to make the guidelines easy enough for developers to adopt while being supportive of sensible policies that parents could buy into. These additional guidelines, especially around the Kids app category, says Schiller, outline scenarios that may not be addressed by the Children’s Online Privacy Protection Act (COPPA) or GDPR regulations.

There are two main updates.

Kids’ changes

The first area getting further tweaks is the Kids terms. Rule sections 1.3 and 5.1.4 specifically are being adjusted after Apple spoke with developers and providers of ad and analytics services about their concerns over the past few months.

Both of those rules are being updated to add more nuance to their language around third-party services like ads and analytics. In June, Apple announced a very hard-line version of these rule updates that essentially outlawed any third-party ads or analytics software and prohibited any data transmission to third parties. The new rules offer some opportunities for developers to continue to integrate these into their apps, but also set out explicit constraints for them.

The big changes come in section 1.3 surrounding data safety in the Kids category. Apple has removed the explicit restriction on including any third-party advertising or analytics. This was the huge hammer that developers saw heading towards their business models.

Instead, Apple has laid out a much more nuanced proposal for app developers. Specifically, it says these apps should not include analytics or ads from third parties, while acknowledging that there are ways to provide these services while still practicing data safety on the App Store.

Apple says that in limited cases, third-party analytics may be permitted as long as apps in the Kids category do not send personally identifiable information or any device fingerprinting information to third parties. This includes transmitting the IDFA (the device ID for advertisers), name, date of birth, email address, location or any other personally identifiable information.

Third-party contextual ads may be allowed but only if those companies providing the ads have publicly documented practices and policies and also offer human review of ad creatives. That certainly limits the options, including most offerings from programmatic services.

Rule 5.1.4 centers on data handling in kids apps. In addition to complying with COPPA, GDPR and other local regulations, Apple sets out some explicit guard rails.

First, the language on third-party ads and analytics has been changed from “may not” to “should not.” Apple is discouraging their use, but acknowledges that “in limited cases” third-party analytics and advertising may be permitted if it adheres to the new rules set out in guideline 1.3.

The explicit prohibition on transmitting any data to third parties from apps in the Kids category has been removed. Once again, this was the big bad bullet that every children’s app maker was paying attention to.

An additional clause reminds developers not to use terms like “for kids” and “for children” in app metadata for apps outside of the Kids category on the App Store.

SuperAwesome is a company that provides services like safe ad serving to kids apps. CEO Dylan Collins was initially critical of Apple’s proposed changes, noting that killing off all third-party ads and analytics could decimate the kids app category.

“Apple are clearly very serious about setting the standard for kids apps and digital services,” Collins said in a statement to TechCrunch after reviewing the new rules Apple is publishing. “They’ve spent a lot of time working with developers and kidtech providers to ensure that policies and tools are set to create great kids digital experiences while also ensuring their digital privacy and safety. This is the model for all other technology platforms to follow.”

All new apps must adhere to the guidelines. Existing apps have been given an additional six months to live in their current form but must comply by March 3, 2020.

“We commend Apple for taking real steps to protect children’s privacy and ensure that kids will not be targets for data-driven, personalized marketing,” said Josh Golin, Executive Director of Campaign for Commercial-Free Childhood. “Apple rightly recognizes that a child’s personal identifiable information should never be shared with marketers or other third parties. We also appreciate that Apple made these changes on its own accord, without being dragged to the table by regulators.”

The CCFC had a major win recently when the FTC announced a $170M fine against YouTube for violations of COPPA.

Sign in with Apple

The second set of updates has to do with Apple’s Sign in with Apple service.

Sign in with Apple is a sign-in service that can be offered by an app developer to instantly create an account that is handled by Apple with additional privacy for the user. We’ve gone over the offering extensively here, but there are some clarifications and policy additions in the new guidelines.

Apple will require Sign in with Apple to be offered if your app exclusively offers third-party or social logins like those from Twitter, Google, LinkedIn, Amazon or Facebook. It is not required if users sign in with a unique account created in the app, with, say, an email and password.

But clarifications have been added for additional scenarios. Sign in with Apple will not be required in the following conditions:

  • Your app exclusively uses your company’s own account setup and sign-in systems.
  • Your app is an education, enterprise or business app that requires the user to sign in with an existing education or enterprise account.
  • Your app uses a government or industry-backed citizen identification system or electronic ID to authenticate users.
  • Your app is a client for a specific third-party service and users are required to sign in to their mail, social media or other third-party account directly to access their content.

Most of these were sort of assumed to be true but were not initially clear in June. The last one, especially, was one that I was interested in seeing play out. This scenario applies to, for instance, the Gmail app for iOS, as well as apps like Tweetbot, which log in via Twitter because all they do is display Twitter content.

Starting today, new apps submitted to the store that don’t meet any of the above requirements must offer Sign in with Apple to users. Current apps and app updates have until April 2020 to comply.

Both of these tweaks come after developers and other app makers expressed concern and reports noted the abruptness and strictness of the changes in the context of the ever-swirling antitrust debate surrounding big tech. Apple continues to walk a tightrope with the App Store, where it flexes its muscles in an effort to enhance data protections for users while simultaneously trying to appear as egalitarian as possible in order to avoid regulatory scrutiny.

Stock content service Storyblocks evolves with new partner program

By Frederic Lardinois

Storyblocks, the subscription-based stock audio, imagery and video service formerly known as Videoblocks, today announced the launch of its new Member Library Partner Program. The company has also shuttered its pay-per-download marketplace and is now fully invested in its all-inclusive subscription program.

The reason for this move, the company says, is to better align its offerings with the needs of both its subscribers and contributors. The company also says that less than 5% of its members ever purchased anything from the old marketplace.


With the new program, subscribers get access to a wide range of royalty-free stock imagery without restrictions. That, of course, is not all that different from how the company’s program worked before. Unlimited access to the company’s video library starts at $39/month (though you get a 50% discount if you pre-pay for a year). At that price, the service is clearly going after YouTubers and others who need regular access to stock video. Access to its audio and image library is significantly cheaper.

Contributors get paid for every download, sharing in the pool of total revenue Storyblocks gains from its subscribers, and the service provides them with detailed analytics about how their content performs on the platform.

“For contributors, the Partner Program is uniquely designed to prioritize sustainable revenue growth alongside subscription growth: as the market grows, contributor earnings grow,” the company explains.

For now, the company will work with a targeted group of contributors to build the library and will add additional contributors over time. The company argues that this new program will triple contributors’ earnings, but that obviously remains to be seen.

“The Member Library Partner Program puts us in the unique position to provide diverse, high-quality stock media that the mass creative class demands while providing an earnings boost for our contributor community, and allowing them to better share in our success over the long run,” said Storyblocks CEO TJ Leonard. “We believe you cannot pivot an old approach to meet the needs of a new audience, and so we have created a fresh approach to stock media access that reflects the freedom, flexibility and choice required by today’s digital storytellers.”

 

YouTube claims it removed 5x more hateful content in Q2, including 100K+ videos, 17K+ channels

By Sarah Perez

In an update today, YouTube is claiming to have made significant progress in removing harmful videos from its platform following a June update to its content policy that prohibited supremacist and other hateful content. The company says it has this quarter removed over 100,000 videos and terminated over 17,000 channels for hate speech — a 5x increase over Q1. It also removed nearly double the number of comments, at over 500 million, in part due to an increase in hate speech removals.

The company, however, is haphazardly attempting to draw a line between what’s considered hateful content and what’s considered free speech.

This has resulted in what the Anti-Defamation League (ADL), in a recent report, referred to as a “significant number” of channels that disseminate anti-Semitic and white supremacist content being left online, following the June 2019 changes to the content policy.

[Chart: videos removed, by reason]

YouTube CEO Susan Wojcicki soon thereafter took to the YouTube Creator blog to defend the company’s position on the matter, arguing for the value that comes from having an open platform.

“A commitment to openness is not easy. It sometimes means leaving up content that is outside the mainstream, controversial or even offensive,” she wrote. “But I believe that hearing a broad range of perspectives ultimately makes us a stronger and more informed society, even if we disagree with some of those views.”

Among the videos the ADL had listed were those that featured anti-Semitic content, anti-LGBTQ messages, those that denied the Holocaust, featured white supremacist content and more. Five of the channels it cited had, combined, more than 81 million views.

YouTube still seems to be unsure of where it stands on this sort of content. While arguably these videos would be considered hate speech, much seems to be left online. YouTube also flip-flopped last week when it removed then quickly reinstated the channels of two Europe-based, far-right YouTube creators who espouse white nationalist views.

Beyond the hate speech removals, YouTube also spoke today of the methodology it uses to flag content for review.

It will often use hashes (digital fingerprints) to automatically catch copies of known prohibited content ahead of it being made public. This is a common way platforms remove child sexual abuse images and terrorist recruitment videos. However, this is not a new practice and its mention in today’s report could be to deflect attention from the hateful content and issues around that.
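
The hash-matching idea can be illustrated with a toy sketch. The snippet below is purely illustrative, assuming a simple exact-match blocklist of SHA-256 digests checked before an upload goes public; YouTube’s real fingerprinting is far more sophisticated (robust to re-encoding and edits), and every name in the example is hypothetical rather than taken from the article.

import hashlib

# Hypothetical blocklist of digests for videos already removed for policy violations.
KNOWN_PROHIBITED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of an uploaded file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_prohibited(path: str) -> bool:
    """Flag an upload before publication if its digest matches the blocklist."""
    return sha256_of_file(path) in KNOWN_PROHIBITED_HASHES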

In 2017, YouTube said it also increased its use of machine learning to help it find content similar to videos that have already been removed, even before the new videos are viewed. This is effective for fighting spam and adult content, YouTube says. In some cases, it can also help flag hate speech, but machines don’t understand context, so human review is still required to make the nuanced decisions.

Fighting spam is fairly routine these days, as it accounts for the majority of the removals — in Q2, nearly 67% of the videos removed were spam or scams.

More than 87% of the 9 million total videos removed in Q2 were taken down by automated systems, YouTube said. An upgrade to spam detection systems in the quarter led to a more than 50% increase in channels shut down for spam violations, it also noted.

The company said that more than 80% of the auto-flagged videos were removed without a single view in Q2. And it confirmed that across all of Google, there are over 10,000 people tasked with detecting, reviewing and removing content that violates its guidelines.

Again, this over 80% figure largely speaks to YouTube’s success in using automated systems to remove spam and porn.

Going forward, the company says it will soon release a further update to its harassment policy, first announced in April, that aims to prevent creator-on-creator harassment — as seen recently with the headline-grabbing YouTube creator feuds and the rise of “tea” channels.

YouTube additionally shared a timeline of its content policy milestones and related product launches.


The update from YouTube comes at a critical time for the company, just ahead of a reported $200 million settlement with the FTC over alleged violations of child privacy laws. The fine serves as a stark reminder that, for years now, the viewers of these hate speech-filled videos haven’t only been adults interested in researching extremist content or engaging in debate, but also millions of children who today turn to YouTube for information about their world.

YouTube’s Neal Mohan is coming to Disrupt SF

By Anthony Ha

YouTube has found itself front-and-center in the recent debates about free speech, the internet and how the online world is shaping our offline lives.

There’s no denying the site’s tremendous reach and influence, but that’s also why it has faced so much criticism for the role it can play in spreading misinformation, harassment and hate speech — not to mention questions about whether it’s truly a safe environment for kids.

This week, CEO Susan Wojcicki tried to address these issues in her quarterly letter to creators, where she laid out a goal of “preserving openness through responsibility.” And Chief Product Officer Neal Mohan made a similar point in a recent interview, where he emphasized the importance of “an open platform.”

At the same time, YouTube has been trying to improve, with product fixes like labeling videos uploaded by government-funded publishers (to create more transparency around content that might serve as government propaganda), trying to limit the spread of conspiracy theory videos and disabling comments on kids videos because of predatory behavior.

And while all this is happening, the company is also trying to find its place in the increasingly crowded landscape of subscription streaming. The strategy seems to be changing, with its previously paywalled YouTube Originals content becoming free and ad-supported this fall.

So there will be plenty to talk about when Mohan joins us at Disrupt SF. He’s been at YouTube’s parent company Google for more than a decade, leading the display and video ad teams before taking on his current role in 2015, where he’s responsible for YouTube’s product and user experience across all devices.

We’ll be talking to Mohan about how YouTube has tried to face these recent challenges, how it balances openness and responsibility and how the platform will continue to evolve.

Disrupt SF runs October 2 to October 4 at the Moscone Center in San Francisco. Tickets are available here.

YouTube Kids launches on the web

By Sarah Perez

Kid-friendly YouTube content now has its own website, youtubekids.com. The website will offer a similar experience to the existing YouTube Kids mobile app, where parents will be able to direct their child to videos that are age-appropriate, as well as track their child’s watch history and flag content missed by YouTube’s filters. At launch, the site won’t offer a sign-in option, but that will roll out at a later date, the company says.

The website’s imminent launch was quietly disclosed earlier this week by YouTube, and comes ahead of the official announcement of an FTC settlement which is said to include a multimillion-dollar penalty against the Google-owned video platform for its violations of U.S. children’s privacy laws, COPPA.

The FTC ruling, when announced, will not be without precedent.

The regulator earlier this year hit Musical.ly (now TikTok) with a record $5.7 million fine and forced it to implement an age-gate on its app.

The FTC’s YouTube ruling will likely also require the same sort of age-gate, designed to redirect children under the age of 13 to a kid-safe, COPPA-compliant YouTube website where children’s personal information isn’t collected without parental consent.

The new website is only one of several changes YouTube has made in recent days, ahead of the FTC announcement.

The company also this week introduced new age groupings on YouTube Kids to now include a “Preschool” filter for those age 4 and younger, in addition to a “Younger” group for ages 5 to 7, and an “Older” group for kids over 7.


YouTube Kids (“Older” age group)

And last week, the company expanded its child safety policies to remove — instead of only restrict, as it did before — any “misleading family content, including videos that target younger minors and their families, those that contain sexual themes, violence, obscene, or other mature themes not suitable for younger audiences.”

YouTube had come under fire in 2017 for hosting a number of bizarre and disturbing videos that were using keywords and the YouTube algorithms to target children.

For example, videos of popular kids’ cartoon characters like Peppa Pig drinking bleach or getting her teeth violently yanked were showing up when children sought out Peppa Pig videos. These sorts of issues had been going on for years, in fact, but YouTube only addressed the situation by age-restricting the videos after receiving high-profile press coverage. It also cut off monetization to some videos.

But the bigger problem with YouTube, as consumer advocacy groups have argued, isn’t just that YouTube can be inappropriate for kids — it’s actually breaking the law.


YouTube Kids (“Preschool” age group)

Organizations like the Campaign for a Commercial-Free Childhood (CCFC) and the Center for Digital Democracy (CDD) had asked the FTC to investigate YouTube, claiming that the company has been collecting personal information from nearly 25 million U.S. children for a number of years, and has been using this data to engage in “very sophisticated marketing techniques.”

The groups said YouTube hides behind its terms of service, which state its site is only meant for those 13 and up, while doing nothing to prevent younger users from gaining entry. (And clearly, younger users are on YouTube — after all, that’s why YouTube was able to spin out a subset of its content into its own YouTube Kids app in the first place.)

With the YouTube Kids website in place, now it’s only a matter of waiting for the FTC’s official ruling.

The Washington Post says the details of that ruling have been finalized, and noted a multimillion-dollar fine was included. And according to Bloomberg, YouTube will end targeted ads on videos aimed at kids.

But based on YouTube’s existing YouTube Kids Privacy Policy linked from the new website, that has yet to come to pass.

It also remains to be seen whether the kid-safe content will actually be pulled from YouTube.com and placed on YouTube Kids alone, as the advocacy groups believe would be best.

It’s unclear why YouTube has taken to making these very big, impactful announcements on its Help forums instead of on the official YouTube Blog, and without alerting the press, as it did with the children’s content policy change, the pre-announcement of the Kids website, the change to age filters and now the website’s launch news.

That said, it’s certainly focused on letting YouTube users know of its Kids product — a big pop-up banner now appears upon every launch of YouTube.com, which has frustrated users who don’t have children.


As for the new Kids website itself, there’s not much new to report. The content is organized as in the app, in categories like Shows, Music, Explore and Gaming. However, it’s still missing some of the app’s more advanced features, like profiles, whitelisting and timers. Those will likely roll out over time.

“We built YouTube Kids to create a safer environment for kids to explore their interests and curiosity, while giving parents the tools to customize the experience for their kids. We continue to improve the app based on feedback from parents and experts,” says YouTube.

Google says China used YouTube to meddle in Hong Kong protests

By Zack Whittaker

Google has disabled 210 YouTube accounts after it said China used the video platform to sow discord among protesters in Hong Kong.

The search giant, which owns YouTube, followed in the footsteps of Twitter and Facebook, which earlier this week said China had used their social media sites to spread misinformation and discord among the protesters, who have spent weeks taking to the streets to demand China stop interfering with the semi-autonomous region’s affairs.

In a brief blog post, Google’s Shane Huntley said the company took action after it detected activity which “behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong.”

“This discovery was consistent with recent observations and actions related to China announced by Facebook and Twitter,” said Huntley.

Earlier this week Twitter said China was using its service to “sow discord” through fake accounts as part of “a coordinated state-backed operation.”

In line with Twitter and Facebook’s findings, Google said it detected the use of virtual private networks — or VPNs — which can be used to tunnel through China’s censorship system, known as the Great Firewall. Facebook, Twitter and Google are all banned in China. But Google said little more about the accounts, what they shared or whether it would disclose its findings to researchers.

When reached, a Google spokesperson only referred back to the blog post and did not comment further.

More than a million protesters took to the streets this weekend to peacefully demonstrate against the Chinese regime, which took over rule from the United Kingdom in 1997. Protests erupted earlier this year after a bid by Hong Kong leader Carrie Lam to push through a highly controversial bill that would allow criminal suspects to be extradited to mainland China for trial. The bill was suspended, effectively killing it from reaching the law books, but protests have continued, pushing back against what demonstrators see as China’s meddling in Hong Kong’s affairs.

YouTube Originals become ad-supported and free after September 24th

By Sarah Perez

In an email distributed to YouTube Premium subscribers, the company confirmed that access to YouTube’s original programming will no longer be exclusive to Premium customers after September 24th, 2019. Instead, YouTube’s Original series, movies, and live events will be offered to all YouTube viewers for free, supported by ads. Premium members, however, can watch the content ad-free.

In addition, Premium subscribers will have access to all the available episodes in a series right when they premiere, says YouTube, and they’ll be able to download them for offline viewing.

There will also continue to be some exclusive subscriber-only content, in the form of things like director’s cuts and extra scenes from YouTube Originals.

YouTube had previously announced its plans to make its original programming available for free back in May, following a larger shift in strategy for the video platform. According to a Deadline report from last November, YouTube had been reassessing its scripted development plans with a goal of refocusing on unscripted shows and specials. It had also stopped taking new scripted pitches.

The company had found some success with scripted content, the report noted — like Cobra Kai which at the time had 100 million views and a 100% Rotten Tomatoes score. But the company was also finding success with celebrity content, like Katy Perry: Will You Be My Witness and Will Smith’s Grand Canyon bungee stunt, for example.

This is the direction YouTube may be aiming to pursue next, Deadline had said.

Perhaps not coincidentally, Variety recently reported on a new crowdfunding service for YouTube creators, Fundo, which allows stars to invite fans to virtual meet & greet sessions and other paid online events. However, this project is not from YouTube or Google itself, but rather its in-house incubator Area 120, which operates more independently. That said, it reflects YouTube’s larger interest in the creation of new revenue streams for creators beyond ads and subscriptions.

Along with the news of the changes to YouTube Originals, the email to Premium subscribers also alerted them to the addition of a “Recommended Downloads” feature on the Library tab, which lets them browse and download videos from YouTube’s algorithmic suggestions. And it noted YouTube Music changes, like the ability to switch between video and audio and the launch of “smart downloads” which automatically download up to 500 songs from Liked Songs and other favorite playlists and albums.

YouTube’s new AR feature lets you virtually try on makeup while watching videos

By Sarah Perez

Earlier this summer, YouTube announced its plans for a new AR feature for virtual makeup try-on that works directly in the YouTube app. Today, the first official campaign to use the “Beauty Try-On” feature has now launched, allowing viewers to try on and shop lipsticks from MAC Cosmetics from YouTube creator Roxette Arisa’s makeup tutorial video.

Makeup tutorials are hugely popular on YouTube, so an integration where you can try on the suggested looks yourself makes a ton of sense. While a lipstick try-on feature isn’t exactly groundbreaking — plenty of social media apps offer a similar filter these days — it could lead to more complex AR makeup integrations further down the road.

The new AR feature only works when you’re watching the video from a mobile device, and only if the YouTube app is updated to the latest version.


Then, when watching the video, you’ll see a button that says “try it on” which will launch the camera in a split-screen view. The video will continue to play as you scroll through the various lipstick shades below, applying the different colors to see which one works best. Unlike some of the filters in social apps like Instagram and Snapchat, the colors are evenly aligned with your lips and not bleeding out the edges. The result is a very natural look.

MAC Cosmetics will work with creators through YouTube’s branded content division, Famebit. The program connects brands with YouTube influencers who then market their products as paid sponsorships.

MAC is the first partner for this AR feature, but more will likely follow.

Prior to launch, YouTube tested the AR Beauty Try-On with several beauty brands, and found that 30% of viewers chose to activate the experience in the YouTube app for iOS.

Those who did were fairly engaged, spending more than 80 seconds trying on virtual lipstick shades.

Google is not the first company to offer virtual makeup try-on experiences. Beyond social media apps, there are also AR beauty apps like YouCam Makeup, Sephora’s Virtual Artist, Ulta’s GLAMLab and others. L’Oréal also offers Live Try-On on its website, and had partnered with Facebook last year to bring virtual makeup to the site. In addition, Target’s online Beauty Studio offers virtual makeup across a number of brands and products.

YouTube’s implementation, however, is different because it’s not just a fun consumer product — it’s an AR-powered ad campaign.

Though some may scoff at the idea of virtual makeup, this market is massive. Millions watch makeup tutorials on YouTube every day, and the site has become the dominant source for referral traffic for beauty brands. In 2018, beauty-related content generated more than 169 billion views on the video platform.

You can watch the YouTube video here, or engage with the AR feature from the mobile YouTube app.

If you don’t see your face immediately after pressing the “try on” button, you probably need to update the YouTube app.

The Navy taps YouTube creators for its latest recruiting campaign

By Anthony Ha

The US Navy is turning to YouTube creators for help in finding technical recruits.

Captain Matt Boren, the chief marketing officer at US Navy Recruiting Command, told me that while past recruiting efforts have focused on TV and other traditional media, that’s no longer the best route to reach the post-millennial generation.

“For our audience now, if they want to find something, they’re using the search engine on YouTube,” Boren said. “We had to be where our audience was.”

So the Navy worked with its agencies VMLY&R and Wavemaker, along with Google, to identify YouTube creators with a focus on science, technology and math, then invited those creators to highlight different technical roles and environments. Specifically, Kevin Lieber (a.k.a. Vsauce2) filmed aboard a nuclear submarine, Jake Koehler (a.k.a. Dallmyd) worked with an explosives specialist and William Osman talked to a cybersecurity team.

“We gave the creators a great deal of free rein,” Boren added. “We didn’t want to come off as the Navy trying to buy their loyalty … We wanted to give them the opportunity to spend time in a work environment so they can understand and relate to them and really [convey] that to their audience.”

While the “Sailor Vs.” series (launching today) only involves three YouTube creators, Boren said he’s open to experimenting with more influencer marketing campaigns in the future, especially since those broader media-consumption trends aren’t going away.

And that’s all happening as the Navy is facing significant recruiting challenges — Boren estimated that it needs to fill 50,000 roles this year (though highly technical roles only make up a small percentage of that total).

Dreading 10x engineers, virtual beings, the fate of Netflix, and Salesforce’s acquisition

By Danny Crichton

The dreaded 10x, or, how to handle exceptional employees

The reality (myth?) is that there are engineers who are ten times more productive than other engineers (some would argue 100x, but okay). Jon Evans, who is CTO at HappyFunCorp, dives into the strengths and weaknesses of these vaunted people and how to manage them and their relationships with other team members.

The anti-10x squad raises many important and valid — frankly, obvious and inarguable — points. Go down that Twitter thread and you’ll find that 10x engineers are identified as: people who eschew meetings, work alone, rarely look at documentation, don’t write much themselves, are poor mentors, and view process, meetings, or training as reasons to abandon their employer. In short, they are unbelievably terrible team members.

Is software a field like the arts, or sports, in which exceptional performers can exist? Sure. Absolutely. Software is Extremistan, not Mediocristan, as Nassim Taleb puts it.

A guide to Virtual Beings and how they impact our world

If your 10x engineers are too annoying to deal with, maybe consider just getting virtual beings instead. The inaugural Virtual Beings Summit was held recently in San Francisco, a conference designed to bring together storyline editors, virtual reality engineers, influencer marketers and more to consider the future of “virtual beings.”

Facebook and YouTube’s moderation failure is an opportunity to deplatform the platforms

By Devin Coldewey

Facebook, YouTube, and Twitter have failed their task of monitoring and moderating the content that appears on their sites; what’s more, they failed to do so well before they knew it was a problem. But their incidental cultivation of fringe views is an opportunity to recast their role as the services they should be rather than the platforms they have tried so hard to become.

The struggles of these juggernauts should be a spur to innovation elsewhere: While the major platforms reap the bitter harvest of years of ignoring the issue, startups can pick up where they left off. There’s no better time to pass someone up than when they’re standing still.

Asymmetrical warfare: Is there a way forward?

At the heart of the content moderation issue is a simple cost imbalance that rewards aggression by bad actors while punishing the platforms themselves.

To begin with, there is the problem of defining bad actors in the first place. This is a cost that must be borne from the outset by the platform: With the exception of certain situations where they can punt (definitions of hate speech or groups for instance), they are responsible for setting the rules on their own turf.

That’s a reasonable enough expectation. But carrying it out is far from trivial; you can’t just say “here’s the line; don’t cross it or you’re out.” It is becoming increasingly clear that these platforms have put themselves in an uncomfortable lose-lose situation.

If they have simple rules, they spend all their time adjudicating borderline cases, exceptions, and misplaced outrage. If they have more granular ones, there is no upper limit on the complexity and they spend all their time defining it to fractal levels of detail.

Both solutions require constant attention and an enormous, highly-organized and informed moderation corps, working in every language and region. No company has shown any real intention to take this on — Facebook famously contracts the responsibility out to shabby operations that cut corners and produce mediocre results (at huge human and monetary cost); YouTube simply waits for disasters to happen and then quibbles unconvincingly.

PBS coming to YouTube TV later this year

By Darrell Etherington

YouTube TV has landed another network partner: PBS. The public broadcaster’s member stations will be able to stream live and on-demand content to YouTube TV subscribers beginning later this year, PBS and YouTube announced today.

This is the first digital TV provider partnership for PBS, and the broadcaster is intent upon providing local livestreams to “as many Americans as possible” with the move. The partnership will also include PBS KIDS, providing educational and entertainment content for children via the platform. All content will be available through YouTube TV video-on-demand, and recordable via its DVR service without limits on how much content users can store, too.

YouTube has had its own share of criticism for the kind of content kids may be able to access from its platform, and it’s said to be considering a number of options for addressing misuse of the platform when it comes to children-focused videos. YouTube TV is distinct from its primary streaming video business, however, and is much more like a traditional over-the-top cable or satellite subscription, with a number of broadcast networks and premium channels signed on to provide U.S. viewers access for $49.99 per month.

Tesla will deliver in-car YouTube and Netflix video streaming ‘soon’

By Darrell Etherington

Tesla is getting ready to “soon” deliver the in-car video streaming services that CEO Elon Musk suggested would eventually come to the automaker’s cars. Musk shared this (somewhat vague) updated timeline on Twitter over the weekend, after noting earlier in June at E3 that Tesla’s infotainment displays would eventually be getting YouTube and streaming video support.

This is also the first time Musk has specifically said that both YouTube and Netflix would be coming, after previously noting in reply to a question from a fan on Twitter that version 10 of the in-car software would support video streaming generally. Musk added that video streaming would be available only while the vehicle is stopped — but the plan is to change that once full self-driving becomes a reality.

Once full autonomous driving capabilities are “approved by regulators,” Musk said, the plan is to turn on the ability to stream video in the vehicle while it’s in motion. This plan likely extends to Tesla’s in-car gaming features, too — though that’s a separate level of distraction as you’re actually interacting with what’s happening on the screen, which may not be the best idea for initial roll-out of autonomous features where a driver might be required to take over manual control in case of any incidents.

The Tesla CEO said the experience of watching video on Netflix and YouTube in a Tesla vehicle is akin to “an old-school drive-in movie experience, but with much better sound” and that it has an “immersive, cinematic feel” thanks to the surround audio available via the Tesla’s audio system and its “comfy seats.”

It may seem like a weird software update priority for a car, but it’s entirely possible Tesla owners spent so much on their vehicles that they don’t have spare cash for a fixed address, in which case an entertainment system for their tiny apartment actually makes a lot of sense.

YouTube lands on Fire TV and Amazon Prime Video arrives on Chromecast, Android TV

By Darrell Etherington

It’s nice when people can come together and work through their differences to make it easier to watch stuff. That’s exactly what happened today, when the long-standing feud between Google and Amazon over streaming video services came to an end, with YouTube arriving on Fire TV and Prime Video making its way to Chromecast and Android TV.

Amazon’s second-generation Fire TV Stick, the Fire TV Stick 4K, the Fire TV Cube, the Fire TV Stick Basic Edition and Fire TV Edition smart TVs made by partner OEMs will all get support for the official YouTube app globally starting today, and Amazon intends to extend support to even more of its hardware in the future. YouTube TV and YouTube Kids will also come to Amazon Fire TV devices later this year.

On the Google side, both its own Chromecast devices, as well as partner TVs and hardware that support Chromecast built-in or that run Android TV, will gain broad support for Prime Video. Plus, Chromecast Ultra owners will also get access to Prime Video’s 4,000-title library, normally reserved for Prime members only, at no additional cost as part of the new tie-up between the two companies.

Prime has been available on some Android TV devices to date, but it’s expanding to a much broader selection of those smart TVs and streaming boxes from today.

This has been a long time coming – several years in fact, with the most recent spat between the two coming as a result of Amazon’s implementation of YouTube on the Echo Show. Then, in May, the companies announced they’d reached an agreement to put the feud behind them in the interest of consumers, which is what resulted in this cross-platform launch today.

Let the streams flow!

YouTube update gives users more insight and control over recommendations

By Sarah Perez

YouTube today announced a series of changes designed to give users more control over what videos appear on the Homepage and in its Up Next suggestions — the latter of which are typically powered by an algorithm. The company also says it will offer more visibility to users as to why they’re being shown a recommended video — a peek into the YouTube algorithm that wasn’t possible before.

One new feature is designed to make it easier to explore topics and related videos from both the YouTube Homepage and in the Up Next video section. The app will now display personalized suggestions based on what videos are related to those you’re watching, videos published by the channel you’re watching, or others that YouTube thinks will be of interest.


This feature is rolling out to signed-in users in English on the YouTube app for Android and will be available on iOS, desktop and other languages soon, the company says.

If YouTube’s suggestions aren’t right — and they often aren’t — users will now be able to access controls that explicitly tell the service to stop suggesting videos from a particular channel.

This will be available from the three-dot menu next to a video on the Homepage or Up Next. From there, you’ll click “Don’t recommend channel.” From that point forward, no videos from that channel will be shown.

However, you’ll still be able to see the videos if you Subscribe, do a search for them, or visit the Channel page directly — they aren’t being hidden from you entirely, in other words. The videos may also still appear on the Trending tab, at times.


This feature is rolling out globally on the YouTube app for Android and iOS starting today, and will be available on desktop soon.

Lastly, and perhaps most notably, YouTube is giving users slight visibility into how its algorithm works.

Before, people may not have understood why certain videos were recommended to them. Another new feature will detail the reasons why a video made the list.

Now, underneath a video suggestion, YouTube will say why it was selected.

“Sometimes, we recommend videos from channels you haven’t seen before based on what other viewers with similar interests have liked and watched in the past,” the company explains in its announcement. “Our goal is to explain why these videos surface on your homepage in order to help you find videos from new channels you might like.”

For example, the explanation might say that viewers who also watch one of your favorite channels also watch the channel that the video recommendation is coming from.

YouTube’s algorithm is likely far more complex than just “viewers who like this also like that,” but it’s a start, at least.


This feature is launching globally on the YouTube app for iOS today, and will be available on Android and desktop soon.

The changes come at a time when YouTube — and other large social media companies — are under pressure from government regulators over how they manage their platforms. Beyond issues around privacy and security and the spread of hate speech and disinformation, platform providers are also being criticized for their reliance on opaque algorithms that determine what is shown to their end users.

YouTube, in particular, came under fire for how its own Recommendations algorithm was leveraged by child predators in the creation of a pedophilia wormhole. YouTube responded by shutting off the comments on kids’ videos where the criminals were sharing timestamps. But it stopped there.

“The company refused to turn off recommendations on videos featuring young children in leotards and bathing suits even after researchers demonstrated YouTube’s algorithm was recommending these videos to pedophiles,” wrote consumer advocacy groups in a letter to the FTC this week, urging the agency to take action against YouTube to protect children.

The FTC hasn’t commented on its investigation, as per policy, but confirmed it received the letter.

Explaining to end users how Recommendations work is only part of the problem.

The other issue is that YouTube’s algorithm can end up creating “filter bubbles,” which can, at times, lead users down dark paths.

For instance, a recent story in The New York Times detailed how a person who came to YouTube for self-help videos was increasingly shown ever more radical and extremist content, thanks to the algorithm’s recommendations which pointed him to right-wing commentary, then to conspiracy videos, and finally racist content.

The ability to explicitly silence some YouTube recommendations may help those who care enough to control their experience, but won’t likely solve the problem of those who just follow the algorithm’s suggestions. However, if YouTube were to eventually use this as a new signal — a downvote of sorts — it could influence the algorithm in other more subtle ways.

 

U.S. Senator and consumer advocacy groups urge FTC to take action on YouTube’s alleged COPPA violations

By Sarah Perez

The groups behind a push to get the U.S. Federal Trade Commission to investigate YouTube’s alleged violation of children’s privacy law, COPPA, have today submitted a new letter to the FTC that lays out the appropriate sanctions the groups want the FTC to now take. The letter comes shortly after news broke that the FTC was in the final stages of its probe into YouTube’s business practices regarding this matter.

They’re joined in pressing the FTC to act by COPPA co-author, Senator Ed Markey, who penned a letter of his own, which was also submitted today.

The groups’ formal complaint with the FTC was filed back in April 2018. The coalition, which then included 20 child advocacy, consumer and privacy groups, had claimed YouTube doesn’t get parental consent before collecting the data from children under the age of 13 — as is required by the Children’s Online Privacy Protection Act, also known as COPPA.

The organizations said, effectively, that YouTube was hiding behind its terms of service which claims that YouTube is “not intended for children under 13.”

This simply isn’t true, as any YouTube user knows. YouTube is filled with videos that explicitly cater to children, from cartoons to nursery rhymes to toy ads — the latter of which often come about by way of undisclosed sponsorships between toy makers and YouTube stars. The video creators will excitedly unbox or demo toys they received for free or were paid to feature, and kids just eat it all up.

In addition, YouTube curates much of its kid-friendly content into a separate YouTube Kids app that’s designed for the under-13 crowd — even preschoolers.

Meanwhile, YouTube treats children’s content like any other. That means targeted advertising and commercial data collection are taking place, the groups’ complaint states. YouTube’s algorithms also recommend videos and autoplay its suggestions — a practice that led to kids being exposed to inappropriate content in the past.

Today, two of the leading groups behind the original complaint — the Campaign for a Commercial-Free Childhood (CCFC) and Center for Digital Democracy (CDD) — are asking the FTC to impose the maximum civil penalties on YouTube because, as they’ve said:

Google had actual knowledge of both the large number of child-directed channels on YouTube and the large numbers of children using YouTube. Yet, Google collected personal information from nearly 25 million children in the U.S over a period of years, and used this data to engage in very sophisticated digital marketing techniques. Google’s wrongdoing allowed it to profit in two different ways: Google has not only made a vast amount of money by using children’s personal information as part of its ad networks to target advertising, but has also profited from advertising revenues from ads on its YouTube channels that are watched by children.

The groups are asking the FTC to impose a 20-year consent decree on YouTube.

They want the FTC to order YouTube to destroy all data from children under 13, including any inferences drawn from the data, that’s in Google’s possession. YouTube should also stop collecting data from anyone under 13, including anyone viewing a channel or video directed at children. Kids’ ages also need to be identified so they can be prevented from accessing YouTube.

Meanwhile, the groups suggest that all the channels in the Parenting and Family lineup, plus any other channels or videos directed at children, be removed from YouTube and placed into a separate platform for children (e.g., the YouTube Kids app).

This is something YouTube is already considering, according to a report from The Wall Street Journal last week.

This separate kids platform would have a variety of restrictions, including no commercial data collection; no links out to other sites or online services; no targeted marketing; no product or brand integration; no influencer marketing; and even no recommendations or autoplay.

The removal of autoplaying videos and recommendations, in particular, would be a radical change to how YouTube operates, but one that could protect kids from inappropriate content that slips in. It’s also a change that some employees inside YouTube itself were vying for, according to The WSJ’s report. 

The groups also urge the FTC to require Google to fund educational campaigns around the true nature of Google’s data-driven marketing systems, admit publicly that it violated the law, and submit to annual audits to ensure its ongoing compliance. They want Google to commit $100 million to establish a fund that supports the production of noncommercial, high-quality and diverse content for kids.

Finally, the groups are asking that Google face the maximum possible civil penalties — $42,530 per violation, which could be counted as either per child or per day. This monetary relief needs to be severe, the groups argue, so Google and YouTube will be deterred from ever violating COPPA in the future.

This laundry list of suggestions is more like a wish list of what the ideal resolution would look like; it doesn’t mean the FTC will follow through on all of them.

However, it seems likely that the Commission would at least require YouTube to delete the improperly collected data and isolate the kids’ YouTube experience in some way. After all, that’s precisely what it just did with TikTok (previously Musical.ly), which earlier this year paid a record $5.7 million fine for its own COPPA violations. It also had to implement an age gate where under-13 kids were restricted from publishing content.

The advocacy groups aren’t the only ones making suggestions to the FTC.

Senator Ed Markey (D-Mass.) also sent the FTC a letter today about YouTube’s violations of COPPA — a piece of legislation that he co-authored.

In his letter, he urges the FTC to take a similar set of actions, saying:

“I am concerned that YouTube has failed to comply with COPPA. I therefore, urge the Commission to use all necessary resources to investigate YouTube, demand that YouTube pay all monetary penalties it owes as a result of any legal violations, and instruct YouTube to institute policy changes that put children’s well-being first.”

His suggestions are similar to those being pushed by the advocacy groups. They include demands for YouTube to delete the children’s data and cease data collection on those under 13; implement an age gate on YouTube to come into compliance with COPPA; prohibit targeted and influencer marketing; offer detailed explanations of what data is collected for “internal purposes;” undergo a yearly audit; provide documentation of compliance upon request; and establish a fund for noncommercial content.

He also wants Google to sponsor a consumer education campaign warning parents that no one under 13 should use YouTube, and wants Google to be prohibited from launching any new child-directed product until it’s been reviewed by an independent panel of experts.

The FTC’s policy doesn’t allow it to confirm or deny nonpublic investigations. YouTube hasn’t yet commented on the letters.

Cricket World Cup highlights just how big video streaming is in India

By Manish Singh

As hundreds of millions of people turn their attention to the ongoing ICC Cricket World Cup tournament, many of them are using an Indian streaming service to follow the ins and outs of the game.

More than 100 million users tuned in to Hotstar, an on-demand streaming service owned by Disney, on June 16, the day India and Pakistan played a league match against each other. That’s the highest engagement the four-year-old service has clocked on its platform to date, it said in a statement today.

Hotstar said about 66% of its viewers came from outside of big metro cities, an equally remarkable feat that illustrates the growing adoption of the streaming service in smaller cities and towns that remain sporadic consumers — if at all — of internet services.

To be sure, these 100 million users are not all paying subscribers. Hotstar offers five-minute streaming of live events to users at no cost. The platform, which competes with Netflix, Prime Video, AltBalaji, Zee5, and YouTube in India, declined to share its paying subscriber base. In April, the company said it had 300 million monthly active users.

Regardless, 100 million daily active users is an impressive feat for any service in India. Especially for streaming services that, thanks to dramatically dwindling mobile data prices in the country in recent years, are increasingly changing user behavior toward intensive data usage online. (For some context, Facebook and WhatsApp are both shy of 300 million monthly active users in India; Google’s YouTube, which is its fastest growing service in the nation, also has fewer than 300 million monthly active users in the country.)

It also helps that the game between India and Pakistan, two neighboring nations with a long history, remains one of the most anticipated events for cricket-following countries.

Cricket itself has emerged as the biggest driver of video streaming in India in the last three years. The game is followed by hundreds of millions of fans across the globe — if not more. In 2010, Hillary Clinton urged nations to look at cricket as a model for improving relationships with other countries.

“I might suggest that if we are searching for a model of how to meet tough international challenges with skill, dedication and teamwork, we need only look to the Afghan national cricket team,” she said as U.S. Secretary of State in 2010.

Star India, which operates Hotstar, owns the rights to most cricket tournaments, a bet that has paid off and immensely helped scale its business. It then shouldn’t come as a surprise that both Amazon Prime Video and Netflix, which do not offer live streaming of sporting events in India, have produced shows themed around cricket to cash in on the game’s popularity.

Both Amazon and Netflix have fewer than 5 million subscribers in India, according to industry estimates. While Amazon Prime Video, which bundles a range of other services including faster delivery of goods, and Hotstar are priced at Rs 999 ($14.4) for a year-long service, Netflix’s monthly offering starts at Rs 500 ($7.2) — though it has been experimenting with more options.

Even Facebook made an unsuccessful bid to acquire streaming rights to a cricket tournament in India two years ago, months before it began to talk about its Watch ambitions. That cricket tournament was the Indian Premier League (IPL), which concluded its 12th edition last month. Hotstar, which also owns the rights to stream IPL matches, set a global record for the most simultaneous views of a live event in the final game of the tournament last month.

Beating its own previous record, Hotstar claimed that more than 18.6 million viewers watched the game simultaneously. Interestingly enough, even as a record 100 million-plus users watched the game between India and Pakistan this month, Hotstar said the concurrent views count peaked at 15.6 million.

It remains unclear why Hotstar was not able to break its concurrent record that day. TechCrunch reported earlier this month that Hotstar had identified a security flaw in its service that allowed some Safari browser users to access and distribute Hotstar’s content without a paid subscription. To fix it, Hotstar temporarily discontinued support for Safari browser.

Last year, Hotstar and Walmart-owned Flipkart began a collaboration on building an advertising business in India. According to media planners who TechCrunch has spoken to, Hotstar-Flipkart’s digital ad business is already the third largest in India, only behind Google and Facebook. Hotstar is still unprofitable, however.

For Hotstar, the biggest challenge is retaining customers after the mega cricket season ends next month. Each year, the service struggles to keep customers, which results in a massive drop in user count when the cricket season is over, a source familiar with the matter said.

Over the last year, Hotstar has sought to build on its popularity and has started to invest in its own original content. Many inside the company have high hopes that people will show up to watch the Indian Jim and Pam in the remake of NBC’s sitcom “The Office.” It premieres on Hotstar later this week, and its performance will be a sign of what’s ahead for the streaming company.
