
Facebook and Twitter CEOs to testify before Congress in November on how they handled the election

By Taylor Hatmaker

Shortly after voting to move forward with a pair of subpoenas, the Senate Judiciary Committee has reached an agreement that will see the CEOs of two major social platforms testify voluntarily in November. The hearing will be the second major congressional appearance by tech CEOs arranged this month.

Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg will answer questions at the hearing, set for November 17 — two weeks after election day. The Republican-led committee is chaired by South Carolina Senator Lindsey Graham, who set the agenda to include the “platforms’ censorship and suppression of New York Post articles.”

According to a new press release from the committee, lawmakers also plan to use the proceedings as a high-profile post-mortem on how Twitter and Facebook fared on and after election day — an issue that lawmakers on both sides will undoubtedly be happy to dig into.

Republicans are eager to press the tech CEOs on how their respective platforms handled a dubious story from the New York Post purporting to report on hacked materials from presidential candidate Joe Biden’s son, Hunter Biden. They view the incident as evidence of their ongoing claims of anti-conservative political bias in platform policy decisions.

While Republicans on the Senate committee led the decision to pressure Zuckerberg and Dorsey into testifying, the committee’s Democrats, who sat out the vote on the subpoenas, will likely bring to the table their own questions about content moderation, as well.

Daily Crunch: Uber and Lyft defeated again in court

By Anthony Ha

A California court weighs in as Prop. 22 looms, Google removes popular apps over data collection practices and the Senate subpoenas Jack Dorsey and Mark Zuckerberg. This is your Daily Crunch for October 23, 2020.

The big story: Uber and Lyft defeated again in court

A California appeals court ruled that yes, a new state law applies to Uber and Lyft drivers, meaning that they must be classified as employees, rather than independent contractors. The judge ruled that contrary to the rideshare companies’ arguments, any financial harm does not “rise to the level of irreparable harm.”

However, the decision will not take effect for 30 days — suggesting that the real determining factor will be Proposition 22, a statewide ballot measure backed by Uber and Lyft that would keep drivers as contractors while guaranteeing things like minimum compensation and healthcare subsidies.

“This ruling makes it more urgent than ever for voters to stand with drivers and vote yes on Prop. 22,” a Lyft spokesperson told TechCrunch.

The tech giants

Google removes 3 Android apps for children, with 20M+ downloads between them, over data collection violations — Researchers at the International Digital Accountability Council found that a trio of popular and seemingly innocent-looking apps aimed at younger users were violating Google’s data collection policies.

Huawei reports slowing growth as its operations ‘face significant challenges’ — The full impact of U.S. trade restrictions hasn’t been realized yet, because the government has granted Huawei several waivers.

Senate subpoenas could force Zuckerberg and Dorsey to testify on New York Post controversy — The Senate Judiciary Committee voted in favor of issuing subpoenas for Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey.

Startups, funding and venture capital

Quibi says it will shut down in early December — A newly published support page on the Quibi site says streaming will end “on or about December 1, 2020.”

mmhmm, Phil Libin’s new startup, acquires Memix to add enhanced filters to its video presentation toolkit — Memix has built a series of filters you can apply to videos to change the lighting, the details in the background or across the whole screen.

Nordic challenger bank Lunar raises €40M Series C, plans to enter the ‘buy now, pay later’ space — Lunar started out as a personal finance manager app but acquired a full banking license in 2019.

Advice and analysis from Extra Crunch

Here’s how fast a few dozen startups grew in Q3 2020 — This is as close to private company earnings reports as we can manage.

The short, strange life of Quibi — Everything you need to know about the Quibi story, all in one place.

(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

France rebrands contact-tracing app in an effort to boost downloads — France’s contact-tracing app has been updated and is now called TousAntiCovid, which means “everyone against Covid.”

Representatives propose bill limiting presidential internet ‘kill switch’ — The bill would limit the president’s ability to shut down the internet at will.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

France rebrands contact-tracing app in an effort to boost downloads

By Romain Dillet

Don’t call it StopCovid anymore. France’s contact-tracing app has been updated and is now called TousAntiCovid, which means “everyone against Covid”. The French government is trying to pivot so that it’s no longer a contact-tracing app — or at least not just a contact-tracing app.

Right now, TousAntiCovid appears to be a rebranding more than a pivot. There’s a new name and some changes in the user interface. But the core feature of the app remains unchanged.

StopCovid hasn’t been a success. First, it’s still unclear whether contact-tracing apps are a useful tool to alert people who have interacted with someone who has tested positive for COVID-19. Second, even setting that aside, the app never really took off.

Back in June, the French government gave us an update on StopCovid three weeks after its launch: 1.9 million people had downloaded the app, but StopCovid only sent 14 notifications.

Four months later, StopCovid/TousAntiCovid has been downloaded and activated by close to 2.8 million people. But only 13,651 people declared themselves as COVID-19-positive in the app, which led to 823 notifications. Even if you test positive, in most cases no one is going to be notified.

Hence today’s update. If you’ve been using the app, you’ll receive TousAntiCovid with a software update — the French government is using the same App Store and Play Store listing. When you first launch the app, you go through an onboarding process focused on contact-tracing — activate notifications, activate Bluetooth, etc.

France is using its own contact-tracing protocol, called ROBERT, developed by a group of researchers and private companies around a centralized architecture. The server assigns you a permanent ID (a pseudonym) and sends your phone a list of ephemeral IDs derived from that permanent ID.

Like most contact-tracing apps, TousAntiCovid relies on Bluetooth Low Energy to build a comprehensive list of other app users you’ve interacted with for more than a few minutes. If you’re using the app, it collects the ephemeral IDs of other app users around you.

If you’re using the app and you’re diagnosed COVID-19-positive, your testing facility will hand you a QR code or a string of letters and numbers. You can choose to open the app and enter that code to share the list of ephemeral IDs of people you’ve interacted with over the past two weeks.

The server back end then flags all those ephemeral IDs as belonging to people who have potentially been exposed to the coronavirus. On the server again, each user is associated with a risk score. If it goes above a certain threshold, the user receives a notification. The app then recommends you get tested and follow official instructions.
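The centralized flow described above (server-issued permanent ID, derived ephemeral IDs, upload of contacts on a positive test, server-side risk scoring) can be sketched roughly as follows. This is an illustrative toy, not ROBERT's actual cryptographic construction; the SHA-256 derivation, the class names and the risk-score values are all assumptions made for the sketch.

```python
import hashlib
import secrets

def derive_ephemeral_ids(permanent_id: bytes, epochs: int) -> list:
    """Derive short-lived ephemeral IDs from a permanent ID (illustrative only)."""
    return [hashlib.sha256(permanent_id + epoch.to_bytes(4, "big")).digest()[:8]
            for epoch in range(epochs)]

class Server:
    """Toy central server: it alone can map ephemeral IDs back to users."""
    def __init__(self, risk_threshold: float = 1.0):
        self.risk = {}          # permanent ID -> accumulated risk score
        self.eph_to_user = {}   # ephemeral ID -> permanent ID
        self.risk_threshold = risk_threshold

    def register(self):
        """Assign a permanent pseudonym and a batch of ephemeral IDs."""
        pid = secrets.token_bytes(16)
        self.risk[pid] = 0.0
        ephs = derive_ephemeral_ids(pid, epochs=96)  # e.g. one per 15 minutes
        for e in ephs:
            self.eph_to_user[e] = pid
        return pid, ephs

    def report_positive(self, contact_ephs):
        """A positive user uploads the ephemeral IDs collected over Bluetooth;
        the server raises the risk score of the matching users."""
        for e in contact_ephs:
            pid = self.eph_to_user.get(e)
            if pid is not None:
                self.risk[pid] += 1.0

    def should_notify(self, pid: bytes) -> bool:
        """Notify a user once their server-side risk score crosses the threshold."""
        return self.risk[pid] >= self.risk_threshold
```

Note that, as in the article, all matching happens on the server: phones only exchange opaque ephemeral IDs over Bluetooth, and only the server can turn an uploaded contact list into notifications.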

But there are some new things in the app. You can now access some recent numbers about the pandemic in France — new cases over the past 24 hours, number of people in intensive care units, etc. There’s also a new feed of news items. Right now, it sums up what you can do and cannot do in France.

And there are some new links for useful resources — the service that tells you where you can get tested and a link to the exemption certificate during the curfew. When you tap on those links, it simply launches your web browser to official websites.

Let’s see how the app evolves as the government now wants to actively iterate on TousAntiCovid to make it more attractive. If TousAntiCovid can become a central information hub for your phone, it could attract more downloads.

Daily Crunch: Facebook Dating comes to Europe

By Anthony Ha

Facebook’s dating feature expands after a regulatory delay, we review the new Amazon Echo and President Donald Trump has an on-the-nose Twitter password. This is your Daily Crunch for October 22, 2020.

The big story: Facebook Dating comes to Europe

Back in February, Facebook had to call off the European launch date of its dating service after failing to provide the Irish Data Protection Commission with enough advance notice of the launch. Now it seems the regulator has given Facebook the go-ahead.

Facebook Dating (which launched in the U.S. last year) allows users to create a separate dating profile, identify secret chats and go on video dates.

As for any privacy and regulatory concerns, the commission told us, “Facebook has provided detailed clarifications on the processing of personal data in the context of the Dating feature … We will continue to monitor the product as it launches across the EU this week.”

The tech giants

Amazon Echo review: Well-rounded sound — This year’s redesign centers on another audio upgrade.

Facebook adds hosting, shopping features and pricing tiers to WhatsApp Business — Facebook is launching a way to shop for and pay for goods and services in WhatsApp chats, and it said it will finally start to charge companies using WhatsApp for Business.

Spotify takes on radio with its own daily morning show — The new program will combine news, pop culture, entertainment and music personalized to the listener.

Startups, funding and venture capital

Chinese live tutoring app Yuanfudao is now worth $15.5 billion — The homework tutoring app founded in 2012 has surpassed Byju’s as the most valuable edtech company in the world.

E-bike subscription service Dance closes $17.7M Series A, led by HV Holtzbrinck Ventures — The founders of SoundCloud launched their e-bike service three months ago.

Freelancer banking startup Lili raises $15M — It’s only been a few months since Lili announced its $10 million seed round, and it’s already raised more funding.

Advice and analysis from Extra Crunch

How unicorns helped venture capital get later, and bigger — Q3 2020 was a standout period for how high late-stage money stacked up compared to cash available to younger startups.

Ten Zurich-area investors on Switzerland’s 2020 startup outlook — According to official estimates, the number of new Swiss startups has skyrocketed by 700% since 1996.

Four quick bites and obituaries on Quibi (RIP 2020-2020) — What we can learn from Quibi’s amazing, instantaneous, billions-of-dollars failure.

(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

President Trump’s Twitter accessed by security expert who guessed password “maga2020!” — After logging into President Trump’s account, the researcher said he alerted Homeland Security and the password was changed.

For the theremin’s 100th anniversary, Moog unveils the gorgeous Claravox Centennial — With a walnut cabinet, brass antennas and a plethora of wonderful knobs and dials, the Claravox looks like it emerged from a prewar recording studio.

Announcing the Agenda for TC Sessions: Space 2020 — Our first-ever dedicated space event is happening on December 16 and 17.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Facebook’s controversial Oversight Board starts reviewing content moderation cases

By Taylor Hatmaker

Facebook’s external body of decision makers will start reviewing cases about what stays on the platform and what goes beginning today.

The new system will elevate some of the platform’s content moderation decisions to a new group called the Facebook Oversight Board, which will make decisions and influence precedents about what kind of content should and shouldn’t be allowed.

But as we’ve reported previously, the board’s decisions won’t just magically enact changes on the platform. Instead of setting policy independently, each recommended platform policy change from the oversight board will get kicked back to Facebook, which will “review that guidance” and decide what changes, if any, to make.

The oversight board’s specific case decisions will remain, but that doesn’t mean they’ll really be generalized out to the social network at large. Facebook says it is “committed to enforcing the Board’s decisions on individual pieces of content, and to carefully considering and transparently responding to any policy recommendations.”

The group’s focus on content taken down rather than content already allowed on the social network will also skew its purview. While a vocal subset of its conservative critics in Congress might disagree, Facebook’s real problems are about what stays online — not what gets taken down.

Whether it’s violent militias connecting and organizing, political figures spreading misleading lies about voting or misinformation from military personnel that fuels an ethnic cleansing, content that spreads on Facebook has the power to reshape reality in extremely dangerous ways.

Noting the criticism, Facebook claims that decisions about content still up on Facebook are “very much in scope from Day 1” because the company can directly refer those cases to the Oversight Board. But with Facebook itself deciding which cases to elevate, that’s another major strike against the board’s independence from the outset.

Facebook says that the board will focus on reviewing content removals initially because of the way its existing systems are set up, but it aims “to bring all types of content outlined in the bylaws into scope as quickly as possible.”

According to Facebook, anyone who has appealed “eligible” Facebook and Instagram content moderation decisions and has already gone through the normal appeal process will get a special ID that they can take to the Oversight Board website to submit their case.

Facebook says the board will decide which cases to consider, pulling from a combination of user-appealed cases and cases that Facebook will send its way. The full slate of board members, announced in May, grew out of four co-chairs that Facebook itself named to the board. The international group of 20 includes former journalists, U.S. appeals court judges, digital rights activists, the ex-prime minister of Denmark and one member from the Cato Institute, the libertarian think tank.

“We expect them to make some decisions that we, at Facebook, will not always agree with – but that’s the point: they are truly autonomous in their exercise of independent judgment,” the company wrote in May.

Critics disagree. Facebook skeptics from every corner have seized on the oversight effort, calling it a charade and pointing out that its decisions aren’t really binding.

Facebook was not happy when a group of prominent critics calling itself the “Real Facebook Oversight Board” launched late last month. And earlier this year, a tech watchdog group called for the board’s five U.S.-based members to demand they be given more real power or resign.

Facebook also faced a backlash when it said the Oversight Board, which has been in the works for years, wouldn’t be up and running until “late fall.” But with just weeks to go before election day, Facebook has suddenly scrambled to get new policies and protections in place on issues that it’s dragged its feet on for years — the Oversight Board included, apparently.

Coalition for App Fairness, a group fighting for app store reforms, adds 20 new partners

By Sarah Perez

The Coalition for App Fairness (CAF), a newly formed advocacy group pushing for increased regulation over app stores, has more than doubled in size with today’s announcement of 20 new partners — just one month after its launch. The organization, led by top app publishers and critics, including Epic Games, Deezer, Basecamp, Tile, Spotify and others, debuted in late September to fight back against Apple and Google’s control over app stores, and particularly the stores’ rules around in-app purchases and commissions.

The coalition claims both Apple and Google engage in anti-competitive behavior, as they require publishers to use the platforms’ own payment mechanisms and charge a 30% commission on these forced in-app purchases. In some cases, those commissions are collected from apps against which Apple and Google offer a direct competitor. For example, the app stores collect commissions from Spotify, which competes with Google’s YouTube Music and Apple’s own Apple Music.
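As a rough illustration of the commission mechanics at issue (using the headline 30% rate; actual rates vary by program and tier, and the function below is purely illustrative):

```python
def developer_net(gross: float, commission_rate: float = 0.30) -> float:
    """Revenue a publisher keeps after the store's cut on an in-app purchase."""
    return gross * (1 - commission_rate)

# On a $9.99 subscription at the headline 30% rate, the store keeps
# roughly $3.00 and the publisher keeps roughly $6.99.
```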

The group also calls out Apple more specifically for not allowing app publishers any means of reaching the iOS user base except through the App Store that Apple controls. Google, however, allows apps to be sideloaded, so it is less of a concern on that platform.

The coalition launched last month with 13 app publishers as its initial members, and invited other interested parties to sign up to join.

Since then, CAF says “hundreds” of app developers expressed interest in the organization. It’s been working through applications to evaluate prospective members, and is today announcing its latest cohort of new partners.

This time, the app publishers aren’t necessarily big household names, like Spotify and Epic Games, but instead represent a wide variety of apps, ranging from studios to startups.

The apps also hail from a number of app store categories, including Business, Education, Entertainment, Developer Tools, Finance, Games, Health & Fitness, Lifestyle, Music, Navigation, News, Productivity, Shopping, Sport and Travel.

The new partners include: development studio Beonex, health app Breath Ball, social app Challenge by Eristica, shopping app Cladwell, fitness app Down Dog Yoga, developer tool Gift Card Offerwall, game maker Green Heart Games, app studio Imagine BC, business app Passbase, music app Qobuz, lifestyle app QuackQuack and Qustodio, game Safari Forever, news app Schibsted, app studio Snappy Mob, education app SpanishDict, navigation app Sygic, app studio Vertical Motion, education app YARXI, and the Mobile Marketing Association.

With the additions, CAF now includes members from Austria, Australia, Canada, France, Germany, India, Israel, Malaysia, Norway, Singapore, Slovakia, Spain, United Kingdom and the United States.

The new partners have a range of complaints against the app stores, and particularly Apple.

SpanishDict, for instance, says it was frustrated by weeks of rejections with no recourse and inconsistently applied policies. It also didn’t want to use Apple’s new authentication system, Sign In With Apple, but Apple made this a requirement for being included on the App Store.

Passbase, a Sign In With Apple competitor, also argues that Apple applied its rules unfairly, denying its submission but allowing its competitors on the App Store.

While some of the app partners are speaking out against Apple for the first time, others have already detailed their struggles publicly.

Eristica posted on its own website how Apple shut down its seven-year-old social app business, which allowed users to challenge each other to dares to raise money for charity. The company claims it pre-moderated the content to ensure dangerous and harmful content wasn’t published, and employed human moderators, but was still rejected over dangerous content.

Meanwhile, TikTok remained on the App Store, despite hosting harmful challenges, like the pass out challenge, cereal challenge, salt and ice challenge and others, Eristica says.

Apple, of course, tends to use its policies to shape what kind of apps it wants to host on its App Store — and an app that focused on users daring one another may have been seen as a potential liability.

That said, Eristica presents a case where it claims to have followed all the rules and made all the changes Apple said it wanted, and yet still couldn’t get back in.

Down Dog Yoga also recently made waves by calling out Apple for rejecting its app because it refused to auto-charge customers at the end of its free trial.

Wow! Apple is rejecting our latest update because we refuse to auto-charge at the end of our free trial. They can choose to steal from their customers who forget to cancel, but we won't do the same to ours. THIS IS A LINE THAT WE WILL NOT CROSS. pic.twitter.com/s9HwD4ay4h

— Down Dog (@downdogapp) June 30, 2020

The issue, in this case, wasn’t just that Apple wants a cut of developers’ businesses, it also wanted to dictate how those businesses are run.

Another new CAF partner, Qustodio, was among the apps impacted by Apple’s 2018 parental control app ban, which arrived shortly after Apple introduced its own parental control software in iOS.

The app developer had then co-signed a letter asking Apple to release a Screen Time API rather than banning parental control apps — a consideration that TechCrunch had earlier suggested should have been Apple’s course of action in the first place.

Under increased regulatory scrutiny, Apple eventually relented and allowed the apps back on the App Store last year.

Not every partner is a little guy getting crushed by App Store rules. Some may have run afoul of rules designed to protect consumers, like Apple’s crackdown on offerwalls. Gift Card Offerwall’s SDK, for example, was used to incentivize app monetization and in-app purchases, which isn’t something consumers tend to welcome.

Despite increased regulatory pressure and antitrust investigations into their business practices, both Apple and Google have modified their app store rules in recent weeks to make clear their right to collect commissions on in-app purchases from developers.

Meanwhile, Apple and CAF member Epic Games are engaged in a lawsuit over the Fortnite ban, as Epic chose to challenge the legality of the app store business model in the court system.

Other CAF members, including Spotify and Tile, have testified in antitrust investigations against Apple’s business practices, as well.

“Apple must be held accountable for its anticompetitive behavior. We’re committed to creating a level playing field and fair future, and we’re just getting started,” CAF said in an announcement about the new partners. It says it’s still open to new members.

Dear Sophie: What visa options exist for a grad co-founding a startup?

By Walter Thompson
Sophie Alcorn Contributor
Sophie Alcorn is the founder of Alcorn Immigration Law in Silicon Valley and 2019 Global Law Experts Awards’ “Law Firm of the Year in California for Entrepreneur Immigration Services.” She connects people with the businesses and opportunities that expand their lives.

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

Extra Crunch members receive access to weekly “Dear Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

What are the visa prospects for a graduate completing an advanced degree at a university in the United States who wants to co-found a startup after graduation? Can the new startup or my co-founders sponsor me for a visa?

—Brilliant in Berkeley

Dear Brilliant,

Thank you for your questions and for your contributions. The U.S. economy greatly benefits from entrepreneurial individuals like you who create companies — and jobs — in the U.S.

Let me take your second question first: Yes, it is theoretically possible for your startup to sponsor you for a visa, and for one of your co-founders to be your supervisor. Many visas and employment green cards require a company to sponsor you and for you to demonstrate that a valid employer-employee relationship exists.

Given your situation, timing will be key, particularly since one of your best visa options is the H-1B Visa for Specialty Occupations. The number of H-1B visas issued each year is typically capped at 85,000: 65,000 for individuals with a bachelor’s degree and an additional 20,000 for individuals with a master’s or higher degree from a U.S. institution. Because of the cap on H-1B visas and because the demand for them far outstrips the supply, U.S. Citizenship and Immigration Services (USCIS) holds a lottery once a year in the spring to determine who can apply for this visa.
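The cap-and-lottery mechanics can be sketched as a simple simulation. This is a deliberate simplification of the selection order USCIS has used in recent years (all registrations first compete for the regular cap, then unselected advanced-degree registrations compete for the additional exemption); the function name and structure are illustrative assumptions, not USCIS's actual procedure.

```python
import random

def h1b_lottery(registrations, regular_cap=65_000, advanced_cap=20_000, seed=None):
    """Simplified H-1B selection sketch.

    `registrations` is a list of (id, has_advanced_degree) pairs. Every
    registration first competes for the regular cap; unselected
    advanced-degree registrations then compete for the additional cap.
    """
    rng = random.Random(seed)
    pool = list(registrations)
    rng.shuffle(pool)
    selected = pool[:regular_cap]               # regular-cap selection
    remaining_advanced = [r for r in pool[regular_cap:] if r[1]]
    rng.shuffle(remaining_advanced)
    selected += remaining_advanced[:advanced_cap]  # advanced-degree exemption
    return selected
```

The sketch makes the "far outstrips the supply" point concrete: with, say, 200,000 registrations, only 85,000 are ever selected, and holding an advanced degree gives a registrant two chances rather than one.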

EU parliament backs tighter rules on behavioural ads

By Natasha Lomas

The EU parliament has backed a call for tighter regulations on behavioral ads (aka microtargeting) in favor of less intrusive, contextual forms of advertising — urging Commission lawmakers to also assess further regulatory options, including looking at a phase-out leading to a full ban.

MEPs also want Internet users to be able to opt out of algorithmic content curation altogether.

The legislative initiative, introduced by the Legal Affairs committee, sets the parliament on a collision course with the business model of tech giants Facebook and Google.

Parliamentarians also backed a call for the Commission to look at options for setting up a European entity to monitor and impose fines to ensure compliance with rebooted digital rules — voicing support for a single, pan-EU Internet regulator to keep platforms in line.

The votes by the elected representatives of EU citizens are non-binding but send a clear signal to Commission lawmakers who are busy working on an update to existing ecommerce rules, via the forthcoming Digital Service Act (DSA) package — due to be introduced next month.

The DSA is intended to rework the regional rule book for digital services, including tackling controversial issues such as liability for user-generated content and online disinformation. And while only the Commission can propose laws, the DSA will need to gain the backing of the EU parliament (and the Council) if it is to go the legislative distance so the executive needs to take note of MEPs’ views.

Battle over adtech

The mass surveillance of Internet users for ad targeting — a space that’s dominated by Google and Facebook — looks set to be a major battleground as Commission lawmakers draw up the DSA package.

Last month Facebook’s policy VP Nick Clegg, a former MEP himself, urged regional lawmakers to look favorably on a business model he couched as “personalized advertising” — arguing that behavioral ad targeting allows small businesses to level the playing field with better resourced rivals.

However the legality of the model remains under legal attack on multiple fronts in the EU.

Scores of complaints have been lodged with EU data protection agencies over the mass exploitation of Internet users’ data by the adtech industry since the General Data Protection Regulation (GDPR) began being applied — with complaints raising questions over the lawfulness of the processing and the standard of consent claimed.

Just last week, a preliminary report by Belgium’s data watchdog found that a flagship tool for gathering Internet users’ consent to ad tracking that’s operated by the IAB Europe fails to meet the required GDPR standard.

The use of Internet users’ personal data in the high-velocity information exchange at the core of programmatic advertising’s real-time bidding (RTB) process is also being probed by Ireland’s DPC, following a series of complaints. The UK’s ICO has warned for well over a year of systemic problems with RTB too.

Meanwhile some of the oldest unresolved GDPR complaints pertain to so-called ‘forced consent’ by Facebook — given GDPR’s requirement that for consent to be lawful it must be freely given. Yet Facebook does not offer any opt-out from behavioral targeting; the ‘choice’ it offers is to use its service or not use it.

Google has also faced complaints over this issue. And last year France’s CNIL fined it $57M for not providing sufficiently clear info to Android users over how it was processing their data. But the key question of whether consent is required for ad targeting remains under investigation by Ireland’s DPC almost 2.5 years after the original GDPR complaint was filed — meaning the clock is ticking on a decision.

And still there’s more: Facebook’s processing of EU users’ personal data in the US also faces huge legal uncertainty because of the clash between fundamental EU privacy rights and US surveillance law.

A major ruling (aka Schrems II) by Europe’s top court this summer has made it clear EU data protection agencies have an obligation to step in and suspend transfers of personal data to third countries when there’s a risk the information is not adequately protected. This led to Ireland’s DPC sending Facebook a preliminary order to suspend EU data transfers.

Facebook has used the Irish courts to get a stay on that while it seeks a judiciary review of the regulator’s process — but the overarching legal uncertainty remains. (Not least because the complainant, angry that data continues to flow, has also been granted a judicial review of the DPC’s handling of his original complaint.)

There has also been an uptick in EU class actions targeting privacy rights, as the GDPR provides a framework that litigation funders feel they can profit off of.

All this legal activity focused on EU citizens’ privacy and data rights puts pressure on Commission lawmakers not to be seen to row back standards as they shape the DSA package — with the parliament now firing its own warning shot calling for tighter restrictions on intrusive adtech.

It’s not the first such call from MEPs, either. This summer the parliament urged the Commission to “ban platforms from displaying micro-targeted advertisements and to increase transparency for users”. And while they’ve now stepped away from calling for an immediate outright ban, yesterday’s votes were preceded by more detailed discussion — as parliamentarians sought to debate in earnest with the aim of influencing what ends up in the DSA package.

Ahead of the committee votes, online ad standards body, the IAB Europe, also sought to exert influence — putting out a statement urging EU lawmakers not to increase the regulatory load on online content and services.

“A facile and indiscriminate condemnation of ‘tracking’ ignores the fact that local, generalist press whose investigative reporting holds power to account in a democratic society, cannot be funded with contextual ads alone, since these publishers do not have the resources to invest in lifestyle and other features that lend themselves to contextual targeting,” it suggested.

“Instead of adding redundant or contradictory provisions to the current rules, IAB Europe urges EU policymakers and regulators to work with the industry and support existing legal compliance standards such as the IAB Europe Transparency & Consent Framework [TCF], that can even help regulators with enforcement. The DSA should rather tackle clear problems meriting attention in the online space,” it added in the statement last month.

However, as we reported last week, the IAB Europe’s TCF has been found not to comply with existing EU standards following an investigation by the Belgian DPA’s inspectorate service — suggesting the tool offers quite the opposite of ‘model’ GDPR compliance. (Although a final decision by the DPA is pending.)

The EU parliament’s Civil Liberties committee also put forward a non-legislative resolution yesterday, focused on fundamental rights — including support for privacy and data protection — that gained MEPs’ backing.

Its resolution asserted that microtargeting based on people’s vulnerabilities is problematic, as well as raising concerns over the tech’s role as a conduit in the spreading of hate speech and disinformation.

The committee got backing for a call for greater transparency on the monetisation policies of online platforms.

‘Know your business customer’

Other measures MEPs supported in the series of votes yesterday included a call to set up a binding ‘notice-and-action’ mechanism so Internet users can notify online intermediaries about potentially illegal online content or activities — with the possibility of redress via a national dispute settlement body.

MEPs rejected the use of upload filters or any form of ex-ante content control for harmful or illegal content, saying the final decision on whether content is legal or not should be taken by an independent judiciary, not by private undertakings.

They also backed dealing with harmful content, hate speech and disinformation via enhanced transparency obligations on platforms and by helping citizens acquire media and digital literacy so they’re better able to navigate such content.

A push by the parliament’s Internal Market Committee for a ‘Know Your Business Customer’ principle to be introduced — to combat the sale of illegal and unsafe products online — also gained MEPs’ backing, with parliamentarians supporting measures to make platforms and marketplaces do a better job of detecting and taking down false claims and tackling rogue traders.

Parliamentarians also supported the introduction of specific rules to prevent (not merely remedy) market failures caused by dominant platform players as a means of opening up markets to new entrants — signalling support for the Commission’s plan to introduce ex ante rules for ‘gatekeeper’ platforms.

Liability for ‘high risk’ AI

The parliament also backed a legislative initiative recommending rules for AI — urging Commission lawmakers to present a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using artificial intelligence, robotics and related technologies in the EU, including for software, algorithms and data.

The Commission has made it clear it’s working on such a framework, setting out a white paper this year — with a full proposal expected in 2021.

MEPs backed a requirement that ‘high-risk’ AI technologies, such as those with self-learning capacities, be designed to allow for human oversight at any time — and called for a future-oriented civil liability framework that would make those operating such tech strictly liable for any resulting damage.

The parliament agreed such rules should apply to physical or virtual AI activity that harms or damages life, health, physical integrity, property, or causes significant immaterial harm if it results in “verifiable economic loss”.

AOC aims to get out the vote by streaming Among Us with pokimane and HasanAbi

By Jordan Crook

We are about seven months into a pandemic and just two weeks from a presidential election. At this point, surprises are a dime a dozen. So it should feel very 2020 that Rep. Alexandria Ocasio-Cortez is about to stream Among Us, the hit game of 2020, on Twitch alongside mega-streamer pokimane and political analyst HasanAbi.

Ocasio-Cortez tweeted yesterday that she was looking for people to play the popular game with in an effort to get out the vote, noting that she’s never played before but that it looks fun.

Anyone want to play Among Us with me on Twitch to get out the vote? (I’ve never played but it looks like a lot of fun)

— Alexandria Ocasio-Cortez (@AOC) October 19, 2020

Streamer pokimane, who has 6 million followers on Twitch and whose YouTube videos regularly see more than 1 million views each, responded to the tweet with a figurative raised hand.

Let’s do it! I’ll set up an account and get some streaming equipment today

— Alexandria Ocasio-Cortez (@AOC) October 19, 2020

HasanAbi, a very popular political commentator on Twitch who has more than 380,000 Twitter followers, also chimed in, saying he was already setting up a lobby. It wasn’t long before Rep. Ilhan Omar raised her hand, too.

👋🏽

— Ilhan Omar (@IlhanMN) October 19, 2020

A good game of Among Us (imagine that someone mixed a fairly basic multiplayer video game with a murder mystery party) usually requires 10 players, so the other six players are still TBD. But the Verge reports that a handful of other streamers (such as DrLupo, Felicia Day, Greg Miller, James Charles, and Neekolul) also lined up to play with AOC.

According to Ocasio-Cortez, the stream is all about getting out the vote. And this isn’t the first time that she’s used video games to connect with her followers. Back in May, AOC opened up her DMs to all 6.8 million of her followers to let them send her invites to their Animal Crossing islands, and she visited them.

Millennial voters (and Gen Z) skew toward backing the Biden / Harris ticket, and AOC is coming to them by getting on Twitch and streaming one of the rocket ship games of this year.

The stream starts at 9pm ET/6pm PT and can be found here.

And you can check if you’re registered to vote here.

Daily Crunch: DOJ files antitrust suit against Google

By Anthony Ha

Google faces a big antitrust suit, Amazon offers to pay customers for shopping data and we review the iPhone 12. This is your Daily Crunch for October 20, 2020.

The big story: DOJ files antitrust suit against Google

The suit accuses Google of “unlawfully maintaining monopolies in the markets for general search services, search advertising, and general search text advertising in the United States.” It’s co-signed by 11 states, all with Republican attorneys general — Texas, Arkansas, Florida, Georgia, Indiana, Kentucky, Louisiana, Michigan, Missouri, Montana and South Carolina.

Google called the U.S. Department of Justice’s case “deeply flawed” and offered a platform-by-platform argument that it doesn’t actually have unfair market dominance. For example, it attributed its popularity in search to a superior product, rather than anti-competitive practices.

Meanwhile, Wall Street investors don’t seem to be particularly alarmed by the suit.

The tech giants

Amazon launches a program to pay consumers for their data on non-Amazon purchases — The Amazon Shopper Panel program asks users to send in 10 receipts per month for any purchases made at non-Amazon retailers.

Snap shares explode after blowing past earnings expectations — The company delivered $679 million in reported revenue, smashing past Wall Street expectations.

Review: iPhone 12 and iPhone 12 Pro, two gems, one jewel — Both of these phones offer solid value, but two challengers wait in the wings.

Startups, funding and venture capital

Perch raises $123.5M to grow its stable of D2C brands that sell on Amazon — Perch acquires D2C businesses and products that are already selling on Amazon, then continues to operate and grow them.

Gowalla is being resurrected as an augmented reality social app — The startup was an ambitious consumer social app that excited Silicon Valley investors but ultimately floundered in its quest to take on Foursquare.

Synthetaic raises $3.5M to train AI with synthetic data — It’s already working with Save the Elephants to track animal populations, as well as with the University of Michigan to classify brain tumors.

Advice and analysis from Extra Crunch

Seven investors discuss augmented reality and VR startup opportunities in 2020 — “It’s still early, but it’s no longer too early.”

As startups accelerate in record Q3, Europe and Asia rack up huge VC results — Investment outside North America just had its best quarter in years.

Now may be the best time to become a full-stack developer — Talos Digital’s Sergio Granada has thoughts about this buzzy job title.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Daily Crunch: Pakistan un-bans TikTok

By Anthony Ha

TikTok returns to Pakistan, Apple launches a music-focused streaming station and SpaceX launches more Starlink satellites. This is your Daily Crunch for October 19, 2020.

The big story: Pakistan un-bans TikTok

The Pakistan Telecommunication Authority blocked the video app 11 days ago, over what it described as “immoral,” “obscene” and “vulgar” videos. The authority said today that it’s lifting the ban after negotiating with TikTok management.

“The restoration of TikTok is strictly subject to the condition that the platform will not be used for the spread of vulgarity/indecent content & societal values will not be abused,” it continued.

This isn’t the first time this year the country tried to crack down on digital content. Pakistan announced new internet censorship rules this year, but rescinded them after Facebook, Google and Twitter threatened to leave the country.

The tech giants

Apple launches a US-only music video station, Apple Music TV —  The new music video station offers a free, 24-hour live stream of popular music videos and other music content.

Google Cloud launches Lending DocAI, its first dedicated mortgage industry tool — The tool is meant to help mortgage companies speed up the process of evaluating a borrower’s income and asset documents.

Facebook introduces a new Messenger API with support for Instagram — The update means businesses will be able to integrate Instagram messaging into the applications and workflows they’re already using in-house to manage their Facebook conversations.

Startups, funding and venture capital

SpaceX successfully launches 60 more Starlink satellites, bringing total delivered to orbit to more than 800 — That makes 835 Starlink satellites launched thus far, though not all of those are operational.

Singapore tech-based real estate agency Propseller raises $1.2M seed round — Propseller combines a tech platform with in-house agents to close transactions more quickly.

Ready Set Raise, an accelerator for women built by women, announces third class — Ready Set Raise has changed its programming to be more focused on a “realistic fundraising process” vetted by hundreds of women.

Advice and analysis from Extra Crunch

Are VCs cutting checks in the closing days of the 2020 election? — Several investors told TechCrunch they were split about how they’re making these decisions.

Disney+ UX teardown: Wins, fails and fixes — With the help of Built for Mars founder and UX expert Peter Ramsey, we highlight some of the things Disney+ gets right and things that should be fixed.

Late-stage deals made Q3 2020 a standout VC quarter for US-based startups — Investors backed a record 88 megarounds of $100 million or more.

Everything else

US charges Russian hackers blamed for Ukraine power outages and the NotPetya ransomware attack — Prosecutors said the group of hackers, who work for the Russian GRU, are behind the “most disruptive and destructive series of computer attacks ever attributed to a single group.”

Stitcher’s podcasts arrive on Pandora with acquisition’s completion — SiriusXM today completed its previously announced $325 million acquisition of podcast platform Stitcher from E.W. Scripps, and has now launched Stitcher’s podcasts on Pandora.

Original Content podcast: It’s hard to resist the silliness of ‘Emily in Paris’ — The show’s Paris is a fantasy, but it’s a fantasy that we’re happy to visit.

Daily Crunch: Twitter walks back New York Post decision

By Anthony Ha

A New York Post story forces social platforms to make (and in Twitter’s case, reverse) some difficult choices, Sony announces a new 3D display and fitness startup Future raises $24 million. This is your Daily Crunch for October 16, 2020.

The big story: Twitter walks back New York Post decision

A recent New York Post story about a cache of emails and other data supposedly originating from a laptop belonging to Joe Biden’s son Hunter looked suspect from the start, and more holes have emerged over time. But it’s also put the big social media platforms in an awkward position, as both Facebook and Twitter took steps to limit the ability of users to share the story.

Twitter, in particular, took a more aggressive stance, blocking links to and images of the Post story because it supposedly violated the platform’s “hacked materials policy.” This led to predictable complaints from Republican politicians, and even Twitter’s CEO Jack Dorsey said that blocking links in direct messages without an explanation was “unacceptable.”

As a result, the company said it’s changing the aforementioned hacked materials policy. It will no longer remove hacked content unless it’s been shared directly by hackers or those “acting in direct concert with them.” Otherwise, it will label tweets to provide context. As of today, it’s also allowing users to share links to the Post story.

The tech giants

Sony’s $5,000 3D display (probably) isn’t for you — The company is targeting creative professionals with its new Spatial Reality Display.

EU’s Google-Fitbit antitrust decision deadline pushed into 2021 — EU regulators now have until January 8, 2021 to take a decision.

Startups, funding and venture capital

Elon Musk’s Las Vegas Loop might only carry a fraction of the passengers it promised — Planning files reviewed by TechCrunch seem to show that The Boring Company’s Loop system will not be able to move anywhere near the number of people the company agreed to.

Future raises $24M Series B for its $150/mo workout coaching app amid at-home fitness boom — Future offers a pricey subscription that virtually teams users with a real-life fitness coach.

Lawmatics raises $2.5M to help lawyers market themselves — The San Diego startup is building marketing and CRM software for lawyers.

Advice and analysis from Extra Crunch

How COVID-19 and the resulting recession are impacting female founders — The sharp decline in available capital is slowing the pace at which women are founding new companies in the COVID-19 era.

Startup founders set up hacker homes to recreate Silicon Valley synergy — Hacker homes feel like a nostalgic attempt to recreate some of the synergies COVID-19 wiped out.

Private equity firms can offer enterprise startups a viable exit option — The IPO-or-acquisition question isn’t always an either/or proposition.

Everything else

FAA streamlines commercial launch rules to keep the rockets flying — With rockets launching in greater numbers and variety, and from more providers, it makes sense to get a bit of the red tape out of the way.

We need universal digital ad transparency now — Fifteen researchers propose a new standard for advertising disclosures.

Twitter is now allowing users to share that controversial New York Post story

By Anthony Ha

Twitter has taken another step back from its initial decision to block users from sharing links to or images of a New York Post story reporting on emails and other data supposedly originating on a laptop belonging to Democratic presidential nominee Joe Biden’s son Hunter.

The story, which alleged that Hunter Biden had set up a meeting between a Ukrainian energy firm and his father back when Biden was vice president, looked shaky from the start, and more holes have emerged over time. Both Facebook and Twitter took action to slow its spread — but Twitter seemed to take the more aggressive stance, not just adding warning labels whenever someone shared the story, but actually blocking links.

These moves have drawn a range of criticism. There have been predictable cries of censorship from Republican politicians and pundits, but there have also been suggestions that Facebook and Twitter inadvertently drew more attention to the story. And even Twitter’s CEO Jack Dorsey suggested that it was “unacceptable” to block links in DMs without an explanation.

Casey Newton, on the other hand, argued that the platforms had successfully slowed the story’s spread: “The truth had time to put its shoes on before Rudy Giuliani’s shaggy-dog story about a laptop of dubious origin made it all the way around the world.”

Twitter initially justified its approach by citing its hacked materials policy, then later said it was blocking the Post article for including “personal and private information — like email addresses and phone numbers — which violate our rules.”

The controversy did prompt Twitter to revise its hacked materials policy, so that content and links obtained through dubious means will now come with a label, rather than being removed entirely, unless it’s being shared directly by hackers or those “acting in concert with them.”

And now, as first reported by The New York Times, Twitter is also allowing users to share links to the Post story itself (something I’ve confirmed through my own Twitter account).

Why the reversal? Again, the official justification for blocking the link was to prevent the spread of private information, so the company said that the story has now spread so widely, online and in the press, that the information can no longer be considered private.

The need for true equity in equity compensation

By Walter Thompson
Carine Schneider Contributor
Carine Schneider is the president of AST Private Company Solutions, Inc., a division of AST Financial.

I began my career at Oracle in the mid-1980s and have since been around the proverbial block, particularly in Silicon Valley working for and with companies ranging from the Fortune 50 to global consulting companies to leading a number of startups, including the SaaS company I presently lead. Throughout my career, I’ve carved out a niche not only working with technology companies, but focused on designing and implementing global compensation programs.

In short, if there are two things I know like the back of my hand, they’re tech and how people are paid.

The compensation evolution I’ve witnessed over these past 35+ years has been dramatic. Among other things, there has been a fundamentally seismic shift in how women are perceived and paid, principally for the better. Some of it, in truth, has been window dressing. It’s good PR to say you’re a company with a strong culture focused on diversity, as it helps attract top talent. But the rubber meets the road once hires get past the recruiter. When companies don’t do what they say, we see mass exoduses and even lawsuits, as has recently been the case at Pinterest and Carta.

So with the likes of Intel, Salesforce and Apple publicly committed to gender pay equity, there’s nothing left to see here, right? Actually, we’re not even close. Yes, the glass ceiling is cracking. But significant, largely unaddressed gaps remain relative to the broader scope of long-tail compensation for women, especially at startups, where essential measures of economic reward such as stock options in companies are often not even part of the conversation around pay parity.

As a baseline, while progress is evident, gender pay is an unfinished product, to say the least. The U.S. Bureau of Labor Statistics recently found that white women earn 83.3% as much as their white male counterparts, African-American women 93.7%, Asian women 77.1% and Hispanic women 85.1% as much as men of their respective races.

According to Payscale, the gap in median earnings between men and women has narrowed by just $0.07 since 2015; in 2020, women make $0.81 for every dollar a man makes. Long term, calculating presumptive raises given over a 40-year career, a woman could lose as much as $900,000.

But that’s just the tip of the iceberg. Even if we solely left the gender pay gap to just a cash salary disparity, there is something further to see here. However, to quote a famous pitchman, “But wait, there’s more!” And the more — at least in my mind — is far more troubling.

As innovative startups from Silicon Valley to New York’s Silicon Alley and beyond continue to reshape the business landscape, guess how most of them are able to lure bright, entrepreneurial minds? It’s certainly not salary, as when a company has nothing beyond a great idea and maybe a lead to a VC on Sand Hill Road, there’s no fat paycheck or benefits package to offer. Instead, they dangle the proverbial carrot of stock/equity compensation.

“Look, we know you can get $180,000 a year from Apple but we’ll give you $48,000 a year plus 1,000 shares presently valuated at $62 per share. Our board — which is packed with studs from the Bay Area — is expecting that to soar within two years! Wait ‘til we go public!”

This is the pitch, at least if you’re a promising male. But women, historically, have tended to get left out of this lucrative reward package for varying reasons.

How has this happened? Beyond the inertia of business culture, while there have been legislative steps taken to address inequities in public company compensation and stock dispersal, there are no regulations governing how private companies distribute or manage the appreciation of stock. And, as we all know, that appreciation can be massive.

In a way, it makes sense. Many companies, and even naïve job-seekers, consider equity the “third pillar” of compensation, alongside salary/title (which come hand-in-hand) and benefits. Shares of startups are just not top-of-mind — often ignored or misunderstood — for many who look at gender pay inequities, though overlooking them could not be more misguided.

A recent study published in the “Journal of Applied Psychology” found a gender gap for equity-based awards ranging from 15%-30% — even beyond accounting for typical reasons women historically earn less than men, including differences in occupation and length of service at a company. Keep in mind many of these companies will go on to massive valuations, and for some, lucrative IPOs or acquisitions.

It’s a problem I recognized long ago, and it is largely why I agreed to lead our Bay Area startup on behalf of our New York-based parent company AST. I found a commitment to a genuinely equitable culture instilled by a shared moral compass, a belief that companies who care about gender equity perform better and provide better returns, and a conviction that diversity brings unique perspectives, drives talent retention, builds a stronger culture and aids client satisfaction.

In speaking with industry colleagues, I know it’s something CEOs, both men and women, are dedicated to addressing. I believe creating a broader picture of compensation is essential for startups, global conglomerates and every company in between. If you are in a position of leadership and recognize this is a challenge in need of addressing at your company, here are some steps I recommend you implement:

  1. Look at the data: Do the analysis. See if this is truly an issue at your company, and if it is, commit to creating a level playing field. There are plenty of experienced consultants who can help you work through remediation strategies.
  2. Remove subjectivity: Hire an independent arbiter to analyze your data, as it removes the politics and emotion, as well as bias from the work product.
  3. Create compensation bands: Much like the government’s GS system, create a salary grade system that contains bands of compensation for specific roles. Prior to hiring a person, decide to which band the job’s responsibilities should be assigned.
  4. Empower a champion: Identify and empower an internal champion to truly own parity — someone whose performance is judged based upon creating equity company-wide. Instead of assigning it to your human resources chief, create a chief diversity officer role to own it. After all, this is bigger than just pay or medical benefits. This is the culture and thus foundation of your company.
  5. Get your board on board: Educate your board as to why this matters. If your board doesn’t value this, it ultimately won’t matter. Companies have audit committee chairs or nominations chairs. Identify a “culture chair.”
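The compensation-band idea in step 3 can be sketched in a few lines. This is a hypothetical illustration; the roles and band boundaries below are invented, not drawn from any real pay scale.

```python
# Illustrative compensation bands, assigned per role *before* a hire is made.
# Checking every offer against its band keeps pay decisions inside
# pre-agreed ranges, regardless of who the candidate is.
BANDS = {
    "engineer-2": (95_000, 130_000),
    "engineer-3": (120_000, 165_000),
}

def offer_within_band(role: str, salary: int) -> bool:
    """Return True if a proposed salary falls inside the role's band."""
    low, high = BANDS[role]
    return low <= salary <= high
```

An offer that falls outside its band would then require explicit, documented justification rather than ad hoc negotiation.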

One of the first reports we created is a Pay Comparison Report, giving anyone in management easy-to-use tools to review stock grants made to all employees and ensure equity between people of different ethnicities or genders. It’s not that hard if you care to look.
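The kind of grant comparison such a report performs can be sketched with the standard library alone. The records and field names below are invented for illustration; a real analysis would also control for role, level and tenure before reading anything into the ratio.

```python
# Minimal sketch of a pay-equity comparison over hypothetical grant records.
from collections import defaultdict
from statistics import median

grants = [
    {"gender": "F", "shares": 800},
    {"gender": "F", "shares": 950},
    {"gender": "M", "shares": 1000},
    {"gender": "M", "shares": 1200},
]

def median_grant_by_group(rows, key="gender"):
    """Group grant records by `key` and return the median share count per group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row["shares"])
    return {group: median(values) for group, values in groups.items()}

by_gender = median_grant_by_group(grants)
# A ratio well below 1.0 flags a potential equity gap worth remediating.
gap_ratio = by_gender["F"] / by_gender["M"]
```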

When I was graduating from college and Ronald Reagan was in office, we were talking about the potential for women to break the glass ceiling. Now, many years later, somehow we’ve managed to develop lights you can turn on and off by clapping and most of us are walking around with the power of a supercomputer in our hands. Is it really asking too much that we require gender pay equity, including all three compensation pillars (cash, benefits and stock), to be a priority?

Twitter changes its hacked materials policy in wake of New York Post controversy

By Natasha Lomas

Twitter has announced an update to its hacked materials policy — saying it will no longer remove hacked content unless it’s directly shared by hackers or those “acting in concert with them”.

Instead of blocking such content/links from being shared on its service it says it will label tweets to “provide context”.

Wider Twitter rules against posting private information, synthetic and manipulated media, and non-consensual nudity all still apply — so it could still, for example, remove links to hacked material if the content being linked to violates other policies. But just tweeting a link to hacked materials isn’t an automatic takedown anymore.
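The enforcement flow described above can be modeled as a simple decision procedure. What follows is a hypothetical sketch of the logic the policy describes, not Twitter’s actual implementation; all names are invented.

```python
# Hypothetical model of the revised hacked-materials enforcement flow:
# removal only for hackers (or those "acting in concert" with them) or for
# tweets breaking other rules; a contextual label for mere links otherwise.
from dataclasses import dataclass

@dataclass
class Tweet:
    links_to_hacked_material: bool = False
    posted_by_hacker_or_affiliate: bool = False  # "acting in concert with them"
    violates_other_rules: bool = False           # e.g. private info, manipulated media

def enforcement_action(tweet: Tweet) -> str:
    """Return 'remove', 'label' or 'allow' under the updated policy."""
    if tweet.violates_other_rules:
        return "remove"  # other Twitter Rules still trigger takedowns
    if tweet.links_to_hacked_material:
        if tweet.posted_by_hacker_or_affiliate:
            return "remove"  # direct distribution by hackers is still removed
        return "label"       # merely linking now gets a context label
    return "allow"
```

Under this reading, the Post story link would have moved from the "remove" branch to the "label" branch once Twitter stopped treating it as a private-information violation.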

Over the last 24 hours, we’ve received significant feedback (from critical to supportive) about how we enforced our Hacked Materials Policy yesterday. After reflecting on this feedback, we have decided to make changes to the policy and how we enforce it.

— Vijaya Gadde (@vijaya) October 16, 2020

The move comes hard on the heels of the company’s decision to restrict sharing of a New York Post article this week — which reported on claims that laptop hardware left at a repair shop contained emails and other data belonging to Hunter Biden, the son of U.S. presidential candidate Joe Biden.

The decision by Twitter to restrict sharing of the Post article attracted vicious criticism from high profile Republican voices — with the likes of senator Josh Hawley tweeting that the company is “now censoring journalists”.

Twitter’s hacked materials policy does explicitly allow “reporting on a hack, or sharing press coverage of hacking”, but the company subsequently clarified that it had acted because the Post article contained “personal and private information — like email addresses and phone numbers — which violate our rules”. (Plus the Post wasn’t reporting on a hack, but rather on the claimed discovery of a cache of emails, and on the emails themselves.)

At the same time, the Post article itself is highly controversial. The scenario of how the data came to be in the hands of a random laptop repair shop, which then chose to hand it over to a key Trump ally, strains credulity — bearing the hallmarks of an election-targeting disinformation operation, as we explained on Wednesday.

Given questions over the quality of the Post’s fact-checking and journalistic standards in this case, Twitter’s decision to restrict sharing of the article actually appears to have helped reduce the spread of disinformation — even as it attracted flak to the company for censoring ‘journalism’.

(It has also since emerged that the hard drive in question was manufactured shortly before the laptop was claimed to have been dropped off at the shop. So the most likely scenario is that Hunter Biden’s iCloud was hacked and doctored emails were planted on the drive, where the data could be ‘discovered’ and leaked to the press in a ham-fisted attempt to influence the U.S. presidential election. But Twitter is clearly uncomfortable that enforcing its policy led to accusations of censoring journalists.)

In a tweet thread explaining the change to its policy, Twitter’s legal, policy and trust & safety lead, Vijaya Gadde, writes: “We want to address the concerns that there could be many unintended consequences to journalists, whistleblowers and others in ways that are contrary to Twitter’s purpose of serving the public conversation.”

She also notes that when the hacked materials policy was first introduced, in 2018, Twitter had fewer tools for policy enforcement than it does now, saying: “We’ve recently added new product capabilities, such as labels to provide people with additional context. We are no longer limited to Tweet removal as an enforcement action.”

Twitter began adding contextual labels to policy-breaching tweets by US president Donald Trump earlier this year, rather than remove his tweets altogether. It has continued to expand usage of these contextual signals — such as by adding fact-checking labels to certain conspiracy theory tweets — giving itself a ‘more speech to counteract bad speech’ enforcement tool vs the blunt instrument of tweet takedowns/account bans (which it has also applied recently to the toxic conspiracy theory group, QAnon).

“We believe that labeling Tweets and empowering people to assess content for themselves better serves the public interest and public conversation. The Hacked Material Policy is being updated to reflect these new enforcement capabilities,” Gadde also says, adding: “Content moderation is incredibly difficult, especially in the critical context of an election. We are trying to act responsibly & quickly to prevent harms, but we’re still learning along the way.”

The updated policy is clearly not a free-for-all, given that all other Twitter Rules still apply to hacked material (such as those against doxxing). Though there’s a question of whether tweets linking to the Post article would still be taken down under the updated policy if the story did indeed contain personal info (which remains against Twitter’s rules).

But the new ‘third way’ policy for hacked materials does potentially leave Twitter’s platform as a conduit for the spread of political disinformation — in instances where it’s been credulously laundered by the press. (Albeit Twitter can justifiably point the finger of blame at poor journalistic standards at that point.)

The new policy also raises the question of how Twitter will determine whether or not a person is working ‘in concert’ with hackers. Just spitballing here, but if — say — on the eve of the poll, Trump were to share some highly dubious information smearing his key political rival, which he said he’d been handed by Russian president Vladimir Putin, would Twitter step in and remove it?

We can only hope we don’t have to find out.

With ‘absurd’ timing, FCC announces intention to revisit Section 230

By Devin Coldewey

FCC Chairman Ajit Pai has announced his intention to pursue a reform of Section 230 of the Communications Act, which among other things limits the liability of internet platforms for content they host. Commissioner Rosenworcel described the timing — immediately after conservative outrage at Twitter and Facebook limiting the reach of an article relating to Hunter Biden — as “absurd.” But it’s not necessarily the crackdown the Trump administration clearly desires.

In a statement, Chairman Pai explained that “members of all three branches of the federal government have expressed serious concerns about the prevailing interpretation of the immunity set forth in Section 230,” and that there is broad support for changing the law — in fact there are already several bills under consideration that would do so.

At issue are the legal protections for platforms when they decide what content to allow and what to block. Some say these choices are clearly protected by the First Amendment (this is how the law is currently interpreted), while others assert that some of those choices amount to violations of users’ right to free speech.

Though Pai does not mention specific recent circumstances in which internet platforms have been accused of having partisan bias in one direction or the other, it is difficult to imagine they — and the constant needling of the White House — did not factor into the decision.

A long road with an ‘unfortunate detour’

In fact the push to reform Section 230 has been progressing for years, with the limitations of the law and the FCC’s interpretation of its pertinent duties discussed candidly by the very people who wrote the original bill and thus have considerable insight into its intentions and shortcomings.

In June Commissioner Starks disparaged pressure from the White House to revisit the FCC’s interpretation of the law, saying that the First Amendment protections are clear and that Trump’s executive order “seems inconsistent with those core principles.” That said, he proposed that the FCC take the request to reconsider the law seriously.

“And if, as I suspect it ultimately will, the petition fails at a legal question of authority,” he said, “I think we should say it loud and clear, and close the book on this unfortunate detour. Let us avoid an upcoming election season that can use a pending proceeding to, in my estimation, intimidate private parties.”

The latter part of his warning seems especially prescient given the choice by the Chairman to open proceedings less than three weeks before the election, and the day after Twitter and Facebook exercised their authority as private platforms to restrict the distribution of articles which, as Twitter belatedly explained, clearly broke guidelines on publishing private information. (The New York Post article had screenshots of unredacted documents with what appeared to be Hunter Biden’s personal email and phone number, among other things.)

Commissioner Rosenworcel did not mince words, saying “The timing of this effort is absurd. The FCC has no business being the President’s speech police.” Starks echoed her, saying “We’re in the midst of an election… the FCC shouldn’t do the President’s bidding here.” (Trump has repeatedly called for the “repeal” of Section 230, which is just part of a much larger and important set of laws.)

Considering the timing and the utter impossibility of reaching any kind of meaningful conclusion before the election — rulemaking is at a minimum a months-long process — it is hard to see Pai’s announcement as anything but a pointed warning to internet platforms. Platforms which, it must be stressed, the FCC has essentially no regulatory powers over.

Foregone conclusion

The Chairman telegraphed his desired outcome clearly in the announcement, saying “Many advance an overly broad interpretation that in some cases shields social media companies from consumer protection laws in a way that has no basis in the text of Section 230… Social media companies have a First Amendment right to free speech. But they do not have a First Amendment right to a special immunity denied to other media outlets, such as newspapers and broadcasters.”

Whether the FCC has anything to do with regulating how these companies exercise that right remains to be seen, but it’s clear that Pai thinks the agency should, and doesn’t. With the makeup of the FCC currently 3:2 in favor of the conservative faction, it may be said that this rulemaking is a foregone conclusion; the net neutrality debacle showed that these Commissioners are willing to ignore and twist facts in order to justify the end they choose, and there’s no reason to think this rulemaking will be any different.

The process will be just as drawn out and public as previous ones, however, which means that a cavalcade of comments may yet again show the FCC ignoring public opinion, experts, and lawmakers alike as it invents or eliminates its roles as it sees fit. Be ready to share your feedback with the FCC, but no need to fire up the outrage just yet — chances are this rulemaking won’t even exist in draft form until after the election, at which point there may be something of a change in the urgency of this effort to reinterpret the law to the White House’s liking.

France and the Netherlands signal support for EU body to clip the wings of big tech

By Natasha Lomas

The French and Dutch governments have signalled support for EU rules that can proactively intervene against so-called gatekeepers, aka “structuring platforms” or “large digital platforms with significant network effects acting as gatekeepers” — or, more colloquially, ‘big tech’.

They have also called for a single European body with enforcement powers over such platforms — and the ability to audit their algorithms.

“Pre-emptive action should intervene prior to the stage where damage becomes irreversible,” French digital minister Cédric O and Mona Keijzer, the secretary of state for economic affairs for the Netherlands, write in a joint position paper, where they also argue that: “Intervention is justified when the asymmetric bargaining power of structuring platforms leads to negative consequences.”

The two ministers went further in accompanying remarks to the press, with the Financial Times reporting that their support for intervention against big tech’s market muscle includes keeping the option of breaking up companies “on the table” — although their stated preference is for rules that prevent such an “ultimate” step being necessary.

The intervention by two high profile EU Member States comes as the European Commission is working on a major package of pan-EU legislation to update the bloc’s ecommerce rules — including devising a new regime of ex ante rules for so-called ‘gatekeeper’ platforms. 

In recent months press reports have suggested EU lawmakers are considering forcing such platforms to share data with smaller rivals and/or limiting how they can make use of data — such as via strict purpose limitation.

They are also reportedly considering rules to ban self-preferencing and apply conditions on bundling, as well as requiring annual audits of ad metrics and reporting practices.

The package remains at the draft stage for now, with the Commission saying only that it’s committed to introducing the Digital Services Act (DSA) by the end of this year.

Commission lawmakers are also eyeing expanded powers for competition regulators to proactively tackle the network effects that can apply in digital markets — and have, in recent weeks, been consulting on a new competition tool for this purpose. 

The French-Dutch intervention thus sends a strong signal of support to the Commission for regulating big tech — and a warning shot against watering down policy measures.

Competition chief and Commission EVP, Margrethe Vestager, who is one of the key lawmakers drafting the DSA, has previously cautioned against breaking up tech giants as a solution to competitive imbalances in digital markets — calling instead for a finer grained regulatory framework which regulates their access to data.

The thinking is that such an approach would be akin to a structural separation, without the huge legal challenge involved in actually breaking up businesses.

The French-Dutch position paper reflects back many of the ideas the Commission is actively considering, per recent press leaks. So it may be intended to send a message that key Member States are on the same page.

The paper advocates for intervention to apply to platforms that have “considerable market power” in at least one market, while warning against imposing “unnecessary obligations” to platforms without any gatekeeper position.

It also suggests a “platform-by-platform approach” by regulators to determine whether or not a platform is a gatekeeper, noting: “It is important to stress that classical methods of market definition cannot always be used effectively in digital markets.”

Platform-specific factors such as the characteristics of the service and the behaviour of users should factor into the analysis of whether it holds a structural position, they also suggest — before again hitting a cautious note and urging that “a right balance” be struck between a platform-specific analysis and “the need for a reasonable level of legal certainty”.

Interventions should also be ‘case-by-case, flexible and proportionate’ in their view — with the pair suggesting regulatory authorities be empowered to “impose tailor-made remedies to a structuring platform”.

“Proportionate intervention is needed to preserve the benefits of platforms whilst enhancing competition. Too heavy-handed an intervention would hamper innovation,” they warn.

They also voice support for gatekeepers to be subject to a set of “principle-based obligations and prohibited practices” — and recent press reports have also suggested EU lawmakers are considering a laundry list of obligations and conditions on gatekeepers.

“The full set of behavioural obligations could be widened to the whole ecosystem of the platform to tackle the risks stemming from its gatekeeper position on a number of neighbouring markets (leveraging). Also, it could be adjusted over time, in light of the evolution of the business environment. The measures could be either eased or tightened depending on the actual evolution of these conditions,” they further suggest.

Among the “possible behavioral measures” listed in the position paper are beefing up the right to portability (which EU users already enjoy under the GDPR); rules to ensure fair contracts (and unfair contract clauses have already attracted EU antitrust enforcement action in the case of, for example, Google Android); a ban on what they describe as “disruptive” self-preferencing; and a stop on platforms yanking third party access (e.g. to APIs or data) “without objective justification” (the EU has already agreed on some fairness and transparency rules for general ecommerce).

The position paper also voices support for access obligations — such as obligations to share data; provide interoperability; and/or proactively offer alternatives to users — as a potential intervention to ensure market openness, while cautioning of the need to properly investigate ‘pros and cons’ before such enforcement.

On sanctions for infringements, the French and Dutch ministers urge “significant enough” penalties that platforms are effectively deterred from breaking rules, i.e. rather than being able to factor them in as a cost of doing business (as now).

“The level of these fines or other sanctions should be significant enough to ensure the effectiveness of the rules at stake by deterring the platform from breaking them. The requirement of an efficient and deterrent mechanism of sanctions is all the more important here since any breach of the rules would be likely to induce serious and irreversible harm,” they write.

On enforcement, the paper calls for a single “European body” outfitted with “proper tools” — including “broad investigation, audit and monitoring powers, and the ability to audit algorithms” — to be entrusted with enforcing the new regulations. 

That would mark a step-change from the EU’s data protection framework (GDPR), where responsibility for enforcement is decentralized to a patchwork of under-resourced local/national data protection agencies. Critics maintain the pace of GDPR enforcement in complex, cross-border cases against big tech is too slow to be effective. A two-year review of the regulation by the Commission this summer also found a general lack of uniformly vigorous enforcement.

That stands as a warning signal to EU lawmakers shaping the next generation of digital regulations that very careful attention needs to be paid to ensuring effective enforcement.

Suspect provenance of Hunter Biden data cache prompts skepticism and social media bans

By Devin Coldewey

A cache of emails and other selected data purportedly from a laptop owned by Hunter Biden was published today by the New York Post. Ordinarily a major leak related to a figure involved in a controversy of presidential importance would be on every front page — but the red flags on this one are so prominent that few editors would consent to publishing it as-is.

Almost no news outlets have reported the data or its origin as factual, and Facebook and Twitter have both restricted sharing of the Post articles pending further information. Here’s why.

When something of this nature comes up, it pays to investigate the sources very closely: It may very well be, as turned out to be the case before, that foreign intelligence services are directly involved. We know that Russia, among others, is actively attempting to influence the election using online influence campaigns and hackery. Any report of a political data leakage — let alone one friendly to Trump and related to Ukraine — must be considered within that context, and the data understood to be either purposefully released, purposefully edited, or both.

But even supposing no global influence effort existed, the provenance of this so-called leak would be difficult to swallow. So much so that major news organizations have held off coverage, and Facebook and Twitter have both limited the distribution of the NY Post article.

In a statement, Twitter said that it is blocking links or images of the material “in line with our hacked materials policy.” The suspicious circumstances surrounding the data’s origin apparently do not adequately exclude the possibility of their having been acquired through hacking or other illicit means. (I’ve asked Twitter for more clarity on this; Facebook has not responded to a request for comment.)

The story goes that a person dropped off three MacBook Pros to a repair shop in Delaware in April of 2019, claiming they were water damaged and needed data recovery services. The owner of the repair shop “couldn’t positively identify the customer as Hunter Biden,” but the laptop had a Beau Biden Foundation sticker on it.

On the laptops were, reportedly, many emails including many pertaining to Hunter Biden’s dealings with Ukrainian gas company Burisma, which Trump has repeatedly alleged were a cover for providing access to Hunter’s father, who was then Vice President. (There is no evidence for this, and Joe Biden has denied all this many times. Today the campaign specifically denied a meeting mentioned in one of the purported emails.)

In addition, the laptops were full of private images and personal videos that incriminate the younger Biden, whose drug habit at the time has become public record.

The data was recovered, but somehow the client could not be contacted. The repair shop then apparently inspected the data, found it relevant to the national interest, and made a copy to give to Trump ally Rudy Giuliani before handing it over to the FBI. Giuliani, through former Trump strategist Steve Bannon, shared the data with the New York Post, which published the articles today.

There are so many problems with this story it is difficult to know where to begin.

  1. The very idea that a laptop with a video of Hunter Biden smoking crack on it would be given to a random repair shop to recover is absurd. It is years since his drug use and Burisma dealings became a serious issue of international importance, and professionals would long since have taken custody of any relevant hardware or storage. It is beyond the worst operational security in the world to give an unencrypted device with confidential data on it to a third party. It is, however, very much a valid way for someone to make a device appear to be from a person or organization without providing any verification that it is so.
  2. The repair shop supposedly could not identify Hunter Biden, who lives in Los Angeles, as the customer. But the invoice (for $85 — remarkably cheap for diagnosis, recovery, and backup of three damaged Macs) has “Hunter Biden” written right on it, with a phone number and one of the email addresses he reportedly used. It seems unlikely that Hunter Biden’s personal laptop — again, loaded with personal and confidential information, and possibly communications with the VP — would be given to a small repair shop (rather than an Apple Store or vetted dealer), and that the shop would be given his personal details for contact. Political operators with large supporting organizations simply don’t do that — though someone else could have.
  3. Even if they did, the idea that Biden or his assistant or whoever would not return to pick up the laptop or pay for the services is extremely suspicious. Again, these are supposedly the personal devices of someone who communicated regularly with the VP, and whose work had come under intense scrutiny long before they were dropped off. They would not be treated lightly or forgotten. On the other hand, someone who wanted this data to be inspected would do exactly this.
  4. That the laptops themselves were open and unencrypted is ridiculous. The serial number of the laptop suggests it was a 2017 MacBook Pro, probably running Mojave. Every Mac running Lion or later has built-in encryption that is easily enabled. It would be unusual for anyone to provide a laptop for repair that had no password or protection whatsoever on its files, let alone a person like Hunter Biden — again, years into efforts to uncover personal data relating to his work in Ukraine. An actor who wanted this data to be discovered and read would leave it unencrypted.
  5. That this information would be inspected by the repair shop at all is very suspect indeed. Recovery of an ostensibly damaged Mac would likely take the form of cloning the drive and checking its integrity against the original. There is no reason the files or apps themselves would need to be looked at in the course of the work in the first place. Some shops have software that checks file hashes, if they can see them, against a database of known child sex abuse material. And there have been notable breaches of trust where repair staff illicitly accessed the contents of a laptop to get personal data. But there’s really no legitimate reason for this business to inspect the contents of the devices they are working on, let alone share that information with anyone, let alone a partisan operative. The owner, an avid Trump supporter, gave an interview this morning giving inconsistent information on what had happened and suggested he investigated the laptops of his own volition and retained copies for personal protection.
  6. The data itself is not convincing. The Post published screenshots of emails instead of the full text with metadata — the latter being what you would publish if you wanted to show they were authentic. For a story with political implications of this magnitude, that kind of verification is essential.
  7. Lastly, the fact that a copy was given to Giuliani and Bannon before being handed over to the FBI, and that it is all being published two weeks before the election, lends the whole thing a familiar stink — one you may remember from other pre-election shenanigans in 2016. The choice of the Post as the outlet for distribution is curious as well; one need only accidentally step on one in the subway to understand why.

As you can see, very little about the story accompanying this data makes any real sense as told. None of these major issues is addressed or really even raised in the Post stories. If however you were to permit yourself to speculate even slightly as to the origin of the data, the story starts to make a lot of sense.

Say, for example, that Hunter Biden’s iCloud account was hacked, something that has occurred to many celebrities and persons of political interest. This would give access not only to the emails purported to be shown in the Post article, but also personal images and video automatically backed up from the phone that took them. That data, however, would have to be “laundered” in order to have a plausible origin that did not involve hackers, whose allegiance and intent would be trivial to deduce. Loaded on a laptop with an obvious political sticker on it, with no password, left at a demonstrably unscrupulous repair shop with Hunter Biden’s personal contact details, it would be trivial to tip confederates off to its existence and vulnerability.

That’s pure speculation, of course. But it aligns remarkably well with the original story, doesn’t it? It would be the duty of any newsroom with integrity to exclude some or all of these very distinct possibilities or to at least explain their importance. Then and only then can the substance of the supposed leak be considered at all.

This story is developing. Inquiries are being made to provide further information and context.

Trump’s latest immigration restrictions are bad news for American workers

By Walter Thompson
Jay Srinivasan Contributor
Jay Srinivasan is co-founder and CEO of atSpoke.

I’m an immigrant, and since arriving from India two decades ago I’ve earned a Ph.D., launched two companies, created almost 100 jobs, sold a business to Google and generated a 10x-plus return for my investors.

I’m grateful to have had the chance to live the American dream, becoming a proud American citizen and creating prosperity for others along the way. But here’s the rub: I’m exactly the kind of person that President Trump’s new immigration restrictions, which require U.S. companies to offer jobs to U.S. citizens first and narrow the list of qualifications for H-1B visa eligibility, are designed to keep out of the country.

In tightening the qualifications for H-1B admittances, along with the L visas used by multinationals and the J visas used by some students, the Trump administration is closing the door to economic growth. Study after study shows that the H-1B skilled-worker program creates jobs and drives up earnings for American college grads. In fact, economists say that if we increased H-1B admittances, instead of suspending them, we’d create 1.3 million new jobs and boost GDP by $158 billion by 2045.

Barring people like me will create short-term chaos for tech companies already struggling to hire the people they need. That will slow growth, stifle innovation and reduce job creation. But the lasting impact could be even worse. By making America less welcoming, President Trump’s order will take a toll on American businesses’ ability to attract and retain the world’s brightest young people.

Consider my story. I came to the United States after earning a degree in electrical engineering from the Indian Institute of Technology (IIT), a technical university known as the MIT of India. The year I entered, several hundred thousand people applied for just 10,000 spots, making IIT significantly more selective than the real MIT. Four years later, I graduated and, along with many of the other top performers in my cohort, decided to continue my studies in America.

Back then, it was simply a given that bright young Indians would travel to America to continue their education and seek their fortune. Many of us saw the United States as the pinnacle of technological innovation, and also as a true meritocracy — somewhere that gave immigrants a fair shake, rewarded hard work and let talented young people build a future for themselves.

I was accepted by 10 different colleges, and chose to do a Ph.D. at the University of Illinois because of its top-ranked computer science program. As a grad student, I developed new ways of keeping computer chips from overheating that are now used in server farms all over the world. Later, I put in a stint at McKinsey before launching my own tech startup, an app-testing platform called Appurify, which Google bought and integrated into their Cloud offerings.

I spent a couple of years at Google, but missed building things from scratch, so in 2016 I launched atSpoke, an AI-powered ticketing platform that streamlines IT and HR support. We’ve raised $28 million, hired 60 employees and helped companies including Cloudera, DraftKings and Mapbox create more efficient workplaces and manage the transition to remote working.

Stories like mine aren’t unusual. Moving to a new country takes optimism, ambition and tolerance for risk — all factors that drive many immigrants to start businesses of their own. Immigrants found businesses at twice the rate of the native born, starting about 30% of all new businesses in 2016 and more than half of the country’s billion-dollar unicorn startups. Many now-iconic American brands, including Procter & Gamble, AT&T, Google, Apple, and even Bank of America, were founded by immigrants or their children.

We take it for granted that America is the destination of choice for talented young people, especially those with vital technical skills. But nothing lasts forever. Since I arrived two decades ago, India’s tech scene has blossomed, making it far easier for kids to find opportunities without leaving the country. China, Canada, Australia and Europe are also competing for global talent by making it easier for young immigrants to bring their talent and skills, often including an American education, to join their workforces or start new businesses.

To shutter employment-based visa programs, even temporarily, is to shut out the innovation and entrepreneurialism our economy desperately needs. Worse still, though, doing so makes it harder for the world’s best and brightest young people to believe in the American dream and drives many to seek opportunities elsewhere. The true legacy of Trump’s executive order is that it will be far harder for American businesses to compete for global talent in years to come — and that will ultimately hamper job creation, slow our economy and hurt American workers.

Dear Sophie: I came on a B-1 visa, then COVID-19 happened. How can I stay?

By Walter Thompson
Sophie Alcorn Contributor
Sophie Alcorn is the founder of Alcorn Immigration Law in Silicon Valley and 2019 Global Law Experts Awards’ “Law Firm of the Year in California for Entrepreneur Immigration Services.” She connects people with the businesses and opportunities that expand their lives.

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

Extra Crunch members receive access to weekly “Dear Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

I’m currently in the U.S. on a business visitor visa. I arrived here in early March just before the COVID-19 pandemic began here to scope out the U.S. market for expanding the startup I co-founded in Bolivia a few years ago.

I had only planned to stay a couple months, but got stuck. Now my company has some real opportunities to expand. How can I stay and start working?

— Satisfied in San Jose

Hey, Satisfied!

I appreciate the jobs you’ll be creating in the U.S. as you remain here and expand your startup. The U.S. economy greatly benefits from entrepreneurs like you who come here to innovate. Since you’re already in the U.S., you may have options to change your status without departing.

If you were granted a stay of six months when you were admitted most recently with your B-1 visitor visa, you can seek an extension of status for another six months. There are additional alternatives we can explore that would allow you work authorization. For more details on some of the options I’ll discuss here and for additional visa and green card options for startup founders, check out my podcast on “What is U.S. Startup Founder Immigration? A Step-By-Step Guide for Beginners.”

Because most green cards (immigrant visas) take longer to obtain than nonimmigrant (temporary) visas, a conservative strategy would be to find another temporary nonimmigrant status (often nicknamed a “visa”) that will allow you to create and grow your startup in the U.S. without having to return to Bolivia.
